Researchers with the University of Zurich (UZH) and ETH Zurich have detailed the development of a novel recurrent neural network that can reconstruct ultra-high-speed videos from data captured by event cameras. Unlike conventional cameras, which capture data as individual image frames, event cameras are ‘bio-inspired vision sensors’ that continuously capture movements via pixel-level brightness changes.

Event camera sensors perceive and record the world in a manner similar to how human vision works. Information is continuously recorded, meaning there’s no loss of data that would result from capturing the scene as individual frames. According to UZH researchers, event cameras offer multiple benefits over traditional cameras, including latency measured in microseconds, a complete lack of motion blur, and very high dynamic range.
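The pixel-level mechanism can be sketched in a few lines: each pixel fires an event whenever its log-brightness changes by more than a contrast threshold, producing a stream of (timestamp, x, y, polarity) tuples instead of frames. The function names and the threshold value below are illustrative assumptions, not taken from the researchers' work.

```python
import numpy as np

# Toy model of an event camera: a pixel fires an event whenever its
# log-brightness changes by more than a fixed contrast threshold.
# Names and the threshold value are illustrative, not from the paper.

CONTRAST_THRESHOLD = 0.2  # assumed threshold on log-intensity change

def brightness_to_events(frames, timestamps, threshold=CONTRAST_THRESHOLD):
    """Convert a stack of intensity frames into (t, x, y, polarity) events."""
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    reference = log_frames[0].copy()  # per-pixel level at the last event
    events = []
    for k in range(1, len(frames)):
        diff = log_frames[k] - reference
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((timestamps[k], x, y, polarity))
            # move the reference toward the new level, as a real sensor would
            reference[y, x] += polarity * threshold
    return events

# A dark 2x2 patch brightening over three frames yields positive events:
frames = np.stack([np.full((2, 2), v) for v in (0.1, 0.5, 1.0)])
evts = brightness_to_events(frames, timestamps=[0.0, 0.001, 0.002])
```

A real sensor fires asynchronously and can emit several events per pixel between two frame times; this sketch caps it at one per step, but it shows why the output is a sparse, continuous stream rather than full images.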

A figure presented in the researchers' paper highlights how fine details in an image were preserved while preventing what they refer to as ‘bleeding edges.’

However, unlike traditional cameras, the resulting output is a sequence of asynchronous events rather than actual intensity images (frames). Traditional vision algorithms can’t be used on the raw event output, a limitation the researchers have addressed with their newly detailed recurrent network.

Until now, reconstruction of the camera events into intensity images depended on ‘hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images,’ according to the researchers. The newly developed approach differs, instead learning to reconstruct the images directly from the event data.
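For contrast, the simplest hand-designed reconstruction is direct event integration, which the learned network is meant to outperform. The sketch below is that naive baseline only, not the authors' method; the threshold and function names are assumptions.

```python
import numpy as np

# Naive hand-crafted baseline: directly integrate event polarities into a
# running log-intensity image. The paper's recurrent network replaces this
# fixed update with a learned one that also carries hidden state between
# event windows. Names and the threshold here are illustrative.

def integrate_events(events, height, width, threshold=0.2):
    """Accumulate (t, x, y, polarity) events into an intensity image."""
    log_image = np.zeros((height, width), dtype=np.float64)
    for _, x, y, polarity in events:
        log_image[y, x] += polarity * threshold
    return np.exp(log_image)  # back from log to linear intensity

# Pixel (0, 0) brightens twice; pixel (1, 1) darkens once:
image = integrate_events(
    [(0.0, 0, 0, 1), (0.001, 0, 0, 1), (0.002, 1, 1, -1)],
    height=2, width=2,
)
```

This baseline drifts with sensor noise and needs the contrast threshold to be known exactly, which is one reason a learned, data-driven reconstruction can surpass it.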

In describing the fruits of their work, the team says:

Our quantitative experiments show that our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality (> 20%), while comfortably running in real-time. We show that the network can synthesize high framerate videos (> 5,000 frames per second) of high-speed phenomena (e.g., a bullet hitting an object) and can provide high dynamic range reconstructions in challenging lighting conditions. As an additional contribution, we demonstrate the effectiveness of our reconstructions as an intermediate representation for event data.

The team has released its reconstruction code and pre-trained model on GitHub to aid future research into the technology. Event cameras may one day be used for capturing ultra-high-speed footage, as well as very high dynamic range videos.