An event camera, also known as a neuromorphic camera, silicon retina or dynamic vision sensor, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional (frame) cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur, and staying silent otherwise.
Functional description
Event camera pixels independently respond to changes in brightness as they occur.
Each pixel stores a reference brightness level, and continuously compares it to the current brightness level. If the difference in brightness exceeds a threshold, that pixel resets its reference level and generates an event: a discrete packet that contains the pixel address and timestamp. Events may also contain the polarity (increase or decrease) of a brightness change, or an instantaneous measurement of the illumination level.
Thus, event cameras output an asynchronous stream of events triggered by changes in scene illumination.
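The per-pixel behavior described above can be sketched as follows. This is a minimal illustrative model, not the circuit of any particular sensor; the class names and the threshold value are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column (address)
    y: int          # pixel row (address)
    t: float        # timestamp
    polarity: int   # +1 brightness increase, -1 decrease

class EventPixel:
    """Minimal model of one independently operating event-camera pixel."""
    def __init__(self, x, y, initial_log_brightness, threshold=0.2):
        self.x, self.y = x, y
        self.reference = initial_log_brightness  # stored reference level
        self.threshold = threshold               # contrast threshold

    def update(self, log_brightness, t):
        """Compare the current brightness to the stored reference; if the
        difference exceeds the threshold, reset the reference and emit an
        event. Otherwise the pixel stays silent (returns None)."""
        diff = log_brightness - self.reference
        if abs(diff) >= self.threshold:
            self.reference = log_brightness
            return Event(self.x, self.y, t, +1 if diff > 0 else -1)
        return None

# Example: a pixel watching a gradual brightness ramp emits one event
# only when the accumulated change crosses the threshold.
pixel = EventPixel(10, 20, initial_log_brightness=0.0)
events = [e for t, b in enumerate([0.05, 0.15, 0.25, 0.1])
          if (e := pixel.update(b, t)) is not None]
```

Note that the pixel compares against its own reference level, not against a global frame, which is what makes the output stream asynchronous.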
Event cameras have microsecond temporal resolution, 120 dB dynamic range, and suffer less under/overexposure and motion blur than frame cameras. This allows them to track object and camera movement (optical flow) more accurately. They yield grey-scale information. Initially (2014), resolution was limited to 100 pixels; a later device reached 640x480 resolution in 2019. Because individual pixels fire independently, event cameras appear suitable for integration with asynchronous computing architectures such as neuromorphic computing. Pixel independence allows these cameras to cope with scenes containing both brightly and dimly lit regions without having to average across them.
Types
Temporal contrast sensors (such as DVS (Dynamic Vision Sensor) or sDVS (sensitive-DVS)) produce events that indicate polarity (increase or decrease in brightness), while temporal image sensors indicate the instantaneous intensity with each event. The DAVIS (Dynamic and Active-pixel Vision Sensor) contains a global-shutter active pixel sensor (APS) in addition to the dynamic vision sensor (DVS), sharing the same photosensor array. It can thus produce image frames alongside events. Many event cameras additionally carry an inertial measurement unit (IMU).
Retinomorphic sensors
Another class of event sensors are so-called ''retinomorphic'' sensors. While the term retinomorphic has been used to describe event sensors generally, in 2020 it was adopted as the name for a specific sensor design based on a resistor and a photosensitive capacitor in series. These capacitors are distinct from photocapacitors, which are used to store solar energy; they are instead designed to change capacitance under illumination. They charge/discharge slightly when the capacitance changes, but otherwise remain in equilibrium. When a photosensitive capacitor is placed in series with a resistor and an input voltage is applied across the circuit, the result is a sensor that outputs a voltage when the light intensity changes, but otherwise does not.
Unlike other event sensors (typically a photodiode plus some other circuit elements), these sensors produce the signal inherently. They can hence be considered a single device that produces the same result as a small circuit in other event cameras. Retinomorphic sensors have to date only been studied in a research environment.
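The series RC behavior can be illustrated with a crude discrete-time simulation. The light dependence of the capacitance and all component values below are illustrative assumptions, not taken from any published device.

```python
def simulate_retinomorphic(light, v_in=1.0, r=1e6, c_dark=1e-9, dt=1e-4):
    """Discrete-time model of a resistor in series with a photosensitive
    capacitor. The capacitance drops under illumination, so stored charge
    redistributes and a transient voltage appears across the resistor;
    at constant illumination the circuit settles back to equilibrium
    and the output returns to zero.

    `light` is a sequence of illumination values in [0, 1]."""
    q = c_dark * v_in              # start at equilibrium in the dark
    v_out = []
    for level in light:
        c = c_dark * (1.0 - 0.5 * level)  # assumed light dependence
        v_cap = q / c                     # voltage across the capacitor
        i = (v_in - v_cap) / r            # current through the resistor
        q += i * dt                       # charge redistribution
        v_out.append(i * r)               # output: voltage across resistor
    return v_out

# A step in illumination produces a voltage spike that decays back to
# zero, mirroring the "outputs a voltage only when light changes" behavior.
out = simulate_retinomorphic([0.0] * 50 + [1.0] * 200)
```

In steady light (dark or bright) no current flows and the output is zero; only the change in capacitance, i.e. the change in illumination, produces a signal.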
Algorithms
Image reconstruction
Image reconstruction from events has the potential to create images and video with high dynamic range, high temporal resolution and reduced motion blur. Image reconstruction can be achieved using temporal smoothing, e.g. a high-pass or complementary filter. Alternative methods include optimization, and gradient estimation followed by Poisson integration.
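A per-pixel high-pass reconstruction can be sketched as below: each event adds plus or minus the contrast threshold to the pixel's stored log-intensity, and the stored value decays exponentially toward zero between events. This is a simplified sketch; the decay constant, threshold, and event format are assumptions for the example.

```python
import math

def reconstruct_highpass(events, shape, t_end, alpha=10.0, threshold=0.2):
    """Reconstruct a log-intensity image from an event stream with a
    per-pixel high-pass filter.

    `events` is an iterable of (x, y, t, polarity) tuples sorted by time;
    `shape` is (rows, cols); `alpha` is the filter's decay rate."""
    img = [[0.0] * shape[1] for _ in range(shape[0])]
    last_t = [[0.0] * shape[1] for _ in range(shape[0])]
    for x, y, t, pol in events:
        # decay this pixel's stored value since its last update
        img[y][x] *= math.exp(-alpha * (t - last_t[y][x]))
        img[y][x] += pol * threshold
        last_t[y][x] = t
    # bring every pixel up to the final time t_end
    for y in range(shape[0]):
        for x in range(shape[1]):
            img[y][x] *= math.exp(-alpha * (t_end - last_t[y][x]))
    return img

# Two positive events at the same pixel leave a bright value there,
# while pixels that received no events stay at zero.
img = reconstruct_highpass([(0, 0, 0.0, 1), (0, 0, 0.01, 1)],
                           shape=(2, 2), t_end=0.01)
```

The high-pass nature means untextured static regions fade to zero, which is why complementary filtering (blending in occasional frames, e.g. from a DAVIS) is often preferred.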
Spatial convolutions
The concept of spatial event-driven convolution was postulated in 1999 (before the DVS), but was later generalized during EU project CAVIAR (during which the DVS was invented) by projecting, event by event, an arbitrary convolution kernel around the event coordinate in an array of integrate-and-fire pixels. Extension to multi-kernel event-driven convolutions allows for event-driven deep convolutional neural networks.
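The projection scheme above can be sketched as follows: each incoming event stamps the kernel into an array of integrate-and-fire accumulators centered on its coordinate, and any accumulator crossing a firing threshold emits an output event and resets. The function name, event format, and threshold are illustrative assumptions.

```python
def event_driven_convolution(events, kernel, shape, threshold=1.0):
    """Event-by-event convolution into an array of integrate-and-fire
    pixels. `events` is an iterable of (x, y, t, polarity); `kernel`
    is a 2-D list with odd dimensions; `shape` is (rows, cols)."""
    acc = [[0.0] * shape[1] for _ in range(shape[0])]
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for x, y, t, pol in events:
        # stamp the kernel around the event coordinate
        for dy in range(kh):
            for dx in range(kw):
                yy, xx = y + dy - kh // 2, x + dx - kw // 2
                if 0 <= yy < shape[0] and 0 <= xx < shape[1]:
                    acc[yy][xx] += pol * kernel[dy][dx]
                    # integrate-and-fire: cross threshold -> emit and reset
                    if abs(acc[yy][xx]) >= threshold:
                        out.append((xx, yy, t,
                                    1 if acc[yy][xx] > 0 else -1))
                        acc[yy][xx] = 0.0
    return out

# With a trivial 1x1 kernel of weight 0.5, two input events at the same
# pixel are needed before the accumulator fires one output event.
out = event_driven_convolution([(1, 1, 0.0, 1), (1, 1, 0.1, 1)],
                               kernel=[[0.5]], shape=(3, 3))
```

Because work is done only where and when events arrive, computation scales with scene activity rather than with frame rate, which is the appeal of event-driven deep networks.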
Motion detection and tracking
Segmentation and detection of moving objects viewed by an event camera can seem trivial, since change detection is performed by the sensor on-chip. However, these tasks are difficult, because events carry little information and do not contain useful visual features like texture and color. They become even more challenging when the camera itself is moving, because events are then triggered everywhere on the image plane, produced both by moving objects and by the static scene (whose apparent motion is induced by the camera's ego-motion). Recent approaches to this problem include the incorporation of motion-compensation models and traditional clustering algorithms.
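As one very simple instance of the clustering approach, events gathered over a short time window can be grouped by spatial proximity, with each cluster taken as a candidate moving object. This greedy centroid scheme is an illustrative stand-in for the heavier clustering used in practice, and the radius parameter is an assumption.

```python
def cluster_events(events, radius=3.0):
    """Naive spatial clustering of events from a short time window.
    Each event joins the first cluster whose running centroid lies
    within `radius` pixels, otherwise it starts a new cluster.

    Returns per-event cluster labels and the cluster centroids."""
    clusters = []   # each entry: [sum_x, sum_y, count]
    labels = []
    for x, y, _t, _pol in events:
        for i, (sx, sy, n) in enumerate(clusters):
            cx, cy = sx / n, sy / n
            if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= radius:
                clusters[i] = [sx + x, sy + y, n + 1]
                labels.append(i)
                break
        else:
            clusters.append([x, y, 1])
            labels.append(len(clusters) - 1)
    return labels, [(sx / n, sy / n) for sx, sy, n in clusters]

# Two tight blobs of events, far apart, yield two clusters.
labels, centroids = cluster_events(
    [(0, 0, 0.0, 1), (1, 1, 0.0, 1), (10, 10, 0.0, 1), (11, 10, 0.0, 1)])
```

With a moving camera, such clustering is typically applied after motion compensation, so that events from the static background collapse together and only independently moving objects remain as distinct clusters.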
Potential applications
Potential applications include object recognition, autonomous vehicles, and robotics.
The US military is considering infrared and other event cameras because of their lower power consumption and reduced heat generation.
See also
* Neuromorphic engineering
* Retinomorphic sensor
* Rolling shutter