Light field microscopy (LFM) is a scanning-free 3-dimensional (3D) microscopic imaging method based on the theory of the light field. This technique allows sub-second (~10 Hz) imaging of large volumes (~0.1 to 1 mm³) with ~1 μm spatial resolution under conditions of weak scattering and semi-transparency, a combination that has not been achieved by other methods. Just as in traditional
light field rendering, there are two steps for LFM imaging: light field capture and processing. In most setups, a microlens array is used to capture the light field. As for processing, it can be based on either of two representations of light propagation: the ray optics picture or the wave optics picture.
The Stanford University Computer Graphics Laboratory published its first prototype LFM in 2006 and has continued to develop the technique since then.
Light field generation
A light field is a collection of all the rays flowing through some free space, where each ray can be parameterized with four variables. In many cases, two 2D coordinates, denoted as $(s,t)$ and $(u,v)$, on two parallel planes with which the rays intersect are applied for parameterization. Accordingly, the intensity of the 4D light field can be described as a scalar function $L_F(s,t,u,v)$, where $F$ is the distance between the two planes.
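As a purely illustrative sketch (not part of any standard LFM software), a discretely sampled light field can be stored as a 4D array indexed by the two plane coordinates; the array name and the sampling counts below are arbitrary assumptions.

```python
import numpy as np

# Hypothetical sampling: 200 x 200 positions (s, t) and 15 x 15 directions (u, v).
n_s, n_t, n_u, n_v = 200, 200, 15, 15

# L[s, t, u, v] holds the radiance of the ray that crosses the first plane at
# (s, t) and the second plane, a distance F away, at (u, v).
L = np.zeros((n_s, n_t, n_u, n_v), dtype=np.float32)

# Radiance of one particular ray:
ray_radiance = L[120, 80, 7, 7]
```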
LFM can be built upon the traditional setup of a wide-field fluorescence microscope and a standard CCD camera or sCMOS camera.
A light field is generated by placing a microlens array at the intermediate image plane of the objective (or at the rear focal plane of an optional relay lens) and is further captured by placing the camera sensor at the rear focal plane of the microlenses. As a result, the coordinates $(s,t)$ of the microlenses conjugate with those on the object plane (if additional relay lenses are added, then on the front focal plane of the objective); the coordinates $(u,v)$ of the pixels behind each microlens conjugate with those on the aperture plane of the objective. For uniformity and convenience, we shall call the object plane that conjugates with the microlens array plane the original focus plane in this article. Correspondingly, $F$ is the focal length of the microlenses (i.e., the distance between the microlens array plane and the sensor plane).
In addition, the apertures and focal lengths of each lens, as well as the dimensions of the sensor and the microlens array, should all be chosen properly to ensure that adjacent subimages behind the corresponding microlenses neither overlap nor leave empty areas between them.
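One common way to satisfy this condition, following the f-number matching rule used in designs such as that of Levoy ''et al.'', is to match the f-number of the microlenses to the image-side f-number of the objective so that each subimage just fills one microlens pitch. The sketch below checks this for hypothetical example parameters; the specific numbers are illustrative assumptions, not recommended values.

```python
# Hypothetical design values; real choices depend on the specific objective,
# microlens array, and sensor.
na = 0.8       # numerical aperture of the objective
mag = 40.0     # objective magnification
f_ml = 3125.0  # microlens focal length, in micrometres
pitch = 125.0  # microlens pitch (aperture), in micrometres

# Image-side f-number of the objective (paraxial approximation M / (2 NA)).
n_obj_image_side = mag / (2.0 * na)

# f-number of each microlens.
n_microlens = f_ml / pitch

# Diameter of the subimage that each microlens casts onto the sensor.
subimage_diameter = f_ml / n_obj_image_side

# Subimages just touch (no overlap and no gaps) when the two f-numbers match,
# i.e. when the subimage diameter equals the microlens pitch.
print(f"objective image-side f-number: {n_obj_image_side:.1f}")
print(f"microlens f-number:            {n_microlens:.1f}")
print(f"subimage diameter {subimage_diameter:.0f} um vs pitch {pitch:.0f} um")
```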
Realization from the ray optics picture
This section mainly introduces the work of Levoy ''et al.'', 2006.
Perspective views from varied angles
Owing to the conjugated relationships mentioned above, any certain pixel $(u_1,v_1)$ behind a certain microlens $(s_1,t_1)$ corresponds to the ray passing through the point $(s_1,t_1)$ towards the direction $(u_1,v_1)$. Therefore, by extracting the pixel $(u_1,v_1)$ from all subimages and stitching them together, a perspective view from a certain angle is obtained: $L_F(s,t,u_1,v_1)$. In this scenario, spatial resolution is determined by the number of microlenses, while angular resolution is determined by the number of pixels behind each microlens.
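Assuming the raw sensor image has already been resampled into a 4D array L[s, t, u, v] (a hypothetical layout, not a fixed convention), extracting a perspective view reduces to a single slicing operation:

```python
import numpy as np

# Hypothetical resampled light field: 200 x 200 microlenses, 15 x 15 pixels each.
L = np.random.rand(200, 200, 15, 15).astype(np.float32)

# Take the same pixel position (u1, v1) behind every microlens and stitch the
# values together: the result is a perspective view from that particular angle.
u1, v1 = 3, 11
perspective_view = L[:, :, u1, v1]   # shape (200, 200): one sample per microlens

# Spatial resolution  -> number of microlenses          (200 x 200 here)
# Angular resolution  -> number of pixels per microlens (15 x 15 here)
```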
Tomographic views based on synthetic refocusing
Step 1: Digital refocusing
Synthetic focusing uses the captured light field to compute the photograph focused on any arbitrary section. By simply summing all the pixels in each subimage behind a microlens (equivalent to collecting all radiation coming from different angles that falls on the same position), the image is focused exactly on the plane that conjugates with the microlens array plane:

$E_F(s,t) = \dfrac{1}{F^2}\iint L_F(s,t,u,v)\,\cos^4\theta\,\mathrm{d}u\,\mathrm{d}v$,

where $\theta$ is the angle between the ray and the normal of the sensor plane, and $\theta = \arctan\!\left(\dfrac{\sqrt{u^2+v^2}}{F}\right)$ if the origin of the coordinate system of each subimage is located on the principal optic axis of the corresponding microlens. Now, a new function can be defined to absorb the effective projection factor $\cos^4\theta$ into the light field intensity $L_F(s,t,u,v)$ and obtain the actual radiance collection of each pixel:

$\bar{L}_F(s,t,u,v) \equiv L_F(s,t,u,v)\,\cos^4\theta$.
In order to focus on some other plane besides the front focal plane of the objective, say, the plane whose conjugated plane is $\alpha F$ away from the sensor plane, the conjugated plane can be moved from $F$ to $\alpha F$ and its light field reparameterized back to the original one at $F$:

$\bar{L}_{\alpha F}(s,t,u,v) = \bar{L}_F\!\left(u + \dfrac{s-u}{\alpha},\, v + \dfrac{t-v}{\alpha},\, u,\, v\right)$.

Thereby, the refocused photograph can be computed with the following formula:

$E_{\alpha F}(s,t) = \dfrac{1}{\alpha^2 F^2}\iint \bar{L}_F\!\left(u + \dfrac{s-u}{\alpha},\, v + \dfrac{t-v}{\alpha},\, u,\, v\right)\mathrm{d}u\,\mathrm{d}v$.
Consequently, a focal stack is generated to recapitulate the instant 3D imaging of the object space. Furthermore, tilted or even curved focal planes are also synthetically possible. In addition, any reconstructed 2D image focused at an arbitrary depth corresponds to a 2D slice of the 4D light field in the Fourier domain, where the algorithm complexity can be reduced from $O(n^4)$ to $O(n^2\log n)$.
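The following is a minimal shift-and-add sketch of the refocusing formula above, again assuming a hypothetical L[s, t, u, v] array layout; integer-pixel shifts are used for brevity, whereas a faithful implementation would interpolate and also account for the 1/α rescaling of the (s, t) coordinates.

```python
import numpy as np

def refocus(L, alpha, pixel_ratio=1.0):
    """Approximate the refocusing integral by shifting and averaging
    sub-aperture images.

    L           : light field array indexed as L[s, t, u, v] (hypothetical layout)
    alpha       : refocus parameter; alpha = 1 reproduces the original focus plane
    pixel_ratio : how many (s, t) samples one (u, v) step corresponds to
    """
    n_s, n_t, n_u, n_v = L.shape
    u_c, v_c = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((n_s, n_t), dtype=np.float64)
    for u in range(n_u):
        for v in range(n_v):
            # Sample offset in the (s, t) plane for this angular position; the
            # factor (1 - 1/alpha) comes from reparameterizing the light field
            # back to the original plane. The sign depends on how the (u, v)
            # axes are oriented relative to (s, t) in the resampled array.
            ds = (1.0 - 1.0 / alpha) * (u - u_c) * pixel_ratio
            dt = (1.0 - 1.0 / alpha) * (v - v_c) * pixel_ratio
            out += np.roll(L[:, :, u, v],
                           shift=(-int(round(ds)), -int(round(dt))),
                           axis=(0, 1))
    return out / (n_u * n_v)

# A focal stack is obtained by refocusing at a range of depths (placeholder data).
L = np.random.rand(200, 200, 15, 15).astype(np.float32)
focal_stack = np.stack([refocus(L, a) for a in np.linspace(0.8, 1.2, 21)])
```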
Step 2: Point spread function measurement
Due to diffraction and defocus, however, the focal stack $FS$ differs from the actual intensity distribution of voxels $Vol$, which is really desired. Instead, $FS$ is a convolution of $Vol$ and a point spread function (PSF):

$FS = PSF \otimes Vol$.
Thus, the 3D shape of the PSF has to be measured in order to subtract its effect and to obtain voxels' net intensity. This measurement can be easily done by placing a fluorescent bead at the center of the original focus plane and recording its light field, based on which the PSF's 3D shape is ascertained by synthetically focusing on varied depth. Given that the PSF is acquired with the same LFM setup and digital refocusing procedure as the focal stack, this measurement correctly reflects the angular range of rays captured by the objective (including any falloff in intensity); therefore, this synthetic PSF is actually free of noise and aberrations. The shape of the PSF can be considered identical everywhere within our desired
field of view (FOV); hence, multiple measurements can be avoided.
Step 3: 3D deconvolution
In the Fourier domain, the actual intensity of voxels has a very simple relation with the focal stack and the PSF:

$\mathcal{F}(Vol) = \dfrac{\mathcal{F}(FS)}{\mathcal{F}(PSF)}$,

where $\mathcal{F}$ is the operator of the Fourier transform. However, it may not be possible to directly solve the equation above, given the fact that the aperture is of limited size, resulting in the PSF being bandlimited
(i.e., its Fourier transform has zeros). Instead, an iterative algorithm called ''constrained iterative deconvolution'' in the spatial domain is much more practical here:
# $FS^{(k)} = PSF \otimes Vol^{(k)}$;
# $Vol^{(k+1)} = \max\!\left(Vol^{(k)} + \left(FS - FS^{(k)}\right),\ 0\right)$.
This idea is based on constrained gradient descent: the estimation of $Vol$ is improved iteratively by calculating the difference between the actual focal stack $FS$ and the estimated focal stack $FS^{(k)} = PSF \otimes Vol^{(k)}$, and correcting $Vol^{(k)}$ with the current difference ($Vol^{(k)}$ is constrained to be non-negative).
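Below is a minimal sketch of this constrained iterative update, assuming the focal stack and the PSF are available as 3D NumPy arrays with matching voxel sampling; the array names and the fixed iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def constrained_iterative_deconvolution(focal_stack, psf, n_iter=30):
    """Estimate the voxel intensities Vol from the focal stack FS.

    Implements the two-step update described in the text:
        FS_k      = PSF (*) Vol_k
        Vol_(k+1) = max(Vol_k + (FS - FS_k), 0)
    """
    psf = psf / psf.sum()                  # normalize so convolution preserves energy
    vol = np.clip(focal_stack, 0.0, None)  # non-negative initial estimate
    for _ in range(n_iter):
        estimated_stack = fftconvolve(vol, psf, mode="same")
        vol = np.clip(vol + (focal_stack - estimated_stack), 0.0, None)
    return vol

# Illustrative usage with random placeholder data (not a real measurement).
fs = np.random.rand(64, 64, 21)
psf = np.exp(-np.linspace(-2, 2, 9)[:, None, None] ** 2
             - np.linspace(-2, 2, 9)[None, :, None] ** 2
             - np.linspace(-2, 2, 9)[None, None, :] ** 2)
vol_estimate = constrained_iterative_deconvolution(fs, psf)
```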
Fourier Slice Photography
The formula of $E_{\alpha F}(s,t)$ can be rewritten by adopting the concept of the Fourier Projection-Slice Theorem.
[Ng, R. (2005). Fourier slice photography. In ''ACM SIGGRAPH 2005 Papers'' (pp. 735-744).] Because the photography operator $\mathcal{P}_\alpha$, which maps $\bar{L}_F$ to $E_{\alpha F}$, can be viewed as a shear followed by projection, the result should be proportional to a dilated 2D slice of the 4D Fourier transform of a light field. Precisely, a refocused image can be generated from the 4D Fourier spectrum of a light field by extracting a 2D slice, applying an inverse 2D transform, and scaling. Before the proof, we first introduce some operators:
# Integral Projection Operator: $\mathcal{I}[\bar{L}_F](s,t) \equiv \iint \bar{L}_F(s,t,u,v)\,\mathrm{d}u\,\mathrm{d}v$.
# Slicing operator: $\mathcal{S}[\bar{L}_F](s,t) \equiv \bar{L}_F(s,t,0,0)$.
# Photography Change of Basis: Let $\mathcal{B}_\alpha$ denote an operator for a change of basis of a 4-dimensional function so that $\mathcal{B}_\alpha[\bar{L}_F](\mathbf{x}) = \bar{L}_F\!\left(B_\alpha^{-1}\mathbf{x}\right)$, with $B_\alpha = \begin{pmatrix} \alpha & 0 & 1-\alpha & 0 \\ 0 & \alpha & 0 & 1-\alpha \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.
# Fourier Transform Operator: Let $\mathcal{F}^N$ denote the N-dimensional Fourier transform operator.
By these definitions, we can rewrite