Light field

The light field is a vector function that describes the amount of light flowing in every direction through every point in space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by the radiance.
Michael Faraday was the first to propose (in an 1846 lecture entitled "Thoughts on Ray Vibrations") that light should be interpreted as a field, much like the magnetic fields on which he had been working for several years. The phrase ''light field'' was coined by Andrey Gershun in a classic paper on the radiometric properties of light in three-dimensional space (1936).


The 5D plenoptic function

If the concept is restricted to geometric optics, that is, to incoherent light and to objects larger than the wavelength of light, then the fundamental carrier of light is a ray. The measure for the amount of light traveling along a ray is radiance, denoted by ''L'' and measured in watts ''(W)'' per steradian ''(sr)'' per meter squared ''(m²)''. The steradian is a measure of solid angle, and meters squared are used here as a measure of cross-sectional area.

The radiance along all such rays in a region of three-dimensional space illuminated by an unchanging arrangement of lights is called the plenoptic function (Adelson 1991). The plenoptic illumination function is an idealized function used in computer vision and computer graphics to express the image of a scene from any possible viewing position at any viewing angle at any point in time. It is never actually used in practice computationally, but it is conceptually useful in understanding other concepts in vision and graphics (Wong 2002). Since rays in space can be parameterized by three coordinates, ''x'', ''y'', and ''z'', and two angles ''θ'' and ''ϕ'', it is a five-dimensional function, that is, a function over a five-dimensional manifold equivalent to the product of 3D Euclidean space and the 2-sphere.

Like Adelson, Gershun defined the light field at each point in space as a 5D function. However, he treated it as an infinite collection of vectors, one per direction impinging on the point, with lengths proportional to their radiances. Integrating these vectors over any collection of lights, or over the entire sphere of directions, produces a single scalar value, the ''total irradiance'' at that point, and a resultant direction; a figure in Gershun's paper illustrates this calculation for the case of two light sources. In computer graphics, this vector-valued function of 3D space is called the ''vector irradiance field'' (Arvo, 1994). The vector direction at each point in the field can be interpreted as the orientation of a flat surface placed at that point that would be most brightly illuminated.
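
Gershun's integration can be sketched numerically. The following Python fragment is a minimal illustration, not a reference implementation: the radiance function L here is a hypothetical stand-in (two narrow Gaussian lobes approximating two point-like sources) for measured or modeled data.

    import numpy as np

    def light_vector(radiance, n_theta=64, n_phi=128):
        """Gershun's light vector at a point: the integral of radiance-weighted
        unit direction vectors over the full sphere of directions.
        `radiance(theta, phi)` is a user-supplied function."""
        d_theta = np.pi / n_theta
        d_phi = 2.0 * np.pi / n_phi
        total = np.zeros(3)
        for theta in np.linspace(0.0, np.pi, n_theta, endpoint=False):
            for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
                # Unit vector pointing toward the direction (theta, phi).
                d = np.array([np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)])
                # Solid-angle element on the sphere: sin(theta) dtheta dphi.
                total += radiance(theta, phi) * d * np.sin(theta) * d_theta * d_phi
        return total  # dot product with a surface normal gives the net flux

    # Two narrow lobes standing in for two point-like sources (illustrative).
    L = lambda th, ph: (5.0 * np.exp(-((th - 0.4)**2 + (ph - 1.0)**2) / 0.02)
                        + 3.0 * np.exp(-((th - 1.2)**2 + (ph - 4.0)**2) / 0.02))
    print(light_vector(L))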


Higher dimensionality

One can consider time, wavelength, and polarization angle as additional variables, yielding higher-dimensional functions.


The 4D light field

In a plenoptic function, if the region of interest contains a concave object (think of a cupped hand), then light leaving one point on the object may travel only a short distance before being blocked by another point on the object. No practical device could measure the function in such a region. However, if we restrict ourselves to locations outside the convex hull (think shrink-wrap) of the object, i.e. in free space, then we can measure the plenoptic function by taking many photos using a digital camera. Moreover, in this case the function contains redundant information, because the radiance along a ray remains constant from point to point along its length. In fact, the redundant information is exactly one dimension, leaving us with a four-dimensional function (that is, a function of points in a particular four-dimensional manifold). Parry Moon dubbed this function the ''photic field'' (1981), while researchers in computer graphics call it the ''4D light field'' (Levoy 1996) or ''Lumigraph'' (Gortler 1996). Formally, the 4D light field is defined as radiance along rays in empty space.

The set of rays in a light field can be parameterized in a variety of ways. Of these, the most common is the two-plane parameterization. While this parameterization cannot represent all rays, for example rays parallel to the two planes if the planes are parallel to each other, it has the advantage of relating closely to the analytic geometry of perspective imaging. Indeed, a simple way to think about a two-plane light field is as a collection of perspective images of the ''st'' plane (and any objects that may lie astride or beyond it), each taken from an observer position on the ''uv'' plane. A light field parameterized this way is sometimes called a ''light slab''.
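
Under the two-plane parameterization, a light slab is naturally stored as a 4D array, and each perspective image is a 2D slice of it. The sketch below is purely illustrative; the array sizes and random samples are placeholders, not any particular dataset.

    import numpy as np

    # A light slab stored as a 4D array indexed L[u, v, s, t]: the value is
    # the radiance along the single ray from point (u, v) on the uv plane
    # to point (s, t) on the st plane.
    n_u, n_v, n_s, n_t = 16, 16, 64, 64
    L = np.random.rand(n_u, n_v, n_s, n_t)  # placeholder radiance samples

    def perspective_view(L, iu, iv):
        """The slice under one observer position (iu, iv) on the uv plane is
        a perspective image of the st plane, per the interpretation above."""
        return L[iu, iv]

    print(perspective_view(L, 8, 8).shape)  # (64, 64)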


Sound analog

The analog of the 4D light field for sound is the ''sound field'' or ''wave field,'' as in wave field synthesis, and the corresponding parametrization is the Kirchhoff-Helmholtz integral, which states that, in the absence of obstacles, a sound field over time is given by the pressure on a plane. Thus this is two dimensions of information at any point in time, and over time a 3D field. This two-dimensionality, compared with the apparent four-dimensionality of light, is because light travels in rays (0D at a point in time, 1D over time), while by Huygens–Fresnel principle, a sound wave front can be modeled as spherical waves (2D at a point in time, 3D over time): light moves in a single direction (2D of information), while sound simply expands in every direction. However, light travelling in non-vacuous media may scatter in a similar fashion, and the irreversibility or information lost in the scattering is discernible in the apparent loss of a system dimension.


Image Refocusing

Because the light field provides spatial and angular information, we can alter the position of focal planes after exposure by image post-processing, which is often referred to as refocusing. The principle of refocusing is to obtain conventional 2D photographs from a light field through an integral transform. The transform takes a light field as its input and generates a photograph focused on a given plane. Assume we use L_F(s,t,u,v) to represent a 4D light field that records light rays traveling from position (u,v) on the first plane to position (s,t) on the second plane, where F is the distance between the two planes. A 2D photograph at any depth \alpha F can be obtained from the following integral transform (Ng 2005):

\mathcal{P}_{\alpha}\left[L_{F}\right](s,t)=\iint L_F(u(1-1/\alpha)+s/\alpha,\, v(1-1/\alpha)+t/\alpha,\, u,\, v)\,du\,dv,

or more concisely,

\mathcal{P}_{\alpha}\left[L_{F}\right](\boldsymbol{s})=\frac{1}{\alpha^{2}F^{2}} \int L_{F}\left(\boldsymbol{u}\left(1-\frac{1}{\alpha}\right)+\frac{\boldsymbol{s}}{\alpha}, \boldsymbol{u}\right) d\boldsymbol{u},

where \boldsymbol{s}=(s,t), \boldsymbol{u}=(u,v), and \mathcal{P}_{\alpha}\left[\cdot\right] is often referred to as the photography operator.

In practice, this formula cannot be used directly, because a plenoptic camera usually captures discrete samples of the light field L_F(s,t,u,v); hence resampling (or interpolation) is needed to compute L_{F}\left(\boldsymbol{u}\left(1-\frac{1}{\alpha}\right)+\frac{\boldsymbol{s}}{\alpha}, \boldsymbol{u}\right). Another problem is high computational complexity. To compute an N\times N 2D photograph from an N\times N\times N\times N 4D light field, the complexity of the formula is O(N^4).
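
In discrete form, the integral becomes a "shift-and-add" over aperture samples. Below is a minimal NumPy/SciPy sketch, assuming a light field array indexed L[u, v, s, t] with all coordinates in pixel units; it omits the 1/α magnification of the (s, t) coordinates and the 1/(α²F²) constant, so it is a simplified approximation of the photography operator rather than a faithful implementation.

    import numpy as np
    from scipy.ndimage import shift as subpixel_shift

    def refocus(L, alpha):
        # L is a 4D light field indexed [u, v, s, t] (aperture, then image axes).
        n_u, n_v, n_s, n_t = L.shape
        photo = np.zeros((n_s, n_t))
        for iu in range(n_u):
            for iv in range(n_v):
                # Center aperture coordinates so u = v = 0 is the optical axis.
                u = iu - (n_u - 1) / 2.0
                v = iv - (n_v - 1) / 2.0
                # Approximate L_F(u(1-1/alpha)+s, v(1-1/alpha)+t, u, v) by a
                # subpixel translation of this aperture sample's sub-image.
                c = 1.0 - 1.0 / alpha
                photo += subpixel_shift(L[iu, iv], (-u * c, -v * c), order=1)
        return photo / (n_u * n_v)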


Fourier Slice Photography

One way to reduce the computational complexity is to adopt the Fourier slice theorem (projection-slice theorem): the photography operator \mathcal{P}_{\alpha}\left[\cdot\right] can be viewed as a shear followed by a projection, and the result is proportional to a dilated 2D slice of the 4D Fourier transform of the light field. More precisely, a refocused image can be generated from the 4D Fourier spectrum of the light field by extracting a 2D slice, applying an inverse 2D transform, and scaling. The asymptotic complexity of the algorithm is O(N^2 \log N). For more information, see Ng (2005).
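
The frequency-domain route can be sketched as follows, again assuming an array indexed L[s, t, u, v]. This simplified version samples the dilated 2D slice from the centered 4D spectrum with linear interpolation and omits the theorem's normalization constants; a careful implementation would also window the spectrum to control interpolation artifacts.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def fourier_slice_refocus(L, alpha):
        """Refocus via the Fourier slice theorem: the 2D spectrum of the
        photograph at depth alpha*F is a slice of the 4D spectrum of
        L[s, t, u, v] along (a*ks, a*kt, (1-a)*ks, (1-a)*kt)."""
        n_s, n_t, n_u, n_v = L.shape
        G = np.fft.fftshift(np.fft.fftn(L))      # centered 4D spectrum
        ks = np.arange(n_s) - n_s // 2           # centered frequency indices
        kt = np.arange(n_t) - n_t // 2
        KS, KT = np.meshgrid(ks, kt, indexing="ij")
        # Array-index coordinates of the dilated 2D slice inside the 4D grid.
        coords = np.stack([alpha * KS + n_s // 2,
                           alpha * KT + n_t // 2,
                           (1 - alpha) * KS + n_u // 2,
                           (1 - alpha) * KT + n_v // 2])
        slice_re = map_coordinates(G.real, coords, order=1, mode="constant")
        slice_im = map_coordinates(G.imag, coords, order=1, mode="constant")
        spec2d = slice_re + 1j * slice_im
        # Inverse 2D transform of the extracted slice yields the photograph.
        return np.real(np.fft.ifft2(np.fft.ifftshift(spec2d)))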


Discrete Focal Stack Transform

Another way to efficiently compute 2D photographs is to adopt the discrete focal stack transform (DFST). More precisely, DFST is designed to generate a collection of refocused 2D photographs, a so-called focal stack. This method can be implemented by the fast fractional Fourier transform (FrFT). The discrete photography operator \mathcal{P}_{\alpha}\left[\cdot\right] is defined as follows for a light field L_F(\boldsymbol{s},\boldsymbol{u}) sampled on a 4D grid \boldsymbol{s}=\Delta s\,\tilde{\boldsymbol{s}}, \tilde{\boldsymbol{s}}=-\boldsymbol{n}_{s},\dots,\boldsymbol{n}_{s}, \boldsymbol{u}=\Delta u\,\tilde{\boldsymbol{u}}, \tilde{\boldsymbol{u}}=-\boldsymbol{n}_{u},\dots,\boldsymbol{n}_{u}:

\mathcal{P}_{\alpha}[L](\boldsymbol{s})=\sum_{\tilde{\boldsymbol{u}}=-\boldsymbol{n}_{u}}^{\boldsymbol{n}_{u}} L(\boldsymbol{u}q+\boldsymbol{s}, \boldsymbol{u})\,\Delta\boldsymbol{u}, \quad \Delta\boldsymbol{u}=\Delta u\,\Delta v, \quad q=\left(1-\frac{1}{\alpha}\right).

Because (\boldsymbol{u}q+\boldsymbol{s}, \boldsymbol{u}) is usually not on the 4D grid, DFST adopts trigonometric interpolation to compute the non-grid values. The whole algorithm consists of six steps:

# Sample the light field L_F(\boldsymbol{s},\boldsymbol{u}) with the sampling periods \Delta s and \Delta u to get the discretized light field L^d_F(\boldsymbol{s},\boldsymbol{u}).
# Pad L^d_F(\boldsymbol{s},\boldsymbol{u}) with zeros such that the signal length is sufficient for the FrFT without aliasing.
# For every \boldsymbol{u}, compute the discrete Fourier transform of L^d_F(\boldsymbol{s},\boldsymbol{u}), and get the result R1.
# For every focal length \alpha F, compute the fractional Fourier transform of R1, where the order of the transform depends on \alpha, and get the result R2.
# Compute the inverse discrete Fourier transform of R2.
# Remove the marginal pixels of R2 so that each 2D photograph has the size (2n_s+1)\times(2n_t+1).
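
The focal stack itself can also be emulated directly, at O(N⁴) per slice, by sweeping α through a set of candidate depths with any single-plane refocusing routine, such as either sketch above; the FrFT-based steps exist precisely to amortize this sweep. This trivial wrapper is illustrative only and is not the DFST algorithm:

    import numpy as np

    def focal_stack(L, alphas, refocus_fn):
        # One refocused photograph per candidate depth alpha*F, using any
        # single-plane refocusing routine passed in as refocus_fn.
        return np.stack([refocus_fn(L, a) for a in alphas])

    # e.g., with the shift-and-add sketch above:
    # stack = focal_stack(L, np.linspace(0.8, 1.2, 9), refocus)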


Ways to create light fields

Light fields are a fundamental representation for light. As such, there are as many ways of creating light fields as there are computer programs capable of creating images or instruments capable of capturing them. In computer graphics, light fields are typically produced either by rendering a 3D model or by photographing a real scene. In either case, to produce a light field, views must be obtained for a large collection of viewpoints. Depending on the parameterization employed, this collection will typically span some portion of a line, circle, plane, sphere, or other shape, although unstructured collections of viewpoints are also possible (Buehler 2001). Devices for capturing light fields photographically may include a moving handheld camera or a robotically controlled camera (Levoy 2002), an arc of cameras (as in the bullet time effect used in ''The Matrix''), a dense array of cameras (Kanade 1998; Yang 2002; Wilburn 2005), handheld light-field cameras (Ng 2005; Georgiev 2006; Marwah 2013), microscopes (Levoy 2006), or other optical systems (Bolles 1987).

How many images should be in a light field? The largest known light field (of Michelangelo's statue of ''Night'') contains 24,000 1.3-megapixel images. At a deeper level, the answer depends on the application. For light field rendering (see the Applications section below), if you want to walk completely around an opaque object, then of course you need to photograph its back side. Less obviously, if you want to walk close to the object, and the object lies astride the ''st'' plane, then you need images taken at finely spaced positions on the ''uv'' plane (in the two-plane parameterization described above), which is now behind you, and these images need to have high spatial resolution.

The number and arrangement of images in a light field, and the resolution of each image, are together called the "sampling" of the 4D light field. Analyses of light field sampling have been undertaken by many researchers; a good starting point is Chai (2000). Also of interest are Durand (2005) for the effects of occlusion, Ramamoorthi (2006) for the effects of lighting and reflection, and Ng (2005) and Zwicker (2006) for applications to plenoptic cameras and 3D displays, respectively.


Applications

''Computational imaging'' refers to any image formation method that involves a digital computer. Many of these methods operate at visible wavelengths, and many of those produce light fields. As a result, listing all applications of light fields would require surveying all uses of computational imaging in art, science, engineering, and medicine. In computer graphics, some selected applications are:

* Illumination engineering: Gershun's reason for studying the light field was to derive (in closed form if possible) the illumination patterns that would be observed on surfaces due to light sources of various shapes positioned above these surfaces. A more modern study is (Ashdown 1993). The branch of optics devoted to illumination engineering is nonimaging optics (Chaves 2015; Winston 2005). It extensively uses the concept of flow lines (Gershun's flux lines) and vector flux (Gershun's light vector). However, the light field (in this case the positions and directions defining the light rays) is commonly described in terms of phase space and Hamiltonian optics.
* Light field rendering: By extracting appropriate 2D slices from the 4D light field of a scene, one can produce novel views of the scene (Levoy 1996; Gortler 1996). Depending on the parameterization of the light field and slices, these views might be perspective, orthographic, crossed-slit (Zomet 2003), general linear cameras (Yu and McMillan 2004), multi-perspective (Rademacher 1998), or another type of projection. Light field rendering is one form of image-based rendering.
* Synthetic aperture photography: By integrating an appropriate 4D subset of the samples in a light field, one can approximate the view that would be captured by a camera having a finite (i.e., non-pinhole) aperture. Such a view has a finite depth of field. By shearing or warping the light field before performing this integration, one can focus on different fronto-parallel (Isaksen 2000) or oblique (Vaish 2005) planes in the scene. If a digital camera can capture the light field (Ng 2005), its photographs can be refocused after they are taken.
* 3D display: By presenting a light field using technology that maps each sample to the appropriate ray in physical space, one obtains an autostereoscopic visual effect akin to viewing the original scene. Non-digital technologies for doing this include integral photography, parallax panoramagrams, and holography; digital technologies include placing an array of lenslets over a high-resolution display screen, or projecting the imagery onto an array of lenslets using an array of video projectors. If the latter is combined with an array of video cameras, one can capture and display a time-varying light field. This essentially constitutes a 3D television system (Javidi 2002; Matusik 2004). Image generation and predistortion of synthetic imagery for holographic stereograms is one of the earliest examples of computed light fields, anticipating and later motivating the geometry used in Levoy and Hanrahan's work (Halle 1991, 1994). Modern approaches to light field display explore co-designs of optical elements and compressive computation to achieve higher resolutions, increased contrast, wider fields of view, and other benefits (Wetzstein 2012, 2011; Lanman 2011, 2010).
* Brain imaging: Neural activity can be recorded optically by genetically encoding neurons with reversible fluorescent markers, e.g. GCaMP, that indicate the presence of calcium ions in real time. Since light field microscopy captures full volume information in a single frame, it is possible to monitor neural activity in many individual neurons randomly distributed in a large volume at video framerate (Grosenick, 2009, 2017; Perez, 2015). A quantitative measurement of neural activity can even be done despite optical aberrations in brain tissue and without reconstructing a volume image (Pegard, 2016), and can be used to monitor activity in thousands of neurons in a behaving mammal (Grosenick, 2017).
* Generalized Scene Reconstruction (GSR): GSR is a method of 3D reconstruction from multiple images wherein a scene model representing a generalized light field and a relightable matter field is created (Leffingwell, 2018). The light field represents light flowing in every direction through every point in the scene. The matter field represents the light interaction properties of matter occupying every point in the scene. GSR can be performed using approaches including Neural Radiance Fields (NeRFs) (Mildenhall, 2020) and Inverse Light Transport (Leffingwell, 2018).
* Glare reduction: Glare arises due to multiple scattering of light inside the camera's body and lens optics and reduces image contrast. While glare has been analyzed in 2D image space (Talvala 2007), it is useful to identify it as a 4D ray-space phenomenon (Raskar 2008). By statistically analyzing the ray space inside a camera, one can classify and remove glare artifacts. In ray space, glare behaves as high-frequency noise and can be reduced by outlier rejection, as illustrated in the sketch below. Such analysis can be performed by capturing the light field inside the camera, but it results in the loss of spatial resolution. Uniform and non-uniform ray sampling can be used to reduce glare without significantly compromising image resolution (Raskar 2008).
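
The outlier-rejection idea can be illustrated with a toy percentile filter over each pixel's bundle of rays. This is an illustrative simplification, not the statistical classifier of Raskar (2008); the percentile bounds are arbitrary assumptions.

    import numpy as np

    def deglare(L, lo=10, hi=90):
        """Per-pixel outlier rejection in ray space: for each (s, t), the
        aperture samples L[:, :, s, t] are one pixel's bundle of rays; glare
        appears as high-frequency outliers among them, so we average only
        the samples inside a percentile band. L is indexed [u, v, s, t]."""
        n_u, n_v, n_s, n_t = L.shape
        rays = L.reshape(n_u * n_v, n_s, n_t)      # all ray samples per pixel
        lo_v = np.percentile(rays, lo, axis=0)     # per-pixel lower bound
        hi_v = np.percentile(rays, hi, axis=0)     # per-pixel upper bound
        mask = (rays >= lo_v) & (rays <= hi_v)     # keep inliers only
        return (rays * mask).sum(axis=0) / np.maximum(mask.sum(axis=0), 1)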


See also

* Light-field camera
* Angle-sensitive pixel
* Lytro
* Reflectance paper
* Raytrix
* Dual photography


References


Theory

* Adelson, E.H., Bergen, J.R. (1991)
"The Plenoptic Function and the Elements of Early Vision"
In ''Computational Models of Visual Processing'', M. Landy and J.A. Movshon, eds., MIT Press, Cambridge, 1991, pp. 3–20. * Arvo, J. (1994)
"The Irradiance Jacobian for Partially Occluded Polyhedral Sources"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 335–342. * Bolles, R.C., Baker, H. H., Marimont, D.H. (1987)
"Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion"
''International Journal of Computer Vision'', Vol. 1, No. 1, 1987, Kluwer Academic Publishers, pp. 7–55. * Faraday, M. (1846)
"Thoughts on Ray Vibrations"
''Philosophical Magazine'', S.3, Vol XXVIII, N188, May 1846. * Gershun, A. (1936). "The Light Field", Moscow, 1936. Translated by P. Moon and G. Timoshenko in ''Journal of Mathematics and Physics'', Vol. XVIII, MIT, 1939, pp. 51–151. * Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M. (1996)
"The Lumigraph"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 43–54. * Levoy, M., Hanrahan, P. (1996)
"Light Field Rendering"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 31–42. * Moon, P., Spencer, D.E. (1981). ''The Photic Field'', MIT Press. * Wong, T.T., Fu, C.W., Heng, P.A., Leung C.S. (2002)
"The Plenoptic-Illumination Function"
''IEEE Trans. Multimedia'', Vol. 4, No. 3, pp. 361–371.


Analysis

* Wetzstein, G., Ihrke, I., Heidrich, W. (2013)
"On Plenoptic Multiplexing and Reconstruction"
''International Journal of Computer Vision (IJCV)'', Volume 101, Issue 2, pp. 384–400. * Ramamoorthi, R., Mahajan, D., Belhumeur, P. (2006)
"A First-Order Analysis of Lighting, Shading, and Shadows"
''ACM TOG''. * Zwicker, M., Matusik, W., Durand, F., Pfister, H. (2006)
"Antialiasing for Automultiscopic 3D Displays"
''Eurographics Symposium on Rendering, 2006''. * Ng, R. (2005)
"Fourier Slice Photography"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 735–744. * Durand, F., Holzschuch, N., Soler, C., Chan, E., Sillion, F. X. (2005)
"A Frequency Analysis of Light Transport"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 1115–1126. * Chai, J.-X., Tong, X., Chan, S.-C., Shum, H. (2000)
"Plenoptic Sampling"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 307–318. * Halle, M. (1994)
"Holographic Stereograms as Discrete imaging systems"
in ''SPIE Proc. Vol. #2176: Practical Holography VIII'', S.A. Benton, ed., pp. 73–84. * Yu, J., McMillan, L. (2004)
"General Linear Cameras"
''Proc. ECCV 2004'', Lecture Notes in Computer Science, pp. 14–27.


Light field cameras

* Marwah, K., Wetzstein, G., Bando, Y., Raskar, R. (2013)
"Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections"
''ACM Transactions on Graphics (SIGGRAPH)''. * Liang, C.K., Lin, T.H., Wong, B.Y., Liu, C., Chen, H. H. (2008)
"Programmable Aperture Photography:Multiplexed Light Field Acquisition"
''Proc. ACM SIGGRAPH''. * Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J. (2007)
"Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing"
''Proc. ACM SIGGRAPH''. * Georgiev, T., Zheng, C., Nayar, S., Curless, B., Salesin, D., Intwala, C. (2006)
"Spatio-angular Resolution Trade-offs in Integral Photography"
''Proc. EGSR 2006''. * Kanade, T., Saito, H., Vedula, S. (1998)
"The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams"
Tech report CMU-RI-TR-98-34, December 1998. * Levoy, M. (2002)
Stanford Spherical Gantry
* Levoy, M., Ng, R., Adams, A., Footer, M., Horowitz, M. (2006)
"Light Field Microscopy"
''ACM Transactions on Graphics'' (Proc. SIGGRAPH), Vol. 25, No. 3. * Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P. (2005)
"Light Field Photography with a Hand-Held Plenoptic Camera"
''Stanford Tech Report'' CTSR 2005-02, April, 2005. * Wilburn, B., Joshi, N., Vaish, V., Talvala, E., Antunez, E., Barth, A., Adams, A., Levoy, M., Horowitz, M. (2005)
"High Performance Imaging Using Large Camera Arrays"
''ACM Transactions on Graphics'' (Proc. SIGGRAPH), Vol. 24, No. 3, pp. 765–776. * Yang, J.C., Everett, M., Buehler, C., McMillan, L. (2002)
"A Real-Time Distributed Light Field Camera"
''Proc. Eurographics Rendering Workshop 2002''.
"The CAFADIS camera"


Light field displays

* Wetzstein, G., Lanman, D., Hirsch, M., Raskar, R. (2012)
"Tensor Displays: Compressive Light Field Display using Multilayer Displays with Directional Backlighting"
'' ACM Transactions on Graphics (SIGGRAPH)'' * Wetzstein, G., Lanman, D., Heidrich, W., Raskar, R. (2011)
"Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays"
'' ACM Transactions on Graphics (SIGGRAPH)'' * Lanman, D., Wetzstein, G., Hirsch, M., Heidrich, W., Raskar, R. (2011)
"Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs"
''ACM Transactions on Graphics (SIGGRAPH Asia)'' * Lanman, D., Hirsch, M. Kim, Y., Raskar, R. (2010)
"HR3D: Glasses-free 3D Display using Dual-stacked LCDs High-Rank 3D Display using Content-Adaptive Parallax Barriers"
'' ACM Transactions on Graphics (SIGGRAPH Asia)'' * Matusik, W., Pfister, H. (2004)
"3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes"
''Proc. ACM SIGGRAPH'', ACM Press. * Javidi, B., Okano, F., eds. (2002).
''Three-Dimensional Television, Video and Display Technologies''
Springer-Verlag. * Klug, M., Burnett, T., Fancello, A., Heath, A., Gardner, K., O'Connell, S., Newswanger, C. (2013)
"A Scalable, Collaborative, Interactive Light-field Display System"
''SID Symposium Digest of Technical Papers'' * Fattal, D., Peng, Z., Tran, T., Vo, S., Fiorentino, M., Brug, J., Beausoleil, R. (2013)
"A multi-directional backlight for a wide-angle, glasses-free three-dimensional display"
''Nature'', 495, pp. 348–351.


Light field archives


"The Stanford Light Field Archive"

"UCSD/MERL Light Field Repository"

"The HCI Light Field Benchmark"

"Synthetic Light Field Archive"


Applications

* Grosenick, L., Anderson, T., Smith, S. J. (2009)
"Elastic Source Selection for in vivo imaging of neuronal ensembles."
''From Nano to Macro'', 6th IEEE International Symposium on Biomedical Imaging (2009), pp. 1263–1266. * Grosenick, L., Broxton, M., Kim, C. K., Liston, C., Poole, B., Yang, S., Andalman, A., Scharff, E., Cohen, N., Yizhar, O., Ramakrishnan, C., Ganguli, S., Suppes, P., Levoy, M., Deisseroth, K. (2017)
"Identification of cellular-activity dynamics across large tissue volumes in the mammalian brain"
bioRxiv 132688; doi: https://doi.org/10.1101/132688. * Heide, F., Wetzstein, G., Raskar, R., Heidrich, W. (2013)
"Adaptive Image Synthesis for Compressive Displays"
''ACM Transactions on Graphics (SIGGRAPH)''. * Wetzstein, G., Raskar, R., Heidrich, W. (2011)
"Hand-Held Schlieren Photography with Light Field Probes"
IEEE International Conference on Computational Photography (ICCP) * Pérez, F., Marichal, J. G., Rodriguez, J.M. (2008)
"The Discrete Focal Stack Transform"
''Proc. EUSIPCO'' * Raskar, R., Agrawal, A., Wilson, C., Veeraraghavan, A. (2008)

''Proc. ACM SIGGRAPH.'' * Talvala, E-V., Adams, A., Horowitz, M., Levoy, M. (2007)
"Veiling Glare in High Dynamic Range Imaging"
''Proc. ACM SIGGRAPH.'' * Halle, M., Benton, S., Klug, M., Underkoffler, J. (1991)
"The UltraGram: A Generalized Holographic Stereogram"
''SPIE Vol. 1461, Practical Holography V'', S.A. Benton, ed., pp. 142–155. * Zomet, A., Feldman, D., Peleg, S., Weinshall, D. (2003)
"Mosaicing New Views: The Crossed-Slits Projection"
''IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)'', Vol. 25, No. 6, June 2003, pp. 741–754. * Vaish, V., Garg, G., Talvala, E., Antunez, E., Wilburn, B., Horowitz, M., Levoy, M. (2005)
"Synthetic Aperture Focusing using a Shear-Warp Factorization of the Viewing Transform"
''Proc. Workshop on Advanced 3D Imaging for Safety and Security'', in conjunction with CVPR 2005. * Bedard, N., Shope, T., Hoberman, A., Haralam, M. A., Shaikh, N., Kovačević, J., Balram, N., Tošić, I. (2016)
"Light field otoscope design for 3D in vivo imaging of the middle ear"
''Biomedical Optics Express'', 8(1), pp. 260–272. * Karygianni, S., Martinello, M., Spinoulas, L., Frossard, P., Tosic, I. (2018)
"Automated eardrum registration from light-field data"
''IEEE International Conference on Image Processing (ICIP)''. * Rademacher, P., Bishop, G. (1998)
"Multiple-Center-of-Projection Images"
''Proc. ACM SIGGRAPH'', ACM Press. * Isaksen, A., McMillan, L., Gortler, S.J. (2000)
"Dynamically Reparameterized Light Fields"
''Proc. ACM SIGGRAPH'', ACM Press, pp. 297–306. * Buehler, C., Bosse, M., McMillan, L., Gortler, S., Cohen, M. (2001)
"Unstructured Lumigraph Rendering"
''Proc. ACM SIGGRAPH'', ACM Press. * Ashdown, I. (1993)

''Journal of the Illuminating Engineering Society'', Vol. 22, No. 1, Winter, 1993, pp. 163–180. * Chaves, J. (2015
"Introduction to Nonimaging Optics, Second Edition"
CRC Press * Winston, R., Miñano, J.C., Benitez, P.G., Shatz, N., Bortz, J.C., (2005
"Nonimaging Optics"
Academic Press. * Pégard, N. C., Liu, H.Y., Antipa, N., Gerlock, M., Adesnik, H., Waller, L. (2016) "Compressive light-field microscopy for 3D neural activity recording", ''Optica'', Vol. 3, No. 5, pp. 517–524. * Leffingwell, J., Meagher, D., Mahmud, K., Ackerson, S. (2018)
"Generalized Scene Reconstruction."
arXiv:1803.08496v3 [cs.CV], pp. 1–13. * Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., Ng, R. (2020)
"NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis"
''Computer Vision – ECCV 2020'', pp. 405–421. * Perez, C. C., Lauri, A., Symvoulidis, P., Cappetta, M., Erdmann, A., Westmeyer, G. G. (2015). "Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera", ''Journal of Biomedical Optics'', Vol. 20, No. 9, 096009. doi:10.1117/1.JBO.20.9.096009. * León, K., Galvis, L., Arguello, H. (2016)
"Reconstruction of multispectral light field (5d plenoptic function) based on compressive sensing with colored coded apertures from 2D projections"
''Revista Facultad de Ingeniería Universidad de Antioquia'', 80, pp. 131.