Ray tracing (graphics)

In 3D computer graphics, ray tracing is a technique for modeling light transport for use in a wide variety of rendering algorithms for generating digital images. On a spectrum of computational cost and visual fidelity, ray tracing-based rendering techniques, such as ray casting, recursive ray tracing, distribution ray tracing, photon mapping and path tracing, are generally slower and higher fidelity than scanline rendering methods. Thus, ray tracing was first deployed in applications where taking a relatively long time to render could be tolerated, such as in still computer-generated images, and film and television visual effects (VFX), but was less suited to real-time applications such as video games, where speed is critical in rendering each frame. Since 2018, however, hardware acceleration for real-time ray tracing has become standard on new commercial graphics cards, and graphics APIs have followed suit, allowing developers to use hybrid ray tracing and rasterization-based rendering in games and other real-time applications with a lesser hit to frame render times.

Ray tracing is capable of simulating a variety of optical effects, such as reflection, refraction, soft shadows, scattering, depth of field, motion blur, caustics, ambient occlusion and dispersion phenomena (such as chromatic aberration). It can also be used to trace the path of sound waves in a similar fashion to light waves, making it a viable option for more immersive sound design in video games by rendering realistic reverberation and echoes. In fact, any physical wave or particle phenomenon with approximately linear motion can be simulated with ray tracing.

Ray tracing-based rendering techniques that involve sampling light over a domain generate image noise artifacts that can be addressed by tracing a very large number of rays or using denoising techniques.


History

The idea of ray tracing comes from as early as the 16th century when it was described by Albrecht Dürer, who is credited with its invention. In ''Four Books on Measurement'', he described an apparatus called a ''Dürer's door'' using a thread attached to the end of a stylus that an assistant moves along the contours of the object to draw. The thread passes through the door's frame and then through a hook on the wall. The thread forms a ray and the hook acts as the center of projection and corresponds to the camera position in ray tracing.

Using a computer for ray tracing to generate shaded pictures was first accomplished by Arthur Appel in 1968. Appel used ray tracing for primary visibility (determining the closest surface to the camera at each image point), and traced secondary rays to the light source from each point being shaded to determine whether the point was in shadow or not.

Later, in 1971, Goldstein and Nagel of MAGI (Mathematical Applications Group, Inc.) published "3-D Visual Simulation", wherein ray tracing was used to make shaded pictures of solids by simulating the photographic process in reverse. They cast a ray through each picture element (pixel) in the screen into the scene to identify the visible surface. The first surface intersected by the ray was the visible one. This non-recursive ray tracing-based rendering algorithm is today called "ray casting". At the ray-surface intersection point found, they computed the surface normal and, knowing the position of the light source, computed the brightness of the pixel on the screen. Their publication describes a short (30 second) film "made using the University of Maryland's display hardware outfitted with a 16mm camera. The film showed the helicopter and a simple ground level gun emplacement. The helicopter was programmed to undergo a series of maneuvers including turns, take-offs, and landings, etc., until it eventually is shot down and crashed." A CDC 6600 computer was used. MAGI produced an animation video called ''MAGI/SynthaVision Sampler'' in 1974.

Another early instance of ray casting came in 1976, when Scott Roth created a flip book animation in Bob Sproull's computer graphics course at Caltech. Roth's computer program noted an edge point at a pixel location if the ray intersected a bounded plane different from that of its neighbors. Of course, a ray could intersect multiple planes in space, but only the surface point closest to the camera was noted as visible. The edges are jagged because only a coarse resolution was practical with the computing power of the time-sharing DEC PDP-10 used. The "terminal" was a Tektronix storage-tube display for text and graphics. Attached to the display was a printer which would create an image of the display on rolling thermal paper. Roth extended the framework, introduced the term ''ray casting'' in the context of computer graphics and solid modeling, and later published his work while at GM Research Labs.

Turner Whitted was the first to show recursive ray tracing for mirror reflection and for refraction through translucent objects, with an angle determined by the solid's index of refraction, and to use ray tracing for anti-aliasing. Whitted also showed ray-traced shadows. He produced a recursive ray-traced film called ''The Compleat Angler'' in 1979 while an engineer at Bell Labs. Whitted's deeply recursive ray tracing algorithm reframed rendering from being primarily a matter of surface visibility determination to being a matter of light transport. His paper inspired a series of subsequent work by others that included distribution ray tracing and finally unbiased path tracing, which provides the rendering equation framework that has allowed computer-generated imagery to be faithful to reality.

For decades, global illumination in major films using computer-generated imagery was faked with additional lights. Ray tracing-based rendering eventually changed that by enabling physically-based light transport. Early feature films rendered entirely using path tracing include ''Monster House'' (2006), ''Cloudy with a Chance of Meatballs'' (2009), and ''Monsters University'' (2013).


Algorithm overview

Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.

Scenes in ray tracing are described mathematically by a programmer or by a visual artist (normally using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.

Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.

It may at first seem counterintuitive or "backward" to send rays ''away'' from the camera, rather than ''into'' it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded. Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
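The per-pixel loop described above can be sketched in a few lines of Python. This is an illustrative, minimal sketch, not a reference implementation: the single hard-coded sphere, the light direction, the virtual screen at z = -1 and the diffuse shading term are all assumptions chosen only to make the example self-contained and runnable.

```python
import math

# Minimal sketch of the eye-ray loop: one ray per pixel, an intersection
# test against a single hard-coded sphere, and a simple diffuse shading
# term, printed as ASCII art. Illustrative only.

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def normalize(a):
    l = math.sqrt(dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def hit_sphere(origin, direction, center, radius):
    # Solve ||origin + t*direction - center||^2 = radius^2 for the nearest t > 0.
    v = (origin[0]-center[0], origin[1]-center[1], origin[2]-center[2])
    b = 2.0 * dot(v, direction)
    c = dot(v, v) - radius*radius
    disc = b*b - 4.0*c            # direction is unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width=60, height=30):
    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light_dir = normalize((1.0, 1.0, 1.0))
    shades = " .:-=+*#%@"
    for y in range(height):
        row = ""
        for x in range(width):
            # Map the pixel to a point on a virtual screen at z = -1.
            px = (2*(x + 0.5)/width - 1) * width/height
            py = 1 - 2*(y + 0.5)/height
            direction = normalize((px, py, -1.0))
            t = hit_sphere(eye, direction, sphere_center, sphere_radius)
            if t is None:
                row += " "        # ray missed everything: background
            else:
                p = (eye[0]+t*direction[0], eye[1]+t*direction[1], eye[2]+t*direction[2])
                n = normalize((p[0]-sphere_center[0], p[1]-sphere_center[1], p[2]-sphere_center[2]))
                diffuse = max(0.0, dot(n, light_dir))
                row += shades[int(diffuse * (len(shades) - 1))]
        print(row)

if __name__ == "__main__":
    render()
```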


Calculate rays for rectangular viewport

On input we have (in the calculations we use vector normalization and the cross product):

* E \in \mathbb{R}^3: eye position
* T \in \mathbb{R}^3: target position
* \theta \in [0,\pi]: field of view; for humans we can assume \approx \pi/2 \text{ rad} = 90^\circ
* m, k \in \mathbb{N}: numbers of square pixels on the viewport in the vertical and horizontal direction
* i, j \in \mathbb{N},\ 1 \leq i \leq k \land 1 \leq j \leq m: indices of the actual pixel
* \vec v \in \mathbb{R}^3: vertical vector which indicates where up and down are, usually \vec v = [0,1,0]; the roll component, which determines the viewport rotation around point C (where the axis of rotation is the ET section)

The idea is to find the position of each viewport pixel center P_{ij}, which allows us to find the line going from the eye E through that pixel, and finally get the ray described by the point E and the vector \vec R_{ij} = P_{ij} - E (or its normalization \vec r_{ij}). First we need to find the coordinates of the bottom-left viewport pixel P_{1m}, and then find each subsequent pixel by making a shift along directions parallel to the viewport (vectors \vec b_n and \vec v_n) multiplied by the size of the pixel. Below we introduce formulas which include the distance d between the eye and the viewport. However, this value cancels during the ray normalization \vec r_{ij} (so you might as well accept d = 1 and remove it from the calculations).

Pre-calculations: let us find and normalize the vector \vec t and the vectors \vec b, \vec v which are parallel to the viewport:

: \vec t = T - E, \qquad \vec b = \vec t \times \vec v
: \vec t_n = \frac{\vec t}{\|\vec t\|}, \qquad \vec b_n = \frac{\vec b}{\|\vec b\|}, \qquad \vec v_n = \vec t_n \times \vec b_n

Note that the viewport center C = E + \vec t_n d. Next we calculate the viewport half-sizes g_x = h_x/2 and g_y = h_y/2, including the inverse aspect ratio \frac{m-1}{k-1}:

: g_x = \frac{h_x}{2} = d \tan\frac{\theta}{2}, \qquad g_y = \frac{h_y}{2} = g_x \frac{m-1}{k-1}

Then we calculate the next-pixel shifting vectors \vec q_x, \vec q_y along directions parallel to the viewport (\vec b, \vec v), and the bottom-left pixel center \vec p_{1m}:

: \vec q_x = \frac{2 g_x}{k-1}\vec b_n, \qquad \vec q_y = \frac{2 g_y}{m-1}\vec v_n, \qquad \vec p_{1m} = \vec t_n d - g_x \vec b_n - g_y \vec v_n

Calculations: note that P_{ij} = E + \vec p_{ij} and the ray \vec R_{ij} = P_{ij} - E = \vec p_{ij}, so

: \vec p_{ij} = \vec p_{1m} + \vec q_x (i-1) + \vec q_y (j-1)
: \vec r_{ij} = \frac{\vec R_{ij}}{\|\vec R_{ij}\|} = \frac{\vec p_{ij}}{\|\vec p_{ij}\|}

The formulas above have been tested in a JavaScript project (works in the browser).
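A direct transcription of these formulas might look like the following sketch (an illustrative implementation, not the JavaScript project mentioned above); it assumes square pixels and k, m > 1, as in the derivation.

```python
import math

# Sketch of the viewport-ray formulas above: given eye E, target T, field
# of view theta, a k x m pixel grid and an up vector v, compute the
# normalized ray direction r_ij through pixel (i, j). Illustrative only.

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b):   return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def norm(a):
    l = math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2])
    return (a[0]/l, a[1]/l, a[2]/l)

def viewport_ray(E, T, theta, k, m, i, j, v=(0.0, 1.0, 0.0), d=1.0):
    t_n = norm(sub(T, E))                  # unit vector from eye toward target
    b_n = norm(cross(t_n, v))              # unit vector along the viewport's horizontal
    v_n = cross(t_n, b_n)                  # unit vector along the viewport's vertical
    g_x = d * math.tan(theta / 2.0)        # half width of the viewport
    g_y = g_x * (m - 1) / (k - 1)          # half height (inverse aspect ratio)
    q_x = scale(b_n, 2.0 * g_x / (k - 1))  # shift to the next pixel horizontally
    q_y = scale(v_n, 2.0 * g_y / (m - 1))  # shift to the next pixel vertically
    p_1m = sub(sub(scale(t_n, d), scale(b_n, g_x)), scale(v_n, g_y))  # bottom-left pixel
    p_ij = add(add(p_1m, scale(q_x, i - 1)), scale(q_y, j - 1))
    return norm(p_ij)                      # d cancels in the normalization

if __name__ == "__main__":
    E, T = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
    print(viewport_ray(E, T, math.pi / 2, k=640, m=480, i=320, j=240))  # roughly straight ahead
```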


Detailed description of ray tracing computer algorithm and its genesis


What happens in nature (simplified)

In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength color in a random direction, though this is rare enough that it can be discounted from most rendering applications.

Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray and refract 50%, since the two would add up to 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.


Ray casting algorithm

The idea behind ray casting, the predecessor to recursive ray tracing, is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.
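The core of the ray casting step, keeping only the closest hit along each eye ray and shading it under the assumption that a light-facing surface is always lit, can be sketched as follows. The two-sphere scene, the albedo values and the light direction are illustrative assumptions.

```python
import math

# Sketch of ray casting: test an eye ray against every object in the scene,
# keep the closest hit, and shade it assuming a light-facing surface is
# never blocked (no shadow rays). Illustrative only.

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def normalize(a):
    l = math.sqrt(dot(a, a)); return (a[0]/l, a[1]/l, a[2]/l)

def intersect_sphere(origin, direction, center, radius):
    v = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(v, direction)
    disc = b*b - 4.0*(dot(v, v) - radius*radius)
    if disc < 0: return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def cast_ray(origin, direction, spheres, light_dir):
    closest_t, closest_sphere = float("inf"), None
    for center, radius, albedo in spheres:       # keep only the nearest hit
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and t < closest_t:
            closest_t, closest_sphere = t, (center, radius, albedo)
    if closest_sphere is None:
        return 0.0                               # background
    center, radius, albedo = closest_sphere
    p = tuple(o + closest_t*d for o, d in zip(origin, direction))
    n = normalize(tuple(pi - ci for pi, ci in zip(p, center)))
    return albedo * max(0.0, dot(n, light_dir))  # simple diffuse shading

if __name__ == "__main__":
    spheres = [((0, 0, -3), 1.0, 0.9), ((1.5, 0, -4), 1.0, 0.5)]
    light = normalize((1, 1, 0.5))
    print(cast_ray((0, 0, 0), normalize((0, 0, -1)), spheres, light))
```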


Volume ray casting algorithm

In the method of volume ray casting, each ray is traced so that color and/or density can be sampled along the ray and then be combined into a final pixel color. This is often used when objects cannot be easily represented by explicit surfaces (such as triangles), for example when rendering clouds or 3D medical scans.
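A minimal sketch of volume ray casting under these assumptions samples a density field at fixed steps along the ray and composites the samples front to back. The Gaussian "blob" density below is an illustrative stand-in for real volume data such as a cloud or a medical scan.

```python
import math

# Sketch of volume ray casting: sample a density field at regular steps
# along the ray and composite opacity front to back. The spherical Gaussian
# "blob" is a purely illustrative density field.

def density(p):
    # Density falls off with distance from a blob centered at (0, 0, -3).
    dx, dy, dz = p[0], p[1], p[2] + 3.0
    return math.exp(-(dx*dx + dy*dy + dz*dz))

def volume_ray_cast(origin, direction, step=0.05, t_max=6.0):
    color, transmittance = 0.0, 1.0
    t = 0.0
    while t < t_max and transmittance > 0.01:
        p = (origin[0] + t*direction[0],
             origin[1] + t*direction[1],
             origin[2] + t*direction[2])
        alpha = min(1.0, density(p) * step * 5.0)   # opacity of this sample
        color += transmittance * alpha * 1.0        # accumulate white emission
        transmittance *= (1.0 - alpha)              # light absorbed so far
        t += step
    return color

if __name__ == "__main__":
    print(volume_ray_cast((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))  # ray through the blob
```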


SDF ray marching algorithm

In SDF ray marching, or sphere tracing, each ray is traced in multiple steps to approximate an intersection point between the ray and a surface defined by a signed distance function (SDF). The SDF is evaluated for each iteration in order to be able to take as large steps as possible without missing any part of the surface. A threshold is used to cancel further iteration when a point is reached that is close enough to the surface. This method is often used for 3D fractal rendering.
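A sphere tracing loop following this description is short. In the sketch below, the signed distance function is for a single sphere, and the step count, threshold and scene are illustrative choices rather than recommended values.

```python
import math

# Sketch of SDF ray marching (sphere tracing): repeatedly step along the
# ray by the value of the signed distance function, which is the largest
# step guaranteed not to skip past the surface. Illustrative parameters.

def sdf(p):
    # Signed distance to a sphere of radius 1 centered at (0, 0, -3).
    return math.sqrt(p[0]**2 + p[1]**2 + (p[2] + 3.0)**2) - 1.0

def ray_march(origin, direction, max_steps=128, epsilon=1e-4, t_max=100.0):
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t*direction[0],
             origin[1] + t*direction[1],
             origin[2] + t*direction[2])
        d = sdf(p)
        if d < epsilon:        # close enough to the surface: report a hit
            return t
        t += d                 # safe step: no surface is closer than d
        if t > t_max:
            break
    return None                # no intersection found

if __name__ == "__main__":
    print(ray_march((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))  # expect roughly 2.0
```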


Recursive ray tracing algorithm

Earlier algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Recursive ray tracing continues the process. When a ray hits a surface, additional rays may be cast because of reflection, refraction, and shadow:
* A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection.
* A refraction ray traveling through transparent material works similarly, with the addition that a refractive ray could be entering or exiting a material. Turner Whitted extended the mathematical logic for rays passing through a transparent solid to include the effects of refraction.
* A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it.
These recursive rays add more realism to ray traced images.
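The recursion can be sketched as follows. This illustrative example casts only shadow and reflection rays (refraction is omitted for brevity), against an assumed two-sphere scene with a single point light; the material model, constants and function names are not from any particular renderer.

```python
import math

# Sketch of recursive (Whitted-style) ray tracing: at each hit, cast a
# shadow ray toward the light and, for reflective materials, a reflection
# ray in the mirror direction, recursing up to a fixed depth.

def dot(a, b): return sum(x*y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def mul(a, s): return tuple(x * s for x in a)
def normalize(a):
    l = math.sqrt(dot(a, a)); return mul(a, 1.0 / l)

SPHERES = [  # (center, radius, diffuse_albedo, reflectivity)
    ((0.0, 0.0, -3.0), 1.0, 0.7, 0.3),
    ((2.0, 0.0, -4.0), 1.0, 0.9, 0.0),
]
LIGHT_POS = (5.0, 5.0, 0.0)

def intersect(origin, direction):
    best = None
    for center, radius, albedo, refl in SPHERES:
        v = sub(origin, center)
        b = 2.0 * dot(v, direction)
        disc = b*b - 4.0*(dot(v, v) - radius*radius)
        if disc < 0: continue
        t = (-b - math.sqrt(disc)) / 2.0
        if t > 1e-4 and (best is None or t < best[0]):
            best = (t, center, albedo, refl)
    return best

def trace(origin, direction, depth=3):
    hit = intersect(origin, direction)
    if hit is None or depth == 0:
        return 0.1                               # background / ambient term
    t, center, albedo, refl = hit
    p = add(origin, mul(direction, t))
    n = normalize(sub(p, center))
    to_light = normalize(sub(LIGHT_POS, p))
    dist_light = math.sqrt(dot(sub(LIGHT_POS, p), sub(LIGHT_POS, p)))
    # Shadow ray: p is in shadow if an object lies between it and the light.
    shadow_hit = intersect(add(p, mul(n, 1e-3)), to_light)
    in_shadow = shadow_hit is not None and shadow_hit[0] < dist_light
    color = 0.0 if in_shadow else albedo * max(0.0, dot(n, to_light))
    if refl > 0.0:
        # Reflection ray in the mirror direction r = d - 2(n.d)n.
        r = sub(direction, mul(n, 2.0 * dot(n, direction)))
        color += refl * trace(add(p, mul(n, 1e-3)), normalize(r), depth - 1)
    return color

if __name__ == "__main__":
    print(trace((0.0, 0.0, 0.0), normalize((0.1, 0.0, -1.0))))
```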


Advantages over other rendering methods

Ray tracing-based rendering's popularity stems from its basis in a realistic simulation of light transport, as compared to other rendering methods, such as rasterization, which focuses more on the realistic simulation of geometry. Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to a basic level of parallelization, but the divergence of ray paths makes high utilization under parallelism quite difficult to achieve in practice.


Disadvantages

A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. number of pixels on-screen). Until the late 2010s, ray tracing in real time was usually considered impossible on consumer hardware for nontrivial tasks. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed.

Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented, since the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required. The realism of all rendering methods can be evaluated as an approximation to the equation. Ray tracing, if it is limited to Whitted's algorithm, is not necessarily the most realistic. Methods that trace rays but include additional techniques (photon mapping, path tracing) give a far more accurate simulation of real-world lighting.


Reversed direction of traversal of scene by the rays

The process of shooting rays from the eye to the light source to render an image is sometimes called ''backwards ray tracing'', since it is the opposite of the direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term ''backwards ray tracing'' to mean shooting rays from the lights and gathering the results. Therefore, it is clearer to distinguish ''eye-based'' versus ''light-based'' ray tracing.

While direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths are subsequently joined by a connecting ray after some length.

Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points. The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.

An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.

As a simple example, consider a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions. First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.


Example

As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.

In vector notation, the equation of a sphere with center \mathbf{c} and radius r is

: \left\Vert \mathbf{x} - \mathbf{c} \right\Vert^2 = r^2.

Any point on a ray starting from point \mathbf{s} with direction \mathbf{d} (here \mathbf{d} is a unit vector) can be written as

: \mathbf{x} = \mathbf{s} + t\mathbf{d},

where t is the distance between \mathbf{x} and \mathbf{s}. In our problem, we know \mathbf{c}, r, \mathbf{s} (e.g. the position of a light source) and \mathbf{d}, and we need to find t. Therefore, we substitute for \mathbf{x}:

: \left\Vert \mathbf{s} + t\mathbf{d} - \mathbf{c} \right\Vert^2 = r^2.

Let \mathbf{v} \ \stackrel{\mathrm{def}}{=}\ \mathbf{s} - \mathbf{c} for simplicity; then

: \left\Vert \mathbf{v} + t\mathbf{d} \right\Vert^2 = r^2
: \mathbf{v}^2 + t^2\mathbf{d}^2 + 2t\,\mathbf{v}\cdot\mathbf{d} = r^2
: (\mathbf{d}^2)t^2 + (2\mathbf{v}\cdot\mathbf{d})t + (\mathbf{v}^2 - r^2) = 0.

Knowing that \mathbf{d} is a unit vector allows us this minor simplification:

: t^2 + (2\mathbf{v}\cdot\mathbf{d})t + (\mathbf{v}^2 - r^2) = 0.

This quadratic equation has solutions

: t = \frac{-(2\mathbf{v}\cdot\mathbf{d}) \pm \sqrt{(2\mathbf{v}\cdot\mathbf{d})^2 - 4(\mathbf{v}^2 - r^2)}}{2} = -(\mathbf{v}\cdot\mathbf{d}) \pm \sqrt{(\mathbf{v}\cdot\mathbf{d})^2 - (\mathbf{v}^2 - r^2)}.

The two values of t found by solving this equation are the two such that \mathbf{s} + t\mathbf{d} are the points where the ray intersects the sphere. Any value which is negative does not lie on the ray, but rather in the opposite half-line (i.e. the one starting from \mathbf{s} with opposite direction). If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.

Let us suppose now that there is at least a positive solution, and let t be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere. The normal to the sphere is simply

: \mathbf{n} = \frac{\mathbf{y} - \mathbf{c}}{\left\Vert \mathbf{y} - \mathbf{c} \right\Vert},

where \mathbf{y} = \mathbf{s} + t\mathbf{d} is the intersection point found before. The reflection direction can be found by a reflection of \mathbf{d} with respect to \mathbf{n}, that is

: \mathbf{r} = \mathbf{d} - 2(\mathbf{n} \cdot \mathbf{d})\,\mathbf{n}.

Thus the reflected ray has equation

: \mathbf{x} = \mathbf{y} + u\,\mathbf{r}.

Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection.
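The algebra above translates almost line for line into code. The sketch below uses arbitrary test values for the sphere, ray origin and direction, chosen only to exercise the formulas.

```python
import math

# Direct transcription of the ray-sphere intersection and reflection
# formulas derived above. The sphere and ray values are arbitrary tests.

def dot(a, b): return sum(x*y for x, y in zip(a, b))

def ray_sphere(s, d, c, r):
    # Solve t^2 + 2(v.d)t + (v.v - r^2) = 0 with v = s - c and |d| = 1.
    v = tuple(si - ci for si, ci in zip(s, c))
    vd = dot(v, d)
    disc = vd*vd - (dot(v, v) - r*r)       # the discriminant (divided by 4)
    if disc < 0:
        return None                        # ray misses the sphere
    t1, t2 = -vd - math.sqrt(disc), -vd + math.sqrt(disc)
    hits = [t for t in (t1, t2) if t > 0]  # negative t lies behind the origin
    return min(hits) if hits else None

def reflect(d, n):
    # Mirror reflection of direction d about unit normal n: r = d - 2(n.d)n.
    k = 2.0 * dot(n, d)
    return tuple(di - k*ni for di, ni in zip(d, n))

if __name__ == "__main__":
    s, d = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)          # ray origin and unit direction
    c, r = (0.0, 0.0, -5.0), 1.0                       # sphere center and radius
    t = ray_sphere(s, d, c, r)                         # expect t = 4
    y = tuple(si + t*di for si, di in zip(s, d))       # intersection point
    n = tuple((yi - ci)/r for yi, ci in zip(y, c))     # unit normal at y
    print("t =", t, "reflected direction =", reflect(d, n))
```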


Adaptive depth control

Adaptive depth control means that the renderer stops generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. There must always be a set maximum depth or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced. Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 × 0.5 = 0.25, the third: 0.25 × 0.5 = 0.125, the fourth: 0.125 × 0.5 = 0.0625, the fifth: 0.0625 × 0.5 = 0.03125, etc. In addition we might implement a distance attenuation factor such as 1/D^2, which would also decrease the intensity contribution. For a transmitted ray we could do something similar, but in that case the distance traveled through the object would cause an even faster intensity decrease. As an example of this, Hall & Greenberg found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.
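A minimal sketch of this termination test is shown below, reproducing the Kr = 0.5 series from the text. The 4% cutoff threshold is an assumed illustrative value, not one prescribed by the sources.

```python
# Sketch of adaptive depth control: stop spawning reflection rays once the
# accumulated product of reflection coefficients falls below a threshold,
# while still enforcing a hard maximum depth. Kr = 0.5 follows the text;
# the 4% threshold is an assumed example value.

def reflection_depth(kr=0.5, threshold=0.04, max_depth=15):
    contribution, depth = 1.0, 0
    while depth < max_depth:
        contribution *= kr              # e.g. 0.5, 0.25, 0.125, 0.0625, ...
        if contribution < threshold:    # further bounces contribute too little
            break
        depth += 1
    return depth

if __name__ == "__main__":
    # With Kr = 0.5 the contribution drops below 4% after a few bounces,
    # so recursion stops long before the maximum depth of 15.
    print(reflection_depth())
```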


Bounding volumes

Enclosing groups of objects in sets of hierarchical bounding volumes decreases the amount of computation required for ray tracing. A cast ray is first tested for an intersection with the bounding volume, and then, if there is an intersection, the volume is recursively divided until the ray hits the object. The best type of bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin, then a sphere will enclose mainly empty space compared to a box. Boxes are also easier to generate hierarchical bounding volumes from.

Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and a logarithmic dependence. This is because, for a perfect case, each intersection test would divide the possibilities by two, and result in a binary tree type structure. Spatial subdivision methods, discussed below, try to achieve this.

Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
* Subtrees should contain objects that are near each other, and the further down the tree the closer the objects should be.
* The volume of each node should be minimal.
* The sum of the volumes of all bounding volumes should be minimal.
* Greater attention should be placed on the nodes near the root since pruning a branch near the root will remove more potential objects than one farther down the tree.
* The time spent constructing the hierarchy should be much less than the time saved by using it.
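The cheap ray-versus-bounding-volume test that guards the traversal can be sketched with the standard "slab" method for axis-aligned boxes; the box and ray values below are illustrative.

```python
# Sketch of a ray vs. axis-aligned bounding box (AABB) test using the slab
# method: intersect the ray with the three pairs of parallel planes and
# check that the resulting parameter intervals overlap. Only if this cheap
# test passes does a ray tracer descend into the volume's children or test
# the enclosed objects. Box and ray values are illustrative.

def ray_aabb(origin, direction, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for axis in range(3):
        if abs(direction[axis]) < 1e-12:
            # Ray is parallel to this slab: it must start inside it.
            if origin[axis] < box_min[axis] or origin[axis] > box_max[axis]:
                return False
        else:
            t1 = (box_min[axis] - origin[axis]) / direction[axis]
            t2 = (box_max[axis] - origin[axis]) / direction[axis]
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
    return t_near <= t_far and t_far >= 0.0   # intervals overlap in front of the ray

if __name__ == "__main__":
    print(ray_aabb((0, 0, 0), (0, 0, -1), (-1, -1, -5), (1, 1, -3)))  # True: hits the box
    print(ray_aabb((0, 0, 0), (0, 1, 0),  (-1, -1, -5), (1, 1, -3)))  # False: misses it
```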


Interactive ray tracing

The first implementation of an interactive ray tracer was the LINKS-1 Computer Graphics System built in 1982 at Osaka University's School of Engineering, by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru with 50 students. It was a massively parallel processing computer system with 514 microprocessors (257 Zilog Z8001s and 257 iAPX 86s), used for rendering realistic 3D computer graphics with high-speed ray tracing. According to the Information Processing Society of Japan: "The core of 3D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, light source, and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images." It was used to create an early 3D planetarium-like video of the heavens made completely with computer graphics. The video was presented at the Fujitsu pavilion at the 1985 International Exposition in Tsukuba. It was the second system to do so after the Evans & Sutherland Digistar in 1982. The LINKS-1 was reported to be the world's most powerful computer in 1984.

The earliest public record of "real-time" ray tracing with interactive rendering (i.e., updates greater than a frame per second) was credited at the 2005 SIGGRAPH computer graphics conference as being the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray tracer was an early implementation of a parallel network distributed ray tracing system that achieved several frames per second in rendering performance. This performance was attained by means of the highly optimized yet platform-independent LIBRT ray tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared-memory parallel machines over a commodity network. BRL-CAD's ray tracer, including the REMRT/RT tools, continues to be available and developed today as open source software.

Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.

In 1999 a team from the University of Utah, led by Steven Parker, demonstrated interactive ray tracing live at the 1999 Symposium on Interactive 3D Graphics. They rendered a 35 million sphere model at 512 by 512 pixel resolution, running at approximately 15 frames per second on 60 CPUs. The OpenRT project included a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterization-based approach for interactive 3D graphics. Ray tracing hardware, such as the experimental Ray Processing Unit developed by Sven Woop at Saarland University, has been designed to accelerate some of the computationally intensive operations of ray tracing.

The idea that video games could ray-trace their graphics in real time received media attention in the late 2000s. During that time, a researcher named Daniel Pohl, under the guidance of graphics professor Philipp Slusallek and in cooperation with the Erlangen University and Saarland University in Germany, equipped ''Quake III'' and ''Quake IV'' with an engine he programmed himself, which Saarland University then demonstrated at CeBIT 2007. Intel, a patron of Saarland, became impressed enough that it hired Pohl and embarked on a research program dedicated to ray-traced graphics, which it saw as justifying increasing the number of its processors' cores. On June 12, 2008, Intel demonstrated a special version of ''Enemy Territory: Quake Wars'', titled ''Quake Wars: Ray Traced'', using ray tracing for rendering, running in basic HD (720p) resolution. ''ETQW'' operated at 14–29 frames per second on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.

At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion. OptiX-based renderers are used in Autodesk Arnold, Adobe After Effects, Bunkspeed Shot, Autodesk Maya, 3ds Max, and many other renderers.

In 2014, a demo of the PlayStation 4 video game ''The Tomorrow Children'', developed by Q-Games and Japan Studio, demonstrated new lighting techniques developed by Q-Games, notably cascaded voxel cone ray tracing, which simulates lighting in real time and uses more realistic reflections rather than screen space reflections.

Nvidia introduced its GeForce RTX and Quadro RTX GPUs in September 2018, based on the Turing architecture that allows for hardware-accelerated ray tracing. The Nvidia hardware uses a separate functional block, publicly called an "RT core". This unit is somewhat comparable to a texture unit in size, latency, and interface to the processor core. The unit features BVH traversal, compressed BVH node decompression, ray-AABB intersection testing, and ray-triangle intersection testing. The GeForce RTX, in the form of models 2080 and 2080 Ti, became the first consumer-oriented brand of graphics card that can process ray tracing in real time, and, in November 2018, Electronic Arts' ''Battlefield V'' became the first game to take advantage of its ray tracing capabilities, which it achieves via Microsoft's new API, DirectX Raytracing. AMD, which already offered interactive ray tracing on top of OpenCL through its Radeon ProRender, unveiled in October 2020 the Radeon RX 6000 series, its second-generation Navi GPUs with support for hardware-accelerated ray tracing, at an online event. Subsequent games that render their graphics by such means have appeared since, which has been credited to the improvements in hardware and efforts to make more APIs and game engines compatible with the technology. Current home gaming consoles implement dedicated ray tracing hardware components in their GPUs for real-time ray tracing effects, which began with the ninth-generation consoles PlayStation 5, Xbox Series X and Series S.


Computational complexity

Various complexity results have been proven for certain formulations of the ray tracing problem. In particular, if the decision version of the ray tracing problem is defined as follows – given a light ray's initial position and direction and some fixed point, does the ray eventually reach that point – then the referenced paper proves the following results:
* Ray tracing in 3D optical systems with a finite set of reflective or refractive objects represented by a system of rational quadratic inequalities is undecidable.
* Ray tracing in 3D optical systems with a finite set of refractive objects represented by a system of rational linear inequalities is undecidable.
* Ray tracing in 3D optical systems with a finite set of rectangular reflective or refractive objects is undecidable.
* Ray tracing in 3D optical systems with a finite set of reflective or partially reflective objects represented by a system of linear inequalities, some of which can be irrational, is undecidable.
* Ray tracing in 3D optical systems with a finite set of reflective or partially reflective objects represented by a system of rational linear inequalities is PSPACE-hard.
* For any dimension equal to or greater than 2, ray tracing with a finite set of parallel and perpendicular reflective surfaces represented by rational linear inequalities is in PSPACE.


See also

* Beam tracing
* Cone tracing
* Distributed ray tracing
* Global illumination
* Gouraud shading
* List of ray tracing software
* Parallel computing
* Path tracing
* Phong shading
* Progressive refinement
* Shading
* Specular reflection
* Tessellation
* Per-pixel lighting
* GPUOpen
* Nvidia GameWorks
* Metal (API)
* Vulkan
* DirectX


References


External links


* Interactive Ray Tracing: The replacement of rasterization?
* The Compleat Angler (1978)
* Writing a Simple Ray Tracer (scratchapixel)
* Ray tracing a torus
* Ray Tracing in One Weekend Book Series