Projective texture mapping is a method of texture mapping that allows a textured image to be projected onto a scene as if by a slide projector. Projective texture mapping is useful in a variety of lighting techniques and it is the starting point for shadow mapping. Projective texture mapping is essentially a special matrix transformation which is performed per-vertex and then linearly interpolated, just as in standard texture mapping.


Fixed function pipeline approach

Historically, using projective texture mapping involved a special form of eye-linear texture coordinate generation transform (''tcGen'' for short). This transform was then multiplied by another matrix, representing the projector's properties, which was stored in the texture coordinate transform matrix. The resulting concatenated matrix was basically a function of both the projector properties and the vertex eye positions. The key point of this approach is that the eye-linear tcGen is a function of the vertex eye coordinates, which are themselves a result of both the eye properties and the object-space vertex coordinates (more specifically, the object-space vertex position transformed by the model-view matrix). Because of that, the corresponding texture matrix can be used to "shift" the eye properties so that the concatenated result is the same as using an eye-linear tcGen from a point of view which may be different from the observer's.
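As a concrete illustration, the sketch below sets up this fixed-function configuration in OpenGL. It is a minimal example under stated assumptions, not a complete renderer: the function name and the projProjection, projView and eyeViewInverse parameters (the projector's projection and view matrices and the inverse of the camera's view matrix, all column-major) are illustrative and not part of any standard API.

    #include <GL/gl.h>

    /* Minimal fixed-function setup for one projector (sketch only).
       The texture matrix becomes bias * projProjection * projView * eyeViewInverse,
       which maps eye-space positions into the projector's [0, 1] texture space. */
    void setupProjectiveTexGen(const GLfloat projProjection[16],
                               const GLfloat projView[16],
                               const GLfloat eyeViewInverse[16])
    {
        /* Scale-and-bias matrix mapping [-1, 1] clip coordinates to [0, 1]
           texture coordinates (column-major, as OpenGL expects). */
        static const GLfloat bias[16] = {
            0.5f, 0.0f, 0.0f, 0.0f,
            0.0f, 0.5f, 0.0f, 0.0f,
            0.0f, 0.0f, 0.5f, 0.0f,
            0.5f, 0.5f, 0.5f, 1.0f
        };
        /* Identity eye planes: specified with an identity model-view matrix,
           they make the generated (s,t,r,q) equal the eye-space vertex position. */
        static const GLfloat sPlane[4] = { 1.0f, 0.0f, 0.0f, 0.0f };
        static const GLfloat tPlane[4] = { 0.0f, 1.0f, 0.0f, 0.0f };
        static const GLfloat rPlane[4] = { 0.0f, 0.0f, 1.0f, 0.0f };
        static const GLfloat qPlane[4] = { 0.0f, 0.0f, 0.0f, 1.0f };

        /* Build the concatenated texture matrix. */
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glMultMatrixf(bias);
        glMultMatrixf(projProjection);
        glMultMatrixf(projView);
        glMultMatrixf(eyeViewInverse);

        /* Specify the eye planes while the model-view matrix is the identity,
           so the generated coordinates are plain eye-space positions. */
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGenfv(GL_S, GL_EYE_PLANE, sPlane);
        glTexGenfv(GL_T, GL_EYE_PLANE, tPlane);
        glTexGenfv(GL_R, GL_EYE_PLANE, rPlane);
        glTexGenfv(GL_Q, GL_EYE_PLANE, qPlane);
        glPopMatrix();

        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_GEN_T);
        glEnable(GL_TEXTURE_GEN_R);
        glEnable(GL_TEXTURE_GEN_Q);
    }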


Programmable pipeline approach

A less involved method to compute this approach became possible with vertex shaders. The previous algorithm can be reformulated by simply considering two model-view-projection matrices: one from the eye's point of view and the other from the projector's point of view. In this case, the projector model-view-projection matrix is essentially the aforementioned concatenation of the eye-linear tcGen with the intended projector shift function. By using those two matrices, a few instructions are sufficient to output both the transformed vertex position from the eye's point of view and a projective texture coordinate. The latter is obtained simply by applying the projector's model-view-projection matrix: in other words, it is the position the vertex would have had if the projector had been the observer.


Caveats

Both of the proposed approaches have two small problems, which can be trivially solved and which come from the different conventions used by eye space and texture space. Defining the properties of those spaces is beyond the scope of this article, but textures are usually addressed in the range [0, 1] while eye-space coordinates are addressed in the range [-1, 1]. Depending on the texture wrap mode in use, various artifacts may occur, but a shift and scale operation is in any case necessary to get the expected result.

The other problem is a mathematical issue. The matrix math involved produces a back projection in addition to the intended forward one. Historically, this artifact was avoided by using a special black-and-white texture to cut away the unwanted projected contributions. With pixel shaders a different approach can be used: a simple coordinate check is sufficient to discriminate between forward (correct) contributions and backward (wrong, to be avoided) ones.
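As an illustration of that coordinate check, the fragment-shader sketch below (again a GLSL 1.20-style string embedded in C, with illustrative names) samples the projected texture only when the q component of the projective coordinate is positive, i.e. only for the forward projection; it assumes the scale-and-bias has already been folded into the coordinate as in the previous sketches.

    /* Fragment shader rejecting the back projection (sketch only). */
    static const char *projectiveFragmentShader =
        "uniform sampler2D projectorTexture;\n"
        "varying vec4 projTexCoord;\n"
        "void main()\n"
        "{\n"
        "    vec4 projected = vec4(0.0);\n"
        "    /* q > 0: the fragment lies in front of the projector.\n"
        "       q <= 0 would yield the spurious back projection.    */\n"
        "    if (projTexCoord.q > 0.0)\n"
        "        projected = texture2DProj(projectorTexture, projTexCoord);\n"
        "    gl_FragColor = projected;\n"
        "}\n";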


References

# The original paper from the nVIDIA web site includes all the needed documentation on this issue; the same site also contains related material.
# Texture coordinate generation is covered in section 2.11.4, "Generating Texture Coordinates", of the OpenGL 2.0 specification; eye-linear texture coordinate generation is a special case.
# The texture matrix is introduced in section 2.11.2, "Matrices", of the OpenGL 2.0 specification.


External links

* http://www.3dkingdoms.com/weekly/weekly.php?a=20 A tutorial showing how to implement projective texturing using the programmable pipeline approach in OpenGL.