
Structured Light
Image: A structured light pattern designed for surface inspection.
Image: An Automatix Seamtracker arc welding robot equipped with a camera and structured laser light source, enabling the robot to follow a welding seam automatically.
Structured light is the process of projecting a known pattern (often grids or horizontal bars) onto a scene. The way these patterns deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene, as used in structured-light 3D scanners. ''Invisible'' (or ''imperceptible'') structured light uses structured light without interfering with other computer vision tasks for which the projected pattern would be confusing. Example methods include the use of infrared light or of extremely high frame rates alternating between two exact opposite patterns. Structured light is used by a number of police forces for photographing fingerprints in a 3D scene. Where previously they would use tape to extract ...
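As a rough sketch of the triangulation that turns pattern deformation into depth, assume a rectified camera/projector pair so the projector can be treated as an inverse camera; the function and parameter names below are illustrative, not taken from any particular scanner.

```python
import numpy as np

def depth_from_pattern_shift(cam_cols, proj_cols, focal_px, baseline_m):
    """Depth from a decoded structured-light pattern.

    cam_cols   : column where each decoded stripe lands in the camera image
    proj_cols  : column the same stripe was emitted from by the projector
    focal_px   : focal length in pixels
    baseline_m : camera-projector baseline in metres

    With a rectified pair, the usual stereo relation Z = f * B / d applies,
    where d is the camera/projector column disparity.
    """
    d = np.asarray(proj_cols, float) - np.asarray(cam_cols, float)
    d = np.where(d == 0, np.nan, d)   # zero disparity -> point at infinity
    return focal_px * baseline_m / d  # depth in metres
```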



Light Stage
A light stage or light cage is equipment used for shape, texture, reflectance and motion capture, often with structured light and a multi-camera setup.
Reflectance capture
The reflectance field over a human face was first captured in 1999 by Paul Debevec, Tim Hawkins et al. and presented at SIGGRAPH 2000. The method they used to find the light that travels under the skin was based on the existing scientific knowledge that light reflecting off the air-to-oil interface retains its polarization, while light that travels under the skin loses its polarization. Using this information, a light stage was built by Debevec et al., consisting of:
# a moveable digital camera;
# a moveable simple light source (full rotation with adjustable radius and height);
# two polarizers set at various angles in front of the light and the camera;
# a computer with relatively simple programs doing relatively simple tasks.
The setup enabled the team to find the subsurface scattering component of the bidirectio ...
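A minimal sketch of how such polarized image pairs can be split into surface (specular) and subsurface components, assuming one capture with the analyser parallel to the source polarizer and one crossed; the names and the simple subtraction are illustrative, not Debevec et al.'s exact pipeline.

```python
import numpy as np

def separate_components(parallel_img, cross_img):
    """Split polarized captures into specular and subsurface parts.

    Light reflected at the skin's surface keeps its polarization, so it is
    blocked by the crossed analyser; light scattered under the skin is
    depolarized and appears (up to a scale factor) in both captures.
    """
    subsurface = np.asarray(cross_img, float)                 # depolarized light only
    specular = np.asarray(parallel_img, float) - subsurface   # polarization-preserving remainder
    return specular, subsurface
```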



Time-of-flight Camera
A time-of-flight camera (ToF camera), also known as a time-of-flight sensor (ToF sensor), is a range imaging camera system that measures the distance between the camera and the subject for each point of the image, based on time-of-flight: the round-trip time of an artificial light signal provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as semiconductor processes became fast enough for such devices. The systems cover ranges from a few centimeters up to several kilometers.
Types of devices
Several different technologies for time-of-flight cameras have been developed.
RF-modulated light sources with phase detectors
Photonic Mixer Devices (PMD), the Swiss Ranger, an ...
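For the RF-modulated variety, depth follows from the phase shift of the returning light at the modulation frequency. A minimal sketch using the common four-bucket sampling scheme (the function and variable names are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_tof_depth(a0, a1, a2, a3, mod_freq_hz):
    """Per-pixel depth for a continuous-wave, RF-modulated ToF sensor.

    a0..a3 are correlation samples taken at 0, 90, 180 and 270 degrees of
    the modulation period. The recovered phase maps to distance, with an
    ambiguity interval of c / (2 * mod_freq_hz).
    """
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * mod_freq_hz)  # metres
```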



Kinect
Kinect is a line of motion sensing input devices produced by Microsoft and first released in 2010. The devices generally contain RGB cameras, and infrared projectors and detectors that map depth through either structured light or time of flight calculations, which can in turn be used to perform real-time gesture recognition and body skeletal detection, among other capabilities. They also contain microphones that can be used for speech recognition and voice control. Kinect was originally developed as a motion controller peripheral for Xbox video game consoles, distinguished from competitors (such as Nintendo's Wii Remote and Sony's PlayStation Move) by not requiring physical controllers. The first-generation Kinect was based on technology from Israeli company PrimeSense, and unveiled at E3 2009 as a peripheral for Xbox 360 codenamed "Project Natal". It was first released on November 4, 2010, and would go on to sell eight million units in its first 60 days of availability. Th ...


Range Imaging
Range imaging is the name for a collection of techniques that are used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device. The resulting range image has pixel values that correspond to the distance. If the sensor used to produce the range image is properly calibrated, the pixel values can be given directly in physical units, such as meters.
Types of range cameras
The sensor device used to produce the range image is sometimes referred to as a ''range camera'' or ''depth camera''. Range cameras can operate according to a number of different techniques, some of which are presented here.
Stereo triangulation
Stereo triangulation is an application of stereophotogrammetry where the depth data of the pixels is determined from data acquired using a stereo or multiple-camera setup. This way it is possible to determine the depth to points in the scene, for example, from ...
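A toy illustration of stereo triangulation, assuming a rectified image pair: a brute-force sum-of-squared-differences matcher finds the per-pixel disparity, from which metric depth follows as Z = f * B / d for a calibrated rig. The names and parameters are illustrative, and real systems use far faster and more robust matchers.

```python
import numpy as np

def block_match_disparity(left, right, patch=5, max_disp=64):
    """Brute-force SSD stereo matcher on rectified grayscale float images."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((ref - right[y - half:y + half + 1,
                                         x - d - half:x - d + half + 1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Convert integer disparities to metric depth, ignoring zero disparity."""
    d = np.where(disp == 0, np.nan, disp.astype(float))
    return focal_px * baseline_m / d
```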



Lidar
Lidar (also LIDAR or LiDAR; sometimes LADAR) is a method for determining ranges (variable distances) by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. It can also be used to make digital 3-D representations of areas of the Earth's surface and of the ocean bottom in the intertidal and near-coastal zone, by varying the wavelength of light. It has terrestrial, airborne, and mobile applications. ''Lidar'' is an acronym of "light detection and ranging" or "laser imaging, detection, and ranging". It is sometimes called 3-D laser scanning, a special combination of 3-D scanning and laser scanning. Lidar is commonly used to make high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swath mapping (ALSM), and laser altimetry. It is also used in control and navigation for som ...
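The ranging principle in its simplest, pulsed form: range is half the round-trip time multiplied by the speed of light (a minimal illustration; the variable names are arbitrary).

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def pulse_range(round_trip_s):
    """Range to the target from a single pulse's round-trip time."""
    return C * round_trip_s / 2.0

# Example: an echo arriving 667 ns after emission corresponds to roughly 100 m.
print(pulse_range(667e-9))
```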


Laser Dynamic Range Imager
The Laser Dynamic Range Imager (LDRI) is a LIDAR range imaging device developed by Sandia National Laboratories for the US Space Shuttle program. The sensor was developed as part of NASA's "Return to Flight" effort following the Space Shuttle Columbia disaster, to provide 2-D and 3-D images of the thermal protection system on the Space Shuttle Orbiter. The LDRI generates 3-D images from 2-D video: modulated laser illumination is demodulated by the receive optics, and the resulting video sequences can be processed to produce 3-D images. The modulation produces a flickering effect from frame to frame in the video imagery. As part of the Orbiter Boom Sensor System, the LDRI is mounted at the end of the boom on a pan-tilt unit (PTU) along with an intensified video camera (ITVC). During 2-D imaging of the reinforced carbon-carbon panels on the leading edge of the shuttle's wings, the LDRI is capable of seeing damage as small as a 0.020-inch crack. Dur ...



Depth Map
In 3D computer graphics and computer vision, a depth map is an image or image channel that contains information relating to the distance of the surfaces of scene objects from a viewpoint. The term is related (and may be analogous) to ''depth buffer'', ''Z-buffer'', ''Z-buffering'', and ''Z-depth''. ftp://ftp.futurenet.co.uk/pub/arts/Glossary.pdf Computer Arts / 3D World Glossary, document retrieved 26 January 2011. The "Z" in these latter terms relates to a convention that the central axis of view of a camera is in the direction of the camera's Z axis, and not to the absolute Z axis of a scene.
Examples
Image: Cubic Structure.
Image: Cubic Frame Structure and Floor Depth Map (nearer is darker).
Image: Cubic Structure and Floor Depth Map with Front and Back Delimitation (nearer the focal plane is darker).
Two different depth maps can be seen here, together with the original model from which they are derived. The first depth map shows lu ...
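A small sketch of the "nearer is darker" convention mentioned above, normalizing a metric depth map into an 8-bit grayscale image (the naming is assumed, not from any particular package):

```python
import numpy as np

def depth_to_gray(depth_m, near=None, far=None):
    """Map a metric depth map to 8-bit grayscale where nearer is darker."""
    near = np.nanmin(depth_m) if near is None else near
    far = np.nanmax(depth_m) if far is None else far
    t = np.clip((depth_m - near) / (far - near), 0.0, 1.0)
    return np.nan_to_num(t * 255).astype(np.uint8)  # 0 = nearest (black), 255 = farthest (white)
```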



Reflectance Capture
The reflectance of the surface of a material is its effectiveness in reflecting radiant energy. It is the fraction of incident electromagnetic power that is reflected at the boundary. Reflectance is a component of the response of the electronic structure of the material to the electromagnetic field of light, and is in general a function of the frequency, or wavelength, of the light, its polarization, and the angle of incidence. The dependence of reflectance on the wavelength is called a ''reflectance spectrum'' or ''spectral reflectance curve''.
Mathematical definitions
Hemispherical reflectance
The ''hemispherical reflectance'' of a surface, denoted R, is defined as R = \frac{\Phi_\text{e}^\text{r}}{\Phi_\text{e}^\text{i}}, where \Phi_\text{e}^\text{r} is the radiant flux ''reflected'' by that surface and \Phi_\text{e}^\text{i} is the radiant flux ''received'' by that surface.
Spectral hemispherical reflectance
The ''spectral hemispherical reflectance in frequency'' and ''spectral hemispherical reflectance in wavelength'' of a surface, denoted R_\nu and R_\lambda respectively, are ...
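A tiny illustration of the definition, computing a spectral reflectance curve from reflected and incident flux measured per wavelength bin (the function name and inputs are hypothetical):

```python
import numpy as np

def reflectance_spectrum(flux_reflected, flux_incident):
    """Hemispherical reflectance R = flux_reflected / flux_incident,
    evaluated per wavelength bin to give the spectral reflectance curve."""
    fr = np.asarray(flux_reflected, dtype=float)
    fi = np.asarray(flux_incident, dtype=float)
    return np.divide(fr, fi, out=np.full_like(fr, np.nan), where=fi > 0)
```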


Dual Photography
Dual photography is a photographic technique that uses Helmholtz reciprocity to capture the light field of all light paths from a structured illumination source to a camera. Image processing software can then be used to reconstruct the scene as it would have been seen from the viewpoint of the projector.
See also
* Light-field camera (plenoptic camera): a camera that captures information about the ''light field'' emanating from a scene; that is, the intensity of light in a scene, and also the precise direction that the light rays are tr ...
External links
* http://graphics.stanford.edu/papers/dual_photography/
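A sketch of the reciprocity idea in matrix form: if T is the light transport matrix mapping projector pixels to camera pixels, Helmholtz reciprocity implies the dual view (the scene as seen from the projector, "lit" from the camera) is governed by the transpose of the same matrix. The function names are illustrative.

```python
import numpy as np

def primal_image(T, projector_pattern):
    """Camera image produced by a projector pattern: c = T @ p."""
    return T @ projector_pattern

def dual_image(T, camera_side_light):
    """Image from the projector's viewpoint when the scene is lit from the
    camera's position: by Helmholtz reciprocity, p' = T.T @ c'."""
    return T.T @ camera_side_light

# Example with a random 3x4 transport matrix (3 camera pixels, 4 projector pixels).
T = np.random.rand(3, 4)
print(primal_image(T, np.ones(4)).shape)  # (3,)
print(dual_image(T, np.ones(3)).shape)    # (4,)
```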







Multiple-camera Setup
The multiple-camera setup, multiple-camera mode of production, multi-camera or simply multicam is a method of filmmaking and video production. Several cameras, either film or professional video cameras, are employed on the set and simultaneously record or broadcast a scene. It is often contrasted with a single-camera setup, which uses one camera.
Description
Generally, the two outer cameras shoot close-up shots or "crosses" of the two most active characters on the set at any given time, while the central camera or cameras shoot a wider master shot to capture the overall action and establish the geography of the room. In this way, multiple shots are obtained in a single take without having to start and stop the action. This is more efficient for programs that are to be shown a short time after being shot, as it reduces the time spent in film or video editing. It is also a virtual necessity for regular, high-output shows like daily soap operas. Apart from saving editing time, s ...