Deep image compositing

Deep image compositing is a recently emerged way of compositing and rendering digital images. In addition to the usual color and opacity channels, it introduces a notion of depth, allowing multiple samples along the depth of the image to make up the final resulting color. This technique produces high-quality results and removes artifacts around edges that could not be dealt with otherwise.


Deep data

Deep data is encoded by advanced 3D renderers into an image that samples information about the path each rendered pixel takes along the z axis extending outward from the virtual camera through space, including the color and opacity of every non-opaque surface or volume it passes through along the way, as well as neighboring samples. It might be considered somewhat analogous to the way ray tracing generates simulated photon paths through such media; however, ray tracing and other traditional rendering techniques generally produce images that contain only three or four channels of color and opacity values per pixel, flattened into a two-dimensional frame.

Depth maps, on the other hand, contain z-axis information encoded in a grayscale image. Each level of gray represents a different slice of the z space. The "thickness" of each slice is determined at render time, allowing for more or less depth fidelity depending on how deep the scene is. Depth maps have been a boon to compositors for blending 3D renders with live-action and practical elements. To be useful, the map must have a high enough bit depth to encode separation between close-to-camera objects and objects near infinity. Most 3D software packages are now capable of generating 16-bit and 32-bit depth maps, providing up to 2 billion depth levels. Depth maps do not, however, include transparency information about non-opaque surfaces or volumes, and as such, objects beyond and viewed through these semi- or fully transparent objects will have no depth information of their own and may not get composited or blurred correctly.

Even the popular addition of cryptomattes to many post-production and VFX studios' pipelines, while providing separate color-coded ID shapes for individual elements in a rendered scene to further bridge the gap between CGI and compositing, does not allow for the nearly automated and fully non-linear workflows that deep data does. This is because deep images encapsulate enough 3D information that normally time-intensive tasks, such as rotoscoping with numerous holdout mattes for complex interactions between moving characters and semi-transparent environmental volumes like smoke or water, become essentially trivial. Instead of going through that process, multiple mattes can easily be generated from a single set of deep images with no need to re-render every matte element and background for each case. In addition to that efficiency and flexibility, deep data images inherently provide much higher visual quality in areas that have traditionally been difficult, such as the motion-blurred edges of characters with semi-transparent elements like hair. One downside to the use of deep images is their substantial file size, since they encode a relatively enormous amount of data per frame compared to even multichannel formats such as OpenEXR.
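
As a rough sketch of the idea, a deep pixel can be modeled as a list of depth-sorted samples that flattens down to an ordinary RGBA value. The names below (DeepSample, flatten) are hypothetical and assume premultiplied color; this illustrates the concept, not any particular format's API.

    from dataclasses import dataclass

    @dataclass
    class DeepSample:
        """One sample along a pixel's z axis (hypothetical layout)."""
        z: float          # distance from the camera
        color: tuple      # premultiplied RGB of this sample
        alpha: float      # opacity of this sample

    def flatten(samples):
        """Composite a deep pixel down to a flat RGBA value by applying
        the standard 'over' operator front to back."""
        r = g = b = a = 0.0
        for s in sorted(samples, key=lambda s: s.z):  # nearest sample first
            w = 1.0 - a   # transparency remaining in front of this sample
            r += w * s.color[0]
            g += w * s.color[1]
            b += w * s.color[2]
            a += w * s.alpha
        return (r, g, b, a)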


Function-based (integrated)

The data is stored as a function of depth: for each pixel there is a function curve that can be evaluated to look up the accumulated data at any arbitrary depth. Manipulating the data in this form is harder.
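
A minimal sketch of such a lookup, assuming the curve is stored per pixel as sorted depth knots with accumulated opacity values and interpolated linearly (real formats may interpolate differently, for example over volumetric segments):

    import bisect

    def opacity_at(depths, accum_alpha, z):
        """Evaluate one pixel's integrated opacity curve at an arbitrary
        depth z. 'depths' holds sorted knot positions, 'accum_alpha' the
        accumulated opacity stored at each knot (hypothetical layout)."""
        i = bisect.bisect_left(depths, z)
        if i == 0:                # in front of the first knot
            return 0.0
        if i == len(depths):      # behind the last knot
            return accum_alpha[-1]
        # Linear interpolation between the two surrounding knots.
        t = (z - depths[i - 1]) / (depths[i] - depths[i - 1])
        return accum_alpha[i - 1] + t * (accum_alpha[i] - accum_alpha[i - 1])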


Sample-based (deintegrated)

Each sample is considered an independent piece of data and can therefore be manipulated easily. To make sure the data represents the right level of detail, an additional expand value needs to be introduced.
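
With per-sample storage, edits such as a depth holdout reduce to a simple filter. The sketch below reuses the hypothetical DeepSample from the earlier example:

    def holdout_beyond(samples, z_cut):
        """Keep only the samples nearer than z_cut. With deintegrated
        storage each sample is independent, so nothing has to be
        re-fitted afterwards. Reuses DeepSample from the sketch above."""
        return [s for s in samples if s.z < z_cut]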


Generating deep data

3D renderers produce the necessary data as part of the rendering pipeline. Samples are gathered in depth and then combined; the deep data can be written out before this combination happens, so nothing new is added to the process. Generating deep data from camera data requires a proper depth map. This is done in a few cases but is still not accurate enough for a detailed representation; for basic holdout tasks, though, it can be sufficient.
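
A sketch of that promotion, reusing the hypothetical DeepSample from above. A flat image plus a depth map yields only one sample per pixel, which is exactly why the result is too coarse for detailed work but adequate for basic holdouts:

    def deep_from_depth_map(rgba, zmap):
        """Promote a flat RGBA image (H x W x 4 array) plus a per-pixel
        depth map (H x W array) to a minimal deep image with exactly one
        sample per pixel. Nothing behind the front surface is recovered."""
        height, width = len(zmap), len(zmap[0])
        deep = {}
        for y in range(height):
            for x in range(width):
                r, g, b, a = rgba[y][x]
                deep[(y, x)] = [DeepSample(float(zmap[y][x]), (r, g, b), float(a))]
        return deep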


Compositing deep data images

Deep images can be composited like regular images, but the depth component makes it easier to determine the layering order. Traditionally this order had to be specified by the user; deep images carry that information themselves and need no user input. Edge artifacts are also reduced, since transparent pixels have more data to work with.
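
The core of a deep merge can be sketched as a sort on per-sample depth, replacing the manual A-over-B ordering of flat compositing (again reusing the hypothetical DeepSample and flatten from above):

    def deep_merge(pixel_a, pixel_b):
        """Merge two deep pixels. Each sample carries its own depth, so
        the layering order is recovered by sorting instead of being
        supplied by the user; flatten() above then yields a flat RGBA."""
        return sorted(pixel_a + pixel_b, key=lambda s: s.z)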


History

Deep images have been available in 3D rendering packages for quite a while now. Their use for holdouts was first implemented at several VFX houses in shaders, where holdout mattes can be generated at render time. Using them in a more interactive manner started more recently: SideFX integrated the technique into its Houdini software, and facilities such as Industrial Light & Magic, DreamWorks Animation, Weta, Animal Logic and DRD Studios have implemented interactive solutions. In 2014 the Academy of Motion Picture Arts and Sciences honored the technology with its annual SciTech Awards: Dr. Peter Hillman for the long-term development and continued advancement of innovative, robust and complete toolsets for deep compositing, and Colin Doncaster, Johannes Saam, Areito Echevarria, Janne Kontkanen and Chris Cooper for the development, prototyping and promotion of technologies and workflows for deep compositing.


Resources


* Pixar Paper
* Deep Image Paper
* Deepimg.com
* Video tutorial of Deep Imaging as used on the 2011 film ''Rise of the Planet of the Apes'', Nuke compositing software
* Deep Image Training
* Deep Image File Format (dif) on Google Code
* Academy Award for the Technology