3Dc
3Dc (FourCC: ATI2), also known as DXN, BC5, or Block Compression 5, is a lossy data compression algorithm for normal maps invented and first implemented by ATI. It builds upon the earlier DXT5 algorithm and is an open standard. 3Dc is now implemented by both ATI and Nvidia.

Target application
The target application, normal mapping, is an extension of bump mapping that simulates lighting on geometric surfaces by reading surface normals from a rectilinear grid analogous to a texture map, giving simple models the impression of increased complexity. This additional channel, however, increases the load on the graphics system's memory bandwidth. Pre-existing lossy compression algorithms implemented on consumer 3D hardware lacked the precision necessary for reproducing normal maps without excessive visible artefacts, justifying the development of 3Dc.

Algorithm
Surface normals are three-dimensional vectors of unit length. Because of the length constraint, only two elements of any normal need to be stored; the third can be reconstructed from the other two.
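
A sketch of the decoding side may make this concrete. Each of the two stored channels is compressed independently, using the same layout as the DXT5 alpha channel (the scheme later standardized as BC4): an 8-byte block holds two 8-bit reference values and sixteen 3-bit indices into a palette interpolated between them. The C below follows the published BC4_UNORM palette rules; the function names are illustrative rather than taken from any particular library.

    #include <math.h>
    #include <stdint.h>

    /* Decode one 8-byte channel block (used twice per 3Dc/BC5 block:
     * once for X, once for Y) into 16 byte values. */
    static void decode_channel_block(const uint8_t block[8], uint8_t out[16])
    {
        uint8_t pal[8];
        pal[0] = block[0];                  /* reference value 0 */
        pal[1] = block[1];                  /* reference value 1 */
        if (pal[0] > pal[1]) {
            for (int i = 1; i <= 6; i++)    /* six interpolated values */
                pal[i + 1] = (uint8_t)(((7 - i) * pal[0] + i * pal[1]) / 7);
        } else {
            for (int i = 1; i <= 4; i++)    /* four interpolated values... */
                pal[i + 1] = (uint8_t)(((5 - i) * pal[0] + i * pal[1]) / 5);
            pal[6] = 0;                     /* ...plus the two extremes */
            pal[7] = 255;
        }
        uint64_t bits = 0;                  /* 16 3-bit indices, packed  */
        for (int i = 0; i < 6; i++)         /* little-endian in 6 bytes  */
            bits |= (uint64_t)block[2 + i] << (8 * i);
        for (int i = 0; i < 16; i++)
            out[i] = pal[(bits >> (3 * i)) & 7];
    }

    /* Rebuild the implicit Z component from decoded X and Y bytes,
     * using the unit-length constraint z = sqrt(1 - x^2 - y^2). */
    static float reconstruct_z(uint8_t xb, uint8_t yb)
    {
        float x = xb / 255.0f * 2.0f - 1.0f;   /* [0,255] -> [-1,1] */
        float y = yb / 255.0f * 2.0f - 1.0f;
        float zz = 1.0f - x * x - y * y;
        return zz > 0.0f ? sqrtf(zz) : 0.0f;
    }

Because each channel gets its own palette of eight values, this layout preserves normal precision better than packing normals into a colour-oriented format.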


DXT5
S3 Texture Compression (S3TC) (sometimes also called DXTn, DXTC, or BCn) is a group of related lossy texture compression algorithms originally developed by Iourcha et al. of S3 Graphics, Ltd. for use in their Savage 3D computer graphics accelerator. The method of compression is strikingly similar to the previously published Color Cell Compression, which is in turn an adaptation of Block Truncation Coding published in the late 1970s. Unlike some image compression algorithms (e.g. JPEG), S3TC's fixed-rate data compression coupled with the single memory access (cf. Color Cell Compression and some VQ-based schemes) made it well-suited for use in compressing textures in hardware-accelerated 3D computer graphics. Its subsequent inclusion in Microsoft's DirectX 6.0 and OpenGL 1.3 (via the GL_EXT_texture_compression_s3tc extension) led to widespread adoption of the technology among hardware and software makers. While S3 Graphics is no longer a competitor in the graphics accelerator market, license fees were levied for the use of S3TC technology, for example in game consoles and graphics cards, until the relevant patent expired in October 2017.
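
The fixed 64-bit block is what allows a texel to be fetched with a single memory access. As a sketch, decoding the simplest S3TC variant, DXT1/BC1, looks roughly like this in C (helper names are illustrative; the palette rules follow the published format):

    #include <stdint.h>

    /* Expand a packed RGB565 colour into three 8-bit components. */
    static void rgb565_unpack(uint16_t c, uint8_t rgb[3])
    {
        rgb[0] = (uint8_t)(((c >> 11) & 0x1F) * 255 / 31);  /* 5-bit red   */
        rgb[1] = (uint8_t)(((c >> 5)  & 0x3F) * 255 / 63);  /* 6-bit green */
        rgb[2] = (uint8_t)(( c        & 0x1F) * 255 / 31);  /* 5-bit blue  */
    }

    /* Decode one 8-byte DXT1/BC1 colour block into 16 RGB triples. */
    static void decode_dxt1_block(const uint8_t block[8], uint8_t out[16][3])
    {
        uint16_t c0 = (uint16_t)(block[0] | block[1] << 8);  /* endpoint 0 */
        uint16_t c1 = (uint16_t)(block[2] | block[3] << 8);  /* endpoint 1 */
        uint8_t pal[4][3];
        rgb565_unpack(c0, pal[0]);
        rgb565_unpack(c1, pal[1]);
        for (int ch = 0; ch < 3; ch++) {
            if (c0 > c1) {  /* opaque mode: two interpolated colours */
                pal[2][ch] = (uint8_t)((2 * pal[0][ch] + pal[1][ch]) / 3);
                pal[3][ch] = (uint8_t)((pal[0][ch] + 2 * pal[1][ch]) / 3);
            } else {        /* punch-through mode: midpoint + black  */
                pal[2][ch] = (uint8_t)((pal[0][ch] + pal[1][ch]) / 2);
                pal[3][ch] = 0;
            }
        }
        for (int i = 0; i < 16; i++) {      /* 16 two-bit palette indices */
            int idx = (block[4 + i / 4] >> (2 * (i % 4))) & 3;
            out[i][0] = pal[idx][0];
            out[i][1] = pal[idx][1];
            out[i][2] = pal[idx][2];
        }
    }

When c0 <= c1, the fourth palette entry is conventionally treated as transparent black, which is how DXT1 encodes one-bit alpha.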


FourCC
A FourCC ("four-character code") is a sequence of four bytes (typically ASCII) used to uniquely identify data formats. It originated from the OSType or ResType metadata system used in classic Mac OS and was adopted for the Amiga/Electronic Arts Interchange File Format and derivatives. The idea was later reused to identify compressed data types in QuickTime and DirectShow.

History
In 1984, the earliest version of a Macintosh OS, Apple System 1, was released. It used the single-level Macintosh File System with metadata fields including file types, creator (application) information, and forks to store additional resources. All these fields are four-character codes known as OSType. It was possible to change this information without changing the data itself, so that it could be interpreted differently. Identical codes were used throughout the system, as type tags for all kinds of data. In 1985, Electronic Arts introduced the Interchange File Format (IFF) meta-format (a family of file formats).
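
In code, a FourCC is usually handled as a packed 32-bit integer. The macro below packs the first character into the lowest byte, the convention used by RIFF-style formats on little-endian machines; other containers order the bytes differently, so treat this as one illustrative choice:

    #include <stdint.h>
    #include <stdio.h>

    /* Pack four ASCII characters into a 32-bit code, first character
     * in the lowest byte (one common convention among several). */
    #define FOURCC(a, b, c, d) \
        ((uint32_t)(uint8_t)(a) | (uint32_t)(uint8_t)(b) << 8 | \
         (uint32_t)(uint8_t)(c) << 16 | (uint32_t)(uint8_t)(d) << 24)

    int main(void)
    {
        uint32_t ati2 = FOURCC('A', 'T', 'I', '2');   /* 3Dc's FourCC */
        /* Unpack the bytes back into characters for display. */
        printf("0x%08X = %c%c%c%c\n", ati2,
               (int)(ati2 & 0xFF), (int)((ati2 >> 8) & 0xFF),
               (int)((ati2 >> 16) & 0xFF), (int)((ati2 >> 24) & 0xFF));
        return 0;
    }

Compiled and run, this prints 0x32495441 = ATI2.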


Lossy Data Compression
In information technology, lossy compression or irreversible compression is the class of data compression methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing, handling, and transmitting content. Higher degrees of approximation produce coarser images as more detail is removed. This is opposed to lossless data compression (reversible data compression), which does not degrade the data. The amount of data reduction possible using lossy compression is much higher than using lossless techniques. Well-designed lossy compression technology often reduces file sizes significantly before degradation is noticed by the end-user. Even when noticeable by the user, further data reduction may be desirable (e.g., for real-time communication or to reduce transmission times or storage needs). The most widely used lossy compression algorithm is the discrete cosine transform (DCT), first published in 1974.
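
As a toy illustration of the principle (not any production codec): quantizing 8-bit samples down to 4 bits halves the storage, and decompression can only approximate the discarded low-order detail:

    #include <stdint.h>
    #include <stdio.h>

    /* A toy lossy scheme: keep only the top 4 bits of each sample.   */
    static uint8_t lossy_encode(uint8_t sample) { return sample >> 4; }

    /* Reconstruct by replicating the kept bits into the low nibble.  */
    static uint8_t lossy_decode(uint8_t code)
    {
        return (uint8_t)(code << 4 | code);
    }

    int main(void)
    {
        uint8_t original[] = { 17, 130, 255, 64 };
        for (int i = 0; i < 4; i++) {
            uint8_t rt = lossy_decode(lossy_encode(original[i]));
            printf("%3u -> %3u (error %d)\n", original[i], rt,
                   (int)original[i] - (int)rt);
        }
        return 0;
    }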


Normal Mapping
In 3D computer graphics, normal mapping, or Dot3 bump mapping, is a texture mapping technique used for faking the lighting of bumps and dents – an implementation of bump mapping. It is used to add details without using more polygons. A common use of this technique is to greatly enhance the appearance and details of a low polygon model by generating a normal map from a high polygon model or height map. Normal maps are commonly stored as regular RGB images where the RGB components correspond to the X, Y, and Z coordinates, respectively, of the surface normal.

History
In 1978 Jim Blinn described how the normals of a surface could be perturbed to make geometrically flat faces have a detailed appearance. The idea of taking geometric details from a high polygon model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996, where this approach was used for creating displacement maps over NURBS. In 1998, two papers were presented with key ideas for transferring details with normal maps from high to low polygon meshes.
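
Decoding such a texel back into a vector is a linear remapping from the byte range to [-1, 1]; a minimal sketch in C (the function name is illustrative):

    #include <math.h>
    #include <stdint.h>

    /* Decode an RGB8 texel from a tangent-space normal map into a
     * unit vector, using the common [0,255] -> [-1,1] mapping. */
    static void rgb_to_normal(const uint8_t rgb[3], float n[3])
    {
        for (int i = 0; i < 3; i++)
            n[i] = rgb[i] / 255.0f * 2.0f - 1.0f;
        /* Renormalize to undo 8-bit quantization error. */
        float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len > 0.0f) {
            n[0] /= len; n[1] /= len; n[2] /= len;
        }
    }

The flat normal (0, 0, 1) encodes to roughly RGB (128, 128, 255), which is why raw tangent-space normal maps appear predominantly blue.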


ATI Technologies
ATI Technologies Inc. (commonly called ATI) was a Canadian semiconductor technology corporation based in Markham, Ontario, that specialized in the development of graphics processing units and chipsets. Founded in 1985 as Array Technology Inc., the company listed publicly in 1993. Advanced Micro Devices (AMD) acquired ATI in 2006. As a major fabrication-less (fabless) semiconductor company, ATI conducted research and development in-house and outsourced the manufacturing and assembly of its products. With the decline and eventual bankruptcy of 3dfx in 2000, ATI and its chief rival Nvidia emerged as the two dominant players in the graphics processors industry, eventually forcing other manufacturers into niche roles. The acquisition of ATI in 2006 was important to AMD's strategic development of its Fusion generation of computer processors, which integrated general processing abilities with graphics processing functions within a single chip.


Open Standard
An open standard is a standard that is openly accessible and usable by anyone. Common prerequisites include an open license, non-discrimination, and extensibility. Typically, anybody can participate in the development. There is no single definition, and interpretations vary with usage. The terms "open" and "standard" have a wide range of meanings associated with their usage. There are a number of definitions of open standards which emphasize different aspects of openness, including the openness of the resulting specification, the openness of the drafting process, and the ownership of rights in the standard. The term "standard" is sometimes restricted to technologies approved by formalized committees that are open to participation by all interested parties and operate on a consensus basis. The definitions of the term "open standard" used by academics, the European Union, and some of its member governments or parliaments such as Denmark, France, and Spain preclude open standards that require fees for use.


Nvidia
Nvidia Corporation (officially written as NVIDIA, and stylized in its logo as nVIDIA with the lowercase "n" the same height as the uppercase "VIDIA"; formerly stylized as nVIDIA with a large italicized lowercase "n" on products from the mid 1990s to early-mid 2000s; the unofficial second-letter capitalization nVidia may be found within enthusiast communities and publications) is an American multinational technology company incorporated in Delaware and based in Santa Clara, California. It is a software and fabless company which designs graphics processing units (GPUs) and application programming interfaces (APIs) for data science and high-performance computing, as well as system on a chip units (SoCs) for the mobile computing and automotive markets. Nvidia is a global leader in artificial intelligence hardware and software. Its professional line of GPUs is used in workstations for applications in such fields as architecture, engineering and construction, and media and entertainment.


Bump Mapping
Bump mapping is a texture mapping technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth one, although the surface of the underlying object is not changed. Bump mapping was introduced by James Blinn in 1978 (Blinn, James F., "Simulation of Wrinkled Surfaces", Computer Graphics, Vol. 12 (3), pp. 286-292, SIGGRAPH-ACM, August 1978). Normal mapping is the most common variation of bump mapping used.

Principles
Bump mapping is a technique in computer graphics to make a rendered surface look more realistic by simulating small displacements of the surface. However, unlike displacement mapping, the surface geometry is not modified. Instead only the surface normal is modified as if the surface had been displaced. The modified surface normal is then used for lighting calculations, giving the appearance of detail instead of a smooth surface.
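
The essential point is that only the shading input changes, not the geometry. As a minimal sketch in C (names are illustrative), here is a Lambertian diffuse term; feeding it the perturbed normal rather than the geometric face normal is what makes a flat triangle appear bumpy:

    /* Dot product of two 3-vectors. */
    static float dot3(const float a[3], const float b[3])
    {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /* Lambertian diffuse factor; both vectors assumed unit length. */
    static float diffuse(const float normal[3], const float to_light[3])
    {
        float d = dot3(normal, to_light);
        return d > 0.0f ? d : 0.0f;     /* clamp light from behind */
    }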


Geometric
Geometry is, with arithmetic, one of the oldest branches of mathematics. It is concerned with properties of space such as the distance, shape, size, and relative position of figures. A mathematician who works in the field of geometry is called a geometer. Until the 19th century, geometry was almost exclusively devoted to Euclidean geometry, which includes the notions of point, line, plane, distance, angle, surface, and curve as fundamental concepts. During the 19th century several discoveries enlarged dramatically the scope of geometry. One of the oldest such discoveries is Carl Friedrich Gauss's Theorema Egregium ("remarkable theorem"), which asserts roughly that the Gaussian curvature of a surface is independent from any specific embedding in a Euclidean space. This implies that surfaces can be studied intrinsically, that is, as stand-alone spaces, and has been expanded into the theory of manifolds and Riemannian geometry.


Texture Mapping
Texture mapping is a method for mapping a texture on a computer-generated graphic. Texture here can be high frequency detail, surface texture, or color.

History
The original technique was pioneered by Edwin Catmull in 1974. Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique (controlled by a materials system) have made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.

Texture maps
A texture map is an image applied (mapped) to the surface of a shape or polygon.
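
At its core, applying a texture map means converting a surface's UV coordinates into a texel fetch. A minimal nearest-neighbour sampler with wrap (repeat) addressing, as a sketch (type and function names are illustrative):

    #include <stdint.h>

    typedef struct {
        int width, height;
        const uint8_t *rgb;   /* width * height * 3 bytes, row-major */
    } Texture;

    /* Map UV in [0,1) onto integer texel coordinates, wrapping
     * out-of-range coordinates back into the texture. */
    static const uint8_t *sample_nearest(const Texture *t, float u, float v)
    {
        u -= (float)(int)u; if (u < 0.0f) u += 1.0f;  /* fractional part */
        v -= (float)(int)v; if (v < 0.0f) v += 1.0f;
        int x = (int)(u * t->width);
        int y = (int)(v * t->height);
        if (x >= t->width)  x = t->width - 1;   /* guard u == 1.0 */
        if (y >= t->height) y = t->height - 1;
        return &t->rgb[(y * t->width + x) * 3];
    }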


Data Compression
In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder. The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, for error detection and correction, or line coding, the means for mapping data onto a signal.
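
To make the lossless side concrete, here is run-length encoding, a minimal lossless scheme that removes the statistical redundancy of repeated bytes (an illustrative sketch, not a production codec):

    #include <stddef.h>
    #include <stdio.h>

    /* Encode as (count, value) pairs; returns the encoded length.
     * Fully reversible: no information is discarded. */
    static size_t rle_encode(const unsigned char *in, size_t n,
                             unsigned char *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            size_t run = 1;
            while (i + run < n && in[i + run] == in[i] && run < 255)
                run++;
            out[o++] = (unsigned char)run;  /* count */
            out[o++] = in[i];               /* value */
            i += run;
        }
        return o;
    }

    int main(void)
    {
        const unsigned char data[] = "aaaabbbcc";
        unsigned char enc[32];
        size_t len = rle_encode(data, sizeof data - 1, enc);
        printf("9 bytes -> %zu bytes\n", len);  /* prints 9 -> 6 */
        return 0;
    }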


Texture Compression
Texture compression is a specialized form of image compression designed for storing texture maps in 3D computer graphics rendering systems. Unlike conventional image compression algorithms, texture compression algorithms are optimized for random access.

Tradeoffs
In their seminal paper on texture compression, Beers, Agrawala and Chaddha list four features that tend to differentiate texture compression from other image compression techniques:

Decoding speed: It is highly desirable to be able to render directly from the compressed texture data and so, in order not to impact rendering performance, decompression must be fast.

Random access: Since predicting the order in which a renderer accesses texels would be difficult, any texture compression scheme must allow fast random access to decompressed texture data. This tends to rule out many better-known image compression schemes such as JPEG or run-length encoding.

Compression rate and visual quality: In a rendering system, lossy compression can be more tolerable than in other applications, which allows higher compression rates.

Encoding speed: Textures are typically compressed once, offline, so encoding speed matters far less than decoding speed.
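
The random-access requirement is the main reason most texture compression formats are fixed-rate: with a constant number of bytes per 4x4 texel block, the block containing any texel is found with closed-form address arithmetic rather than by decoding everything before it. A sketch, assuming an 8-byte block such as DXT1 (the helper name is illustrative):

    #include <stddef.h>

    /* Byte offset of the compressed block holding texel (x, y),
     * for a format storing block_bytes per 4x4 block. */
    static size_t block_offset(int texel_x, int texel_y,
                               int tex_width, size_t block_bytes)
    {
        int blocks_per_row = (tex_width + 3) / 4;   /* 4x4 texel blocks */
        int bx = texel_x / 4;
        int by = texel_y / 4;
        return ((size_t)by * blocks_per_row + bx) * block_bytes;
    }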