Color Quantization

Color Quantization
In computer graphics, color quantization or color image quantization is quantization applied to color spaces; it is a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image. Computer algorithms to perform color quantization on bitmaps have been studied since the 1970s. Color quantization is critical for displaying images with many colors on devices that can only display a limited number of colors, usually due to memory limitations, and it enables efficient compression of certain types of images. The name "color quantization" is primarily used in computer graphics research literature; in applications, terms such as ''optimized palette generation'', ''optimal palette generation'', or ''decreasing color depth'' are used. Some of these terms are misleading, as the palettes generated by standard algorithms are not necessarily the best possible.
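As a concrete illustration of the idea, and not of any particular published algorithm, the sketch below quantizes colors by snapping each RGB channel to a few evenly spaced levels; adaptive methods such as median cut, octree, or k-means clustering instead derive the palette from the image itself.

def quantize_uniform(pixels, levels=4):
    """Snap each 0-255 RGB channel to `levels` evenly spaced values."""
    step = 255 / (levels - 1)
    def snap(c):  # nearest allowed level for one channel
        return int(round(c / step) * step)
    return [(snap(r), snap(g), snap(b)) for (r, g, b) in pixels]

print(quantize_uniform([(13, 200, 97)]))  # -> [(0, 170, 85)] with 4 levels per channel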

Dither
Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD. A common use of dither is converting a grayscale image to black and white, such that the density of black dots in the new image approximates the average gray level in the original.

Etymology
The term ''dither'' was published in books on analog computation and hydraulically controlled guns shortly after World War II. Though he did not use the term ''dither'', the concept of dithering to reduce quantization patterns was first applied by Lawrence G. Roberts in his 1961 MIT master's thesis and 1962 article. By 1964 dither was being used in the modern sense described in this article. The technique was in use at least as early as 1915, though not under the name ''dither''.
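Converting a grayscale image to black and white as described above is usually done with error diffusion. The sketch below uses the Floyd-Steinberg weights, one standard scheme (the excerpt does not name a specific method): each pixel is thresholded to black or white, and the rounding error is pushed onto not-yet-visited neighbours so the local average gray level is preserved.

def floyd_steinberg(gray, width, height):
    """gray: flat row-major list of width*height values in 0..255."""
    img = [float(v) for v in gray]
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = img[i]
            new = 255.0 if old >= 128 else 0.0  # threshold to black/white
            img[i] = new
            err = old - new
            # diffuse the quantization error onto unvisited neighbours
            for dx, dy, w in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                              (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    img[ny * width + nx] += err * w
    return [int(v) for v in img]

print(floyd_steinberg([128] * 16, 4, 4))  # mid-gray patch: about half the pixels turn white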

Voronoi Diagram
In mathematics, a Voronoi diagram is a partition of a plane into regions close to each of a given set of objects. In the simplest case, these objects are just finitely many points in the plane (called seeds, sites, or generators). For each seed there is a corresponding region, called a Voronoi cell, consisting of all points of the plane closer to that seed than to any other. The Voronoi diagram of a set of points is dual to that set's Delaunay triangulation. The Voronoi diagram is named after mathematician Georgy Voronoy, and is also called a Voronoi tessellation, a Voronoi decomposition, a Voronoi partition, or a Dirichlet tessellation (after Peter Gustav Lejeune Dirichlet). Voronoi cells are also known as Thiessen polygons. Voronoi diagrams have practical and theoretical applications in many fields, mainly in science and technology, but also in visual art.
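Because a Voronoi cell is simply the set of points closer to one seed than to any other, cell membership can be computed directly by a nearest-seed query. A minimal sketch with made-up seed coordinates:

import math

def nearest_seed(p, seeds):
    """Index of the seed whose Voronoi cell contains point p."""
    return min(range(len(seeds)), key=lambda i: math.dist(p, seeds[i]))

seeds = [(1.0, 1.0), (4.0, 1.0), (2.5, 4.0)]
for p in [(0.0, 0.0), (5.0, 0.5), (2.5, 3.0)]:
    print(p, "is in the Voronoi cell of seed", nearest_seed(p, seeds))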

Color Cell Compression
Color Cell Compression is a lossy image compression algorithm developed by Campbell et al. in 1986, which can be considered an early forerunner of modern texture compression algorithms such as S3 Texture Compression and Adaptive Scalable Texture Compression. It is closely related to Block Truncation Coding, another lossy image compression algorithm that predates Color Cell Compression, in that it uses the dominant luminance of a block of pixels to partition said pixels into two representative colors. The primary difference between Block Truncation Coding and Color Cell Compression is that the former was designed to compress grayscale images and the latter was designed to compress color images. Also, Block Truncation Coding requires that the standard deviation of the colors of pixels in a block be computed in order to compress an image, whereas Color Cell Compression does not use the standard deviation. Both algorithms, though, can compress an image down to effectively 2 bits per pixel.
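A rough sketch of the partitioning step just described, not the full published codec: pixels in a block are split by comparing their luminance to the block mean, and each side of the split is represented by its average color. The 0.299/0.587/0.114 luma weights are a common convention assumed here, not a detail taken from the excerpt.

def ccc_block(pixels):
    """pixels: list of (r, g, b) tuples for one block."""
    def luma(p):
        return 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]
    mean = sum(luma(p) for p in pixels) / len(pixels)
    bitmap = [1 if luma(p) >= mean else 0 for p in pixels]  # 1 bit per pixel
    def avg(group):
        if not group:
            return (0, 0, 0)
        return tuple(sum(p[i] for p in group) // len(group) for i in range(3))
    low = avg([p for p, b in zip(pixels, bitmap) if b == 0])
    high = avg([p for p, b in zip(pixels, bitmap) if b == 1])
    return bitmap, low, high

block = [(250, 240, 230)] * 8 + [(20, 10, 0)] * 8  # hypothetical 4x4 block
print(ccc_block(block)[1:])  # -> ((20, 10, 0), (250, 240, 230))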

Block Truncation Coding
Block Truncation Coding (BTC) is a type of lossy image compression technique for greyscale images. It divides the original image into blocks and then uses a quantizer to reduce the number of grey levels in each block whilst maintaining the same mean and standard deviation. It is an early predecessor of the popular hardware DXTC technique, although the BTC compression method was first adapted to color long before DXTC, using a very similar approach called Color Cell Compression. BTC has also been adapted to video compression. BTC was first proposed by Professors Mitchell and Delp at Purdue University. Another variation of BTC is Absolute Moment Block Truncation Coding (AMBTC), in which the first absolute moment is preserved along with the mean instead of the standard deviation. AMBTC is computationally simpler than BTC and also typically results in a lower Mean Squared Error (MSE). AMBTC was proposed by Maximo Lema and Robert Mitchell. Using sub-blocks of 4×4 pixels gives a compression ratio of 4:1 for 8-bit greyscale images, since each 16-pixel block (128 bits) is reduced to a 16-bit bitmap plus two 8-bit reconstruction values (32 bits).
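A minimal sketch of BTC's per-block quantizer as described above: threshold the block at its mean, then pick the two output levels so that the reconstructed block preserves the original mean and standard deviation (the standard closed-form choice).

import math

def btc_block(block):
    """block: list of grey levels; returns (bitmap, low, high)."""
    n = len(block)
    mean = sum(block) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in block) / n)
    bitmap = [1 if v >= mean else 0 for v in block]
    q = sum(bitmap)  # number of pixels at or above the mean
    if q in (0, n):  # flat block: one level reproduces it exactly
        return bitmap, mean, mean
    low = mean - sigma * math.sqrt(q / (n - q))
    high = mean + sigma * math.sqrt((n - q) / q)
    return bitmap, low, high

print(btc_block([2, 4, 6, 8]))  # ([0, 0, 1, 1], 2.76..., 7.23...): mean 5 and std dev preserved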

Indexed Color
In computing, indexed color is a technique to manage digital images' colors in a limited fashion, in order to save computer memory and file storage, while speeding up display refresh and file transfers. It is a form of vector quantization compression. When an image is encoded in this way, color information is not directly carried by the image pixel data, but is stored in a separate piece of data called a color lookup table (CLUT) or palette: an array of color specifications. Every element in the array represents a color, indexed by its position within the array. Each image pixel does not contain the full specification of its color, but only its index into the ''palette''. This technique is sometimes referred to as pseudocolor or indirect color, as colors are addressed indirectly.

History
Early graphics display systems that used 8-bit indexed color with frame buffers and color lookup tables include Shoup's SuperPaint (1973) and the video frame buffer described in 1975 by Kajiya et al.
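A tiny sketch of the indirection described above, with a hypothetical four-entry palette: the pixel data holds only small indices, and display hardware (or a decoder) resolves them through the CLUT.

palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]  # the CLUT
indexed_pixels = [0, 1, 1, 3, 2, 0]                # one small index per pixel
rgb_pixels = [palette[i] for i in indexed_pixels]  # resolve for display
print(rgb_pixels)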

Web Colors
Web colors are colors used in displaying web pages on the World Wide Web, and the methods for describing and specifying those colors. Colors may be specified as an RGB triplet, in hexadecimal format (a ''hex triplet''), or, in some cases, according to their common English names. A color tool or other graphics software is often used to generate color values. In some uses, hexadecimal color codes are specified with notation using a leading number sign (#). A color is specified according to the intensity of its red, green and blue components, each represented by eight bits. Thus, there are 24 bits used to specify a web color within the sRGB gamut, and 16,777,216 colors that may be so specified. Colors outside the sRGB gamut can be specified in Cascading Style Sheets by making one or more of the red, green and blue components negative or greater than 100%, so the color space is theoretically an unbounded extrapolation of sRGB, similar to scRGB.
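For example, parsing a hex triplet back into its three 8-bit components is a direct unpacking of two hexadecimal digits per channel:

def parse_hex_color(s):
    """Parse "#RRGGBB" into an (r, g, b) tuple of 0-255 ints."""
    s = s.lstrip("#")
    return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))

print(parse_hex_color("#1E90FF"))  # -> (30, 144, 255)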

World Wide Web
The World Wide Web (WWW), commonly known as the Web, is an information system enabling documents and other web resources to be accessed over the Internet. Documents and downloadable media are made available to the network through web servers and can be accessed by programs such as web browsers. Servers and resources on the World Wide Web are identified and located through character strings called uniform resource locators (URLs). The original and still very common document type is a web page formatted in Hypertext Markup Language (HTML). This markup language supports plain text, images, embedded video and audio contents, and scripts (short programs) that implement complex user interaction. The HTML language also supports hyperlinks (embedded URLs) which provide immediate access to other web resources. Web navigation, or web surfing, is the common practice of following such hyperlinks across multiple websites. Web applications are web pages that function as application software.

Portable Network Graphics
Portable Network Graphics (PNG, officially pronounced "ping", though the initials are often simply spelled out) is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF); unofficially, the initials PNG stood for the recursive acronym "PNG's not GIF". PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without an alpha channel for transparency), and full-color non-palette-based RGB or RGBA images. The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics; therefore non-RGB color spaces such as CMYK are not supported. A PNG file contains a single image in an extensible structure of ''chunks'', encoding the basic pixels and other information such as textual comments and integrity checks documented in RFC 2083. PNG files use the file extension PNG or png and have the MIME media type image/png.
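The chunk structure lends itself to a simple walk: after the 8-byte signature, each chunk is a big-endian 4-byte length, a 4-byte type, the data itself, and a 4-byte CRC. A minimal sketch that lists chunk types (CRC checking omitted):

import struct

def list_chunks(data):
    """data: raw bytes of a PNG file; returns [(chunk_type, length), ...]."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos, chunks = 8, []
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8].decode("ascii")
        chunks.append((ctype, length))
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return chunks

# e.g. list_chunks(open("image.png", "rb").read()) typically starts with
# ('IHDR', 13) and ends with ('IEND', 0)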

University Of Bonn
The Rhenish Friedrich Wilhelm University of Bonn (German: Rheinische Friedrich-Wilhelms-Universität Bonn) is a public research university located in Bonn, North Rhine-Westphalia, Germany. It was founded in its present form as the Rhine University on 18 October 1818 by Frederick William III, as the linear successor of the Academy of the Prince-elector of Cologne, which was founded in 1777. The University of Bonn offers many undergraduate and graduate programs in a range of subjects and has 544 professors. It is a member of the German U15 association of major research-intensive universities and holds the title of "University of Excellence" under the German Universities Excellence Initiative; it is consistently ranked among the best German universities in world rankings and is one of the most research-intensive universities in Germany. Bonn has 6 Clusters of Excellence, the most of any German university.

Self-organizing Map
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables. These clusters could then be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze. An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network.
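A heavily reduced sketch of the competitive-learning update: for each sample, find the best-matching unit (BMU) and pull it, along with its grid neighbours, toward the sample. A real SOM uses a two-dimensional grid and a neighbourhood radius that shrinks over training; this one-dimensional toy with fixed immediate-neighbour coupling only illustrates the mechanism.

import random

def train_som(samples, n_units=4, epochs=30, lr0=0.5):
    """Train a toy 1-D SOM on scalar samples; returns the unit weights."""
    random.seed(0)
    weights = [random.random() for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)  # learning rate decays over time
        for x in samples:
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            # update the winner fully, its grid neighbours at half strength
            for i, factor in ((bmu - 1, 0.5), (bmu, 1.0), (bmu + 1, 0.5)):
                if 0 <= i < n_units:
                    weights[i] += lr * factor * (x - weights[i])
    return weights

print(train_som([0.0, 0.1, 0.9, 1.0]))  # units spread across the two clusters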

Image And Vision Computing
An image is a visual representation of something. It can be two-dimensional, three-dimensional, or somehow otherwise feed into the visual system to convey information. An image can be an artifact, such as a photograph or other two-dimensional picture, that resembles a subject. In the context of signal processing, an image is a distributed amplitude of color(s). In optics, the term "image" may refer specifically to a 2D image. An image does not have to use the entire visual system to be a visual representation. A popular example of this is a greyscale image, which uses the visual system's sensitivity to brightness across all wavelengths, without taking into account different colors. A black and white visual representation of something is still an image, even though it does not make full use of the visual system's capabilities. Images are typically still, but in some cases can be moving or animated.

K-means Clustering
''k''-means clustering is a method of vector quantization, originally from signal processing, that aims to partition ''n'' observations into ''k'' clusters in which each observation belongs to the cluster with the nearest mean (cluster center or cluster centroid), which serves as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. ''k''-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances. For instance, better Euclidean solutions can be found using k-medians and k-medoids. The problem is computationally difficult (NP-hard); however, efficient heuristic algorithms converge quickly to a local optimum. These are usually similar to the expectation-maximization algorithm for mixtures of Gaussian distributions, in that both use an iterative refinement approach.
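A compact sketch of the usual heuristic, Lloyd's algorithm: alternate between assigning every observation to its nearest mean and recomputing each mean, stopping when the means no longer move (a local optimum). Initialization here is naive (the first k points), chosen purely for brevity.

import math

def kmeans(points, k, iters=100):
    """points: list of equal-length tuples; returns the k cluster means."""
    means = points[:k]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step: nearest mean wins
            clusters[min(range(k), key=lambda i: math.dist(p, means[i]))].append(p)
        # update step: each mean moves to its cluster's centroid
        new_means = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else means[i]
                     for i, cl in enumerate(clusters)]
        if new_means == means:  # converged to a local optimum
            break
        means = new_means
    return means

print(kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], 2))  # -> [(0.0, 0.5), (10.0, 10.5)]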