Variance Adaptive Quantization
Variance Adaptive Quantization (VAQ) is a video encoding algorithm that was first introduced in the open-source video encoder x264. According to the Xvid Builds FAQ: "It's an algorithm that tries to optimally choose a quantizer for each macroblock using advanced math algorithms." It was later ported to programs which encode video content in other video standards, such as MPEG-4 ASP or MPEG-2. In the case of Xvid, the algorithm is intended to make up for the earlier limitations of its Adaptive Quantization mode. The first Xvid library containing this improvement was released in February 2008.
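The idea is straightforward to sketch: blocks with little texture (low variance) show quantization artifacts most readily, so they receive a lower, finer quantizer, while busy high-variance blocks can absorb a coarser one. The C sketch below is a minimal illustration of that mapping, not x264's or Xvid's actual implementation; the block size, the log2-based mapping, the reference variance, and the strength constant are all assumptions chosen for clarity.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define MB_SIZE 16  /* macroblock width/height in luma samples */

/* Variance of one 16x16 luma macroblock. */
static double mb_variance(const uint8_t *luma, int stride)
{
    long sum = 0, sum_sq = 0;
    for (int y = 0; y < MB_SIZE; y++)
        for (int x = 0; x < MB_SIZE; x++) {
            int p = luma[y * stride + x];
            sum += p;
            sum_sq += p * p;
        }
    int n = MB_SIZE * MB_SIZE;
    double mean = (double)sum / n;
    return (double)sum_sq / n - mean * mean;
}

/* Map a block's variance to a QP offset around the frame-level QP:
 * low-variance (flat) blocks get a negative offset (finer quantization),
 * high-variance (textured) blocks a positive one. The reference
 * variance and the strength are illustrative constants. */
static int aq_qp_offset(double variance, double strength)
{
    const double ref_log2_var = 10.0;   /* assumed "neutral" variance */
    double d = log2(variance + 1.0) - ref_log2_var;
    int offset = (int)lround(strength * d);
    if (offset < -6) offset = -6;       /* clamp to a sane range */
    if (offset >  6) offset =  6;
    return offset;
}

int main(void)
{
    uint8_t flat[MB_SIZE * MB_SIZE];
    for (int i = 0; i < MB_SIZE * MB_SIZE; i++)
        flat[i] = (uint8_t)(128 + i % 3);   /* nearly flat test block */

    double var = mb_variance(flat, MB_SIZE);
    printf("variance = %.2f, qp offset = %+d\n", var, aq_qp_offset(var, 1.0));
    return 0;
}
```

A flat block like the one above gets a strongly negative offset, i.e. it is quantized more finely than the frame average, which is the intended behavior.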


x264
x264 is a free and open-source software library and a command-line utility developed by VideoLAN for encoding video streams into the H.264/MPEG-4 AVC video coding format. It is released under the terms of the GNU General Public License.

History
x264 was originally developed by Laurent Aimar, who stopped development in 2004 after being hired by ATEME. Loren Merritt then took over development. Later, in 2008, Fiona Glaser joined the project. They both stopped contributing in 2014. Today, x264 is primarily developed by Anton Mitrofanov and Henrik Gramner.

Capabilities
x264 provides a command-line interface as well as an API. The former is used by many graphical front-ends, such as Staxrip and MeGUI; the latter is used by many other programs, such as HandBrake and FFmpeg. x264 implements a large number of features compared to other H.264 encoders, and contains some psychovisual enhancements which aim to increase the subjective quality of the encoded video. * Adapti ...
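As a concrete picture of the API route, the sketch below drives libx264 to encode a few synthetic gray frames with variance adaptive quantization enabled (X264_AQ_VARIANCE). It follows the general structure of the example program shipped with x264, but error handling is trimmed, and the resolution, frame count, and CRF value are arbitrary choices.

```c
/* build: cc demo.c -lx264 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x264.h>

int main(void)
{
    int width = 640, height = 360;

    x264_param_t param;
    x264_param_default_preset(&param, "medium", NULL);
    param.i_width  = width;
    param.i_height = height;
    param.i_csp    = X264_CSP_I420;
    param.rc.i_rc_method   = X264_RC_CRF;
    param.rc.f_rf_constant = 23;
    param.rc.i_aq_mode     = X264_AQ_VARIANCE; /* variance adaptive quantization */
    param.rc.f_aq_strength = 1.0f;
    x264_param_apply_profile(&param, "high");

    x264_t *enc = x264_encoder_open(&param);
    x264_picture_t pic, pic_out;
    x264_picture_alloc(&pic, X264_CSP_I420, width, height);

    FILE *out = fopen("out.264", "wb");
    x264_nal_t *nal;
    int i_nal;

    for (int i = 0; i < 25; i++) {
        /* Fill the planes with a flat mid-gray test pattern. */
        memset(pic.img.plane[0], 128, width * height);      /* Y  */
        memset(pic.img.plane[1], 128, width * height / 4);  /* Cb */
        memset(pic.img.plane[2], 128, width * height / 4);  /* Cr */
        pic.i_pts = i;

        int size = x264_encoder_encode(enc, &nal, &i_nal, &pic, &pic_out);
        if (size > 0)
            fwrite(nal[0].p_payload, 1, size, out);
    }
    /* Flush frames still buffered in the encoder's lookahead. */
    while (x264_encoder_delayed_frames(enc)) {
        int size = x264_encoder_encode(enc, &nal, &i_nal, NULL, &pic_out);
        if (size > 0)
            fwrite(nal[0].p_payload, 1, size, out);
    }

    fclose(out);
    x264_picture_clean(&pic);
    x264_encoder_close(enc);
    return 0;
}
```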



Quantization (signal processing)
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms. The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer.

Example
For example, rounding a real number x to the nearest integer value forms a very basic type of quantizer, a "uniform" one. A typical ("mid-tread") uni ...
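To make the uniform mid-tread case concrete: such a quantizer computes Q(x) = delta * round(x / delta) for a step size delta. The short C sketch below applies it to a few sample values and reports the quantization error; the step size 0.5 and the inputs are arbitrary choices for illustration.

```c
#include <math.h>
#include <stdio.h>

/* Uniform mid-tread quantizer: Q(x) = delta * round(x / delta).
 * "Mid-tread" because zero sits in the middle of a quantization step,
 * so small inputs around zero map to exactly 0. */
static double quantize(double x, double delta)
{
    return delta * round(x / delta);
}

int main(void)
{
    const double delta = 0.5;   /* illustrative step size */
    const double inputs[] = { 0.1, 0.26, -0.9, 3.14159 };

    for (int i = 0; i < 4; i++) {
        double q = quantize(inputs[i], delta);
        /* Quantization error = input minus its quantized value. */
        printf("x = %8.5f  Q(x) = %5.2f  error = %+8.5f\n",
               inputs[i], q, inputs[i] - q);
    }
    return 0;
}
```

With delta = 1 this reduces to plain rounding to the nearest integer, the basic example the excerpt mentions.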


Macroblock
The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats which are based on macroblocks include JPEG, where they are called MCU blocks, H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, and H.264/MPEG-4 AVC. In H.265/HEVC, the macroblock as a basic processing unit has been replaced by the coding tree unit.

Technical details
Transform blocks
A macroblock is divided into transform blocks, which serve as input to the linear block transform, e.g. the DCT. In H.261, the first video codec to use macroblocks, transform blocks have a fixed size of 8×8 samples. In the YCbCr color space with 4:2:0 chroma subsampling, a 16×16 macroblock consists of 16×16 luma (Y) samples and 8×8 chroma (Cb and Cr) samples. These ...
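The subdivision is easy to picture in code. The sketch below copies one of the four 8×8 transform blocks out of a 16×16 luma macroblock in a raster-order frame; the frame size and the gradient test pattern are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MB 16  /* macroblock size in luma samples */
#define TB  8  /* transform block size, as in H.261 through MPEG-4 Part 2 */

/* Copy one TBxTB transform block out of a raster-order luma plane.
 * (bx, by) select which of the four 8x8 blocks inside the 16x16
 * macroblock at macroblock coordinates (mb_x, mb_y) to extract. */
static void get_transform_block(const uint8_t *luma, int stride,
                                int mb_x, int mb_y, int bx, int by,
                                uint8_t block[TB][TB])
{
    const uint8_t *src = luma + (mb_y * MB + by * TB) * stride
                              + (mb_x * MB + bx * TB);
    for (int y = 0; y < TB; y++)
        memcpy(block[y], src + y * stride, TB);
}

int main(void)
{
    enum { W = 32, H = 32 };            /* tiny 2x2-macroblock frame */
    static uint8_t luma[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            luma[y][x] = (uint8_t)(x + y);  /* gradient test pattern */

    uint8_t block[TB][TB];
    /* Extract the bottom-right 8x8 block of macroblock (0, 0). */
    get_transform_block(&luma[0][0], W, 0, 0, 1, 1, block);
    printf("block[0][0] = %d (expect %d)\n", block[0][0], 8 + 8);
    return 0;
}
```

In a 4:2:0 frame the same extraction applies to the chroma planes, except that each macroblock carries only a single 8×8 block per chroma component.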


MPEG-4 ASP
MPEG-4 Part 2, MPEG-4 Visual (formally ISO/IEC 14496-2) is a video compression format developed by the Moving Picture Experts Group (MPEG). It belongs to the MPEG-4 ISO/IEC standards. It uses block-wise motion compensation and a discrete cosine transform (DCT), similar to previous standards such as MPEG-1 Part 2 and H.262/MPEG-2 Part 2. Several popular codecs including DivX, Xvid, and Nero Digital implement this standard. Note that MPEG-4 Part 10 defines a different format from MPEG-4 Part 2 and should not be confused with it. MPEG-4 Part 10 is commonly referred to as H.264 or AVC, and was jointly developed by ITU-T and MPEG. MPEG-4 Part 2 is H.263 compatible in the sense that a basic H.263 bitstream is correctly decoded by an MPEG-4 Video decoder. (MPEG-4 Video decoder is natively capable of decoding a basic form of H.263.) In MPEG-4 Visual, there are two types of video object layers: the video object layer that provides full MPEG-4 functionality, and a reduced functionali ...
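The block-wise DCT mentioned above is the workhorse of these formats: each 8×8 block of samples is mapped to 8×8 frequency coefficients, which concentrate the energy of smooth image content into a few low-frequency values. The following C sketch is a straightforward, unoptimized 2-D DCT-II of one 8×8 block, written for clarity; it does not match any codec's exact, optimized transform.

```c
#include <math.h>
#include <stdio.h>

#define N 8
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Naive 8x8 forward DCT-II with the usual JPEG/MPEG-style scaling. */
static void dct8x8(const double in[N][N], double out[N][N])
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double sum = 0.0;
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    sum += in[i][j]
                         * cos((2 * i + 1) * u * M_PI / (2.0 * N))
                         * cos((2 * j + 1) * v * M_PI / (2.0 * N));
            out[u][v] = 0.25 * cu * cv * sum;
        }
}

int main(void)
{
    double block[N][N], coef[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            block[i][j] = 128.0;  /* flat block: all energy goes to DC */

    dct8x8(block, coef);
    /* For a flat block only the DC coefficient is nonzero: 8 * 128 = 1024. */
    printf("DC = %.1f, AC[0][1] = %.6f\n", coef[0][0], coef[0][1]);
    return 0;
}
```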




MPEG-2
MPEG-2 (a.k.a. H.222/H.262 as defined by the ITU) is a standard for "the generic coding of moving pictures and associated audio information". It describes a combination of lossy video compression and lossy audio data compression methods, which permit storage and transmission of movies using currently available storage media and transmission bandwidth. While MPEG-2 is not as efficient as newer standards such as H.264/AVC and H.265/HEVC, backwards compatibility with existing hardware and software means it is still widely used, for example in over-the-air digital television broadcasting and in the DVD-Video standard.

Main characteristics
MPEG-2 is widely used as the format of digital television signals that are broadcast by terrestrial (over-the-air), cable, and direct broadcast satellite TV systems. It also specifies the format of movies and other programs th ...


Xvid
Xvid (formerly "XviD") is a video codec library following the MPEG-4 video coding standard, specifically MPEG-4 Part 2 Advanced Simple Profile (ASP). It uses ASP features such as B-frames, global and quarter-pixel motion compensation, lumi masking, trellis quantization, and H.263, MPEG and custom quantization matrices. Xvid is a primary competitor of the DivX Pro Codec. In contrast with the DivX codec, which is proprietary software developed by DivX, Inc., Xvid is free software distributed under the terms of the GNU General Public License. This also means that, unlike the DivX codec, which is only available for a limited number of platforms, Xvid can be used on all platforms and operating systems for which the source code can be compiled.

History
In January 2001, DivXNetworks founded OpenDivX as part of Project Mayo, which was intended to be a home for open-source multimedia projects. OpenDivX was an open-source MPEG-4 video codec based on a stripped-down version of ...
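Of the features listed, the quantization matrices are the easiest to show in miniature: each 8×8 DCT coefficient is divided by a per-position weight, so high frequencies, which the eye tolerates losing, are quantized more coarsely than low ones. The C sketch below applies an MPEG-style intra matrix to a coefficient block. The weights are the commonly cited MPEG default intra matrix, but the surrounding scaling is simplified and is not Xvid's exact fixed-point arithmetic.

```c
#include <stdio.h>

#define N 8

/* Commonly cited MPEG default intra quantization matrix (raster order).
 * Low-frequency positions (top-left) get small weights, i.e. finer
 * quantization; high frequencies get large weights. */
static const int intra_matrix[N][N] = {
    {  8, 16, 19, 22, 26, 27, 29, 34 },
    { 16, 16, 22, 24, 27, 29, 34, 37 },
    { 19, 22, 26, 27, 29, 34, 34, 38 },
    { 22, 22, 26, 27, 29, 34, 37, 40 },
    { 22, 26, 27, 29, 32, 35, 40, 48 },
    { 26, 27, 29, 32, 35, 40, 48, 58 },
    { 26, 27, 29, 34, 38, 46, 56, 69 },
    { 27, 29, 35, 38, 46, 56, 69, 83 },
};

/* Simplified matrix quantization: divide each DCT coefficient by its
 * per-position weight scaled by the block quantizer. Real codecs use
 * fixed-point arithmetic and extra rounding rules; this is a sketch. */
static void quantize_block(const int coef[N][N], int out[N][N], int quant)
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++) {
            int div = intra_matrix[u][v] * quant / 8;
            out[u][v] = coef[u][v] / (div ? div : 1);
        }
}

int main(void)
{
    int coef[N][N] = { { 1024, 60 }, { 40 } };  /* mostly-zero test block */
    int levels[N][N];

    quantize_block(coef, levels, 4);
    printf("DC level = %d, AC(0,1) = %d, AC(1,0) = %d\n",
           levels[0][0], levels[0][1], levels[1][0]);
    return 0;
}
```

Swapping in a custom matrix, as Xvid allows, just means replacing the table above, which is how encoders tune where in the frequency spectrum they spend their bits.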



Signal Processing
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, and scientific measurements. Signal processing techniques are used to optimize transmissions, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal.

History
According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s. In 1948, Claude Shannon wrote the influential paper "A Mathematical Theory of Communication", which was published in the Bell System Technical Journal. The paper laid the groundwork for later development of information c ...