Deblocking Filter
A deblocking filter is a video filter applied to decoded compressed video to improve visual quality and prediction performance by smoothing the sharp edges that can form between macroblocks when block coding techniques are used. The filter aims to improve the appearance of decoded pictures. It is part of the specification for both the SMPTE VC-1 codec and the ITU-T H.264 (ISO MPEG-4 AVC) codec.

H.264 deblocking filter
In contrast with older MPEG-1/2/4 standards, the H.264 deblocking filter is not an optional additional feature in the decoder. It is a feature on both the decoding path and the encoding path, so that the in-loop effects of the filter are taken into account in the reference macroblocks used for prediction. When a stream is encoded, the filter strength can be selected, or the filter can be switched off entirely. Otherwise, the filter strength is determined by the coding modes of adjacent blocks, the quantization step size, and the steepness of the luminance gradient across the block boundary.
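The gradient-dependent behaviour can be illustrated with a minimal one-dimensional sketch in Python. This is not the normative H.264 filter: the `alpha` and `beta` thresholds below are invented stand-ins for the standard's QP-indexed tables, chosen only to show how filtering is skipped when the step across the edge looks like real image detail.

```python
import numpy as np

def deblock_edge_1d(p: np.ndarray, q: np.ndarray, qp: int):
    """Soften the boundary between pixel rows p (left block) and q
    (right block); p[0] and q[0] are the samples nearest the edge.

    Thresholds grow with the quantization parameter qp, mimicking the
    idea (not the exact tables) of H.264's alpha/beta thresholds.
    """
    alpha = 2 * qp          # hypothetical edge-activity threshold
    beta = qp               # hypothetical local-gradient threshold
    p, q = p.astype(float).copy(), q.astype(float).copy()
    # Filter only when the step across the edge is small enough to be
    # a coding artifact rather than a real edge in the picture.
    if (abs(p[0] - q[0]) < alpha
            and abs(p[1] - p[0]) < beta
            and abs(q[1] - q[0]) < beta):
        avg = (p[0] + q[0]) / 2
        p[0] = (p[0] + avg) / 2   # pull boundary samples toward their mean
        q[0] = (q[0] + avg) / 2
    return p, q

# A small quantization step across an otherwise flat area gets smoothed:
print(deblock_edge_1d(np.array([100, 101]), np.array([110, 109]), qp=12))
```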
Filter (video)
A video filter is a software component that performs some operation on a multimedia stream. Multiple filters can be used in a chain, known as a filter graph, in which each filter receives input from its upstream filter, processes the input, and outputs the processed video to its downstream filter. A minimal sketch of such a chain follows the lists below.

With regard to video encoding, three categories of filters can be distinguished:
* prefilters: used before encoding
* intrafilters: used while encoding (and thus an integral part of a video codec)
* postfilters: used after decoding

Prefilters
Common prefilters include:
* denoising
* resizing (upsampling, downsampling)
* contrast enhancement
* deinterlacing (used to convert interlaced video to progressive video)
* deflicking

Intrafilters
Common intrafilters include:
* deblocking

Postfilters
Common postfilters include:
* deinterlacing

Deinterlacing is the process of converting interlaced video into a non-interlaced, progressive-scan form.
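As a sketch of the filter-graph idea, the chain below is reduced to a plain Python list of callables run upstream to downstream. The filter names and their toy implementations are illustrative, not a real filtering API.

```python
import numpy as np

def denoise(frame: np.ndarray) -> np.ndarray:
    """Prefilter: a crude 3-tap horizontal blur standing in for denoising."""
    out = frame.astype(float).copy()
    out[:, 1:-1] = (frame[:, :-2] + 2 * frame[:, 1:-1] + frame[:, 2:]) / 4
    return out

def downsample(frame: np.ndarray) -> np.ndarray:
    """Prefilter: halve the resolution by keeping every other row/column."""
    return frame[::2, ::2]

filter_graph = [denoise, downsample]   # upstream -> downstream

frame = np.random.randint(0, 256, size=(16, 16)).astype(float)
for f in filter_graph:                 # each filter feeds the next
    frame = f(frame)
print(frame.shape)                     # (8, 8)
```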
Compressed Video
In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.

In the context of data transmission, data compression is called source coding: encoding is done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding (for error detection and correction) or line coding (the means for mapping data onto a signal).
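The lossless case can be made concrete with a toy coder. Run-length encoding, sketched below, removes the statistical redundancy of repeated symbols, and decoding restores the input exactly, illustrating "no information is lost".

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated symbols into (symbol, count) pairs."""
    runs: list[tuple[str, int]] = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Expand (symbol, count) pairs back into the original string."""
    return "".join(ch * n for ch, n in runs)

original = "aaaabbbcca"
assert rle_decode(rle_encode(original)) == original  # fully reversible
print(rle_encode(original))  # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
```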
Compression Artifact
A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired disk space or transmitted (streamed) within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality or the introduction of artifacts. The compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the user.

The most common digital compression artifacts are DCT blocks, caused by the discrete cosine transform (DCT) compression algorithm used in many digital media standards, such as the JPEG, MP3, and MPEG video formats.
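The following sketch shows where DCT block artifacts come from: coarsely quantizing the coefficients of each 8×8 block means neighbouring blocks are reconstructed independently from very little information and no longer line up at their shared boundary. It assumes SciPy is available; the quantization step value is a deliberately coarse, hypothetical choice.

```python
import numpy as np
from scipy.fft import dctn, idctn   # type-II DCT and its inverse

rng = np.random.default_rng(0)
x = np.linspace(0, 255, 16)
image = np.tile(x, (16, 1)) + rng.normal(0, 4, (16, 16))  # smooth ramp + noise

step = 120.0  # hypothetical, deliberately coarse quantization step
recon = np.empty_like(image)
for r in range(0, 16, 8):
    for c in range(0, 16, 8):
        coeffs = dctn(image[r:r + 8, c:c + 8], norm="ortho")
        coeffs = np.round(coeffs / step) * step        # quantize / dequantize
        recon[r:r + 8, c:c + 8] = idctn(coeffs, norm="ortho")

# In the original the jump between columns 7 and 8 is just the ramp slope;
# after block-wise quantization a visible discontinuity typically appears.
print(np.abs(image[:, 7] - image[:, 8]).mean())
print(np.abs(recon[:, 7] - recon[:, 8]).mean())
```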
Macroblock
The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats based on macroblocks include JPEG (where they are called MCU blocks), H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, and H.264/MPEG-4 AVC. In H.265/HEVC, the macroblock as the basic processing unit has been replaced by the coding tree unit.

Technical details
Transform blocks
A macroblock is divided into transform blocks, which serve as input to the linear block transform, e.g. the DCT. In H.261, the first video codec to use macroblocks, transform blocks have a fixed size of 8×8 samples. In the YCbCr color space with 4:2:0 chroma subsampling, a 16×16 macroblock consists of 16×16 luma (Y) samples and 8×8 chroma (Cb and Cr) samples.
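The sample layout just described can be checked with a short sketch: a 4:2:0 macroblock carries 16×16 luma plus two 8×8 chroma planes, and its luma plane splits into four 8×8 transform blocks. The helper below is illustrative, not part of any codec API.

```python
import numpy as np

luma = np.zeros((16, 16))   # Y: full resolution
cb = np.zeros((8, 8))       # Cb: halved horizontally and vertically (4:2:0)
cr = np.zeros((8, 8))       # Cr: likewise
print(luma.size + cb.size + cr.size)   # 384 samples per macroblock

def transform_blocks(mb: np.ndarray, n: int = 8):
    """Yield the n x n transform blocks of a macroblock in raster order."""
    for r in range(0, mb.shape[0], n):
        for c in range(0, mb.shape[1], n):
            yield mb[r:r + n, c:c + n]

print(sum(1 for _ in transform_blocks(luma)))  # 4 luma transform blocks
```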
Discrete Cosine Transform
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF), digital video (such as MPEG), digital audio (such as Dolby Digital, MP3, and AAC), digital television (such as SDTV, HDTV, and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren, and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.

A DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers.
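A direct implementation of the most common variant, the DCT-II, makes the definition concrete. This naive O(N²) sketch follows the unnormalized textbook formula in the docstring; production codecs use fast factorizations instead.

```python
import numpy as np

def dct_ii(x: np.ndarray) -> np.ndarray:
    """Naive DCT-II: X_k = sum_{n=0}^{N-1} x_n * cos(pi/N * (n + 1/2) * k)."""
    N = len(x)
    n = np.arange(N)
    return np.array(
        [np.sum(x * np.cos(np.pi / N * (n + 0.5) * k)) for k in range(N)]
    )

# A slowly varying signal compacts into the low-frequency coefficients,
# which is why the DCT is effective for compression.
x = np.cos(np.linspace(0, np.pi, 8))
print(np.round(dct_ii(x), 3))   # energy concentrated in the first entries
```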
VC-1
SMPTE 421, informally known as VC-1, is a video coding format. Most of it was initially developed as Microsoft's proprietary video format Windows Media Video 9 in 2003. With some enhancements, including the development of a new Advanced Profile, it was officially approved as an SMPTE standard on April 3, 2006. It was primarily marketed as a lower-complexity competitor to the H.264/MPEG-4 AVC standard. After its development, several companies other than Microsoft asserted that they held patents that applied to the technology, including Panasonic, LG Electronics, and Samsung Electronics. VC-1 is supported in the now-deprecated Microsoft Silverlight, the briefly-offered HD DVD disc format, and the Blu-ray Disc format.

Format
VC-1 is an evolution of the conventional block-based, motion-compensated hybrid video coding design also found in H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, and MPEG-4 Part 2. It was widely characterized as an alternative to the ITU-T and MPEG video coding standards.
Video
Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems, which were in turn replaced by flat-panel displays of several types. Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities, and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcasts, magnetic tape, optical discs, computer files, and network streaming.

Etymology
The word video comes from the Latin video, the first-person form of the verb videre, "to see". As a noun, it came to mean "that which is displayed on a (television) screen".

History
Analog video
Video developed from facsimile systems developed in the mid-19th century. Early mechanical video scanners, such as the Nipkow disk, were patented as early as 1884; however, it took several decades before practical video systems emerged.
MPEG-4 Part 2
MPEG-4 Part 2, MPEG-4 Visual (formally ISO/IEC 14496-2), is a video encoding specification designed by the Moving Picture Experts Group (MPEG). It belongs to the MPEG-4 ISO/IEC family of encoders. It uses block-wise motion compensation and a discrete cosine transform (DCT), similar to previous encoders such as MPEG-1 Part 2 and H.262/MPEG-2 Part 2. Examples of popular implementations of the encoder specification include DivX, Xvid, and Nero Digital.

MPEG-4 Part 2 is H.263-compatible in the sense that a basic H.263 bitstream is correctly decoded by an MPEG-4 video decoder (an MPEG-4 video decoder is natively capable of decoding a basic form of H.263). In MPEG-4 Visual, there are two types of video object layers: the video object layer that provides full MPEG-4 functionality, and a reduced-functionality video object layer, the video object layer with short headers (which provides bitstream compatibility with baseline H.263).
Quantization (signal processing)
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.

The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error, noise, or distortion. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer.

Example
For example, rounding a real number x to the nearest integer value forms a very basic type of quantizer.
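The rounding example extends naturally to a uniform quantizer with an arbitrary step size, sketched below. With step size s, the quantization error of such a mid-tread quantizer is bounded by ±s/2.

```python
def quantize(x: float, step: float) -> float:
    """Mid-tread uniform quantizer: map x to the nearest multiple of step."""
    return step * round(x / step)

x = 3.7
q = quantize(x, 1.0)        # step = 1.0 reduces to rounding: 4.0
error = x - q               # quantization error/noise, here about -0.3
print(q, error)

# A coarser step loses more information but needs fewer output levels:
print(quantize(x, 2.0))     # 4.0, from the smaller set {..., 2, 4, 6, ...}
```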
Luma (video)
In video, luma (Y′) represents the brightness in an image (the "black-and-white" or achromatic portion of the image). Luma is typically paired with chrominance: luma represents the achromatic (grey) image, while the chroma components represent the color information. Converting R′G′B′ sources (such as the output of a three-CCD camera) into luma and chroma allows for chroma subsampling: because human vision has finer spatial sensitivity to luminance ("black and white") differences than to chromatic differences, video systems can store and transmit chromatic information at lower resolution, optimizing perceived detail at a particular bandwidth.

Luma versus relative luminance
Luma is the weighted sum of gamma-compressed R′G′B′ components of a color video; the prime symbols (′) denote gamma compression. The word was proposed to prevent confusion between luma as implemented in video engineering and relative luminance as used in color science.
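The "weighted sum" can be written out directly. The sketch below uses the Rec. 601 weights (0.299, 0.587, 0.114); Rec. 709 defines different weights (0.2126, 0.7152, 0.0722) for the same construction.

```python
def luma_rec601(r: float, g: float, b: float) -> float:
    """Y' as the Rec. 601 weighted sum of gamma-compressed R'G'B' in [0, 1].

    The weights reflect the eye's differing sensitivity to the primaries.
    """
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma_rec601(1.0, 1.0, 1.0))  # white -> 1.0 (weights sum to one)
print(luma_rec601(0.0, 1.0, 0.0))  # pure green contributes most luma: 0.587
```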
Chrominance
Chrominance (chroma or C for short) is the signal used in video systems to convey the color information of the picture (see YUV color model), separately from the accompanying luma signal (Y′ for short). Chrominance is usually represented as two color-difference components: U = B′ − Y′ (blue minus luma) and V = R′ − Y′ (red minus luma). Each of these components may have scale factors and offsets applied to it, as specified by the applicable video standard. In composite video signals, the U and V signals modulate a color subcarrier signal, and the result is referred to as the chrominance signal; the phase and amplitude of this modulated chrominance signal correspond approximately to the hue and saturation of the color. In digital-video and still-image color spaces such as Y′CbCr, the luma and chrominance components are digitally encoded.
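The color-difference construction is short enough to sketch directly. This computes the raw differences B′ − Y′ and R′ − Y′ using the Rec. 601 luma weights; real standards then apply the scale factors and offsets mentioned above (e.g. to form Cb and Cr in Y′CbCr).

```python
def chroma_components(r: float, g: float, b: float) -> tuple[float, float]:
    """Raw color-difference signals from gamma-compressed R'G'B' in [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma
    u = b - y                                # U: blue minus luma
    v = r - y                                # V: red minus luma
    return u, v

# Any grey (R' = G' = B') carries no color information, so chroma is zero:
print(chroma_components(0.5, 0.5, 0.5))   # (0.0, 0.0)
print(chroma_components(1.0, 0.0, 0.0))   # pure red: negative U, large V
```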
Reference Frame (video)
Reference frames are frames of a compressed video that are used to define future frames. As such, they are only used in inter-frame compression techniques. In older video encoding standards, such as MPEG-2, only one reference frame (the previous frame) was used for P-frames, and two reference frames (one past and one future) were used for B-frames.

Multiple reference frames
Some modern video encoding standards, such as H.264/AVC, allow the use of multiple reference frames. This lets the video encoder choose among more than one previously decoded frame on which to base each macroblock in the next frame. While the best frame for this purpose is usually the previous frame, the extra reference frames can improve compression efficiency and/or video quality. Note that different reference frames can be chosen for different macroblocks in the same frame. The maximum number of concurrent reference frames supported by H.264 is 16.
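The per-macroblock choice can be sketched as a simple cost comparison. This is illustrative only: it compares co-located blocks by sum of absolute differences (SAD), whereas a real encoder also searches spatial displacements (motion vectors) within each reference frame.

```python
import numpy as np

def best_reference(block: np.ndarray, refs: list[np.ndarray]) -> int:
    """Pick the reference frame whose co-located block predicts `block`
    best, by sum of absolute differences (lower SAD = better prediction)."""
    sads = [np.abs(block - ref).sum() for ref in refs]
    return int(np.argmin(sads))

rng = np.random.default_rng(1)
refs = [rng.integers(0, 256, (16, 16)) for _ in range(3)]   # H.264 allows up to 16
block = refs[2] + rng.integers(-2, 3, (16, 16))             # nearly matches ref 2
print(best_reference(block, refs))                          # -> 2
```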