Deblocking Filter
A deblocking filter is a video filter applied to decoded compressed video to improve visual quality and prediction performance by smoothing the sharp edges which can form between macroblocks when block coding techniques are used. The filter aims to improve the appearance of decoded pictures. It is a part of the specification for both the SMPTE VC-1 codec and the ITU H.264 (ISO MPEG-4 AVC) codec.
H.264 deblocking filter
In contrast with older MPEG-1/2/4 standards, the H.264 deblocking filter is not an optional additional feature in the decoder. It is a feature on both the decoding path and on the encoding path, so that the in-loop effects of the filter are taken into account in reference macroblocks used for prediction. When a stream is encoded, the filter strength can be selected, or the filter can be switched off entirely. Otherwise, the filter strength is determined by coding modes of adjacent blocks, quantization step size, and the steepness of the luminance gradient between blocks.
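The idea can be illustrated with a deliberately simplified sketch: smooth the luma samples on either side of a block boundary, scale the correction by a strength parameter, and skip edges whose luminance step is large enough to look like real picture content. The function name, threshold, and strength scaling below are assumptions made for the example, not the normative H.264 filter.

```python
import numpy as np

def deblock_vertical_edge(block_left, block_right, strength, threshold=8):
    """Toy 1-D deblocking across a vertical block boundary.

    block_left, block_right: 2-D luma arrays (e.g. 16x16) meeting at an edge.
    strength: 0 disables filtering; larger values smooth more (illustrative
    values, not H.264 boundary-strength values).
    threshold: skip rows where the step across the edge is large, so that
    genuine image edges are preserved rather than blurred.
    """
    if strength == 0:
        return block_left, block_right

    left = block_left.astype(np.int32)
    right = block_right.astype(np.int32)

    p0 = left[:, -1]    # samples just left of the edge
    q0 = right[:, 0]    # samples just right of the edge
    step = q0 - p0

    # Only filter where the discontinuity looks like a coding artifact,
    # i.e. the luminance step across the edge is small.
    mask = np.abs(step) < threshold
    delta = (step * strength) // 8          # portion of the step to remove

    left[:, -1] = np.where(mask, p0 + delta // 2, p0)
    right[:, 0] = np.where(mask, q0 - delta // 2, q0)
    return left.astype(np.uint8), right.astype(np.uint8)
```

The real H.264 filter makes this decision per edge segment from the boundary strength and from thresholds derived from the quantization parameter; the sketch only mirrors the overall idea of strength- and gradient-dependent smoothing.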
Filter (video)
A video filter is a software component that performs some operation on a multimedia stream. Multiple filters can be used in a chain, known as a ''filter graph'', in which each filter receives input from its upstream filter, processes the input and outputs the processed video to its downstream filter (a minimal chain of this kind is sketched after this entry). With regard to video encoding, three categories of filters can be distinguished:
* prefilters: used before encoding
* intrafilters: used while encoding (and are thus an integral part of a video codec)
* postfilters: used after decoding
Prefilters
Common ''prefilters'' include:
* denoising
* resizing (upsampling, downsampling)
* contrast enhancement
* deinterlacing (used to convert interlaced video to progressive video)
* deflicking (a filtering operation applied to brightness)
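As a rough illustration of the filter-graph idea, the sketch below chains a few frame-level prefilters so that each stage consumes the output of its upstream neighbour. The filter implementations and the frame representation (a NumPy array) are assumptions for the example, not any particular framework's API.

```python
import numpy as np

# Each "filter" is a function from frame to frame; a filter graph is then a
# chain of such functions applied in order (upstream -> downstream).

def denoise(frame):
    # Crude denoising: 3x3 box blur (an illustrative prefilter).
    padded = np.pad(frame, 1, mode="edge").astype(np.float32)
    h, w = frame.shape
    out = sum(padded[y:y + h, x:x + w] for y in range(3) for x in range(3)) / 9.0
    return out.astype(frame.dtype)

def downscale(frame):
    # Resizing by dropping every other sample (nearest-neighbour downsampling).
    return frame[::2, ::2]

def stretch_contrast(frame):
    # Linear contrast enhancement to the full 8-bit range.
    f = frame.astype(np.float32)
    lo, hi = f.min(), f.max()
    return (255.0 * (f - lo) / max(hi - lo, 1.0)).astype(np.uint8)

def run_filter_graph(frame, filters):
    for f in filters:            # each downstream filter sees the upstream output
        frame = f(frame)
    return frame

prefilters = [denoise, downscale, stretch_contrast]
frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
processed = run_filter_graph(frame, prefilters)
```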
Compressed Video
In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder. The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, for error detection and correction, or line coding, the means for mapping data onto a signal.
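To make the lossless/lossy distinction concrete, here is a minimal sketch: run-length encoding removes statistical redundancy and is perfectly reversible, while coarse quantization discards information and is not. Both functions are illustrative toys, not the coding tools of any particular standard.

```python
def rle_encode(data):
    """Lossless: runs of repeated symbols are stored once with a count."""
    pairs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        pairs.append((data[i], j - i))
        i = j
    return pairs

def rle_decode(pairs):
    return [symbol for symbol, count in pairs for _ in range(count)]

def quantize(samples, step=16):
    """Lossy: values are rounded to a coarser grid, so the exact originals
    cannot be recovered after decoding."""
    return [step * round(s / step) for s in samples]

original = [12, 12, 12, 12, 200, 200, 7]
assert rle_decode(rle_encode(original)) == original   # no information lost
print(quantize(original))   # values differ from the original: information discarded
```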
Compression Artifact
A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired disk space or transmitted (''streamed'') within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or introduction of artifacts. The compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the user. The most common digital compression artifacts are DCT blocks, caused by the discrete cosine transform (DCT) compression algorithm used in many digital media standards, such as JPEG, MP3, and MPEG video file formats. These compression artifacts appear when heavy compression is applied.
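The mechanism behind DCT block artifacts can be sketched directly: transform each 8×8 tile independently, quantize its coefficients, and transform back. The quantization step size q and the helper names are assumptions for the example (the frame size is also assumed to be a multiple of 8); real codecs use per-coefficient quantization matrices rather than a single step.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q):
    """Quantize the DCT coefficients of one 8x8 block with step size q.
    Larger q discards more high-frequency detail."""
    coeffs = dctn(block.astype(np.float32), norm="ortho")
    quantized = np.round(coeffs / q) * q          # the lossy step
    return idctn(quantized, norm="ortho")

def compress_image(image, q=64):
    """Apply the transform independently to every 8x8 tile, as DCT-based
    codecs do; discontinuities at tile borders are the blocking artifacts."""
    h, w = image.shape
    out = np.empty((h, w), dtype=np.float32)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            out[y:y + 8, x:x + 8] = compress_block(image[y:y + 8, x:x + 8], q)
    return np.clip(out, 0, 255).astype(np.uint8)
```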
Macroblock
The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, and is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats which are based on macroblocks include JPEG, where they are called MCU blocks, H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, and H.264/MPEG-4 AVC. In H.265/HEVC, the macroblock as a basic processing unit has been replaced by the coding tree unit.
Technical details
Transform blocks
A macroblock is divided into transform blocks, which serve as input to the linear block transform, e.g. the DCT. In H.261, the first video codec to use macroblocks, transform blocks have a fixed size of 8×8 samples. In the YCbCr color space with 4:2:0 chroma subsampling, a 16×16 macroblock consists of 16×16 luma (Y) samples and 8×8 chroma (Cb and Cr) samples.
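The sizes quoted above can be written out directly: the helper below extracts the transform blocks of one 16×16 macroblock from a 4:2:0 frame, giving four 8×8 luma blocks plus one 8×8 block each of Cb and Cr. The planar data layout and function name are assumptions made for the example.

```python
import numpy as np

def split_macroblock_420(y_plane, cb_plane, cr_plane, mb_x, mb_y):
    """Return the 8x8 transform blocks of one 16x16 macroblock (4:2:0).

    y_plane is full resolution; cb_plane and cr_plane are subsampled by 2
    in both directions, so one macroblock covers 16x16 luma samples and
    a single 8x8 block in each chroma plane.
    """
    y0, x0 = 16 * mb_y, 16 * mb_x
    luma = y_plane[y0:y0 + 16, x0:x0 + 16]

    # Four 8x8 luma transform blocks...
    luma_blocks = [luma[r:r + 8, c:c + 8] for r in (0, 8) for c in (0, 8)]

    # ...plus one 8x8 block per chroma plane (chroma is at half resolution).
    cy0, cx0 = 8 * mb_y, 8 * mb_x
    cb_block = cb_plane[cy0:cy0 + 8, cx0:cx0 + 8]
    cr_block = cr_plane[cy0:cy0 + 8, cx0:cx0 + 8]
    return luma_blocks, cb_block, cr_block

# A 64x64 frame in this layout holds a 4x4 grid of macroblocks.
y  = np.zeros((64, 64), dtype=np.uint8)
cb = np.zeros((32, 32), dtype=np.uint8)
cr = np.zeros((32, 32), dtype=np.uint8)
blocks, cb8, cr8 = split_macroblock_420(y, cb, cr, mb_x=2, mb_y=1)
assert len(blocks) == 4 and cb8.shape == (8, 8)
```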
VC-1
SMPTE 421, informally known as VC-1, is a video coding format. Most of it was initially developed as Microsoft's proprietary video format Windows Media Video 9 in 2003. With some enhancements including the development of a new Advanced Profile, it was officially approved as a SMPTE standard on April 3, 2006. It was primarily marketed as a lower-complexity competitor to the H.264/MPEG-4 AVC standard. After its development, several companies other than Microsoft asserted that they held patents that applied to the technology, including Panasonic, LG Electronics and Samsung Electronics. VC-1 is supported in the now-deprecated Microsoft Silverlight, the briefly-offered HD DVD disc format, and the Blu-ray Disc format.
Format
VC-1 is an evolution of the conventional block-based motion-compensated hybrid video coding design also found in H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, and MPEG-4 Part 2. It was widely characterized as an alternative to the ITU-T and MPEG video coding standards.
Video
Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) systems, which were later replaced by flat panel displays of several types. Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcast, magnetic tape, optical discs, computer files, and network streaming.
History
Analog video
Video technology was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Video was originally exclusively a live technology. Charles Ginsburg led an Ampex research team developing one of the first practical video tape recorders.
MPEG-4 Part 2
MPEG-4 Part 2, MPEG-4 Visual (formally ISO/IEC 14496-2) is a video compression format developed by the Moving Picture Experts Group (MPEG). It belongs to the MPEG-4 ISO/IEC standards. It uses block-wise motion compensation and a discrete cosine transform (DCT), similar to previous standards such as MPEG-1 Part 2 and H.262/MPEG-2 Part 2. Several popular codecs including DivX, Xvid, and Nero Digital implement this standard. Note that MPEG-4 Part 10 defines a different format from MPEG-4 Part 2 and should not be confused with it. MPEG-4 Part 10 is commonly referred to as H.264 or AVC, and was jointly developed by ITU-T and MPEG. MPEG-4 Part 2 is H.263 compatible in the sense that a basic H.263 bitstream is correctly decoded by an MPEG-4 Video decoder. (An MPEG-4 Video decoder is natively capable of decoding a basic form of H.263.) In MPEG-4 Visual, there are two types of video object layers: the video object layer that provides full MPEG-4 functionality, and a reduced-functionality video object layer.
Reference Frame (video)
''Reference frames'' are frames of a compressed video that are used to define future frames. As such, they are only used in inter-frame compression techniques. In older video encoding standards, such as MPEG-2, only one reference frame – the previous frame – was used for P-frames. Two reference frames (one past and one future) were used for B-frames.
Multiple reference frames
Some modern video encoding standards, such as H.264/AVC, allow the use of multiple reference frames. This allows the video encoder to choose among more than one previously decoded frame on which to base each macroblock in the next frame. While the best frame for this purpose is usually the previous frame, the extra reference frames can improve compression efficiency and/or video quality. Note that different reference frames can be chosen for different macroblocks in the same frame. The maximum number of concurrent reference frames supported by H.264 is 16.
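A hedged sketch of the per-macroblock choice: measure how well each available reference frame predicts the current macroblock and keep the cheapest one. The cost here is a plain co-located sum of absolute differences with no motion search, which is a large simplification of a real encoder's mode decision; all names are illustrative.

```python
import numpy as np

def best_reference_for_macroblock(current, references, mb_x, mb_y, size=16):
    """Pick, for one macroblock, the reference frame that predicts it best.

    current: the frame being encoded; references: previously decoded frames
    (H.264 allows up to 16). Uses co-located SAD only, i.e. no motion search.
    """
    y0, x0 = size * mb_y, size * mb_x
    block = current[y0:y0 + size, x0:x0 + size].astype(np.int32)

    costs = []
    for ref in references:
        candidate = ref[y0:y0 + size, x0:x0 + size].astype(np.int32)
        costs.append(int(np.abs(block - candidate).sum()))   # SAD cost

    best = int(np.argmin(costs))
    return best, costs[best]

# Different macroblocks in the same frame may pick different references.
refs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
cur = refs[1].copy()     # the current frame happens to resemble refs[1]
idx, cost = best_reference_for_macroblock(cur, refs, mb_x=0, mb_y=0)
```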
Video Processing
In electronics engineering, video processing is a particular case of signal processing, in particular image processing, which often employs video filters and where the input and output signals are video files or video streams. Video processing techniques are used in television sets, VCRs, DVDs, video codecs, video players, video scalers and other devices. For example, TV sets from different manufacturers commonly differ mainly in their design and video processing.
Video processor
Video processors are often combined with video scalers to create a video processor that improves the apparent definition of video signals. They perform the following tasks:
* deinterlacing
* aspect ratio control
* digital zoom and pan
* brightness/contrast/hue/saturation/sharpness/gamma adjustments
* frame rate conversion and inverse-telecine
* color point conversion (601 to 709 or 709 to 601)
* color space conversion (YPbPr/YCbCr to RGB or RGB to YPbPr/YCbCr) – see the sketch after this list
* mosquito noise reduction
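As an example of the color space conversion step listed above, the sketch below converts full-range 8-bit YCbCr to RGB with the BT.601 (JFIF) coefficients; studio-range (16–235) levels and the BT.709 matrix are omitted, and the function name is illustrative. Color point conversion between 601 and 709 has the same structure with different matrix coefficients.

```python
import numpy as np

def ycbcr601_to_rgb(y, cb, cr):
    """Convert full-range 8-bit YCbCr (BT.601 / JFIF coefficients) to RGB."""
    y  = y.astype(np.float32)
    cb = cb.astype(np.float32) - 128.0
    cr = cr.astype(np.float32) - 128.0

    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb

    rgb = np.stack([r, g, b], axis=-1)             # H x W x 3
    return np.clip(rgb, 0, 255).astype(np.uint8)   # clamp to the valid 8-bit range
```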