H.264
H.264 or MPEG-4 Part 10, Advanced Video Coding (MPEG-4 AVC) is a block-oriented, motion-compensation-based video compression standard. As of 2014 it is one of the most commonly used formats for the recording, compression, and distribution of video content.[1] It supports resolutions up to 8192×4320, including 8K UHD.[2] The intent of the H.264/AVC project was to create a standard capable of providing good video quality at substantially lower bit rates than previous standards (i.e., half or less the bit rate of MPEG-2, H.263, or MPEG-4 Part 2), without increasing the complexity of design so much that it would be impractical or excessively expensive to implement. An additional goal was to provide enough flexibility to allow the standard to be applied to a wide variety of applications on a wide variety of networks and systems, including low and high bit rates, low- and high-resolution video, broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems.

The H.264 standard can be viewed as a "family of standards" composed of a number of different profiles. A specific decoder decodes at least one, but not necessarily all, profiles; the decoder specification describes which profiles can be decoded. H.264 is typically used for lossy compression, although it is also possible to create truly lossless-coded regions within lossy-coded pictures or to support rare use cases for which the entire encoding is lossless.

H.264 was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC JTC1 Moving Picture Experts Group (MPEG). The project partnership effort is known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 AVC standard (formally, ISO/IEC 14496-10 – MPEG-4 Part 10, Advanced Video Coding) are jointly maintained so that they have identical technical content. The final drafting work on the first version of the standard was completed in May 2003, and various extensions of its capabilities have been added in subsequent editions. High Efficiency Video Coding (HEVC), a.k.a. H.265 and MPEG-H Part 2, is a successor to H.264/MPEG-4 AVC developed by the same organizations, while earlier standards remain in common use.

H.264 is perhaps best known as being one of the video encoding standards for Blu-ray Discs; all Blu-ray Disc players must be able to decode H.264. It is also widely used by streaming Internet sources, such as videos from Vimeo, YouTube, and the iTunes Store, by Web software such as the Adobe Flash Player and Microsoft Silverlight, and by various HDTV broadcasts over terrestrial (Advanced Television Systems Committee standards, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S and DVB-S2) systems.

H.264 is protected by patents owned by various parties. A license covering most (but not all) patents essential to H.264 is administered by the patent pool MPEG LA.[3] Commercial use of patented H.264 technologies requires the payment of royalties to MPEG LA and other patent owners. MPEG LA has allowed the free use of H.264 technologies for streaming Internet video that is free to end users, and Cisco Systems pays royalties to MPEG LA on behalf of the users of binaries for its open source H.264 encoder.

Contents

1 Naming
2 History
  2.1 Versions
3 Applications
  3.1 Derived formats
4 Design
  4.1 Features
  4.2 Profiles
    4.2.1 Feature support in particular profiles
  4.3 Levels
  4.4 Decoded picture buffering
5 Implementations
  5.1 Software encoders
  5.2 Hardware
6 Licensing
7 See also
8 References
9 Further reading
10 External links

Naming

The H.264 name follows the ITU-T naming convention, where the standard is a member of the H.26x line of VCEG video coding standards; the MPEG-4 AVC name relates to the naming convention in ISO/IEC MPEG, where the standard is part 10 of ISO/IEC 14496, which is the suite of standards known as MPEG-4. The standard was developed jointly in a partnership of VCEG and MPEG, after earlier development work in the ITU-T as a VCEG project called H.26L. It is thus common to refer to the standard with names such as H.264/AVC, AVC/H.264, H.264/MPEG-4 AVC, or MPEG-4/H.264 AVC, to emphasize the common heritage. Occasionally, it is also referred to as "the JVT codec", in reference to the Joint Video Team (JVT) organization that developed it. (Such partnership and multiple naming is not uncommon. For example, the video compression standard known as MPEG-2 also arose from the partnership between MPEG and the ITU-T, where MPEG-2 video is known to the ITU-T community as H.262.[4]) Some software programs (such as VLC media player) internally identify this standard as AVC1.

History

In early 1998, the Video Coding Experts Group (VCEG – ITU-T SG16 Q.6) issued a call for proposals on a project called H.26L, with the target to double the coding efficiency (that is, to halve the bit rate necessary for a given level of fidelity) in comparison to any other existing video coding standard, for a broad variety of applications. VCEG was chaired by Gary Sullivan (Microsoft, formerly PictureTel, U.S.). The first draft design for the new standard was adopted in August 1999. In 2000, Thomas Wiegand (Heinrich Hertz Institute, Germany) became VCEG co-chair. In December 2001, VCEG and the Moving Picture Experts Group (MPEG – ISO/IEC JTC 1/SC 29/WG 11) formed a Joint Video Team (JVT), with the charter to finalize the video coding standard.[5] Formal approval of the specification came in March 2003. The JVT was (and is) chaired by Gary Sullivan, Thomas Wiegand, and Ajay Luthra (Motorola, U.S.; later Arris, U.S.). In June 2004, the Fidelity Range Extensions (FRExt) project was finalized. From January 2005 to November 2007, the JVT worked on an extension of H.264/AVC towards scalability by an Annex (G) called Scalable Video Coding (SVC); the JVT management team was extended by Jens-Rainer Ohm (Aachen University, Germany). From July 2006 to November 2009, the JVT worked on Multiview Video Coding (MVC), an extension of H.264/AVC towards free viewpoint television and 3D television. That work included the development of two new profiles of the standard: the Multiview High Profile and the Stereo High Profile.

The standardization of the first version of H.264/AVC was completed in May 2003. In the first project to extend the original standard, the JVT then developed what was called the Fidelity Range Extensions (FRExt). These extensions enabled higher-quality video coding by supporting increased sample bit depth precision and higher-resolution color information, including sampling structures known as Y'CbCr 4:2:2 (= YUV 4:2:2) and Y'CbCr 4:4:4. Several other features were also included in the Fidelity Range Extensions project, such as adaptive switching between 4×4 and 8×8 integer transforms, encoder-specified perceptual-based quantization weighting matrices, efficient inter-picture lossless coding, and support of additional color spaces. The design work on the Fidelity Range Extensions was completed in July 2004, and the drafting work on them was completed in September 2004. Further extensions of the standard then included adding five other new profiles intended primarily for professional applications, adding extended-gamut color space support, defining additional aspect ratio indicators, defining two additional types of "supplemental enhancement information" (post-filter hint and tone mapping), and deprecating one of the prior FRExt profiles that industry feedback indicated should have been designed differently.

The next major feature added to the standard was Scalable Video Coding (SVC). Specified in Annex G of H.264/AVC, SVC allows the construction of bitstreams that contain sub-bitstreams that also conform to the standard, including one such bitstream known as the "base layer" that can be decoded by an H.264/AVC codec that does not support SVC.
For temporal bitstream scalability (i.e., the presence of a sub-bitstream with a smaller temporal sampling rate than the main bitstream), complete access units are removed from the bitstream when deriving the sub-bitstream. In this case, high-level syntax and inter-prediction reference pictures in the bitstream are constructed accordingly. For spatial and quality bitstream scalability (i.e., the presence of a sub-bitstream with lower spatial resolution or quality than the main bitstream), NAL (Network Abstraction Layer) units are removed from the bitstream when deriving the sub-bitstream. In this case, inter-layer prediction (i.e., the prediction of the higher spatial resolution/quality signal from the data of the lower spatial resolution/quality signal) is typically used for efficient coding. The Scalable Video Coding extensions were completed in November 2007.

The next major feature added to the standard was Multiview Video Coding (MVC). Specified in Annex H of H.264/AVC, MVC enables the construction of bitstreams that represent more than one view of a video scene. An important example of this functionality is stereoscopic 3D video coding. Two profiles were developed in the MVC work: the Multiview High Profile supports an arbitrary number of views, and the Stereo High Profile is designed specifically for two-view stereoscopic video. The Multiview Video Coding extensions were completed in November 2009.

Versions

Versions of the H.264/AVC standard include the following completed revisions, corrigenda, and amendments (dates are final approval dates in ITU-T, while final "International Standard" approval dates in ISO/IEC are somewhat different and slightly later in most cases). Each version represents changes relative to the next lower version that are integrated into the text.

Version 1 (Edition 1): (May 30, 2003) First approved version of H.264/AVC containing Baseline, Main, and Extended profiles.[6]
Version 2 (Edition 1.1): (May 7, 2004) Corrigendum containing various minor corrections.[7]
Version 3 (Edition 2): (March 1, 2005) Major addition to H.264/AVC containing the first amendment, providing the Fidelity Range Extensions (FRExt): High, High 10, High 4:2:2, and High 4:4:4 profiles.[8]
Version 4 (Edition 2.1): (September 13, 2005) Corrigendum containing various minor corrections and adding three aspect ratio indicators.[9]
Version 5 (Edition 2.2): (June 13, 2006) Amendment consisting of removal of the prior High 4:4:4 profile (processed as a corrigendum in ISO/IEC).[10]
Version 6 (Edition 2.2): (June 13, 2006) Amendment consisting of minor extensions such as extended-gamut color space support (bundled with the above-mentioned aspect ratio indicators in ISO/IEC).[10]
Version 7 (Edition 2.3): (April 6, 2007) Amendment containing the addition of the High 4:4:4 Predictive profile and four Intra-only profiles (High 10 Intra, High 4:2:2 Intra, High 4:4:4 Intra, and CAVLC 4:4:4 Intra).[11]
Version 8 (Edition 3): (November 22, 2007) Major addition to H.264/AVC containing the amendment for Scalable Video Coding (SVC): Scalable Baseline, Scalable High, and Scalable High Intra profiles.[12]
Version 9 (Edition 3.1): (January 13, 2009) Corrigendum containing minor corrections.[13]
Version 10 (Edition 4): (March 16, 2009) Amendment containing definition of a new profile (the Constrained Baseline profile) with only the common subset of capabilities supported in various previously specified profiles.[14]
Version 11 (Edition 4): (March 16, 2009) Major addition to H.264/AVC containing the amendment for the Multiview Video Coding (MVC) extension, including the Multiview High profile.[14]
Version 12 (Edition 5): (March 9, 2010) Amendment containing definition of a new MVC profile (the Stereo High profile) for two-view video coding with support of interlaced coding tools, and specifying an additional SEI message (the frame packing arrangement SEI message).[15]
Version 13 (Edition 5): (March 9, 2010) Corrigendum containing minor corrections.[15]
Version 14 (Edition 6): (June 29, 2011) Amendment specifying a new level (Level 5.2) supporting higher processing rates in terms of maximum macroblocks per second, and a new profile (the Progressive High profile) supporting only the frame coding tools of the previously specified High profile.[16]
Version 15 (Edition 6): (June 29, 2011) Corrigendum containing minor corrections.[16]
Version 16 (Edition 7): (January 13, 2012) Amendment containing definition of three new profiles intended primarily for real-time communication applications: the Constrained High, Scalable Constrained Baseline, and Scalable Constrained High profiles.[17]
Version 17 (Edition 8): (April 13, 2013) Amendment with additional SEI message indicators.[18]
Version 18 (Edition 8): (April 13, 2013) Amendment to specify the coding of depth map data for 3D stereoscopic video, including a Multiview Depth High profile.[18]
Version 19 (Edition 8): (April 13, 2013) Corrigendum to correct an error in the sub-bitstream extraction process for multiview video.[18]
Version 20 (Edition 8): (April 13, 2013) Amendment to specify additional color space identifiers (including support of ITU-R Recommendation BT.2020 for UHDTV) and an additional model type in the tone mapping information SEI message.[18]
Version 21 (Edition 9): (February 13, 2014) Amendment to specify the Enhanced Multiview Depth High profile.[19]
Version 22 (Edition 9): (February 13, 2014) Amendment to specify the multi-resolution frame compatible (MFC) enhancement for 3D stereoscopic video, the MFC High profile, and minor corrections.[19]
Version 23 (Edition 10): (February 13, 2016) Amendment to specify MFC stereoscopic video with depth maps, the MFC Depth High profile, the mastering display color volume SEI message, and additional color-related video usability information codepoint identifiers.[20]
Version 24 (Edition 11): (October 14, 2016) Amendment to specify additional levels of decoder capability supporting larger picture sizes (Levels 6, 6.1, and 6.2), the green metadata SEI message, the alternative depth information SEI message, and additional color-related video usability information codepoint identifiers.[21]
Version 25 (Edition 12): (April 13, 2017) Amendment to specify the Progressive High 10 profile, Hybrid Log-Gamma (HLG), and additional color-related VUI code points and SEI messages.[22]

Applications

Further information: List of video services using H.264/MPEG-4 AVC

The H.264 video format has a very broad application range that covers all forms of digital compressed video, from low bit-rate Internet streaming applications to HDTV broadcast and Digital Cinema applications with nearly lossless coding. With the use of H.264, bit rate savings of 50% or more compared to MPEG-2 Part 2 are reported. For example, H.264 has been reported to give the same digital satellite TV quality as current MPEG-2 implementations at less than half the bit rate, with current MPEG-2 implementations working at around 3.5 Mbit/s and H.264 at only 1.5 Mbit/s.[23] Sony claims that the 9 Mbit/s AVC recording mode is equivalent to the image quality of the HDV format, which uses approximately 18–25 Mbit/s.[24]

To ensure compatibility and problem-free adoption of H.264/AVC, many standards bodies have amended or added to their video-related standards so that users of these standards can employ H.264/AVC. Both the Blu-ray Disc format and the now-discontinued HD DVD format include the H.264/AVC High Profile as one of three mandatory video compression formats. The Digital Video Broadcasting project (DVB) approved the use of H.264/AVC for broadcast television in late 2004. The Advanced Television Systems Committee (ATSC) standards body in the United States approved the use of H.264/AVC for broadcast television in July 2008, although the standard is not yet used for fixed ATSC broadcasts within the United States.[25][26] It has also been approved for use with the more recent ATSC-M/H (Mobile/Handheld) standard, using the AVC and SVC portions of H.264.[27] The CCTV (closed-circuit TV) and video surveillance markets have included the technology in many products. Many common DSLRs use H.264 video wrapped in QuickTime MOV containers as the native recording format.

Derived formats

AVCHD is a high-definition recording format designed by Sony and Panasonic that uses H.264 (conforming to H.264 while adding additional application-specific features and constraints). AVC-Intra is an intraframe-only compression format developed by Panasonic. XAVC is a recording format designed by Sony that uses Level 5.2 of H.264/MPEG-4 AVC, the highest level supported by that video standard.[28][29] XAVC can support 4K resolution (4096×2160 and 3840×2160) at up to 60 frames per second (fps).[28][29] Sony has announced that cameras that support XAVC include two CineAlta cameras: the Sony PMW-F55 and Sony PMW-F5.[30][31] The Sony PMW-F55 can record XAVC with 4K resolution at 30 fps at 300 Mbit/s and 2K resolution at 30 fps at 100 Mbit/s.[32] XAVC can record 4K resolution at 60 fps with 4:2:2 chroma subsampling at 600 Mbit/s.[33][34]

Design


Features

[Figure: Block diagram of H.264]

H.264/AVC/MPEG-4 Part 10 contains a number of new features that allow it to compress video much more efficiently than older standards and to provide more flexibility for application to a wide variety of network environments. In particular, key features include:

Multi-picture inter-picture prediction, including the following features:

- Using previously encoded pictures as references in a much more flexible way than in past standards, allowing up to 16 reference frames (or 32 reference fields, in the case of interlaced encoding) to be used in some cases. In profiles that support non-IDR frames, most levels specify that sufficient buffering should be available to allow for at least 4 or 5 reference frames at maximum resolution. This is in contrast to prior standards, where the limit was typically one, or, in the case of conventional "B pictures" (B-frames), two. This feature usually allows modest improvements in bit rate and quality in most scenes, but in certain types of scenes, such as those with repetitive motion, back-and-forth scene cuts, or uncovered background areas, it allows a significant reduction in bit rate while maintaining clarity.
- Variable block-size motion compensation (VBSMC) with block sizes as large as 16×16 and as small as 4×4, enabling precise segmentation of moving regions. The supported luma prediction block sizes include 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4, many of which can be used together in a single macroblock. Chroma prediction block sizes are correspondingly smaller according to the chroma subsampling in use.
- The ability to use multiple motion vectors per macroblock (one or two per partition), with a maximum of 32 in the case of a B macroblock constructed of 16 4×4 partitions. The motion vectors for each 8×8 or larger partition region can point to different reference pictures.
- The ability to use any macroblock type in B-frames, including I-macroblocks, resulting in much more efficient encoding when using B-frames. This feature was notably left out of MPEG-4 ASP.
- Six-tap filtering for derivation of half-pel luma sample predictions, for sharper subpixel motion compensation. Quarter-pixel motion is derived by linear interpolation of the half-pel values, to save processing power.
- Quarter-pixel precision for motion compensation, enabling precise description of the displacements of moving areas. For chroma, the resolution is typically halved both vertically and horizontally (see 4:2:0), so the motion compensation of chroma uses one-eighth-pixel grid units.
- Weighted prediction, allowing an encoder to specify the use of a scaling and offset when performing motion compensation, providing a significant benefit in performance in special cases such as fade-to-black, fade-in, and cross-fade transitions. This includes implicit weighted prediction for B-frames and explicit weighted prediction for P-frames.

Spatial prediction from the edges of neighboring blocks for "intra" coding, rather than the "DC"-only prediction found in MPEG-2 Part 2 and the transform coefficient prediction found in H.263v2 and MPEG-4 Part 2. This includes luma prediction block sizes of 16×16, 8×8, and 4×4 (of which only one type can be used within each macroblock).

Lossless macroblock coding features, including:

- A lossless "PCM macroblock" representation mode in which video data samples are represented directly,[35] allowing perfect representation of specific regions and allowing a strict limit to be placed on the quantity of coded data for each macroblock.
- An enhanced lossless macroblock representation mode allowing perfect representation of specific regions while ordinarily using substantially fewer bits than the PCM mode.

Flexible interlaced-scan video coding features, including:

- Macroblock-adaptive frame-field (MBAFF) coding, using a macroblock pair structure for pictures coded as frames, allowing 16×16 macroblocks in field mode (compared with MPEG-2, where field-mode processing in a picture that is coded as a frame results in the processing of 16×8 half-macroblocks).
- Picture-adaptive frame-field coding (PAFF or PicAFF), allowing a freely selected mixture of pictures coded either as complete frames where both fields are combined for encoding or as individual single fields.

New transform design features, including:

- An exact-match integer 4×4 spatial block transform, allowing precise placement of residual signals with little of the "ringing" often found with prior codec designs. This design is conceptually similar to that of the well-known discrete cosine transform (DCT), introduced in 1974 by N. Ahmed, T. Natarajan, and K. R. Rao, but it is simplified and made to provide exactly specified decoding.
- An exact-match integer 8×8 spatial block transform, allowing highly correlated regions to be compressed more efficiently than with the 4×4 transform. This design is likewise conceptually similar to the DCT, but simplified and made to provide exactly specified decoding.
- Adaptive encoder selection between the 4×4 and 8×8 transform block sizes for the integer transform operation.
- A secondary Hadamard transform performed on the "DC" coefficients of the primary spatial transform applied to chroma DC coefficients (and also luma in one special case) to obtain even more compression in smooth regions.

A quantization design including:

- Logarithmic step size control for easier bit rate management by encoders and simplified inverse-quantization scaling.
- Frequency-customized quantization scaling matrices selected by the encoder for perceptual-based quantization optimization.

An in-loop deblocking filter that helps prevent the blocking artifacts common to other DCT-based image compression techniques, resulting in better visual appearance and compression efficiency.

An entropy coding design including:

- Context-adaptive binary arithmetic coding (CABAC), an algorithm to losslessly compress syntax elements in the video stream knowing the probabilities of syntax elements in a given context. CABAC compresses data more efficiently than CAVLC but requires considerably more processing to decode.
- Context-adaptive variable-length coding (CAVLC), which is a lower-complexity alternative to CABAC for the coding of quantized transform coefficient values. Although lower complexity than CABAC, CAVLC is more elaborate and more efficient than the methods typically used to code coefficients in other prior designs.
- A common, simple, and highly structured variable-length coding (VLC) technique for many of the syntax elements not coded by CABAC or CAVLC, referred to as Exponential-Golomb coding (or Exp-Golomb); a minimal encoding sketch is shown after this feature list.

Loss resilience features, including:

- A Network Abstraction Layer (NAL) definition allowing the same video syntax to be used in many network environments. One very fundamental design concept of H.264 is to generate self-contained packets, to remove the header duplication as in MPEG-4's Header Extension Code (HEC).[36] This was achieved by decoupling information relevant to more than one slice from the media stream. The combination of the higher-level parameters is called a parameter set.[36] The H.264 specification includes two types of parameter sets: Sequence Parameter Set (SPS) and Picture Parameter Set (PPS). An active sequence parameter set remains unchanged throughout a coded video sequence, and an active picture parameter set remains unchanged within a coded picture. The sequence and picture parameter set structures contain information such as picture size, optional coding modes employed, and the macroblock-to-slice-group map.[36]
- Flexible macroblock ordering (FMO), also known as slice groups, and arbitrary slice ordering (ASO), which are techniques for restructuring the ordering of the representation of the fundamental regions (macroblocks) in pictures. Typically considered an error/loss robustness feature, FMO and ASO can also be used for other purposes.
- Data partitioning (DP), a feature providing the ability to separate more important and less important syntax elements into different packets of data, enabling the application of unequal error protection (UEP) and other types of improvement of error/loss robustness.
- Redundant slices (RS), an error/loss robustness feature that lets an encoder send an extra representation of a picture region (typically at lower fidelity) that can be used if the primary representation is corrupted or lost.
- Frame numbering, a feature that allows the creation of "sub-sequences", enabling temporal scalability by optional inclusion of extra pictures between other pictures, and the detection and concealment of losses of entire pictures, which can occur due to network packet losses or channel errors.

Switching slices, called SP and SI slices, allowing an encoder to direct a decoder to jump into an ongoing video stream for such purposes as video streaming bit rate switching and "trick mode" operation. When a decoder jumps into the middle of a video stream using the SP/SI feature, it can get an exact match to the decoded pictures at that location in the video stream despite using different pictures, or no pictures at all, as references prior to the switch.

A simple automatic process for preventing the accidental emulation of start codes, which are special sequences of bits in the coded data that allow random access into the bitstream and recovery of byte alignment in systems that can lose byte synchronization.

Supplemental enhancement information (SEI) and video usability information (VUI), which are extra information that can be inserted into the bitstream to enhance the use of the video for a wide variety of purposes. For example, the SEI frame packing arrangement (FPA) message describes the 3D arrangement of stereoscopic content:

- 0: checkerboard: pixels are alternately from L and R.
- 1: column alternation: L and R are interlaced by column.
- 2: row alternation: L and R are interlaced by row.
- 3: side by side: L is on the left, R on the right.
- 4: top bottom: L is on top, R on bottom.
- 5: frame alternation: one view per frame.

Auxiliary pictures, which can be used for such purposes as alpha compositing.

Support of monochrome (4:0:0), 4:2:0, 4:2:2, and 4:4:4 chroma subsampling (depending on the selected profile).

Support of sample bit depth precision ranging from 8 to 14 bits per sample (depending on the selected profile).

The ability to encode individual color planes as distinct pictures with their own slice structures, macroblock modes, motion vectors, etc., allowing encoders to be designed with a simple parallelization structure (supported only in the three 4:4:4-capable profiles).

Picture order count, a feature that serves to keep the ordering of the pictures and the values of samples in the decoded pictures isolated from timing information, allowing timing information to be carried and controlled/changed separately by a system without affecting decoded picture content.
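To illustrate the Exp-Golomb technique mentioned in the entropy coding list above, here is a minimal Python sketch of the unsigned ue(v) encoding (illustrative only, not text from the standard): the value plus one is written in binary, preceded by one fewer zero bits than the length of that binary string.

def exp_golomb_encode(value):
    # ue(v): encode a non-negative integer as an Exp-Golomb codeword string.
    assert value >= 0
    code = bin(value + 1)[2:]              # binary representation of value + 1
    return "0" * (len(code) - 1) + code    # prefix with (length - 1) zero bits

# Example codewords: 0 -> "1", 1 -> "010", 2 -> "011", 3 -> "00100"
for v in range(5):
    print(v, exp_golomb_encode(v))

Signed syntax elements use a related se(v) mapping, and the standard defines which descriptor applies to each syntax element.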

These techniques, along with several others, help H.264 to perform significantly better than any prior standard under a wide variety of circumstances in a wide variety of application environments. H.264 can often perform radically better than MPEG-2 video, typically obtaining the same quality at half of the bit rate or less, especially in high-bit-rate and high-resolution situations.[37]

Like other ISO/IEC MPEG video standards, H.264/AVC has a reference software implementation that can be freely downloaded.[38] Its main purpose is to give examples of H.264/AVC features, rather than being a useful application per se. Some reference hardware design work is also under way in the Moving Picture Experts Group. The features described above are the complete set of features of H.264/AVC, covering all profiles of H.264. A profile for a codec is a set of features of that codec identified to meet a certain set of specifications of intended applications. This means that many of the features listed are not supported in some profiles. The various profiles of H.264/AVC are discussed in the next section.

Profiles

The standard defines a set of capabilities, referred to as profiles, targeting specific classes of applications. These are declared as a profile code (profile_idc) and a set of constraints applied in the encoder. This allows a decoder to recognize the requirements to decode that specific stream (a simple illustrative mapping of profile_idc values appears after the profile lists below). Profiles for non-scalable 2D video applications include the following:

Constrained Baseline Profile (CBP, 66 with constraint set 1): Primarily for low-cost applications, this profile is most typically used in videoconferencing and mobile applications. It corresponds to the subset of features that are in common between the Baseline, Main, and High Profiles.

Baseline Profile (BP, 66): Primarily for low-cost applications that require additional data loss robustness, this profile is used in some videoconferencing and mobile applications. It includes all features that are supported in the Constrained Baseline Profile, plus three additional features that can be used for loss robustness (or for other purposes such as low-delay multi-point video stream compositing). The importance of this profile has faded somewhat since the definition of the Constrained Baseline Profile in 2009. All Constrained Baseline Profile bitstreams are also considered to be Baseline Profile bitstreams, as these two profiles share the same profile identifier code value.

Extended Profile (XP, 88): Intended as the streaming video profile, this profile has relatively high compression capability and some extra tricks for robustness to data losses and server stream switching.

Main Profile (MP, 77): This profile is used for standard-definition digital TV broadcasts that use the MPEG-4 format as defined in the DVB standard.[39] It is not, however, used for high-definition television broadcasts, as the importance of this profile faded when the High Profile was developed in 2004 for that application.

High Profile (HiP, 100): The primary profile for broadcast and disc storage applications, particularly for high-definition television applications (for example, this is the profile adopted by the Blu-ray Disc storage format and the DVB HDTV broadcast service).

Progressive High Profile (PHiP, 100 with constraint set 4): Similar to the High Profile, but without support of field coding features.

Constrained High Profile (100 with constraint sets 4 and 5): Similar to the Progressive High Profile, but without support of B (bi-predictive) slices.

High 10 Profile (Hi10P, 110): Going beyond typical mainstream consumer product capabilities, this profile builds on top of the High Profile, adding support for up to 10 bits per sample of decoded picture precision.

High 4:2:2 Profile (Hi422P, 122): Primarily targeting professional applications that use interlaced video, this profile builds on top of the High 10 Profile, adding support for the 4:2:2 chroma subsampling format while using up to 10 bits per sample of decoded picture precision.

High 4:4:4 Predictive Profile (Hi444PP, 244): This profile builds on top of the High 4:2:2 Profile, supporting up to 4:4:4 chroma sampling, up to 14 bits per sample, and additionally supporting efficient lossless region coding and the coding of each picture as three separate color planes.

The standard also contains four additional Intra-frame-only profiles, which are defined as simple subsets of other corresponding profiles. These are mostly for professional (e.g., camcorder, camera, and editing system) applications:

High 10 Intra Profile (110 with constraint set 3): The High 10 Profile constrained to all-Intra use.

High 4:2:2 Intra Profile (122 with constraint set 3): The High 4:2:2 Profile constrained to all-Intra use.

High 4:4:4 Intra Profile (244 with constraint set 3): The High 4:4:4 Profile constrained to all-Intra use.

CAVLC 4:4:4 Intra Profile (44): The High 4:4:4 Profile constrained to all-Intra use and to CAVLC entropy coding (i.e., not supporting CABAC).

As a result of the Scalable Video Coding (SVC) extension, the standard contains five additional scalable profiles, which are defined as a combination of an H.264/AVC profile for the base layer (identified by the second word in the scalable profile name) and tools that achieve the scalable extension:

Scalable Baseline Profile (83): Primarily targeting video conferencing, mobile, and surveillance applications, this profile builds on top of the Constrained Baseline Profile, to which the base layer (a subset of the bitstream) must conform. For the scalability tools, a subset of the available tools is enabled.

Scalable Constrained Baseline Profile (83 with constraint set 5): A subset of the Scalable Baseline Profile intended primarily for real-time communication applications.

Scalable High Profile (86): Primarily targeting broadcast and streaming applications, this profile builds on top of the H.264/AVC High Profile, to which the base layer must conform.

Scalable Constrained High Profile (86 with constraint set 5): A subset of the Scalable High Profile intended primarily for real-time communication applications.

Scalable High Intra Profile (86 with constraint set 3): Primarily targeting production applications, this profile is the Scalable High Profile constrained to all-Intra use.

As a result of the Multiview Video Coding (MVC) extension and the related depth coding amendments, the standard contains the following multiview profiles:

Stereo High Profile (128): This profile targets two-view stereoscopic 3D video and combines the tools of the High Profile with the inter-view prediction capabilities of the MVC extension.

Multiview High Profile (118): This profile supports two or more views using both inter-picture (temporal) and MVC inter-view prediction, but does not support field pictures or macroblock-adaptive frame-field coding.

Multiview Depth High Profile (138): This profile supports the coding of depth map data for 3D stereoscopic video, as introduced in version 18 of the standard.
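As a rough illustration of how the profile_idc values listed above identify a stream's profile, here is a minimal Python lookup sketch (the dictionary and function are illustrative only, not an API from the standard). A real decoder must also examine the constraint_set flags signaled alongside profile_idc, since several profiles share a code value (for example, Constrained Baseline is profile_idc 66 with constraint set 1, and Progressive/Constrained High reuse profile_idc 100 with constraint sets 4 and 5).

PROFILE_IDC_NAMES = {
    44: "CAVLC 4:4:4 Intra",
    66: "Baseline",            # Constrained Baseline when constraint set 1 is asserted
    77: "Main",
    83: "Scalable Baseline",
    86: "Scalable High",
    88: "Extended",
    100: "High",               # Progressive/Constrained High with constraint sets 4/5
    110: "High 10",
    118: "Multiview High",
    122: "High 4:2:2",
    128: "Stereo High",
    138: "Multiview Depth High",
    244: "High 4:4:4 Predictive",
}

def profile_name(profile_idc):
    # Map a profile_idc value from the lists above to a human-readable name.
    return PROFILE_IDC_NAMES.get(profile_idc, "unknown profile_idc {}".format(profile_idc))

print(profile_name(100))  # "High"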

Feature support in particular profiles

Feature CBP BP XP MP ProHiP HiP Hi10P Hi422P Hi444PP

I and P slices Yes Yes Yes Yes Yes Yes Yes Yes Yes

Bit depth (per sample) 8 8 8 8 8 8 8 to 10 8 to 10 8 to 14

Chroma formats 4:2:0 4:2:0 4:2:0 4:2:0 4:2:0 4:2:0 4:2:0 4:2:0/4:2:2 4:2:0/4:2:2/4:4:4

Flexible macroblock ordering (FMO) No Yes Yes No No No No No No

Arbitrary slice ordering (ASO) No Yes Yes No No No No No No

Redundant slices (RS) No Yes Yes No No No No No No

Data Partitioning No No Yes No No No No No No

SI and SP slices No No Yes No No No No No No

Interlaced coding (PicAFF, MBAFF) No No Yes Yes No Yes Yes Yes Yes

B slices No No Yes Yes Yes Yes Yes Yes Yes

Multiple reference frames Yes Yes Yes Yes Yes Yes Yes Yes Yes

In-loop deblocking filter Yes Yes Yes Yes Yes Yes Yes Yes Yes

CAVLC entropy coding Yes Yes Yes Yes Yes Yes Yes Yes Yes

CABAC entropy coding No No No Yes Yes Yes Yes Yes Yes

4:0:0 (Monochrome) No No No No Yes Yes Yes Yes Yes

8×8 vs. 4×4 transform adaptivity No No No No Yes Yes Yes Yes Yes

Quantization scaling matrices No No No No Yes Yes Yes Yes Yes

Separate Cb and Cr QP control No No No No Yes Yes Yes Yes Yes

Separate color plane coding No No No No No No No No Yes

Predictive lossless coding No No No No No No No No Yes

Levels

As the term is used in the standard, a "level" is a specified set of constraints that indicate a degree of required decoder performance for a profile. For example, a level of support within a profile specifies the maximum picture resolution, frame rate, and bit rate that a decoder may use. A decoder that conforms to a given level must be able to decode all bitstreams encoded for that level and all lower levels.
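As an informal illustration of how such constraints work, the following Python sketch checks a picture size and frame rate against two of the Level 4 limits taken from the table below. The function name and structure are illustrative only, not part of the standard, and a full conformance check also covers the bit rate, decoded picture buffer, and other limits.

import math

# Level 4 limits taken from the table below (macroblock-based constraints only).
MAX_MACROBLOCKS_PER_SECOND = 245760   # Level 4: max decoding speed, macroblocks/s
MAX_FRAME_SIZE_MACROBLOCKS = 8192     # Level 4: max frame size, macroblocks

def fits_level_4(width, height, fps):
    # A macroblock covers a 16x16 luma area; partial macroblocks round up.
    mbs_per_frame = math.ceil(width / 16) * math.ceil(height / 16)
    mbs_per_second = mbs_per_frame * fps
    return (mbs_per_frame <= MAX_FRAME_SIZE_MACROBLOCKS
            and mbs_per_second <= MAX_MACROBLOCKS_PER_SECOND)

print(fits_level_4(1920, 1080, 30))   # True: 8,160 MBs/frame, 244,800 MBs/s
print(fits_level_4(1920, 1080, 60))   # False: exceeds the Level 4 macroblock rate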

Levels with maximum property values[40]

Level   Max. decoding speed (luma samples/s | macroblocks/s)   Max. frame size (luma samples | macroblocks)   Max. video bit rate for video coding layer (VCL), kbit/s (Baseline, Extended, and Main Profiles)   Examples for high resolution @ highest frame rate

1 380,160 1,485 25,344 99 64

128×96@30 176×144@15

1b 380,160 1,485 25,344 99 128

128×96@30 176×144@15

1.1 768,000 3,000 101,376 396 192

128x96@60 176×144@30 352×288@7.5

1.2 1,536,000 6,000 101,376 396 384

128x96@120 176×144@60 352×288@15

1.3 3,041,280 11,880 101,376 396 768

128x96@172 176×144@120 352×288@30

2 3,041,280 11,880 101,376 396 2,000

128x96@172 176x144@120 352×288@30

2.1 5,068,800 19,800 202,752 792 4,000

176x144@172 352×240@60 352×288@50 352×480@30 352×576@25

2.2 5,184,000 20,250 414,720 1,620 4,000

176×144@172 352×480@30 352×576@25 720×480@15 720×576@12.5

3 10,368,000 40,500 414,720 1,620 10,000

176×144@172 352×240@120 352×480@60 720×480@30 720×576@25

3.1 27,648,000 108,000 921,600 3,600 14,000

352x288@172 352x576@130 640x480@90 720×576@60 1,280×720@30

3.2 55,296,000 216,000 1,310,720 5,120 20,000

640x480@172 720x480@160 720x576@130 1,280×720@60

4 62,914,560 245,760 2,097,152 8,192 20,000

720x480@172 720x576@150 1,280×720@60 2,048×1,024@30

4.1 62,914,560 245,760 2,097,152 8,192 50,000

720x480@172 720x576@150 1,280×720@60 2,048×1,024@30

4.2 133,693,440 522,240 2,228,224 8,704 50,000

720x576@172 1,280×720@140 2,048×1,080@60

5 150,994,944 589,824 5,652,480 22,080 135,000

1,024×768@172 1,280×720@160 2,048×1,080@60 2,560×1,920@30 3,680×1,536@25

5.1 251,658,240 983,040 9,437,184 36,864 240,000

1,280×720@172 1,920×1,080@120 2,048×1,536@80 4,096×2,048@30

5.2 530,841,600 2,073,600 9,437,184 36,864 240,000

1,920×1,080@172 2,048×1,536@160 4,096×2,160@60

6 1,069,547,520 4,177,920 35,651,584 139,264 240,000

2,048×1,536@300 4,096×2,160@120 8,192×4,320@30

6.1 2,139,095,040 8,355,840 35,651,584 139,264 480,000

2,048×1,536@300 4,096×2,160@240 8,192×4,320@60

6.2 4,278,190,080 16,711,680 35,651,584 139,264 800,000

4,096×2,304@300 8,192×4,320@120

The maximum bit rate for the High Profile is 1.25 times that of the Baseline, Extended, and Main Profiles, 3 times for Hi10P, and 4 times for Hi422P/Hi444PP. The number of luma samples is 16×16 = 256 times the number of macroblocks (and the number of luma samples per second is 256 times the number of macroblocks per second).

Decoded picture buffering

Previously encoded pictures are used by H.264/AVC encoders to provide predictions of the values of samples in other pictures. This allows the encoder to make efficient decisions on the best way to encode a given picture. At the decoder, such pictures are stored in a virtual decoded picture buffer (DPB). The maximum capacity of the DPB, in units of frames (or pairs of fields), can be computed as follows:

capacity = min(floor(MaxDpbMbs / (PicWidthInMbs * FrameHeightInMbs)), 16)

Where MaxDpbMbs is a constant value provided in the table below as a function of level number, and PicWidthInMbs and FrameHeightInMbs are the picture width and frame height for the coded video data, expressed in units of macroblocks (rounded up to integer values and accounting for cropping and macroblock pairing when applicable). This formula is specified in sections A.3.1.h and A.3.2.f of the 2009 edition of the standard.
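The following is a minimal Python sketch of this computation (the helper name is illustrative, not from the standard), using the Level 4 MaxDpbMbs value from the table below and reproducing the 1920×1080 example worked out after that table:

import math

# MaxDpbMbs for Level 4, taken from the table below.
MAX_DPB_MBS_LEVEL_4 = 32768

def dpb_capacity_frames(width, height, max_dpb_mbs):
    # Picture dimensions in macroblocks (16x16 luma samples), rounded up.
    pic_width_in_mbs = math.ceil(width / 16)
    frame_height_in_mbs = math.ceil(height / 16)
    # capacity = min(floor(MaxDpbMbs / (PicWidthInMbs * FrameHeightInMbs)), 16)
    return min(max_dpb_mbs // (pic_width_in_mbs * frame_height_in_mbs), 16)

# 1920x1080 at Level 4: PicWidthInMbs = 120, FrameHeightInMbs = 68 -> 4 frames.
print(dpb_capacity_frames(1920, 1080, MAX_DPB_MBS_LEVEL_4))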

Level   MaxDpbMbs
1       396
1b      396
1.1     900
1.2     2,376
1.3     2,376
2       2,376
2.1     4,752
2.2     8,100
3       8,100
3.1     18,000
3.2     20,480
4       32,768
4.1     32,768
4.2     34,816
5       110,400
5.1     184,320
5.2     184,320
6       696,320
6.1     696,320
6.2     696,320

For example, for an HDTV picture that is 1920 samples wide (PicWidthInMbs = 120) and 1080 samples high (FrameHeightInMbs = 68), a Level 4 decoder has a maximum DPB storage capacity of floor(32768 / (120 × 68)) = 4 frames (or 8 fields) when encoded with minimal cropping parameter values. Note that the current picture being decoded is not included in the computation of DPB fullness (unless the encoder has indicated for it to be stored for use as a reference for decoding other pictures or for delayed output timing). Thus, a decoder needs to actually have sufficient memory to handle (at least) one frame more than the maximum capacity of the DPB as calculated above.

Implementations

In 2009, the HTML5 working group was split between supporters of Ogg Theora, a free video format that is thought to be unencumbered by patents, and H.264, which contains patented technology. As late as July 2009, Google and Apple were said to support H.264, while Mozilla and Opera supported Ogg Theora (now Google, Mozilla, and Opera all support Theora and WebM with VP8).[41] Microsoft, with the release of Internet Explorer 9, added support for HTML5 video encoded using H.264. At the Gartner Symposium/ITxpo in November 2010, Microsoft CEO Steve Ballmer answered the question "HTML5 or Silverlight?" by saying "If you want to do something that is universal, there is no question the world is going HTML5."[42] In January 2011, Google announced that it was pulling support for H.264 from its Chrome browser and supporting both Theora and WebM/VP8, to use only open formats.[43]

On March 18, 2012, Mozilla announced support for H.264 in Firefox on mobile devices, due to the prevalence of H.264-encoded video and the increased power efficiency of using dedicated H.264 decoder hardware common on such devices.[44] On February 20, 2013, Mozilla implemented support in Firefox for decoding H.264 on Windows 7 and above; this feature relies on Windows' built-in decoding libraries.[45] Firefox 35.0, released on January 13, 2015, supports H.264 on OS X 10.6 and higher.[46]

On October 30, 2013, Rowan Trollope from Cisco Systems announced that Cisco would release both binaries and source code of an H.264 video codec called OpenH264 under the Simplified BSD license, and pay all royalties for its use to MPEG LA for any software projects that use Cisco's precompiled binaries, thus making Cisco's OpenH264 binaries free to use. However, any software projects that use Cisco's source code instead of its binaries would be legally responsible for paying all royalties to MPEG LA. Current target CPU architectures are x86 and ARM, and current target operating systems are Linux, Windows XP and later, Mac OS X, and Android; iOS is notably absent from this list, because it does not allow applications to fetch and install binary modules from the Internet.[47][48][49] Also on October 30, 2013, Brendan Eich from Mozilla wrote that it would use Cisco's binaries in future versions of Firefox to add support for H.264 to Firefox where platform codecs are not available.[50] Cisco published the source to OpenH264 on December 9, 2013.[51]

Software encoders

AVC software implementations

Feature QuickTime Nero LEAD x264 MainConcept Elecard TSE ProCoder Avivo Elemental IPP

B slices Yes Yes Yes Yes Yes Yes Yes Yes No Yes Yes

Multiple reference frames Yes Yes Yes Yes Yes Yes Yes Yes No Yes Yes

Interlaced coding (PicAFF, MBAFF) No MBAFF MBAFF MBAFF Yes Yes No Yes MBAFF Yes No

CABAC entropy coding Yes Yes Yes Yes Yes Yes Yes Yes No Yes Yes

8×8 vs. 4×4 transform adaptivity No Yes Yes Yes Yes Yes Yes Yes No Yes Yes

Quantization scaling matrices No No No Yes Yes No No No No No No

Separate Cb and Cr QP control No No No Yes Yes Yes No No No No No

Extended chroma formats No No No 4:2:2[52]/4:4:4[53] 4:2:0/4:2:2 4:2:2 4:2:2 No No 4:2:0/4:2:2 No

Largest sample depth (bit) 8 8 8 10[54] 10 8 8 8 8 10 12

Predictive lossless coding No No No Yes[55] No No No No No No No

Hardware

See also: List of cameras with onboard video stream encoding and H.264/MPEG-4 AVC products and implementations

Because H.264 encoding and decoding requires significant computing power in specific types of arithmetic operations, software implementations that run on general-purpose CPUs are typically less power efficient. However, the latest quad-core general-purpose x86 CPUs have sufficient computation power to perform real-time SD and HD encoding. Compression efficiency depends on the video algorithmic implementation, not on whether a hardware or software implementation is used. Therefore, the difference between hardware-based and software-based implementations is more about power efficiency, flexibility, and cost. To improve power efficiency and reduce hardware form factor, special-purpose hardware may be employed, either for the complete encoding or decoding process or for acceleration assistance within a CPU-controlled environment.

CPU-based solutions are known to be much more flexible, particularly when encoding must be done concurrently in multiple formats, multiple bit rates and resolutions (multi-screen video), and possibly with additional features such as container format support and advanced integrated advertising features. A CPU-based software solution generally makes it much easier to load-balance multiple concurrent encoding sessions within the same CPU.

The 2nd-generation Intel "Sandy Bridge" Core i3/i5/i7 processors introduced at the January 2011 CES (Consumer Electronics Show) offer an on-chip hardware full HD H.264 encoder, known as Intel Quick Sync Video.[56][57]

A hardware H.264 encoder can be an ASIC or an FPGA. ASIC encoders with H.264 encoder functionality are available from many different semiconductor companies, but the core design used in the ASIC is typically licensed from one of a few companies such as Chips&Media, Allegro DVT, On2 (formerly Hantro, acquired by Google), Imagination Technologies, and NGCodec. Some companies have both FPGA and ASIC product offerings.[58]

Texas Instruments manufactures a line of ARM + DSP cores that perform DSP H.264 BP encoding of 1080p at 30 fps.[59] This permits flexibility with respect to codecs (which are implemented as highly optimized DSP code) while being more efficient than software on a generic CPU.

Licensing

See also: Microsoft Corp. v. Motorola Inc.

In countries where patents on software algorithms are upheld, vendors and commercial users of products that use H.264/AVC are expected to pay patent licensing royalties for the patented technology that their products use.[60] This applies to the Baseline Profile as well.[61]

A private organization known as MPEG LA, which is not affiliated in any way with the MPEG standardization organization, administers the licenses for patents applying to this standard, as well as the patent pools for MPEG-2 Part 1 Systems, MPEG-2 Part 2 Video, MPEG-4 Part 2 Video, HEVC, MPEG-DASH, and other technologies. The MPEG LA H.264 patents in the US last at least until 2027.[62]

On August 26, 2010, MPEG LA announced that H.264-encoded Internet video that is free to end users will never be charged royalties.[63] All other royalties remain in place, such as royalties for products that decode and encode H.264 video as well as to operators of free television and subscription channels.[64] The license terms are updated in 5-year blocks.[65]

In 2005, Qualcomm, which was the assignee of U.S. Patent 5,452,104 and U.S. Patent 5,576,767, sued Broadcom in US District Court, alleging that Broadcom infringed the two patents by making products that were compliant with the H.264 video compression standard.[66] In 2007, the District Court found that the patents were unenforceable because Qualcomm had failed to disclose them to the JVT prior to the release of the H.264 standard in May 2003.[66] In December 2008, the US Court of Appeals for the Federal Circuit affirmed the District Court's order that the patents be unenforceable but remanded to the District Court with instructions to limit the scope of unenforceability to H.264-compliant products.[66]

See also

VP8
VP9
AOMedia Video 1
Comparison of H.264 and VC-1
Dirac (video compression format)
Ultra-high-definition television
IPTV

References

^ Ozer, Jan. "Encoding for Multiple Screen Delivery, Section 3, Lecture 7: Introduction to H.264". Udemy. Retrieved 10 October 2016.
^ "Delivering 8K using AVC/H.264". Mystery Box. Retrieved 2017-08-23.
^ "AVC/H.264 FAQ". www.mpegla.com. Retrieved 2016-09-15.
^ "H.262 : Information technology — Generic coding of moving pictures and associated audio information: Video". Retrieved 2007-04-15.
^ Joint Video Team, ITU-T Web site.
^ "ITU-T Recommendation H.264 (05/2003)". ITU. 2003-05-30. Retrieved 2013-04-18.
^ "ITU-T Recommendation H.264 (05/2003) Cor. 1 (05/2004)". ITU. 2004-05-07. Retrieved 2013-04-18.
^ "ITU-T Recommendation H.264 (03/2005)". ITU. 2005-03-01. Retrieved 2013-04-18.
^ "ITU-T Recommendation H.264 (2005) Cor. 1 (09/2005)". ITU. 2005-09-13. Retrieved 2013-04-18.
^ a b "ITU-T Recommendation H.264 (2005) Amd. 1 (06/2006)". ITU. 2006-06-13. Retrieved 2013-04-18.
^ "ITU-T Recommendation H.264 (2005) Amd. 2 (04/2007)". ITU. 2007-04-06. Retrieved 2013-04-18.
^ "ITU-T Recommendation H.264 (11/2007)". ITU. 2007-11-22. Retrieved 2013-04-18.
^ "ITU-T Recommendation H.264 (2007) Cor. 1 (01/2009)". ITU. 2009-01-13. Retrieved 2013-04-18.
^ a b "ITU-T Recommendation H.264 (03/2009)". ITU. 2009-03-16. Retrieved 2013-04-18.
^ a b "ITU-T Recommendation H.264 (03/2010)". ITU. 2010-03-09. Retrieved 2013-04-18.
^ a b "ITU-T Recommendation H.264 (06/2011)". ITU. 2011-06-29. Retrieved 2013-04-18.
^ "ITU-T Recommendation H.264 (01/2012)". ITU. 2012-01-13. Retrieved 2013-04-18.
^ a b c d "ITU-T Recommendation H.264 (04/2013)". ITU. 2013-06-12. Retrieved 2013-06-16.
^ a b "ITU-T Recommendation H.264 (02/2014)". ITU. 2014-11-28. Retrieved 2016-02-28.
^ "ITU-T Recommendation H.264 (02/2016)". ITU. 2016-02-13. Retrieved 2017-06-14.
^ "ITU-T Recommendation H.264 (10/2016)". ITU. 2016-10-14. Retrieved 2017-06-14.
^ "ITU-T Recommendation H.264 (04/2017)". ITU. 2017-04-13. Retrieved 2017-06-14.
^ Wenger; et al. "RFC 3984: RTP Payload Format for H.264 Video": 2.
^ "Which recording mode is equivalent to the image quality of the High Definition Video (HDV) format?". Sony eSupport.
^ "ATSC Standard A/72 Part 1: Video System Characteristics of AVC in the ATSC Digital Television System" (PDF). Archived from the original (PDF) on August 7, 2011. Retrieved July 30, 2011.
^ "ATSC Standard A/72 Part 2: AVC Video Transport Subsystem Characteristics" (PDF). Retrieved 2011-07-30.
^ "ATSC Standard A/153 Part 7: AVC and SVC Video System Characteristics" (PDF). Archived from the original (PDF) on July 26, 2011. Retrieved July 30, 2011.
^ a b "Sony introduces new XAVC recording format to accelerate 4K development in the professional and consumer markets". Sony. 2012-10-30. Retrieved 2012-11-01.
^ a b "Sony introduces new XAVC recording format to accelerate 4K development in the professional and consumer markets" (PDF). Sony. 2012-10-30. Retrieved 2012-11-01. [permanent dead link]
^ "Sony supports "Beyond HD" strategy with new full sensor cameras". broadcastengineering.com. 2012-10-30. Retrieved 2012-11-01.
^ Steve Dent (2012-10-30). "Sony goes Red-hunting with PMW-F55 and PMW-F5 pro CineAlta 4K Super 35mm sensor camcorders". Engadget. Retrieved 2012-11-05.
^ "F55 CineAlta 4K the future, ahead of schedule" (PDF). Sony. 2012-10-30. Retrieved 2012-11-01.
^ "Ultra-fast "SxS PRO+" memory cards transform 4K video capture". Sony. Retrieved 2012-11-05.
^ "Ultra-fast "SxS PRO+" memory cards transform 4K video capture" (PDF). Sony. Retrieved 2012-11-05.
^ "The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions" (PDF). Retrieved 2011-07-30.
^ a b c RFC 3984, p. 3.
^ Apple Inc. (1999-03-26). "H.264 FAQ". Apple. Archived from the original on March 7, 2010. Retrieved 2010-05-17.
^ Karsten Suehring. "H.264/AVC JM Reference Software Download". Iphome.hhi.de. Retrieved 2010-05-17.
^ "TS 101 154 – V1.9.1 – Digital Video Broadcasting (DVB); Specification for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream" (PDF). Retrieved 2010-05-17.
^ Advanced video coding for generic audiovisual services. ITU-T Telecommunication Standardization Sector of ITU. p. 324. Recommendation ITU-T H.264.
^ "Decoding the HTML 5 video codec debate". Ars Technica. 2009-07-06. Retrieved 2011-01-12.
^ "Steve Ballmer, CEO Microsoft, interviewed at Gartner Symposium/ITxpo Orlando 2010". Gartnervideo. November 2010. Retrieved 2011-01-12.
^ "HTML Video Codec Support in Chrome". 2011-01-11. Retrieved 2011-01-12.
^ "Video, Mobile, and the Open Web". 2012-03-18. Retrieved 2012-03-20.
^ "WebRTC enabled, H.264/MP3 support in Win 7 on by default, Metro UI for Windows 8 + more – Firefox Development Highlights". hacks.mozilla.org. Mozilla. 2013-02-20. Retrieved 2013-03-15.
^ Firefox Notes, Version 35.0.
^ "Open-Sourced H.264 Removes Barriers to WebRTC". 2013-10-30. Retrieved 2013-11-01.
^ "Cisco OpenH264 project FAQ". 2013-10-30. Retrieved 2013-11-01.
^ "OpenH264 Simplified BSD License". 2013-10-27. Retrieved 2013-11-21.
^ "Video Interoperability on the Web Gets a Boost From Cisco's H.264 Codec". 2013-10-30. Retrieved 2013-11-01.
^ https://github.com/cisco/openh264/commit/59dae50b1069dbd532226ea024a3ba3982ab4386
^ "x264 4:2:2 encoding support". Retrieved 2011-09-22.
^ "x264 4:4:4 encoding support". Retrieved 2011-06-22.
^ "x264 support for 9 and 10-bit encoding". Retrieved 2011-06-22.
^ "x264 replace High 4:4:4 profile lossless with High 4:4:4 Predictive". Retrieved 2011-06-22.
^ "Quick Reference Guide to generation Intel® Core™ Processor Built-in Visuals". Intel® Software Network. 2010-10-01. Retrieved 2011-01-19.
^ "Intel® Quick Sync Video". www.intel.com. 2010-10-01. Retrieved 2011-01-19.
^ "Design-reuse.com". Design-reuse.com. 1990-01-01. Retrieved 2010-05-17.
^ "Category:DM6467 – Texas Instruments Embedded Processors Wiki". Processors.wiki.ti.com. 2011-07-12. Retrieved 2011-07-30.
^ http://www.mpegla.com/main/programs/AVC/Documents/avcweb.pdf
^ "OMS Video, A Project of Sun's Open Media Commons Initiative". Retrieved 2008-08-26.
^ http://www.osnews.com/story/24954/US_Patent_Expiration_for_MP3_MPEG-2_H_264 lists an MPEG LA patent, US 7826532, that was filed on September 5, 2003 and has a 1546-day term extension. http://patft1.uspto.gov/netacgi/nph-Parser?patentnumber=7826532 http://www.google.com/patents/about?id=2onYAAAAEBAJ
^ "MPEG LA's AVC License Will Not Charge Royalties for Internet Video that is Free to End Users through Life of License" (PDF). MPEG LA. 2010-08-26. Retrieved 2010-08-26.
^ Hachman, Mark (2010-08-26). "MPEG LA Cuts Royalties from Free Web Video, Forever". pcmag.com. Retrieved 2010-08-26.
^ "AVC FAQ". MPEG LA. 2002-08-01. Retrieved 2010-05-17.
^ a b c See Qualcomm Inc. v. Broadcom Corp., No. 2007-1545, 2008-1162 (Fed. Cir. December 1, 2008). For articles in the popular press, see signonsandiego.com, "Qualcomm loses its patent-rights case" and "Qualcomm's patent case goes to jury"; and bloomberg.com, "Broadcom Wins First Trial in Qualcomm Patent Dispute".

Further reading

Wiegand, Thomas; Sullivan, Gary J.; Bjøntegaard, Gisle; Luthra, Ajay (July 2003). "Overview of the H.264/AVC Video Coding Standard" (PDF). IEEE Transactions on Circuits and Systems for Video Technology. 13 (7). Retrieved January 31, 2011.  Topiwala, Pankaj; Sullivan, Gary J.; Luthra, Ajay (August 2004). "Overview and Introduction to the Fidelity Range Extensions" (PDF). SPIE Applications of Digital Image Processing XXVII. Retrieved January 31, 2011.  Ostermann, J.; Bormans, J.; List, P.; Marpe, D.; Narroschke, M.; Pereira, F.; Stockhammer, T.; Wedi, T. (2004). "Video coding with H.264/AVC: Tools, Performance, and Complexity" (PDF). IEEE Circuits and Systems Magazine. 4 (1). Retrieved January 31, 2011.  Sullivan, Gary J.; Wiegand, Thomas (January 2005). "Video Compression—From Concepts to the H.264/AVC Standard" (PDF). Proceedings of the IEEE. 93 (1). doi:10.1109/jproc.2004.839617. Retrieved January 31, 2011.  Richardson, Iain E. G. (January 2011). "Learn about video compression and H.264". VCODEX. Vcodex Limited. Retrieved January 31, 2011. 

External links

ITU-T publication page: H.264: Advanced video coding for generic audiovisual services
MPEG-4 AVC/H.264 Information – Doom9's Forum
H.264/MPEG-4 Part 10 Tutorials (Richardson)
"Part 10: Advanced Video Coding". ISO publication page: ISO/IEC 14496-10:2010 – Information technology — Coding of audio-visual objects.
"H.264/AVC JM Reference Software". IP Homepage. Retrieved 2007-04-15.
"JVT document archive site". Retrieved 2007-05-06.
"Publications". Thomas Wiegand. Retrieved 2007-06-23.
"Publications". Detlev Marpe. Retrieved 2007-04-15.
"Fourth Annual H.264 video codecs comparison". Moscow State University. (dated December 2007)
"Discussion on H.264 with respect to IP cameras in use within the security and surveillance industries". (dated April 2009)
"Sixth Annual H.264 video codecs comparison". Moscow State University. (dated May 2010)
