In the field of video compression, a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly on the amount of data compression. These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I, P, and B. They differ in the following characteristics:
* I‑frames are the least compressible but don't require other video frames to decode.
* P‑frames can use data from previous frames to decompress and are more compressible than I‑frames.
* B‑frames can use both previous and following frames for data reference, achieving the highest amount of data compression.
Summary
Three types of ''pictures'' (or frames) are used in video compression: I, P, and B frames.
An I‑frame (Intra-coded picture) is a complete image, like a JPEG or BMP image file.
A P‑frame (Predicted picture) holds only the changes in the image from the previous frame. For example, in a scene where a car moves across a stationary background, only the car's movements need to be encoded. The encoder does not need to store the unchanging background pixels in the P‑frame, thus saving space. P‑frames are also known as ''delta‑frames''.
A B‑frame (Bidirectional predicted picture) saves even more space by using differences between the current frame and both the preceding and following frames to specify its content.
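The delta idea behind P-frames can be illustrated with a toy sketch. The helper names below are hypothetical, and the "frames" are plain 1-D lists of pixel values rather than real image data; the point is only that the delta is far smaller than a full frame when little changes.

```python
def encode_p_frame(prev, cur):
    """Store only the pixels that differ from the previous frame.

    Returns a list of (index, new_value) pairs -- a toy 'delta frame'.
    """
    return [(i, c) for i, (p, c) in enumerate(zip(prev, cur)) if p != c]

def decode_p_frame(prev, delta):
    """Reconstruct the current frame from the previous frame plus the delta."""
    cur = list(prev)
    for i, v in delta:
        cur[i] = v
    return cur

# A six-pixel "frame"; only two pixels change between frames.
previous = [10, 10, 10, 50, 60, 10]
current  = [10, 10, 10, 55, 65, 10]

delta = encode_p_frame(previous, current)
print(delta)                                        # [(3, 55), (4, 65)]
print(decode_p_frame(previous, delta) == current)   # True
```

Real codecs do not store literal pixel differences this way; they predict blocks via motion compensation and encode the residual, but the space saving comes from the same principle of coding only what changed.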
P and B frames are also called
Inter frames. The order in which the I, P and B frames are arranged is called the
Group of pictures.
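Because a B-frame depends on a reference that follows it in display order, the encoder transmits frames in a decode order that differs from display order: the forward reference is moved ahead of the B-frames that need it. The sketch below (hypothetical function name) assumes the simple MPEG-2-style rule that each run of B-frames uses the nearest I/P frame on either side.

```python
def decode_order(display_order):
    """Reorder a GOP so every B-frame arrives after both of its references.

    Assumes each run of B-frames references the nearest I/P frame before
    and after it, so the trailing reference must be sent first.
    """
    out, pending_b = [], []
    for frame in display_order:
        if frame.startswith("B"):
            pending_b.append(frame)   # hold B-frames until their forward reference is sent
        else:
            out.append(frame)         # send the I/P reference first...
            out.extend(pending_b)     # ...then the B-frames that depend on it
            pending_b = []
    return out + pending_b

gop = ["I1", "B2", "B3", "P4", "B5", "B6", "P7"]
print(decode_order(gop))   # ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6']
```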
Pictures/frames
While the terms "frame" and "picture" are often used interchangeably, the term ''picture'' is a more general notion, as a picture can be either a frame or a
field. A frame is a complete image, and a field is the set of odd-numbered or even-numbered
scan lines composing a partial image. For example, an HD 1080 picture has 1080 lines (rows) of pixels. An odd field consists of pixel information for lines 1, 3, 5...1079. An even field has pixel information for lines 2, 4, 6...1080. When video is sent in
interlaced-scan format, each frame is sent in two fields, the field of odd-numbered lines followed by the field of even-numbered lines.
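The odd/even field split described above can be sketched as follows, treating a frame as a plain list of scan lines (the helper name is hypothetical):

```python
def split_into_fields(frame):
    """Split a frame (a list of scan lines) into its odd and even fields.

    Lines are numbered from 1, as in the text: the odd field holds
    lines 1, 3, 5, ..., the even field lines 2, 4, 6, ...
    """
    odd_field  = frame[0::2]   # lines 1, 3, 5, ... (0-based indices 0, 2, 4, ...)
    even_field = frame[1::2]   # lines 2, 4, 6, ...
    return odd_field, even_field

# A toy six-line "frame"; each entry stands for one scan line of pixels.
frame = ["line1", "line2", "line3", "line4", "line5", "line6"]
odd, even = split_into_fields(frame)
print(odd)    # ['line1', 'line3', 'line5']
print(even)   # ['line2', 'line4', 'line6']
```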
A frame used as a reference for predicting other frames is called a reference frame.
Frames encoded without information from other frames are called I-frames. Frames that use prediction from a single preceding reference frame (or a single frame for prediction of each region) are called P-frames. B-frames use prediction from a (possibly weighted) average of two reference frames, one preceding and one succeeding.
Slices
In the
H.264/MPEG-4 AVC standard, the granularity of prediction types is brought down to the "slice level." A slice is a spatially distinct region of a frame that is encoded separately from any other region in the same frame. I-slices, P-slices, and B-slices take the place of I, P, and B frames.
Macroblocks
Typically, pictures (frames) are segmented into ''macroblocks'' (blocks of typically 16×16 samples), and individual prediction types can be selected on a macroblock basis rather than being the same for the entire picture, as follows:
* I-frames can contain only intra macroblocks
* P-frames can contain both intra macroblocks and predicted macroblocks
* B-frames can contain intra, predicted, and bi-predicted macroblocks
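How a frame is tiled into macroblocks can be sketched as follows. The helper is hypothetical and assumes the frame dimensions are exact multiples of the macroblock size, as with 1280×720:

```python
def macroblock_grid(width, height, size=16):
    """Return the top-left (x, y) corner of each macroblock in a frame.

    A minimal sketch; assumes width and height are exact multiples of
    the macroblock size.
    """
    return [(x, y)
            for y in range(0, height, size)
            for x in range(0, width, size)]

# A 1280x720 frame tiles into 80 x 45 = 3600 macroblocks of 16x16 samples,
# and the encoder may pick a prediction type per macroblock.
grid = macroblock_grid(1280, 720)
print(len(grid))    # 3600
print(grid[:3])     # [(0, 0), (16, 0), (32, 0)]
```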
Furthermore, in the H.264 video coding standard, the frame can be segmented into sequences of macroblocks called ''slices'', and instead of using I, B and P-frame type selections, the encoder can choose the prediction style distinctly for each individual slice. H.264 also defines several additional types of frames/slices:
* SI‑frames/slices (Switching I): Facilitates switching between coded streams; contains SI-macroblocks (a special type of intra coded macroblock).
* SP‑frames/slices (Switching P): Facilitates switching between coded streams; contains P and/or I-macroblocks.
* Multi‑frame
motion estimation (up to 16 reference frames or 32 reference fields)
Multi‑frame motion estimation increases the quality of the video while allowing the same compression ratio. SI and SP frames (defined for the Extended Profile) improve error correction. When such frames are used along with a smart decoder, it is possible to recover the broadcast streams of damaged DVDs.
Intra-coded (I) frames/slices (key frames)
* I-frames contain an entire image. They are coded without reference to any other frame except (parts of) themselves.
* May be generated by an encoder to create a random access point (to allow a decoder to start decoding properly from scratch at that picture location).
* May also be generated when image details change so much that generating effective P or B-frames is impractical.
* Typically require more bits to encode than other frame types.
Often, I‑frames are used for random access and as references for the decoding of other pictures. Intra refresh periods of a half-second are common in such applications as
digital television broadcast and
DVD storage. Longer refresh periods may be used in some environments. For example, in
videoconferencing systems it is common to send I-frames very infrequently.
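An intra refresh period translates directly into an I-frame interval in frames: a half-second period at 30 frames per second means one I-frame every 15 frames. A small sketch of the arithmetic (hypothetical helper name):

```python
def intra_refresh_interval(frame_rate_hz, refresh_period_s=0.5):
    """Number of frames between I-frames for a given intra refresh period."""
    return round(frame_rate_hz * refresh_period_s)

# A half-second intra refresh period at some common frame rates:
for fps in (24, 30, 60):
    print(fps, "fps ->", intra_refresh_interval(fps), "frames per I-frame")
# 24 fps -> 12 frames per I-frame
# 30 fps -> 15 frames per I-frame
# 60 fps -> 30 frames per I-frame
```

A longer interval saves bits (fewer costly I-frames) at the price of slower random access and slower recovery from transmission errors, which is why videoconferencing can afford very infrequent I-frames while broadcast cannot.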
Predicted (P) frames/slices
* Require the prior decoding of some other picture(s) in order to be decoded.
* May contain both image data and motion vector displacements and combinations of the two.
* Can reference previous pictures in decoding order.
* Older standard designs (such as MPEG-2) use only one previously decoded picture as a reference during decoding, and require that picture to also precede the P picture in display order.
* In H.264, can use multiple previously decoded pictures as references during decoding, and can have any arbitrary display-order relationship relative to the picture(s) used for its prediction.
* Typically require fewer bits for encoding than I‑pictures.
Bi-directional predicted (B) frames/slices (macroblocks)
* Require the prior decoding of frame(s) that follow them in display order before they can be decoded.
* May contain image data and/or motion vector displacements. Older standards allow only a single
global motion compensation vector for the entire frame or a single motion compensation vector per macroblock.
* Include some prediction modes that form a prediction of a motion region (e.g., a macroblock or a smaller area) by averaging the predictions obtained using two different previously decoded reference regions. Some standards allow two motion compensation vectors per macroblock (biprediction).
* In older standards (such as MPEG-2), B-frames are never used as references for the prediction of other pictures. As a result, a lower quality encoding (requiring less space) can be used for such B-frames because the loss of detail will not harm the prediction quality for subsequent pictures.
* H.264 relaxes this restriction, and allows B-frames to be used as references for the decoding of other frames at the encoder's discretion.
* Older standards (such as MPEG-2) use exactly two previously decoded pictures as references during decoding, and require one of those pictures to precede the B-frame in display order and the other one to follow it.
* In H.264, B-frames can use one, two, or more than two previously decoded pictures as references during decoding, and can have any arbitrary display-order relationship relative to the picture(s) used for their prediction.
* This greater prediction flexibility means that B-frames typically require fewer bits for encoding than either I or P-frames.
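The averaging of two reference regions described above can be sketched as follows. This is a toy model operating on bare pixel lists; a real codec would first motion-compensate each 16×16 (or smaller) sample block, and the weights are equal unless weighted prediction is in use.

```python
def bipredict(block_before, block_after, w0=0.5, w1=0.5):
    """Predict a block as a (possibly weighted) average of two reference blocks.

    block_before / block_after stand for motion-compensated regions from the
    preceding and following reference pictures.
    """
    return [round(w0 * a + w1 * b) for a, b in zip(block_before, block_after)]

# Pixel values from the preceding and following reference frames:
ref_past   = [100, 104, 108, 112]
ref_future = [104, 108, 112, 116]
print(bipredict(ref_past, ref_future))   # [102, 106, 110, 114]
```

When the scene changes smoothly, this average lands very close to the true pixel values, so the residual the encoder must store is tiny, which is the source of the B-frame's bit savings.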
See also
* Key frame (term in animation)
* Video compression
* Intra frame
* Inter frame
* Group of pictures (application of frame types)
* Datamosh
* Video
References
External links
Video streaming with SP and SI frames