JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world, and the most widely used digital image format, with several billion JPEG images produced every day as of 2015.

The term "JPEG" is an acronym for the Joint Photographic Experts Group, which created the standard in 1992. JPEG was largely responsible for the proliferation of digital images and digital photos across the Internet, and later social media.

JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished, and are simply called JPEG.

The MIME media type for JPEG is ''image/jpeg'', except in older Internet Explorer versions, which provide a MIME type of ''image/pjpeg'' when uploading JPEG images. JPEG files usually have a filename extension of .jpg or .jpeg. JPEG/JFIF supports a maximum image size of 65,535×65,535 pixels, hence up to 4 gigapixels for an aspect ratio of 1:1. In 2000, the JPEG group introduced a format intended to be a successor, JPEG 2000, but it was unable to replace the original JPEG as the dominant image standard.


History


Background

The original JPEG specification published in 1992 implements processes from various earlier research papers and patents cited by the CCITT (now ITU-T) and Joint Photographic Experts Group. The JPEG specification cites patents from several companies. The following patents provided the basis for its arithmetic coding algorithm.
* IBM
** February 4, 1986: Kottappuram M. A. Mohiuddin and Jorma J. Rissanen, Multiplication-free multi-alphabet arithmetic code
** February 27, 1990: G. Langdon, J. L. Mitchell, W. B. Pennebaker, and Jorma J. Rissanen, Arithmetic coding encoder and decoder system
** June 19, 1990: W. B. Pennebaker and J. L. Mitchell, Probability adaptation for arithmetic coders
* Mitsubishi Electric
** 1021672: January 21, 1989, Toshihiro Kimura, Shigenori Kino, Fumitaka Ono, Masayuki Yoshida, Coding system
** 2-46275: February 26, 1990, Fumitaka Ono, Tomohiro Kimura, Masayuki Yoshida, and Shigenori Kino, Coding apparatus and coding method

The JPEG specification also cites three other patents from IBM. Other companies cited as patent holders include AT&T (two patents) and Canon Inc. Absent from the list is the patent filed by Compression Labs' Wen-Hsiung Chen and Daniel J. Klenke in October 1986. That patent describes a DCT-based image compression algorithm, and would later be a cause of controversy in 2002 (see ''Patent controversy'' below). However, the JPEG specification did cite two earlier research papers by Wen-Hsiung Chen, published in 1977 and 1984.


JPEG standard

"JPEG" stands for
Joint Photographic Experts Group The Joint Photographic Experts Group (JPEG) is the joint committee between ISO/IEC JTC 1/SC 29 and ITU-T Study Group 16 that created and maintains the JPEG, JPEG 2000, JPEG XR, JPEG XT, JPEG XS, JPEG XL, and related digital image standards. It ...
, the name of the committee that created the JPEG standard and also other still picture coding standards. The "Joint" stood for
ISO ISO is the most common abbreviation for the International Organization for Standardization. ISO or Iso may also refer to: Business and finance * Iso (supermarket), a chain of Danish supermarkets incorporated into the SuperBest chain in 2007 * Iso ...
TC97 WG8 and
CCITT The ITU Telecommunication Standardization Sector (ITU-T) is one of the three sectors (divisions or units) of the International Telecommunication Union (ITU). It is responsible for coordinating standards for telecommunications and Information Commu ...
SGVIII. Founded in 1986, the group developed the JPEG standard during the late 1980s. The group published the JPEG standard in 1992. In 1987, ISO TC 97 became ISO/IEC JTC 1 and, in 1992, CCITT became ITU-T. Currently on the JTC1 side, JPEG is one of two sub-groups of
ISO ISO is the most common abbreviation for the International Organization for Standardization. ISO or Iso may also refer to: Business and finance * Iso (supermarket), a chain of Danish supermarkets incorporated into the SuperBest chain in 2007 * Iso ...
/
IEC The International Electrotechnical Commission (IEC; in French: ''Commission électrotechnique internationale'') is an international standards organization that prepares and publishes international standards for all electrical, electronic and r ...
Joint Technical Committee 1, Subcommittee 29, Working Group 1 (
ISO/IEC JTC 1/SC 29 ISO/IEC JTC 1/SC 29, entitled ''Coding of audio, picture, multimedia and hypermedia information'', is a standardization subcommittee of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the I ...
/WG 1) – titled as ''Coding of still pictures''. On the ITU-T side, ITU-T SG16 is the respective body. The original JPEG Group was organized in 1986, issuing the first JPEG standard in 1992, which was approved in September 1992 as
ITU-T The ITU Telecommunication Standardization Sector (ITU-T) is one of the three sectors (divisions or units) of the International Telecommunication Union (ITU). It is responsible for coordinating standards for telecommunications and Information Commu ...
Recommendation T.81 and, in 1994, as
ISO ISO is the most common abbreviation for the International Organization for Standardization. ISO or Iso may also refer to: Business and finance * Iso (supermarket), a chain of Danish supermarkets incorporated into the SuperBest chain in 2007 * Iso ...
/
IEC The International Electrotechnical Commission (IEC; in French: ''Commission électrotechnique internationale'') is an international standards organization that prepares and publishes international standards for all electrical, electronic and r ...
10918-1. The JPEG standard specifies the
codec A codec is a device or computer program that encodes or decodes a data stream or signal. ''Codec'' is a portmanteau of coder/decoder. In electronic communications, an endec is a device that acts as both an encoder and a decoder on a signal or da ...
, which defines how an image is compressed into a stream of
byte The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit ...
s and decompressed back into an image, but not the file format used to contain that stream. The
Exif Exchangeable image file format (officially Exif, according to JEIDA/JEITA/CIPA specifications) is a standard that specifies formats for images, sound, and ancillary tags used by digital cameras (including smartphones), scanners and other system ...
and
JFIF The JPEG File Interchange Format (JFIF) is an image file format standard published as ITU-T Recommendation T.871 and ISO/IEC 10918-5. It defines supplementary specifications for the container format that contains the image data encoded with the J ...
standards define the commonly used file formats for interchange of JPEG-compressed images. JPEG standards are formally named as ''Information technology – Digital compression and coding of continuous-tone still images''. ISO/IEC 10918 consists of the following parts:
Ecma International Ecma International () is a nonprofit standards organization for information and communication systems. It acquired its current name in 1994, when the European Computer Manufacturers Association (ECMA) changed its name to reflect the organizatio ...
TR/98 specifies the JPEG File Interchange Format (JFIF); the first edition was published in June 2009.


Patent controversy

In 2002, Forgent Networks asserted that it owned and would enforce patent rights on the JPEG technology, arising from a patent that had been filed by Compression Labs' Wen-Hsiung Chen and Daniel J. Klenke on October 27, 1986, and granted on October 6, 1987. While Forgent did not own Compression Labs at the time, Chen later sold Compression Labs to Forgent, before Chen went on to work for Cisco. This led to Forgent acquiring ownership over the patent. Forgent's 2002 announcement created a furor reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard.

The JPEG committee investigated the patent claims in 2002 and was of the opinion that they were invalidated by prior art, a view shared by various experts. Between 2002 and 2004, Forgent was able to obtain about US$105 million by licensing its patent to some 30 companies. In April 2004, Forgent sued 31 other companies to enforce further license payments. In July of the same year, a consortium of 21 large computer companies filed a countersuit with the goal of invalidating the patent. In addition, Microsoft launched a separate lawsuit against Forgent in April 2005. In February 2006, the United States Patent and Trademark Office agreed to re-examine Forgent's JPEG patent at the request of the Public Patent Foundation. On May 26, 2006, the USPTO found the patent invalid based on prior art. The USPTO also found that Forgent knew about the prior art yet intentionally avoided telling the Patent Office, which makes any appeal to reinstate the patent highly unlikely to succeed. Forgent also possesses a similar patent granted by the European Patent Office in 1994, though it is unclear how enforceable it is.

As of October 27, 2006, the U.S. patent's 20-year term appears to have expired, and in November 2006, Forgent agreed to abandon enforcement of patent claims against use of the JPEG standard. The JPEG committee has as one of its explicit goals that their standards (in particular their baseline methods) be implementable without payment of license fees, and it has secured appropriate license rights for its JPEG 2000 standard from over 20 large organizations.

Beginning in August 2007, another company, Global Patent Holdings, LLC, claimed that its patent (U.S. Patent 5,253,341), issued in 1993, is infringed by the downloading of JPEG images on either a website or through e-mail. If not invalidated, this patent could apply to any website that displays JPEG images. The patent was under reexamination by the U.S. Patent and Trademark Office from 2000 to 2007; in July 2007, the Patent Office revoked all of the original claims of the patent but found that an additional claim proposed by Global Patent Holdings (claim 17) was valid. Global Patent Holdings then filed a number of lawsuits based on claim 17 of its patent.

In its first two lawsuits following the reexamination, both filed in Chicago, Illinois, Global Patent Holdings sued the Green Bay Packers, CDW, Motorola, Apple, Orbitz, OfficeMax, Caterpillar, Kraft and Peapod as defendants. A third lawsuit was filed on December 5, 2007, in South Florida against ADT Security Services, AutoNation, Florida Crystals Corp., HearUSA, MovieTickets.com, Ocwen Financial Corp. and Tire Kingdom, and a fourth lawsuit on January 8, 2008, in South Florida against the Boca Raton Resort & Club. A fifth lawsuit was filed against Global Patent Holdings in Nevada. That lawsuit was filed by Zappos.com, Inc., which was allegedly threatened by Global Patent Holdings, and sought a judicial declaration that the '341 patent is invalid and not infringed.

Global Patent Holdings had also used the '341 patent to sue or threaten outspoken critics of broad software patents, including Gregory Aharonian and the anonymous operator of a website blog known as the "Patent Troll Tracker". On December 21, 2007, patent lawyer Vernon Francissen of Chicago asked the U.S. Patent and Trademark Office to reexamine the sole remaining claim of the '341 patent on the basis of new prior art. On March 5, 2008, the U.S. Patent and Trademark Office agreed to reexamine the '341 patent, finding that the new prior art raised substantial new questions regarding the patent's validity (U.S. Patent Office, Granting Reexamination on 5,253,341 C1). In light of the reexamination, the accused infringers in four of the five pending lawsuits filed motions to suspend (stay) their cases until completion of the U.S. Patent and Trademark Office's review of the '341 patent. On April 23, 2008, a judge presiding over the two lawsuits in Chicago, Illinois, granted the motions in those cases. On July 22, 2008, the Patent Office issued the first "Office Action" of the second reexamination, finding the claim invalid based on nineteen separate grounds. On November 24, 2009, a Reexamination Certificate was issued cancelling all claims.

Beginning in 2011 and continuing as of early 2013, an entity known as Princeton Digital Image Corporation, based in Eastern Texas, began suing large numbers of companies for alleged infringement of a patent referred to as the '056 patent. Princeton claims that the JPEG image compression standard infringes the '056 patent and has sued large numbers of websites, retailers, camera and device manufacturers and resellers. The patent was originally owned and assigned to General Electric. The patent expired in December 2007, but Princeton has sued large numbers of companies for "past infringement" of this patent. (Under U.S. patent laws, a patent owner can sue for "past infringement" up to six years before the filing of a lawsuit, so Princeton could theoretically have continued suing companies until December 2013.) As of March 2013, Princeton had suits pending in New York and Delaware against more than 55 companies. General Electric's involvement in the suit is unknown, although court records indicate that it assigned the patent to Princeton in 2009 and retains certain rights in the patent.


Typical use

The JPEG compression algorithm operates at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where reducing the amount of data used for an image is important for responsive presentation, JPEG's compression benefits make it popular. JPEG/Exif is also the most common format saved by digital cameras.

However, JPEG is not well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts. Such images are better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format. The JPEG standard includes a lossless coding mode, but that mode is not supported in most products.

As the typical use of JPEG is a lossy compression method, which reduces the image fidelity, it is inappropriate for exact reproduction of imaging data (such as some scientific and medical imaging applications and certain technical image processing work). JPEG is also not well suited to files that will undergo multiple edits, as some image quality is lost each time the image is recompressed, particularly if the image is cropped or shifted, or if encoding parameters are changed; see digital generation loss for details. To prevent image information loss during sequential and repetitive editing, the first edit can be saved in a lossless format, subsequently edited in that format, then finally published as JPEG for distribution.


JPEG compression

JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This mathematical operation converts each frame/field of the video source from the spatial (2D) domain into the frequency domain (a.k.a. transform domain). A perceptual model based loosely on the human psychovisual system discards high-frequency information, i.e. sharp transitions in intensity and color hue. In the transform domain, the process of reducing information is called quantization. In simpler terms, quantization is a method for optimally reducing a large number scale (with different occurrences of each number) into a smaller one, and the transform domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small values with high compressibility. The quantized coefficients are then sequenced and losslessly packed into the output bitstream. Nearly all software implementations of JPEG permit user control over the compression ratio (as well as other optional parameters), allowing the user to trade off picture quality for smaller file size. In embedded applications (such as miniDV, which uses a similar DCT-compression scheme), the parameters are pre-selected and fixed for the application.

The compression method is usually lossy, meaning that some original image information is lost and cannot be restored, possibly affecting image quality. There is an optional lossless mode defined in the JPEG standard. However, this mode is not widely supported in products.

There is also an interlaced ''progressive'' JPEG format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, support for progressive JPEGs is not universal. When progressive JPEGs are received by programs that do not support them (such as versions of Internet Explorer before Windows 7), the software displays the image only after it has been completely downloaded.

There are also many medical imaging, traffic and camera applications that create and process 12-bit JPEG images, both grayscale and color. The 12-bit JPEG format is included in the Extended part of the JPEG specification. The libjpeg codec supports 12-bit JPEG, and there even exists a high-performance version.


Lossless editing

Several alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image size is a multiple of 1 MCU block (Minimum Coded Unit, usually 16 pixels in both directions for 4:2:0 chroma subsampling). Utilities that implement this include:
* jpegtran and its GUI, Jpegcrop.
* IrfanView using "JPG Lossless Crop (PlugIn)" and "JPG Lossless Rotation (PlugIn)", which require installing the JPG_TRANSFORM plugin.
* FastStone Image Viewer using "Lossless Crop to File" and "JPEG Lossless Rotate".
* XnViewMP using "JPEG lossless transformations".
* ACDSee, which supports lossless rotation (but not lossless cropping) with its "Force lossless JPEG operations" option.

Blocks can be rotated in 90-degree increments, flipped in the horizontal, vertical and diagonal axes and moved about in the image. Not all blocks from the original image need to be used in the modified one. The top and left edge of a JPEG image must lie on an 8 × 8 pixel block boundary, but the bottom and right edge need not do so. This limits the possible lossless crop operations, and also prevents flips and rotations of an image whose bottom or right edge does not lie on a block boundary for all channels (because the edge would end up on top or left, where, as mentioned above, a block boundary is obligatory).

Rotations of an image whose dimensions are not a multiple of 8 or 16 (the required value depends on the chroma subsampling) are not lossless; rotating such an image causes the blocks to be recomputed, which results in loss of quality. When using lossless cropping, if the bottom or right side of the crop region is not on a block boundary, then the rest of the data from the partially used blocks will still be present in the cropped file and can be recovered. It is also possible to transform between baseline and progressive formats without any loss of quality, since the only difference is the order in which the coefficients are placed in the file. Furthermore, several JPEG images can be losslessly joined, as long as they were saved with the same quality and the edges coincide with block boundaries.
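Whether a given image can be rotated or flipped exactly therefore reduces to checking its dimensions against the MCU grid. A minimal sketch of that check (assuming 4:2:0 subsampling, so a 16×16 MCU; the function name is made up for illustration):

```python
# Sketch: check whether a lossless rotation/flip of a JPEG can be exact.
# Assumes 4:2:0 chroma subsampling (16x16 MCU); helper name is illustrative only.
def can_rotate_losslessly(width: int, height: int, mcu: int = 16) -> bool:
    # The top/left edges always sit on a block boundary; rotation is only exact
    # when the bottom and right edges do too, i.e. both dimensions are
    # multiples of the MCU size.
    return width % mcu == 0 and height % mcu == 0

print(can_rotate_losslessly(640, 480))  # True: both dimensions are multiples of 16
print(can_rotate_losslessly(641, 480))  # False: right edge not on a block boundary
```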


JPEG files

The file format known as "JPEG Interchange Format" (JIF) is specified in Annex B of the standard. However, this "pure" file format is rarely used, primarily because of the difficulty of programming encoders and decoders that fully implement all aspects of the standard and because of certain shortcomings of the standard:
* Color space definition
* Component sub-sampling registration
* Pixel aspect ratio definition.

Several additional standards have evolved to address these issues. The first of these, released in 1992, was the JPEG File Interchange Format (or JFIF), followed in recent years by Exchangeable image file format (Exif) and ICC color profiles. Both of these formats use the actual JIF byte layout, consisting of different ''markers'', but in addition employ one of the JIF standard's extension points, namely the ''application markers'': JFIF uses APP0, while Exif uses APP1. Within these segments of the file, which were left for future use in the JIF standard and are not read by it, these standards add specific metadata. Thus, in some ways, JFIF is a cut-down version of the JIF standard in that it specifies certain constraints (such as not allowing all the different encoding modes), while in other ways it is an extension of JIF due to the added metadata.

Image files that employ JPEG compression are commonly called "JPEG files", and are stored in variants of the JIF image format. Most image capture devices (such as digital cameras) that output JPEG are actually creating files in the Exif format, the format that the camera industry has standardized on for metadata interchange. On the other hand, since the Exif standard does not allow color profiles, most image editing software stores JPEG in JFIF format, and also includes the APP1 segment from the Exif file to include the metadata in an almost-compliant way; the JFIF standard is interpreted somewhat flexibly.

Strictly speaking, the JFIF and Exif standards are incompatible, because each specifies that its marker segment (APP0 or APP1, respectively) appear first. In practice, most JPEG files contain a JFIF marker segment that precedes the Exif header. This allows older readers to correctly handle the older-format JFIF segment, while newer readers also decode the following Exif segment, being less strict about requiring it to appear first.


JPEG filename extensions

The most common filename extensions for files employing JPEG compression are .jpg and .jpeg, though .jpe, .jfif and .jif are also used. It is also possible for JPEG data to be embedded in other file types: TIFF-encoded files often embed a JPEG image as a thumbnail of the main image, and MP3 files can contain a JPEG of cover art in the ID3v2 tag.


Color profile

Many JPEG files embed an ICC color profile (color space). Commonly used color profiles include sRGB and Adobe RGB. Because these color spaces use a non-linear transformation, the dynamic range of an 8-bit JPEG file is about 11 stops; see gamma curve. If the image doesn't specify color profile information (''untagged''), the color space is assumed to be sRGB for the purposes of display on webpages.


Syntax and structure

A JPEG image consists of a sequence of ''segments'', each beginning with a ''marker'', each of which begins with a 0xFF byte followed by a byte indicating what kind of marker it is. Some markers consist of just those two bytes; others are followed by two bytes (high then low) indicating the length of marker-specific payload data that follows. (The length includes the two bytes for the length, but not the two bytes for the marker.) Some markers are followed by entropy-coded data; the length of such a marker does not include the entropy-coded data. Note that consecutive 0xFF bytes are used as fill bytes for padding purposes, although this fill-byte padding should only ever take place for markers immediately following entropy-coded scan data (see JPEG specification sections B.1.1.2 and E.1.2 for details; specifically "In all cases where markers are appended after the compressed data, optional 0xFF fill bytes may precede the marker").

Within the entropy-coded data, after any 0xFF byte, a 0x00 byte is inserted by the encoder before the next byte, so that there does not appear to be a marker where none is intended, preventing framing errors. Decoders must skip this 0x00 byte. This technique, called byte stuffing (see JPEG specification section F.1.2.3), is only applied to the entropy-coded data, not to marker payload data. Note however that entropy-coded data has a few markers of its own; specifically the Reset markers (0xD0 through 0xD7), which are used to isolate independent chunks of entropy-coded data to allow parallel decoding, and encoders are free to insert these Reset markers at regular intervals (although not all encoders do this). There are other ''Start Of Frame'' markers that introduce other kinds of JPEG encodings. Since several vendors might use the same APP''n'' marker type, application-specific markers often begin with a standard or vendor name (e.g., "Exif" or "Adobe") or some other identifying string.

At a restart marker, block-to-block predictor variables are reset, and the bitstream is synchronized to a byte boundary. Restart markers provide means for recovery after bitstream error, such as transmission over an unreliable network or file corruption. Since the runs of macroblocks between restart markers may be independently decoded, these runs may be decoded in parallel.


JPEG codec example

Although a JPEG file can be encoded in various ways, most commonly it is done with JFIF encoding. The encoding process consists of several steps:
# The representation of the colors in the image is converted from RGB to Y′CBCR, consisting of one luma component (Y'), representing brightness, and two chroma components (CB and CR), representing color. This step is sometimes skipped.
# The resolution of the chroma data is reduced, usually by a factor of 2 or 3. This reflects the fact that the eye is less sensitive to fine color details than to fine brightness details.
# The image is split into blocks of 8×8 pixels, and for each block, each of the Y, CB, and CR data undergoes the discrete cosine transform (DCT). A DCT is similar to a Fourier transform in the sense that it produces a kind of spatial frequency spectrum.
# The amplitudes of the frequency components are quantized. Human vision is much more sensitive to small variations in color or brightness over large areas than to the strength of high-frequency brightness variations. Therefore, the magnitudes of the high-frequency components are stored with a lower accuracy than the low-frequency components. The quality setting of the encoder (for example 50 or 95 on a scale of 0–100 in the Independent JPEG Group's library) affects to what extent the resolution of each frequency component is reduced. If an excessively low quality setting is used, the high-frequency components are discarded altogether.
# The resulting data for all 8×8 blocks is further compressed with a lossless algorithm, a variant of Huffman encoding.

The decoding process reverses these steps, except the ''quantization'' because it is irreversible. In the remainder of this section, the encoding and decoding processes are described in more detail.


Encoding

Many of the options in the JPEG standard are not commonly used, and as mentioned above, most image software uses the simpler JFIF format when creating a JPEG file, which among other things specifies the encoding method. Here is a brief description of one of the more common methods of encoding when applied to an input that has 24 bits per pixel (eight each of red, green, and blue). This particular option is a lossy data compression method.


Color space transformation

First, the image should be converted from RGB (by default sRGB, but other color spaces are possible) into a different color space called Y′CBCR (or, informally, YCbCr). It has three components Y', CB and CR: the Y' component represents the brightness of a pixel, and the CB and CR components represent the chrominance (split into blue and red components). This is basically the same color space as used by digital color television as well as digital video, including video DVDs.

The color space conversion allows greater compression without a significant effect on perceptual image quality (or greater perceptual image quality for the same compression). The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel. This more closely corresponds to the perception of color in the human visual system. The color transformation also improves compression by statistical decorrelation.

A particular conversion to Y′CBCR is specified in the JFIF standard, and should be performed for the resulting JPEG file to have maximum compatibility. However, some JPEG implementations in "highest quality" mode do not apply this step and instead keep the color information in the RGB color model, where the image is stored in separate channels for red, green and blue brightness components. This results in less efficient compression, and would not likely be used when file size is especially important.


Downsampling

Due to the densities of color- and brightness-sensitive receptors in the human eye, humans can see considerably more fine detail in the brightness of an image (the Y' component) than in the hue and color saturation of an image (the Cb and Cr components). Using this knowledge, encoders can be designed to compress images more efficiently. The transformation into this color model enables the next usual step, which is to reduce the spatial resolution of the Cb and Cr components (called "downsampling" or "chroma subsampling"). The ratios at which the downsampling is ordinarily done for JPEG images are 4:4:4 (no downsampling), 4:2:2 (reduction by a factor of 2 in the horizontal direction), or (most commonly) 4:2:0 (reduction by a factor of 2 in both the horizontal and vertical directions). For the rest of the compression process, Y', Cb and Cr are processed separately and in a very similar manner.


Block splitting

After subsampling, each
channel Channel, channels, channeling, etc., may refer to: Geography * Channel (geography), in physical geography, a landform consisting of the outline (banks) of the path of a narrow body of water. Australia * Channel Country, region of outback Austral ...
must be split into 8×8 blocks. Depending on chroma subsampling, this yields Minimum Coded Unit (MCU) blocks of size 8×8 (4:4:4 – no subsampling), 16×8 (4:2:2), or most commonly 16×16 (4:2:0). In
video compression In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression ...
MCUs are called
macroblock The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, and is further subdivided into transform ...
s. If the data for a channel does not represent an integer number of blocks then the encoder must fill the remaining area of the incomplete blocks with some form of dummy data. Filling the edges with a fixed color (for example, black) can create
ringing artifact In signal processing, particularly digital image processing, ringing artifacts are artifacts that appear as spurious signals near sharp transitions in a signal. Visually, they appear as bands or "ghosts" near edges; audibly, they appear as "ec ...
s along the visible part of the border; repeating the edge pixels is a common technique that reduces (but does not necessarily eliminate) such artifacts, and more sophisticated border filling techniques can also be applied.
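A sketch of block splitting with the edge-pixel padding strategy just mentioned (NumPy's 'edge' padding repeats the last row and column; the function name and the returned layout are illustrative, not taken from any particular codec):

```python
import numpy as np

def split_into_blocks(channel, block=8):
    """Pad a channel by repeating its edge pixels so that its dimensions are
    multiples of the block size, then split it into block x block tiles."""
    h, w = channel.shape
    pad_h = (-h) % block
    pad_w = (-w) % block
    padded = np.pad(channel, ((0, pad_h), (0, pad_w)), mode='edge')
    H, W = padded.shape
    return (padded.reshape(H // block, block, W // block, block)
                  .swapaxes(1, 2))            # shape: (rows, cols, block, block)
```

For 4:2:0 subsampling, one 16×16 MCU then corresponds to four such Y' blocks plus one Cb block and one Cr block.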


Discrete cosine transform

Next, each 8×8 block of each component (Y, Cb, Cr) is converted to a
frequency-domain In physics, electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time. Put simply, a time-domain graph shows how a sign ...
representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT), see Citation 1 in discrete cosine transform. The DCT is sometimes referred to as "type-II DCT" in the context of a family of transforms as in discrete cosine transform, and the corresponding inverse (IDCT) is denoted as "type-III DCT". As an example, one such 8×8 8-bit subimage might be: : \left[ \begin 52 & 55 & 61 & 66 & 70 & 61 & 64 & 73 \\ 63 & 59 & 55 & 90 & 109 & 85 & 69 & 72 \\ 62 & 59 & 68 & 113 & 144 & 104 & 66 & 73 \\ 63 & 58 & 71 & 122 & 154 & 106 & 70 & 69 \\ 67 & 61 & 68 & 104 & 126 & 88 & 68 & 70 \\ 79 & 65 & 60 & 70 & 77 & 68 & 58 & 75 \\ 85 & 71 & 64 & 59 & 55 & 61 & 65 & 83 \\ 87 & 79 & 69 & 68 & 65 & 76 & 78 & 94 \end \right]. Before computing the DCT of the 8×8 block, its values are shifted from a positive range to one centered on zero. For an 8-bit image, each entry in the original block falls in the range
[0, 255]. The midpoint of the range (in this case, the value 128) is subtracted from each entry to produce a data range that is centered on zero, so that the modified range is [−128, 127]. This step reduces the dynamic range requirements in the DCT processing stage that follows. It results in the following values (with x running horizontally and y vertically):

:g =
\begin{bmatrix}
-76 & -73 & -67 & -62 & -58 & -67 & -64 & -55 \\
-65 & -69 & -73 & -38 & -19 & -43 & -59 & -56 \\
-66 & -69 & -60 & -15 & 16 & -24 & -62 & -55 \\
-65 & -70 & -57 & -6 & 26 & -22 & -58 & -59 \\
-61 & -67 & -60 & -24 & -2 & -40 & -60 & -58 \\
-49 & -63 & -68 & -58 & -51 & -60 & -70 & -53 \\
-43 & -57 & -64 & -69 & -73 & -67 & -63 & -45 \\
-41 & -49 & -59 & -60 & -63 & -52 & -50 & -34
\end{bmatrix}.

The next step is to take the two-dimensional DCT, which is given by:

:G_{u,v} = \frac{1}{4} \alpha(u)\,\alpha(v) \sum_{x=0}^{7} \sum_{y=0}^{7} g_{x,y} \cos\left[\frac{(2x+1)u\pi}{16}\right] \cos\left[\frac{(2y+1)v\pi}{16}\right]

where
* u is the horizontal
spatial frequency In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier tra ...
, for the integers 0 \leq u < 8.
* v is the vertical spatial frequency, for the integers 0 \leq v < 8.
* \alpha(u) = \begin{cases} \frac{1}{\sqrt{2}}, & \text{if } u = 0 \\ 1, & \text{otherwise} \end{cases} is a normalizing scale factor to make the transformation
orthonormal In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal (or perpendicular along a line) unit vectors. A set of vectors form an orthonormal set if all vectors in the set are mutually orthogonal and all of un ...
* \ g_ is the pixel value at coordinates \ (x,y) * \ G_ is the DCT coefficient at coordinates \ (u,v). If we perform this transformation on our matrix above, we get the following (rounded to the nearest two digits beyond the decimal point): :G= \begin u \\ \longrightarrow \\ \left[ \begin -415.38 & -30.19 & -61.20 & 27.24 & 56.12 & -20.10 & -2.39 & 0.46 \\ 4.47 & -21.86 & -60.76 & 10.25 & 13.15 & -7.09 & -8.54 & 4.88 \\ -46.83 & 7.37 & 77.13 & -24.56 & -28.91 & 9.93 & 5.42 & -5.65 \\ -48.53 & 12.07 & 34.10 & -14.76 & -10.24 & 6.30 & 1.83 & 1.95 \\ 12.12 & -6.55 & -13.20 & -3.95 & -1.87 & 1.75 & -2.79 & 3.14 \\ -7.73 & 2.91 & 2.38 & -5.94 & -2.38 & 0.94 & 4.30 & 1.85 \\ -1.03 & 0.18 & 0.42 & -2.42 & -0.88 & -3.02 & 4.12 & -0.66 \\ -0.17 & 0.14 & -1.07 & -4.19 & -1.17 & -0.10 & 0.50 & 1.68 \end \right] \end \Bigg\downarrow v. Note the top-left corner entry with the rather large magnitude. This is the DC coefficient (also called the constant component), which defines the basic hue for the entire block. The remaining 63 coefficients are the AC coefficients (also called the alternating components). The advantage of the DCT is its tendency to aggregate most of the signal in one corner of the result, as may be seen above. The quantization step to follow accentuates this effect while simultaneously reducing the overall size of the DCT coefficients, resulting in a signal that is easy to compress efficiently in the entropy stage. The DCT temporarily increases the bit-depth of the data, since the DCT coefficients of an 8-bit/component image take up to 11 or more bits (depending on fidelity of the DCT calculation) to store. This may force the codec to temporarily use 16-bit numbers to hold these coefficients, doubling the size of the image representation at this point; these values are typically reduced back to 8-bit values by the quantization step. The temporary increase in size at this stage is not a performance concern for most JPEG implementations, since typically only a very small part of the image is stored in full DCT form at any given time during the image encoding or decoding process.
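A direct transcription of the DCT formula above makes the computation concrete. The sketch below deliberately uses the naive O(8^4) evaluation for clarity (real codecs use fast factorizations); arrays are indexed [row, column], so the input block is read as g[y, x] and the result as G[v, u]. Applied to the shifted block above, the top-left output is the DC coefficient of about −415.4.

```python
import numpy as np

def dct2_8x8(g):
    """Naive 2-D type-II DCT of one 8x8 level-shifted block.

    g: 8x8 array with values roughly in [-128, 127], indexed g[y, x].
    Returns G[v, u], the frequency-domain coefficients.
    """
    def alpha(k):                      # normalizing factor from the formula above
        return 1.0 / np.sqrt(2.0) if k == 0 else 1.0

    G = np.zeros((8, 8))
    for v in range(8):
        for u in range(8):
            s = sum(g[y][x]
                    * np.cos((2 * x + 1) * u * np.pi / 16)
                    * np.cos((2 * y + 1) * v * np.pi / 16)
                    for x in range(8) for y in range(8))
            G[v, u] = 0.25 * alpha(u) * alpha(v) * s
    return G
```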


Quantization

The human eye is good at seeing small differences in
brightness Brightness is an attribute of visual perception in which a source appears to be radiating or reflecting light. In other words, brightness is the perception elicited by the luminance of a visual target. The perception is not linear to luminance, ...
over a relatively large area, but not so good at distinguishing the exact strength of a high frequency brightness variation. This allows one to greatly reduce the amount of information in the high frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This rounding operation is the only lossy operation in the whole process (other than chroma subsampling) if the DCT computation is performed with sufficiently high precision. As a result of this, it is typically the case that many of the higher frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to represent. The elements in the
quantization matrix Quantization, involved in image processing, is a lossy compression technique achieved by compressing a range of values to a single quantum (discrete) value. When the number of discrete symbols in a given stream is reduced, the stream becomes more ...
control the compression ratio, with larger values producing greater compression. A typical quantization matrix (for a quality of 50% as specified in the original JPEG standard) is as follows:

:Q =
\begin{bmatrix}
16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\
12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\
14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\
14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\
18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\
24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\
49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\
72 & 92 & 95 & 98 & 112 & 100 & 103 & 99
\end{bmatrix}.

The quantized DCT coefficients are computed with

:B_{j,k} = \mathrm{round} \left( \frac{G_{j,k}}{Q_{j,k}} \right) \text{ for } j = 0, 1, 2, \ldots, 7;\ k = 0, 1, 2, \ldots, 7

where G is the matrix of unquantized DCT coefficients, Q is the quantization matrix above, and B is the matrix of quantized DCT coefficients. Using this quantization matrix with the DCT coefficient matrix from above results in:

:B =
\begin{bmatrix}
-26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\
0 & -2 & -4 & 1 & 1 & 0 & 0 & 0 \\
-3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\
-3 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}.

For example, using −415.38 (the DC coefficient) and rounding to the nearest integer:

:\mathrm{round} \left( \frac{-415.38}{16} \right) = \mathrm{round} \left( -25.96 \right) = -26.

Notice that most of the higher-frequency elements of the sub-block (i.e., those with a horizontal or vertical spatial frequency greater than 4) are quantized to zero.
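In code, the quantization step is a single elementwise divide-and-round against the chosen table. The sketch below uses the quality-50 luminance table reproduced from above (NumPy-based and illustrative, not an excerpt of any particular codec). Applied to the G matrix above, it reproduces B, e.g. round(−415.38 / 16) = −26 for the DC term.

```python
import numpy as np

# Standard luminance quantization table for quality 50 (reproduced from above).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quantize(G, Q=Q50):
    """Divide each DCT coefficient by its table entry and round to the
    nearest integer -- the only deliberately lossy step of the pipeline."""
    return np.round(G / Q).astype(int)
```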


Entropy coding

Entropy coding is a special form of
lossless data compression Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information. Lossless compression is possible because most real-world data exhibits statistic ...
. It involves arranging the image components in a "
zigzag A zigzag is a pattern made up of small corners at variable angles, though constant within the zigzag, tracing a path between two parallel lines; it can be described as both jagged and fairly regular. In geometry, this pattern is described as a ...
" order employing
run-length encoding Run-length encoding (RLE) is a form of lossless data compression in which ''runs'' of data (sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original ...
(RLE) algorithm that groups similar frequencies together, inserting length coding zeros, and then using
Huffman coding In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code proceeds by means of Huffman coding, an algori ...
on what is left. The JPEG standard also allows, but does not require, decoders to support the use of
arithmetic coding Arithmetic coding (AC) is a form of entropy encoding used in lossless data compression. Normally, a string of characters is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic e ...
, which is mathematically superior to Huffman coding. However, this feature has rarely been used, as it was historically covered by
patent A patent is a type of intellectual property that gives its owner the legal right to exclude others from making, using, or selling an invention for a limited period of time in exchange for publishing an enabling disclosure of the invention."A p ...
s requiring royalty-bearing licenses, and because it is slower to encode and decode compared to Huffman coding. Arithmetic coding typically makes files about 5–7% smaller.

The previous quantized DC coefficient is used to predict the current quantized DC coefficient, and the difference between the two is encoded rather than the actual value. The encoding of the 63 quantized AC coefficients does not use such prediction differencing. The zigzag sequence for the quantized coefficients above is:

:−26, −3, 0, −3, −2, −6, 2, −4, 1, −3, 1, 1, 5, 1, 2, −1, 1, −1, 2, 0, 0, 0, 0, 0, −1, −1, 0, 0, …, 0

If the ''i''-th block is represented by B_i and positions within each block are represented by (p,q), where p = 0, 1, ..., 7 and q = 0, 1, ..., 7, then any coefficient in the DCT image can be represented as B_i(p,q). Thus, in the above scheme, the order of encoding coefficients (for the ''i''-th block) is B_i(0,0), B_i(0,1), B_i(1,0), B_i(2,0), B_i(1,1), B_i(0,2), B_i(0,3), B_i(1,2) and so on. This encoding mode is called baseline ''sequential'' encoding. Baseline JPEG also supports ''progressive'' encoding. While sequential encoding encodes the coefficients of a single block at a time (in a zigzag manner), progressive encoding encodes a similar-positioned batch of coefficients of all blocks in one go (called a ''scan''), followed by the next batch of coefficients of all blocks, and so on. For example, if the image is divided into N 8×8 blocks B_0, B_1, B_2, ..., B_{N-1}, then a 3-scan progressive encoding encodes the DC component B_i(0,0) for all blocks, i.e., for all i = 0, 1, 2, ..., N−1, in the first scan. This is followed by a second scan that encodes a few more components of all blocks (assuming four more components, these are B_i(0,1) through B_i(1,1), still in zigzag order, so the sequence is B_0(0,1), B_0(1,0), B_0(2,0), B_0(1,1), B_1(0,1), B_1(1,0), ..., B_{N-1}(2,0), B_{N-1}(1,1)), followed by all the remaining coefficients of all blocks in the last scan. Once all similar-positioned coefficients have been encoded, the next position to be encoded is the one occurring next in the zigzag traversal. It has been found that ''baseline progressive'' JPEG encoding usually gives better compression than ''baseline sequential'' JPEG, due to the ability to use different Huffman tables (see below) tailored for the different frequencies in each "scan" or "pass" (which includes similar-positioned coefficients), though the difference is not large. In the rest of the article, it is assumed that the coefficient pattern generated is due to sequential mode.

In order to encode the coefficient pattern generated above, JPEG uses Huffman encoding. The JPEG standard provides general-purpose Huffman tables; encoders may also choose to generate Huffman tables optimized for the actual frequency distributions in the images being encoded. The process of encoding the zigzag quantized data begins with the run-length encoding explained below, where:
* ''RUNLENGTH'' is the number of zeroes that came before a given non-zero AC coefficient.
* ''SIZE'' is the number of bits required to represent the coefficient's amplitude.
* ''AMPLITUDE'' is the bit-representation of the coefficient.
The run-length encoding works by examining each non-zero AC coefficient and determining how many zeroes preceded it (since the previous non-zero AC coefficient). With this information, two symbols are created for each such coefficient:
:''Symbol 1'': (RUNLENGTH, SIZE)
:''Symbol 2'': (AMPLITUDE)
Both ''RUNLENGTH'' and ''SIZE'' rest in the same byte, meaning that each only contains four bits of information. 
The higher bits deal with the number of zeroes, while the lower bits denote the number of bits necessary to encode the value of the coefficient. The immediate implication is that ''Symbol 1'' can only store information about the first 15 zeroes preceding the non-zero AC coefficient. However, JPEG defines two special Huffman code words. One ends the sequence prematurely when the remaining coefficients are zero (called "End-of-Block" or "EOB"); the other is used when the run of zeroes goes beyond 15 before reaching a non-zero AC coefficient. When 16 zeroes are encountered before a given non-zero AC coefficient, ''Symbol 1'' is encoded "specially" as (15, 0)(0). The overall process continues until "EOB", denoted by (0, 0), is reached. With this in mind, the sequence from earlier becomes:
:(0, 2)(-3);(1, 2)(-3);(0, 2)(-2);(0, 3)(-6);(0, 2)(2);(0, 3)(-4);(0, 1)(1);(0, 2)(-3);(0, 1)(1);(0, 1)(1);
:(0, 3)(5);(0, 1)(1);(0, 2)(2);(0, 1)(-1);(0, 1)(1);(0, 1)(-1);(0, 2)(2);(5, 1)(-1);(0, 1)(-1);(0, 0);
(The first value in the matrix, −26, is the DC coefficient; it is not encoded the same way. See above.) From here, frequency calculations are made based on occurrences of the coefficients. In our example block, most of the quantized coefficients are small numbers that are not immediately preceded by a zero coefficient. These more-frequent cases will be represented by shorter code words.
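The zigzag scan and symbol generation just described can be sketched as follows. This is an illustrative Python fragment, not an excerpt of any real codec: the zigzag order is generated from the diagonal rule described above, the DC coefficient is skipped (it is difference-coded separately), and the (15, 0) and EOB special cases follow the convention given in the text.

```python
# Zigzag scan order for an 8x8 block: walk the anti-diagonals, alternating direction.
ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                key=lambda rc: (rc[0] + rc[1],
                                rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def ac_symbols(block):
    """Turn one quantized 8x8 block into (RUNLENGTH, SIZE)(AMPLITUDE) symbols
    for its 63 AC coefficients, ending with the (0, 0) End-of-Block marker."""
    ac = [int(block[r][c]) for r, c in ZIGZAG][1:]   # skip the DC coefficient
    symbols, run = [], 0
    for coeff in ac:
        if coeff == 0:
            run += 1
            continue
        while run > 15:                    # a run of 16 zeroes becomes (15, 0)(0)
            symbols.append(((15, 0), 0))
            run -= 16
        size = abs(coeff).bit_length()     # bits needed for the amplitude
        symbols.append(((run, size), coeff))
        run = 0
    symbols.append(((0, 0), None))         # EOB: every remaining coefficient is zero
    return symbols
```

Applied to the quantized block B from the previous section, this yields the symbol sequence listed above, with EOB represented as (0, 0).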


Compression ratio and artifacts

The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase. Ten to one compression usually results in an image that cannot be distinguished by eye from the original. A compression ratio of 100:1 is usually possible, but will look distinctly artifacted compared to the original. The appropriate level of compression depends on the use to which the image will be put. Those who use the
World Wide Web The World Wide Web (WWW), commonly known as the Web, is an information system enabling documents and other web resources to be accessed over the Internet. Documents and downloadable media are made available to the network through web se ...
may be familiar with the irregularities known as
compression artifact A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it beco ...
s that appear in JPEG images, which may take the form of noise around contrasting edges (especially curves and corners), or "blocky" images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around sharp corners between contrasting colors (text is a good example, as it contains many such corners). The analogous artifacts in
MPEG The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by International Organization for Standardization, ISO and International Electrotechnical Commission, IEC that sets standards for media coding, includ ...
video are referred to as ''
mosquito noise A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it bec ...
,'' as the resulting "edge busyness" and spurious dots, which change over time, resemble mosquitoes swarming around the object.Phuc-Tue Le Dinh and Jacques Patry
Video compression artifacts and MPEG noise reduction
. Video Imaging DesignLine. February 24, 2006. Retrieved May 28, 2009.
These artifacts can be reduced by choosing a lower level of
compression Compression may refer to: Physical science *Compression (physics), size reduction due to forces *Compression member, a structural element such as a column *Compressibility, susceptibility to compression *Gas compression *Compression ratio, of a c ...
; they may be completely avoided by saving an image using a
lossless Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information. Lossless compression is possible because most real-world data exhibits statistic ...
file format, though this will result in a larger file size. The images created with ray-tracing programs have noticeable blocky shapes on the terrain. Certain low-intensity compression artifacts might be acceptable when simply viewing the images, but can be emphasized if the image is subsequently processed, usually resulting in unacceptable quality. Consider the example below, demonstrating the effect of lossy compression on an
edge detection Edge detection includes a variety of mathematical methods that aim at identifying edges, curves in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The same problem of finding discontinuitie ...
processing step. Some programs allow the user to vary the amount by which individual blocks are compressed. Stronger compression is applied to areas of the image that show fewer artifacts. This way it is possible to manually reduce JPEG file size with less loss of quality. Since the quantization stage ''always'' results in a loss of information, the JPEG standard is always a lossy compression codec. (Information is lost both in quantizing and in rounding the floating-point numbers.) Even if the quantization matrix is a
matrix of ones In mathematics, a matrix of ones or all-ones matrix is a matrix where every entry is equal to one. Examples of standard notation are given below: :J_2 = \begin 1 & 1 \\ 1 & 1 \end;\quad J_3 = \begin 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end;\quad ...
, information will still be lost in the rounding step.
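For instance, even with a unit quantizer the DC coefficient of the example block above would still be rounded,

:\mathrm{round}\left(\frac{-415.38}{1}\right) = -415 \neq -415.38,

so the reconstructed block cannot be bit-exact even before any chroma subsampling is considered.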


Decoding

Decoding to display the image consists of doing all the above in reverse. Taking the DCT coefficient matrix (after adding the difference of the DC coefficient back in) : \left[ \begin -26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\ 0 & -2 & -4 & 1 & 1 & 0 & 0 & 0 \\ -3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\ -3 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end \right] and taking the Hadamard product (matrices), entry-for-entry product with the quantization matrix from above results in : \left[ \begin -416 & -33 & -60 & 32 & 48 & -40 & 0 & 0 \\ 0 & -24 & -56 & 19 & 26 & 0 & 0 & 0 \\ -42 & 13 & 80 & -24 & -40 & 0 & 0 & 0 \\ -42 & 17 & 44 & -29 & 0 & 0 & 0 & 0 \\ 18 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end \right] which closely resembles the original DCT coefficient matrix for the top-left portion. The next step is to take the two-dimensional inverse DCT (a 2D type-III DCT), which is given by: f_ = \frac \sum_^7 \sum_^7 \alpha(u) \alpha(v) F_ \cos \left frac \right \cos \left frac \right where * \ x is the pixel row, for the integers \ 0 \leq x < 8. * \ y is the pixel column, for the integers \ 0 \leq y < 8. * \ \alpha(u) is the normalizing scale factor defined earlier, for the integers \ 0 \leq u < 8. * \ F_ is the approximated DCT coefficient at coordinates \ (u,v). * \ f_ is the reconstructed pixel value at coordinates \ (x,y) Rounding the output to integer values (since the original had integer values) results in an image with values (still shifted down by 128) : \left[ \begin -66 & -63 & -71 & -68 & -56 & -65 & -68 & -46 \\ -71 & -73 & -72 & -46 & -20 & -41 & -66 & -57 \\ -70 & -78 & -68 & -17 & 20 & -14 & -61 & -63 \\ -63 & -73 & -62 & -8 & 27 & -14 & -60 & -58 \\ -58 & -65 & -61 & -27 & -6 & -40 & -68 & -50 \\ -57 & -57 & -64 & -58 & -48 & -66 & -72 & -47 \\ -53 & -46 & -61 & -74 & -65 & -63 & -62 & -45 \\ -47 & -34 & -53 & -74 & -60 & -47 & -47 & -41 \end \right] and adding 128 to each entry : \left[ \begin 62 & 65 & 57 & 60 & 72 & 63 & 60 & 82 \\ 57 & 55 & 56 & 82 & 108 & 87 & 62 & 71 \\ 58 & 50 & 60 & 111 & 148 & 114 & 67 & 65 \\ 65 & 55 & 66 & 120 & 155 & 114 & 68 & 70 \\ 70 & 63 & 67 & 101 & 122 & 88 & 60 & 78 \\ 71 & 71 & 64 & 70 & 80 & 62 & 56 & 81 \\ 75 & 82 & 67 & 54 & 63 & 65 & 66 & 83 \\ 81 & 94 & 75 & 54 & 68 & 81 & 81 & 87 \end \right]. This is the decompressed subimage. In general, the decompression process may produce values outside the original input range of
[0, 255]. If this occurs, the decoder needs to clip the output values so as to keep them within that range to prevent overflow when storing the decompressed image with the original bit depth. The decompressed subimage can be compared to the original subimage (also see the images to the right): taking the difference (original − decompressed) gives the following error values:

:\begin{bmatrix}
-10 & -10 & 4 & 6 & -2 & -2 & 4 & -9 \\
6 & 4 & -1 & 8 & 1 & -2 & 7 & 1 \\
4 & 9 & 8 & 2 & -4 & -10 & -1 & 8 \\
-2 & 3 & 5 & 2 & -1 & -8 & 2 & -1 \\
-3 & -2 & 1 & 3 & 4 & 0 & 8 & -8 \\
8 & -6 & -4 & 0 & -3 & 6 & 2 & -6 \\
10 & -11 & -3 & 5 & -8 & -4 & -1 & 0 \\
6 & -15 & -6 & 14 & -3 & -5 & -3 & 7
\end{bmatrix}

with an average absolute error of about 5 per pixel (i.e., \frac{1}{64} \sum_{x=0}^{7} \sum_{y=0}^{7} |e(x,y)| = 4.8750). The error is most noticeable in the bottom-left corner, where the bottom-left pixel becomes darker than the pixel to its immediate right.
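Dequantization and the inverse DCT can be sketched in the same style as the forward transform. The code below is a direct, slow transcription of the type-III DCT formula given above (illustrative only; Q is the quantization table that the encoder used, and the final clip implements the [0, 255] clamping just described):

```python
import numpy as np

def idct2_8x8(B, Q):
    """Reconstruct one 8x8 spatial block from quantized coefficients B:
    multiply by the quantization table, apply the inverse (type-III) DCT,
    round, undo the level shift, and clamp the result to [0, 255]."""
    F = B * Q                                         # dequantize
    def alpha(k):
        return 1.0 / np.sqrt(2.0) if k == 0 else 1.0

    f = np.zeros((8, 8))
    for y in range(8):
        for x in range(8):
            s = sum(alpha(u) * alpha(v) * F[v][u]
                    * np.cos((2 * x + 1) * u * np.pi / 16)
                    * np.cos((2 * y + 1) * v * np.pi / 16)
                    for u in range(8) for v in range(8))
            f[y, x] = 0.25 * s
    return np.clip(np.round(f) + 128, 0, 255).astype(np.uint8)
```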


Required precision

The required implementation precision of a JPEG codec is implicitly defined through the requirements formulated for compliance with the JPEG standard. These requirements are specified in ITU-T Recommendation T.83 , ISO/IEC 10918-2. Unlike MPEG standards and many later JPEG standards, the above document defines the required implementation precisions for both the encoding and the decoding process of a JPEG codec, by means of a maximal tolerable error of the forward and inverse DCT in the DCT domain as determined by reference test streams. For example, the output of a decoder implementation must not exceed an error of one quantization unit in the DCT domain when applied to the reference testing codestreams provided as part of the above standard. Unusually, and unlike many more modern standards, ITU-T T.83 , ISO/IEC 10918-2 does not formulate error bounds in the image domain.


Effects of JPEG compression

JPEG compression artifacts blend well into photographs with detailed non-uniform textures, allowing higher compression ratios. Notice how a higher compression ratio first affects the high-frequency textures in the upper-left corner of the image, and how the contrasting lines become more fuzzy. A very high compression ratio severely affects the quality of the image, although the overall colors and image form are still recognizable. However, the precision of colors suffers less (to a human eye) than the precision of contours (based on luminance). This justifies transforming images into a color model that separates the luminance from the chromatic information before subsampling the chromatic planes (which may also use lower-quality quantization), so that the luminance plane retains more information bits.


Sample photographs

For information, the uncompressed 24-bit RGB bitmap image below (73,242 pixels) would require 219,726 bytes (excluding all other information headers). The filesizes indicated below include the internal JPEG information headers and some
metadata Metadata is "data that provides information about other data", but not the content of the data, such as the text of a message or the image itself. There are many distinct types of metadata, including: * Descriptive metadata – the descriptive ...
. For the highest quality images (Q=100), about 8.25 bits per color pixel are required. On grayscale images, a minimum of 6.5 bits per pixel is enough (comparable Q=100 color quality requires about 25% more encoded bits). The highest quality image below (Q=100) is encoded at nine bits per color pixel, while the medium quality image (Q=25) uses one bit per color pixel. For most applications, the quality factor should not go below 0.75 bit per pixel (Q=12.5), as demonstrated by the low quality image. The image at lowest quality uses only 0.13 bit per pixel and displays very poor color. This is useful when the image will be displayed at a significantly scaled-down size. A method for creating better quantization matrices for a given image quality using
PSNR Peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic ...
instead of the Q factor is described in Minguillón & Pujol (2001). :: The medium quality photo uses only 4.3% of the storage space required for the uncompressed image, but has little noticeable loss of detail or visible artifacts. However, once a certain threshold of compression is passed, compressed images show increasingly visible defects. See the article on
rate–distortion theory Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate ''R'', ...
for a mathematical explanation of this threshold effect. A particular limitation of JPEG in this regard is its non-overlapped 8×8 block transform structure. More modern designs such as
JPEG 2000 JPEG 2000 (JP2) is an image compression standard and coding system. It was developed from 1997 to 2000 by a Joint Photographic Experts Group committee chaired by Touradj Ebrahimi (later the JPEG president), with the intention of superseding the ...
and
JPEG XR JPEG XR (JPEG extended range) is an image compression standard for continuous tone photographic images, based on the HD Photo (formerly Windows Media Photo) specifications that Microsoft originally developed and patented. It supports both lossy ...
exhibit a more graceful degradation of quality as the bit usage decreases – by using transforms with a larger spatial extent for the lower frequency coefficients and by using overlapping transform basis functions.


Lossless further compression

From 2004 to 2008, new research emerged on ways to further compress the data contained in JPEG images without modifying the represented image.I. Bauermann and E. Steinbacj. Further Lossless Compression of JPEG Images. Proc. of Picture Coding Symposium (PCS 2004), San Francisco, US, December 15–17, 2004.N. Ponomarenko, K. Egiazarian, V. Lukin and J. Astola. Additional Lossless Compression of JPEG Images, Proc. of the 4th Intl. Symposium on Image and Signal Processing and Analysis (ISPA 2005), Zagreb, Croatia, pp. 117–120, September 15–17, 2005.M. Stirner and G. Seelmann. Improved Redundancy Reduction for JPEG Files. Proc. of Picture Coding Symposium (PCS 2007), Lisbon, Portugal, November 7–9, 2007Ichiro Matsuda, Yukio Nomoto, Kei Wakabayashi and Susumu Itoh. Lossless Re-encoding of JPEG images using block-adaptive intra prediction. Proceedings of the 16th European Signal Processing Conference (EUSIPCO 2008). This has applications in scenarios where the original image is only available in JPEG format, and its size needs to be reduced for archiving or transmission. Standard general-purpose compression tools cannot significantly compress JPEG files. Typically, such schemes take advantage of improvements to the naive scheme for coding DCT coefficients, which fails to take into account: * Correlations between magnitudes of adjacent coefficients in the same block; * Correlations between magnitudes of the same coefficient in adjacent blocks; * Correlations between magnitudes of the same coefficient/block in different channels; * The DC coefficients when taken together resemble a downscale version of the original image multiplied by a scaling factor. Well-known schemes for lossless coding of continuous-tone images can be applied, achieving somewhat better compression than the
Huffman code In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code proceeds by means of Huffman coding, an algor ...
d
DPCM Differential pulse-code modulation (DPCM) is a signal encoder that uses the baseline of pulse-code modulation (PCM) but adds some functionalities based on the prediction of the samples of the signal. The input can be an analog signal or a digita ...
used in JPEG. Some standard but rarely used options already exist in JPEG to improve the efficiency of coding DCT coefficients: the
arithmetic coding Arithmetic coding (AC) is a form of entropy encoding used in lossless data compression. Normally, a string of characters is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic e ...
option, and the progressive coding option (which produces lower bitrates because values for each coefficient are coded independently, and each coefficient has a significantly different distribution). Modern methods have improved on these techniques by reordering coefficients to group coefficients of larger magnitude together; using adjacent coefficients and blocks to predict new coefficient values; dividing blocks or coefficients up among a small number of independently coded models based on their statistics and adjacent values; and most recently, by decoding blocks, predicting subsequent blocks in the spatial domain, and then encoding these to generate predictions for DCT coefficients. Typically, such methods can compress existing JPEG files between 15 and 25 percent, and for JPEGs compressed at low-quality settings, can produce improvements of up to 65%. A freely available tool called packJPG is based on the 2007 paper "Improved Redundancy Reduction for JPEG Files."


Derived formats for stereoscopic 3D


JPEG Stereoscopic

JPS is a stereoscopic JPEG image format used for creating 3D effects from 2D images. It contains two static images, one for the left eye and one for the right eye, encoded side by side in a single JPG file. JPEG Stereoscopic (JPS, extension .jps) is a JPEG-based format for
stereoscopic Stereoscopy (also called stereoscopics, or stereo imaging) is a technique for creating or enhancing the depth perception, illusion of depth in an image by means of stereopsis for binocular vision. The word ''stereoscopy'' derives . Any stere ...
images. It has a range of configurations stored in the JPEG APP3 marker field, but usually contains one image of double width, representing two images of identical size in cross-eyed (i.e. left frame on the right half of the image and vice versa) side-by-side arrangement. This file format can be viewed as a JPEG without any special software, or can be processed for rendering in other modes.


JPEG Multi-Picture Format

JPEG Multi-Picture Format (MPO, extension .mpo) is a JPEG-based format for storing multiple images in a single file. It contains two or more JPEG files concatenated together. It also defines a JPEG APP2 marker segment for image description. Various devices use it to store 3D images, such as
Fujifilm FinePix Real 3D W1 The Fujifilm FinePix Real 3D W series is a line of consumer-grade digital cameras designed to capture stereoscopic images that recreate the perception of 3D depth, having both still and video formats while retaining standard 2D still image and vi ...
, HTC Evo 3D, JVC GY-HMZ1U AVCHD/MVC extension camcorder,
Nintendo 3DS The is a handheld game console produced by Nintendo. It was announced in March 2010 and unveiled at E3 2010 as the successor to the Nintendo DS. The system features backward compatibility with Nintendo DS video games. As an eighth-generatio ...
,
Panasonic Lumix DMC-TZ20 Panasonic Lumix DMC-TZ20 is a digital camera by Panasonic Lumix. The highest-resolution pictures it records is 14.1 megapixel In digital imaging, a pixel (abbreviated px), pel, or picture element is the smallest addressable element in a raste ...
, DMC-TZ30, DMC-TZ60, DMC-TS4 (FT4), and
Sony , commonly stylized as SONY, is a Japanese multinational conglomerate corporation headquartered in Minato, Tokyo, Japan. As a major technology company, it operates as one of the world's largest manufacturers of consumer and professional ...
DSC-HX7V. Other devices use it to store "preview images" that can be displayed on a TV. In the last few years, due to the growing use of stereoscopic images, much effort has been spent by the scientific community to develop algorithms for stereoscopic
image compression Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior r ...
.


Implementations

A very important implementation of a JPEG codec is the free programming library ''
libjpeg libjpeg is a free library with functions for handling the JPEG image data format. It implements a JPEG codec (encoding and decoding) alongside various utilities for handling JPEG data. It is written in C and distributed as free software togeth ...
'' of the Independent JPEG Group. It was first published in 1991 and was key to the success of the standard. This library or a direct derivative of it is used in countless applications. Recent versions introduced proprietary extensions which broke ABI compatibility with previous versions and which are not covered by the ITU, ISO/IEC standard. In March 2017, Google released the open source project
Guetzli Guetzli is a freely licensed JPEG encoder that Jyrki Alakujala, Robert Obryk, and Zoltán Szabadka have developed in Google's Zürich research branch. The encoder seeks to produce significantly smaller files than prior encoders at equivalent qu ...
, which trades off a much longer encoding time for smaller file size (similar to what
Zopfli Zopfli is a data compression library that performs Deflate, gzip and zlib data encoding. It achieves higher compression ratios than mainstream Deflate and zlib implementations at the cost of being slower. Google first released Zopfli in February ...
does for PNG and other lossless data formats). ITU, ISO/IEC formalized JPEG reference implementations in ITU-T Recommendation T.873 , ISO/IEC 10918-7 in 2021. ISO/IEC
Joint Photography Experts Group The Joint Photographic Experts Group (JPEG) is the joint committee between International Organization for Standardization, ISO/International Electrotechnical Commission, IEC ISO/IEC JTC 1, JTC 1/ISO/IEC JTC 1/SC 29, SC 29 and ITU-T Study Group 16 ...
maintains one of the two reference software implementations which can encode both base JPEG (ISO/IEC 10918-1 and 18477–1) and
JPEG XT JPEG XT (ISO/IEC 18477) is an image compression standard which specifies backward-compatible extensions of the base JPEG standard (ISO/IEC 10918-1 and ITU Rec. T.81). JPEG XT extends JPEG with support for higher integer bit depths, high dynami ...
extensions (ISO/IEC 18477 Parts 2 and 6–9), as well as
JPEG-LS Lossless JPEG is a 1993 addition to JPEG standard by the Joint Photographic Experts Group to enable lossless compression. However, the term may also be used to refer to all lossless compression schemes developed by the group, including JPEG 2000 an ...
(ISO/IEC 14495). A second reference implementation is
libJPEG-turbo libjpeg is a free library with functions for handling the JPEG image data format. It implements a JPEG codec (encoding and decoding) alongside various utilities for handling JPEG data. It is written in C and distributed as free software togeth ...
which is a derivative of the Independent JPEG Group's implementation, tuned towards high performance and compliance with the JPEG standard.


JPEG XT

JPEG XT (ISO/IEC 18477) was published in June 2015; it extends the base JPEG format with support for higher integer bit depths (up to 16 bits), high dynamic range imaging and floating-point coding, lossless coding, and alpha channel coding. The extensions are backward compatible with the base JPEG/JFIF file format and its 8-bit lossy compressed image. JPEG XT uses an extensible file format based on JFIF. Extension layers are used to modify the JPEG 8-bit base layer and restore the full-precision image. Existing software is forward compatible and can read the JPEG XT binary stream, though it would only decode the base 8-bit layer.


JPEG XL

Starting in August 2017, JTC1/SC29/WG1 issued a series of draft calls for proposals on JPEG XL, the next-generation image compression standard, targeting substantially better compression efficiency (a 60% improvement) compared to JPEG. The standard is expected to exceed the still image compression performance shown by
HEVC High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard designed as part of the MPEG-H project as a successor to the widely used Advanced Video Coding (AVC, H.264, or MPEG-4 Part 10). In compari ...
HM,
Daala Daala is a video coding format under development by the Xiph.Org Foundation under the lead of Timothy B. Terriberry mainly sponsored by the Mozilla Corporation. Like Theora and Opus, Daala is available free of any royalties and its reference imp ...
and
WebP WebP is an image file format developed by Google intended as a replacement for JPEG, PNG, and GIF file formats. It supports both lossy and lossless compression, as well as animation and alpha transparency. Google announced the WebP format i ...
, and, unlike previous efforts attempting to replace JPEG, to provide a more efficient lossless recompression option for the transport and storage of traditional JPEG images. The core requirements include support for very high-resolution images (at least 40 MP), 8–10 bits per component, RGB/YCbCr/
ICtCp ''ICTCP'', ''ICtCp'', or ''ITP'' is a color representation format specified in the Rec. ITU-R BT.2100 standard that is used as a part of the color image pipeline in video and digital photography systems for high dynamic range (HDR) and wide colo ...
color encoding, animated images, alpha channel coding,
Rec. 709 Rec. 709, also known as Rec.709, BT.709, and ITU 709, is a standard developed by ITU-R for image encoding and signal characteristics of high-definition television. The most recent version is BT.709-6 released in 2015. BT.709-6 defines the P ...
color space (
sRGB sRGB is a standard RGB (red, green, blue) color space that HP and Microsoft created cooperatively in 1996 to use on monitors, printers, and the World Wide Web. It was subsequently standardized by the International Electrotechnical Commission ( ...
) and gamma function (2.4-power),
Rec. 2100 ITU-R Recommendation BT.2100, more commonly known by the abbreviations Rec. 2100 or BT.2100, introduced high-dynamic-range television (HDR-TV) by recommending the use of the perceptual quantizer (PQ) or hybrid log–gamma (HLG) transfer func ...
wide color gamut In color reproduction, including computer graphics and photography, the gamut, or color gamut , is a certain ''complete subset'' of colors. The most common usage refers to the subset of colors which can be accurately represented in a given circ ...
color space (
Rec. 2020 ITU-R Recommendation BT.2020, more commonly known by the abbreviations Rec. 2020 or BT.2020, defines various aspects of ultra-high-definition television (UHDTV) with standard dynamic range (SDR) and wide color gamut (WCG), including picture ...
) and
high dynamic range High dynamic range (HDR) is a dynamic range higher than usual, synonyms are wide dynamic range, extended dynamic range, expanded dynamic range. The term is often used in discussing the dynamic range of various signals such as images, videos, au ...
transfer functions ( PQ and HLG), and high-quality compression of synthetic images, such as bitmap fonts and gradients. The standard should also offer higher bit depths (12–16 bit integer and floating point), additional color spaces and transfer functions (such as Log C from
Arri The Arri Group () is a German manufacturer of motion picture film equipment. Based in Munich, the company was founded in 1917. It produces professional motion picture cameras, lenses, lighting and post-production equipment. Hermann Simon menti ...
), embedded preview images, lossless alpha channel encoding, image region coding, and low-complexity encoding. Any patented technologies would be licensed on a
royalty-free Royalty-free (RF) material subject to copyright or other intellectual property rights may be used without the need to pay royalties or license fees for each use, per each copy or volume sold or some time period of use or sales. Computer standard ...
basis. The proposals were submitted by September 2018, leading to a committee draft in July 2019; the file format and core coding system were formally standardized on 13 October 2021 and 30 March 2022, respectively.


See also

*
Better Portable Graphics Better Portable Graphics (BPG) is a file format for coding digital images, which was created by programmer Fabrice Bellard in 2014. He has proposed it as a replacement for the JPEG image format as the more compression-efficient alternative in ter ...
, a format based on intra-frame encoding of the HEVC *
C-Cube C-Cube Microsystems was an early company in video compression technology as well as the implementation of that technology into semiconductor integrated circuits and systems. C-Cube was the first company to deliver on the market opportunity presen ...
, an early implementer of JPEG in chip form *
Comparison of graphics file formats This is a comparison of image file formats (graphics file formats). This comparison primarily features file formats for 2D images. General Ownership of the format and related information. Technical details See also * List of codecs Referen ...
*
Comparison of layout engines (graphics) This article compares browser engines, especially actively-software development, developed ones. Some of these engines have shared origins. For example, the WebKit engine was created by Fork (software development), forking the KHTML engine in 200 ...
*
Deblocking filter (video) A deblocking filter is a video filter applied to decoded compressed video to improve visual quality and prediction performance by smoothing the sharp edges which can form between macroblocks when block coding techniques are used. The filter aims ...
, the similar deblocking methods could be applied to JPEG *
Design rule for Camera File system Design rule for Camera File system (DCF) is a JEITA specification (number CP-3461) which defines a file system for digital cameras, including the directory structure, file naming method, character set, file format, and metadata format. It is cur ...
(DCF) *
File extensions A filename extension, file name extension or file extension is a suffix to the name of a computer file (e.g., .txt, .docx, .md). The extension indicates a characteristic of the file contents or its intended use. A filename extension is typically d ...
*
Graphics editing program In computer graphics, graphics software refers to a program or collection of programs that enable a person to manipulate images or models visually on a computer. Computer graphics can be classified into two distinct categories: raster graphics ...
*
High Efficiency Image File Format High Efficiency Image File Format (HEIF) is a container format for storing individual digital images and image sequences. The standard covers multimedia files that can also include other media streams, such as timed text, audio and video. HEI ...
, image container format for
HEVC High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard designed as part of the MPEG-H project as a successor to the widely used Advanced Video Coding (AVC, H.264, or MPEG-4 Part 10). In compari ...
and other image coding formats *
Lenna (test image) Lenna (or Lena) is a standard test image used in the field of image processing since 1973. It is a picture of the Swedish model Lena Forsén, shot by photographer Dwight Hooker, cropped from the centerfold of the November 1972 issue of ''Play ...
, the traditional standard image used to test image processing algorithms * Lossless Image Codec
FELICS FELICS, which stands for Fast Efficient & Lossless Image Compression System, is a lossless image compression algorithm that performs 5-times faster than the original lossless JPEG codec and achieves a similar compression ratio. History It was ...
*
Motion JPEG Motion JPEG (M-JPEG or MJPEG) is a video compression format in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image. Originally developed for multimedia PC applications, Motion JPE ...
*
WebP WebP is an image file format developed by Google intended as a replacement for JPEG, PNG, and GIF file formats. It supports both lossy and lossless compression, as well as animation and alpha transparency. Google announced the WebP format i ...


References


External links


JPEG Standard (JPEG ISO/IEC 10918-1 ITU-T Recommendation T.81)
at W3.org
Official Joint Photographic Experts Group (JPEG) site

JFIF File Format
at W3.org
Example images over the full range of quantization levels from 1 to 100
at visengi.com
JPEG decoder open source code, copyright (C) 1995–1997, Thomas G. Lane