The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.
s. To disambiguate arbitrarily sized bytes from the common 8-bit
protocol documents such as The Internet Protocol (RFC 791)(1981)
refer to an 8-bit byte as an octet
. Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the endianness
. The first bit is number 0, making the eighth bit number 7.
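As an illustration of the LSB-0 convention just described (a sketch only, not tied to any particular architecture), the following C snippet numbers and extracts the bits of a single octet:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t octet = 0xA5;  /* 1010 0101 in binary */
    /* Under the LSB-0 convention, bit 0 is the least significant bit
       and bit 7 is the most significant bit of the octet. */
    for (int n = 7; n >= 0; n--)
        printf("bit %d = %d\n", n, (octet >> n) & 1);
    return 0;
}
```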
The size of the byte has historically been hardware-dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as ''syllables'' or ''slab'', before the term ''byte'' became common.
The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, since 2^8 = 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes.
The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE). Internationally, the unit ''octet'', symbol o, explicitly defines a sequence of eight bits, eliminating the ambiguity of the byte.
The term ''byte'' was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction. It is a deliberate respelling of ''bite'' to avoid accidental mutation to ''bit''.
Another origin of ''byte'' for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31.
Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media.
During the early 1960s, while also active in ASCII standardization, IBM introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, even though the EBCDIC and ASCII encoding schemes differ in detail.
In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data.
The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also ''nybble'', which is conveniently represented by a single hexadecimal digit.
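A minimal C sketch (the byte value here is chosen arbitrarily for illustration) shows how a byte splits into two nibbles, each corresponding to one hexadecimal digit:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t byte = 0x4B;                 /* example byte */
    uint8_t high = (byte >> 4) & 0x0F;   /* upper nibble: 0x4 */
    uint8_t low  = byte & 0x0F;          /* lower nibble: 0xB */
    /* Each nibble corresponds to exactly one hexadecimal digit. */
    printf("byte = 0x%02X, high nibble = 0x%X, low nibble = 0x%X\n",
           (unsigned)byte, (unsigned)high, (unsigned)low);
    return 0;
}
```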
The term ''octet'' is used to unambiguously specify a size of eight bits. It is used extensively in protocol definitions. Historically, the term ''octad'' or ''octade'' was used to denote eight bits as well, at least in Western Europe; however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.
The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B. In the International System of Quantities (ISQ), B is the symbol of the ''bel'', a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates.
The lowercase letter o is defined as the symbol for the octet in IEC 80000-13 and is commonly used in languages such as French, and is also combined with metric prefixes for multiples, for example ko and Mo.
More than one system exists to define larger units based on the byte. Some systems are based on powers of 10; other systems are based on powers of 2. Nomenclature for these systems has been the subject of confusion. Systems based on powers of 10 reliably use standard SI prefixes ('kilo', 'mega', 'giga', ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes ('kibi', 'mebi', 'gibi', ...) and their corresponding symbols (Ki, Mi, Gi, ...) ''or'' they might use the prefixes K, M, and G, creating ambiguity.
While the numerical difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate significantly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based yottabyte is about 17% smaller than the power-of-2-based yobibyte.
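The growth of this deviation can be checked with a short C calculation (purely illustrative arithmetic, not drawn from any standard); it reproduces the roughly 2% and 17% figures quoted above:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const char *si[]  = {"kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"};
    const char *iec[] = {"kibi", "mebi", "gibi", "tebi", "pebi", "exbi", "zebi", "yobi"};
    for (int n = 1; n <= 8; n++) {
        double decimal = pow(1000.0, n);   /* power-of-10 (SI) multiple */
        double binary  = pow(1024.0, n);   /* power-of-2 (IEC) multiple */
        /* How much smaller the decimal unit is than the binary one. */
        printf("%sbyte vs %sbyte: %.1f%% smaller\n",
               si[n - 1], iec[n - 1], 100.0 * (1.0 - decimal / binary));
    }
    return 0;
}
```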
Units based on powers of 10
Definition of prefixes using powers of 10, in which 1 ''kilobyte'' (symbol kB) is defined to equal 1000 bytes, is recommended by the International Electrotechnical Commission (see "Prefixes for Binary Multiples", The NIST Reference on Constants, Units, and Uncertainty). The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000^8 bytes.
This definition is most commonly used for data transfer rates in computer networks, internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media, particularly hard drive-based storage and DVDs. It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance.
Units based on powers of 2
A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1024 (i.e., 2^10) bytes is defined by the international standard IEC 80000-13 and supported by national and international standards bodies (BIPM, IEC, NIST). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024^8 bytes.
An alternate system of nomenclature for the same units, in which 1 ''kilobyte'' (KB) is equal to 1024 bytes, 1 ''megabyte'' (MB) is equal to 1024^2 bytes and 1 ''gigabyte'' (GB) is equal to 1024^3 bytes, is defined by a 1990s JEDEC standard. Only the first three multiples (up to GB) are defined by the JEDEC standard. For TB and larger, standards recognise only the decimal definition. The JEDEC convention is prominently used by the Microsoft Windows operating system and for reporting random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone.
History of the conflicting definitions
Contemporary computer memory has a binary architecture, making a definition of memory units based on powers of 2 most practical. The use of the metric prefix ''kilo'' for binary multiples arose as a convenience, because 1024 is approximately 1000. This definition was popular in the early decades of personal computing, with products like the Tandon 5¼-inch floppy format (holding 368,640 bytes) being advertised as "360 KB", following the 1024-byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted, and was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as "256k". Other disks were advertised using a ''mixture'' of the two definitions: notably, 3½-inch HD disks advertised as "1.44 MB" in fact have a capacity of 1,440 KiB, the equivalent of 1.47 MB or 1.41 MiB.
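The mixed "1.44 MB" figure can be verified with a short calculation (decimal megabytes versus binary mebibytes):

```latex
1440~\text{KiB} = 1440 \times 1024~\text{bytes} = 1\,474\,560~\text{bytes}, \qquad
\frac{1\,474\,560}{10^{6}} \approx 1.47~\text{MB}, \qquad
\frac{1\,474\,560}{2^{20}} \approx 1.41~\text{MiB}.
```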
In 1995, the International Union of Pure and Applied Chemistry's Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024.
In December 1998, the IEC addressed such multiple usages and definitions by creating prefixes such as kibi, mebi, gibi, etc., to unambiguously denote powers of 1024. Thus the kibibyte (KiB) represents 2^10 bytes = 1024 bytes. These prefixes are now part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1000 bytes. The IEC adopted the proposal and published the standard in January 1999 (Amendment 2 to IEC International Standard IEC 60027-2: Letter symbols to be used in electrical technology - Part 2: Telecommunications and electronics).
In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" (''KKB'').
Lawsuits over definition
Lawsuits arising from alleged consumer confusion over the binary and decimal definitions as applied to the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10^9) bytes (the decimal definition) rather than the binary definition (2^30 bytes). Specifically, the courts held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' ... The California Legislature has likewise adopted the decimal system for all 'transactions in this state.'"
Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled.
Many programming languages define the data type ''byte''. The C and C++ programming languages define ''byte'' as an "''addressable unit of data storage large enough to hold any member of the basic character set of the execution environment''" (clause 3.6 of the C standard). The C standard requires that the integral data type ''unsigned char'' must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte.
In addition, the C and C++ standards require that there are no gaps between two bytes. This means every bit in memory is part of a byte.
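A minimal C check of these guarantees, using only the standard <limits.h> constants, might look like this (the printed values depend on the implementation, but the standard's minimums are 8 bits and 255):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* CHAR_BIT is the number of bits in a byte; the standard requires at least 8. */
    printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);

    /* sizeof measures storage in bytes, so sizeof(char) is 1 by definition. */
    printf("sizeof(char): %zu\n", sizeof(char));

    /* unsigned char must be able to hold at least the values 0..255. */
    printf("unsigned char range: 0..%u\n", (unsigned)UCHAR_MAX);
    return 0;
}
```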
Java's primitive ''byte'' data type is always defined as consisting of 8 bits and being a signed data type, holding values from −128 to 127.
.NET programming languages, such as C#, define both an unsigned ''byte'' and a signed ''sbyte'', holding values from 0 to 255, and −128 to 127, respectively.
In data transmission systems, the byte is defined as a contiguous sequence of bits in a serial data stream representing the smallest distinguished unit of data. A transmission unit might include start bits, stop bits, or parity bits, and thus could vary from 7 to 12 bits to contain a single 7-bit ASCII character.
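As a sketch of such a transmission unit, the following C example packs a 7-bit ASCII character into a hypothetical 10-bit frame (one start bit, seven data bits, an even-parity bit, one stop bit); real framing details vary between systems:

```c
#include <stdio.h>
#include <stdint.h>

/* Build a 10-bit asynchronous frame: start bit, 7 data bits (LSB first),
   even-parity bit, stop bit. Purely illustrative framing. */
static uint16_t frame_7e1(char c) {
    uint16_t data = (uint16_t)(c & 0x7F);
    int parity = 0;
    for (int i = 0; i < 7; i++)
        parity ^= (data >> i) & 1;       /* even parity over the data bits */
    uint16_t frame = 0;                  /* bit 0 = start bit (0) */
    frame |= data << 1;                  /* bits 1..7 = data bits */
    frame |= (uint16_t)parity << 8;      /* bit 8 = parity bit */
    frame |= 1u << 9;                    /* bit 9 = stop bit (1) */
    return frame;                        /* this "transmission byte" is 10 bits wide */
}

int main(void) {
    uint16_t f = frame_7e1('A');         /* 'A' = 0x41 */
    for (int i = 0; i < 10; i++)         /* print in transmission order, start bit first */
        putchar(((f >> i) & 1) ? '1' : '0');
    putchar('\n');
    return 0;
}
```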
* Data hierarchy
* Just a Bunch Of Bytes
* Octet (computing)
* Primitive data type
* Word (computer architecture)