ASCII (/ˈæskiː/), abbreviated from American Standard Code for Information Interchange, is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. Most modern character-encoding schemes are based on ASCII, although they support many additional characters.
The Internet Assigned Numbers Authority
(IANA) prefers the name US-ASCII for this character encoding.
ASCII is one of the IEEE milestones.
ASCII was developed from telegraph code
. Its first commercial use was as a seven-bit teleprinter
code promoted by Bell data services. Work on the ASCII standard began on October 6, 1960, with the first meeting of the American Standards Association
's (ASA) (now the American National Standards Institute
or ANSI) X3.2 subcommittee. The first edition of the standard was published in 1963,
underwent a major revision during 1967,
and experienced its most recent update during 1986.
Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features for devices other than teleprinters.
The use of ASCII format for Network Interchange was described in 1969. That document was formally elevated to an Internet Standard in 2015.
Originally based on the English alphabet
, ASCII encodes 128 specified characters
into seven-bit integers as shown by the ASCII chart above.
Ninety-five of the encoded characters are printable: these include the digits ''0'' to ''9'', lowercase letters ''a'' to ''z'', uppercase letters ''A'' to ''Z'', and punctuation symbols. In addition, the original ASCII specification included 33 non-printing control codes which originated with Teletype machines; most of these are now obsolete, although a few are still commonly used, such as the carriage return, line feed, and tab codes.
For example, lowercase ''i'' would be represented in the ASCII encoding by binary 1101001 = hexadecimal 69 (''i'' is the ninth letter) = decimal 105.
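This correspondence is easy to verify; the following Python sketch (an illustration, not part of the standard) prints the three representations of lowercase ''i'':

```python
# Each ASCII character maps to a 7-bit integer; Python's ord() returns it.
code = ord("i")
print(bin(code))  # 0b1101001
print(hex(code))  # 0x69
print(code)       # 105
```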
The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association
(ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS
). The ASA became the United States of America Standards Institute
and ultimately the American National Standards Institute (ANSI).
With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963,
leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code.
There was some debate at the time over whether there should be more control characters rather than a lowercase alphabet.
The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to ''sticks''
6 and 7,
[Brief Report: Meeting of CCITT Working Party on the New Telegraph Alphabet, May 13–15, 1963.]
and International Organization for Standardization
TC 97 SC 2 voted during October to incorporate the change into its draft standard.
[Report of ISO/TC/97/SC 2 – Meeting of October 29–31, 1963.]
The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting. Locating the lowercase letters in ''sticks''
6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive
character matching and the construction of keyboards and printers.
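The effect of that single-bit difference can be seen directly: toggling bit 5 (value 0x20) of a letter's code converts its case, and masking the bit out compares letters case-insensitively. A Python sketch (illustrative; it assumes its inputs are ASCII letters):

```python
# Upper- and lowercase ASCII letters differ only in bit 5 (value 0x20):
# 'A' is 0x41 and 'a' is 0x61. Toggling that bit converts case, and
# forcing it on gives a case-insensitive comparison (for letters only).
def toggle_case(ch):
    return chr(ord(ch) ^ 0x20)

def same_letter(a, b):
    return (ord(a) | 0x20) == (ord(b) | 0x20)

print(toggle_case("A"))       # a
print(same_letter("Z", "z"))  # True
```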
The X3 committee made other changes, including other new characters (the brace
and vertical bar
characters), renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed).
ASCII was subsequently updated as USAS X3.4-1967,
then USAS X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986.
Revisions of the ASCII standard:
* ASA X3.4-1963
* ASA X3.4-1965 (approved, but not published, nevertheless used by IBM 2260 Display Stations and IBM 2848 Display Controls)
* USAS X3.4-1967
* USAS X3.4-1968
* ANSI X3.4-1977
* ANSI X3.4-1986
* ANSI X3.4-1986 (R1992)
* ANSI X3.4-1986 (R1997)
* ANSI INCITS 4-1986 (R2002)
* ANSI INCITS 4-1986 (R2007)
* (ANSI) INCITS 4-1986 (R2012)
* (ANSI) INCITS 4-1986 (R2017)
In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first) and how it should be recorded on perforated tape. They proposed a 9-track standard for magnetic tape, and attempted to deal with some punched card formats.
The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encoding
s, ASCII specifies a correspondence between digital bit patterns and character
symbols (i.e. grapheme
s and control character
s). This allows digital
devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic
characters, 10 numerical digit
s, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique
(CCITT) International Telegraph Alphabet No. 2
(ITA2) standard of 1924, FIELDATA (1956), and early EBCDIC (1963), more than 64 codes were required for ASCII.
ITA2 was in turn based on the 5-bit telegraph code Émile Baudot
invented in 1870 and patented in 1874.
The committee debated the possibility of a shift
function (like in ITA2
), which would allow more than 64 codes to be represented by a six-bit code
. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission
, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code.
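The fragility the committee rejected can be illustrated with a toy shifted code in Python (the shift codes and character tables here are hypothetical, chosen for readability; they are not the actual ITA2 assignments):

```python
# A toy shifted code: codes 0-25 mean different characters depending on
# the current shift state; two special codes switch between "letters"
# and "figures" mode. A single error in a shift code would garble
# everything up to the next shift, which is why ASCII avoided shifting.
LTRS, FIGS = 30, 31  # hypothetical shift codes

def decode(codes):
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    figures = "0123456789-'!&#(),./:;?= *"
    table, out = letters, []
    for c in codes:
        if c == LTRS:
            table = letters
        elif c == FIGS:
            table = figures
        else:
            out.append(table[c])
    return "".join(out)

print(decode([7, 8, FIGS, 1, 2]))  # HI12
```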
The committee considered an eight-bit code, since eight bits (octet
s) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal
. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit
for error checking if desired. Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0.
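As an illustration (a Python sketch, not part of the standard), an even-parity scheme sets the eighth bit so that the total number of 1 bits is even, letting the receiver detect any single-bit error:

```python
# Even parity over a 7-bit ASCII code: set bit 7 so that the total
# number of 1 bits in the octet is even.
def with_even_parity(code7):
    parity = bin(code7).count("1") % 2
    return code7 | (parity << 7)

c = with_even_parity(ord("i"))  # 0b1101001 already has four 1 bits,
print(bin(c))                   # so the parity bit stays 0: 0b1101001
```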
The code itself was patterned so that most control codes were together and all graphic codes were together, for ease of identification. The first two so-called ''ASCII sticks''
(32 positions) were reserved for control characters.
The "space" character
had to come before graphics to make sorting
easier, so it became position 20hex; for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets
, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes,
as was done in the DEC SIXBIT
code (1963). Lowercase
letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter ''A'' was placed in position 41hex
to match the draft of the corresponding British standard.
The digits 0–9 are prefixed with 011, but the remaining 4 bits correspond to their respective values in binary, making conversion with binary-coded decimal straightforward.
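This property can be checked in a few lines of Python (an illustration, not part of the standard):

```python
# ASCII digits are 011 followed by the digit's value in binary, so the
# low four bits are the binary-coded-decimal value of the digit.
for ch in "0123456789":
    assert ord(ch) >> 4 == 0b011      # high bits are always 011
    assert ord(ch) & 0x0F == int(ch)  # low bits are the digit's value

print(ord("7") & 0x0F)  # 7
```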
Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on ''mechanical'' typewriters, not ''electric'' typewriters.
Mechanical typewriters followed the standard set by the Remington No. 2 (1878), the first typewriter with a shift key, and the shifted values of ''23456789-'' were ''"#$%_&'()'' – early typewriters omitted ''0'' and ''1'', using ''O'' (capital letter ''o'') and ''l'' (lowercase letter ''L'') instead, but ''1!'' and ''0)'' pairs became standard once 0 and 1 became common. Thus, in ASCII ''!"#$%'' were placed in the second stick, positions 1–5, corresponding to the digits 1–5 in the adjacent stick.
The parentheses could not correspond to ''9'' and ''0'', however, because the place corresponding to ''0'' was taken by the space character. This was accommodated by removing ''_'' (underscore) from ''6'' and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with ''8'' and ''9''. This discrepancy from typewriters led to bit-paired keyboard
s, notably the Teletype Model 33
, which used the left-shifted layout corresponding to ASCII, not to traditional mechanical typewriters. Electric typewriters, notably the IBM Selectric
(1961), used a somewhat different layout that has become standard on computers following the IBM PC
(1981), especially Model M
(1984), and thus shift values for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The '';:'' pair also dates to the No. 2, and the '',<'' and ''.>'' pairs were used on some keyboards (others, including the No. 2, did not shift '','' (comma) or ''.'' (full stop) so they could be used in uppercase without unshifting). However, ASCII split the '';:'' pair (dating to the No. 2), and rearranged mathematical symbols (varied conventions, commonly ''-* =+'') to '':* ;+ -=''.
Some common characters were not included, notably ''½¼¢'', while ''^`~'' were included as diacritics for international use, and ''<>'' for mathematical use, together with the simple line characters ''\|'' (in addition to common ''/''). The ''@'' symbol was not used in continental Europe and the committee expected it would be replaced by an accented ''À'' in the French variation, so the ''@'' was placed in position 40hex, right before the letter A.
The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message
(EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance
between their bit patterns.
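The Hamming distance between two codes is simply the number of bit positions in which they differ; the further apart two codes are, the more bit errors it takes to turn one into the other. A Python sketch (the example values are illustrative, not the committee's actual worked pairs):

```python
# Hamming distance between two 7-bit codes: count the differing bits.
def hamming(a, b):
    return bin(a ^ b).count("1")

# Codes at distance 1 can be confused by a single transmission error.
print(hamming(0b0000100, 0b0000110))  # 1
print(hamming(0b0000000, 0b1111111))  # 7
```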
ASCII-code order is also called ''ASCIIbetical'' order. Collation
of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence
). The main deviations in ASCII order are:
* All uppercase come before lowercase letters; for example, "Z" precedes "a"
* Digits and many punctuation marks come before letters
An intermediate order converts uppercase letters to lowercase before comparing ASCII values.
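Both orders are easy to demonstrate in Python (an illustration, not part of the standard):

```python
# ASCIIbetical vs. case-insensitive ordering.
words = ["apple", "Zebra", "banana"]

# Plain sort compares ASCII codes: 'Z' (0x5A) precedes 'a' (0x61).
print(sorted(words))                 # ['Zebra', 'apple', 'banana']

# Folding to lowercase first gives the intermediate order.
print(sorted(words, key=str.lower))  # ['apple', 'banana', 'Zebra']
```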
ASCII reserves the first 32 codes (numbers 0–31 decimal) for control character
s: codes originally intended not to represent printable information, but rather to control devices (such as printers
) that make use of ASCII, or to provide meta-information
about data streams such as those stored on magnetic tape.
For example, character 10 represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". RFC 2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters (NO-WS-CTL).
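The RFC 2822 grammar defines NO-WS-CTL as the ranges 1–8, 11, 12, 14–31, and 127 (that is, all C0 controls and DEL except tab, line feed, and carriage return). A Python sketch of that predicate:

```python
# NO-WS-CTL per RFC 2822: control characters other than HT (9),
# LF (10), and CR (13), plus DEL (127).
def is_no_ws_ctl(code):
    return (code in range(1, 9) or code in (11, 12)
            or code in range(14, 32) or code == 127)

print(is_no_ws_ctl(7))   # True  (BEL)
print(is_no_ws_ctl(10))  # False (LF)
```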
Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup language
s, address page and document layout and formatting.
The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream
, and sometimes accidental, for example with the meaning of "delete".
Probably the most influential single device on the interpretation of these characters was the Teletype Model 33
ASR, which was a printing terminal with an available paper tape
reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (Control-Q, DC1, also known as XON), 19 (Control-S, DC3, also known as XOFF), and 127 (Delete
) became de facto standards. The Model 33 was also notable for taking the description of Control-G (code 7, BEL, meaning audibly alert the operator) literally, as the unit contained an actual bell which it rang when it received a BEL character. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore
), a noncompliant use of code 15 (Control-O, Shift In) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually became neglected.
When a Teletype 33 ASR equipped with the automatic paper tape reader received a Control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop; receiving Control-Q (XON, "transmit on") caused the tape reader to resume. This technique became adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending overflow; it persists to this day in many systems as a manual output control technique. On some systems Control-S retains its meaning but Control-Q is replaced by a second Control-S to resume output. The 33 ASR also could be configured to employ Control-R (DC2) and Control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control character lettering on the keycap above the letter was TAPE and TAPE, respectively.
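The Control-key combinations map onto ASCII in a simple way: pressing Control clears the two high bits of the letter's 7-bit code, which is why Control-Q produces DC1 (XON) and Control-S produces DC3 (XOFF). A Python illustration:

```python
# A Control-letter combination clears bits 5 and 6 of the letter's code:
# 'Q' is 0x51, so Control-Q is 0x51 & 0x1F = 0x11 (DC1, XON).
def ctrl(letter):
    return ord(letter) & 0x1F

print(hex(ctrl("Q")))  # 0x11 -- XON, resume transmission
print(hex(ctrl("S")))  # 0x13 -- XOFF, stop transmission
print(ctrl("G"))       # 7 -- BEL, rings the Model 33's bell
```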
Delete & Backspace
The Teletype could not move the head backwards, so it did not put a key on the keyboard to send a BS (backspace). Instead there was a key marked "RUB OUT" that sent code 127 (DEL). The purpose of this key was to erase mistakes in a hand-typed paper tape: the operator had to push a button on the tape punch to back it up, then type the rubout, which punched all holes and replaced the mistake with a character that was intended to be ignored. Teletypes were commonly used for the less-expensive computers from Digital Equipment Corporation
, so these systems had to use the available key and thus the DEL code to erase the previous character.
Because of this, DEC video terminals (by default) sent the DEL code for the key marked "Backspace" while the key marked "Delete" sent an escape sequence; many other terminals sent BS for the Backspace key. The Unix terminal driver could only use one code to erase the previous character; this could be set to BS ''or'' DEL, but not both, resulting in a long period of annoyance where users had to correct it depending on what terminal they were using (shells that allow line editing, such as ksh and zsh, understand both). The assumption that no key sent a BS caused Control+H to be used for other purposes, such as the "help" prefix command in GNU Emacs.
Many more of the control codes have been given meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending other control characters as literals instead of invoking their meaning. This is the same meaning of "escape" encountered in URL encodings, C language
strings, and other systems where certain characters have a reserved meaning. Over time this meaning has been co-opted and has eventually been changed. In modern use, an ESC sent to the terminal usually indicates the start of a command sequence usually in the form of a so-called "ANSI escape code
" (or, more properly, a "Control Sequence Introducer
") from ECMA-48 (1972) and its successors, beginning with ESC followed by a "[" (left bracket) character. In contrast, an ESC read from the terminal is most often used as an out-of-band character to terminate an operation, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether.
End of Line
The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "Carriage Return" (which moves the printhead to the beginning of the line) and "Line Feed" (which advances the paper one line without moving the printhead). The name "Carriage Return" comes from the fact that on a manual typewriter the carriage holding the paper moved while the position where the typebars struck the ribbon remained stationary. The entire carriage had to be pushed (returned) to the right in order to position the left margin of the paper for the next line.
DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or terminals) came along, the convention was so well established that backward compatibility necessitated continuing the convention. When Gary Kildall created CP/M he was inspired by some command line interface conventions used in DEC's RT-11. Until the introduction of PC DOS in 1981, IBM had no hand in this because their 1970s operating systems used EBCDIC instead of ASCII and they were oriented toward punch-card input and line printer output on which the concept of carriage return was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being loosely based on CP/M, and Windows inherited it from MS-DOS.
Unfortunately, requiring two characters to mark the end of a line introduces unnecessary complexity and questions as to how to interpret each character when encountered alone. To simplify matters plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator. Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. The original Macintosh OS, Apple DOS, and ProDOS, on the other hand, used carriage return (CR) alone as a line terminator; however, since Apple replaced these operating systems with the Unix-based macOS operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines.
Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings, machines running operating systems such as Multics using LF line endings, and machines running operating systems such as OS/360 that represented lines as a character count followed by the characters of the line and that used EBCDIC rather than ASCII. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between the local conventions and the NVT.
The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode. This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention.
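The translation such software performs amounts to rewriting line terminators; a minimal Python sketch (illustrative only, not any particular implementation) normalizes the three historical conventions to LF:

```python
# Normalize CR-LF (DOS/Windows, NVT), lone CR (classic Mac OS), and
# lone LF (Unix) line endings to LF. CR-LF must be handled first so
# its CR is not also counted as a bare CR.
def to_lf(text):
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(repr(to_lf("a\r\nb\rc\n")))  # 'a\nb\nc\n'
```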
End of File/Stream
The PDP-6 monitor,
and its PDP-10 successor TOPS-10, used Control-Z (SUB) as an end-of-file indication for input from a terminal. Some operating systems such as CP/M tracked file length only in units of disk blocks and used Control-Z to mark the end of the actual text in the file. For these reasons, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for Control-Z instead of SUBstitute. The end-of-text code (ETX), also known as Control-C, was inappropriate for a variety of reasons, while using Z as the control code to end a file is analogous to it ending the alphabet and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX code convention to interrupt and halt a program via an input data stream, usually from a keyboard.
In C library and Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where here Z stands for "zero".
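The convention is easy to demonstrate outside C as well; a Python sketch (illustrative) extracts the text of an ASCIIZ string from a byte buffer:

```python
# A null-terminated ("ASCIIZ") string in a byte buffer: the text runs
# up to the first zero byte; whatever follows is ignored.
buf = b"hello\x00garbage"
s = buf.split(b"\x00", 1)[0].decode("ascii")
print(s)  # hello
```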
Control code chart
Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers.
Codes 20hex to 7Ehex, known as the printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total.
Code 20hex, the "space" character, denotes the space between words, as produced by the space bar of a keyboard. Since the space character is considered an invisible graphic (rather than a control character)
it is listed in the table below instead of in the previous section.
Code 7Fhex corresponds to the non-printable "delete" (DEL) control character and is therefore omitted from this chart; it is covered in the previous section's chart. Earlier versions of ASCII used the up arrow instead of the caret (5Ehex) and the left arrow instead of the underscore (5Fhex).
Points which represented a different character in previous versions (the 1963 version and/or the 1965 draft) are shown boxed. Points assigned since the 1963 version but otherwise unchanged are shown lightly shaded relative to their legend colours.
ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence.
His British colleague Hugh McGregor Ross helped to popularize this work according to Bemer, "so much so that the code that was to become ASCII was first called the ''Bemer–Ross Code'' in Europe". Because of his extensive work on ASCII, Bemer has been called "the father of ASCII".
On March 11, 1968, U.S. President Lyndon B. Johnson mandated that all computers purchased by the United States Federal Government support ASCII, stating:
I have also approved recommendations of the Secretary of Commerce [Luther H. Hodges] regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations.
All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.
ASCII was the most common character encoding on the World Wide Web until December 2007, when UTF-8 encoding surpassed it; UTF-8 is backward compatible with ASCII.
Variants and derivations
As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range. Furthermore, the ASCII extensions have also been mislabelled as ASCII.
From early in its development, ASCII was intended to be just one of several national variants of an international character code standard.
Other international standards bodies have ratified character encodings such as ISO 646 (1967) that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£). Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the US and a few other countries. For example, Canada had its own version that supported French characters.
Many other countries developed variants of ASCII to include non-English letters (e.g. é, ñ, ß, Ł), currency symbols (e.g. £, ¥), etc. See also YUSCII (Yugoslavia).
Each national variant would share most characters in common, but assign other locally useful characters to several code points reserved for "national use". However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967
caused ASCII's choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility once other countries did begin to make their own assignments to these code points.
ISO/IEC 646, like ASCII, is a 7-bit character set. It does not make any additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but they were rarely used, so it was often impossible to know what variant to work with and, therefore, which character a code represented, and in general, text-processing systems could cope with only one variant anyway.
Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc. programmer using their national variant of ISO/IEC 646, rather than ASCII, had to write, and thus read, something such as
ä aÄiÜ = 'Ön'; ü
instead of
{ a[i] = '\n'; }
C trigraphs were created to solve this problem for ANSI C, although their late introduction and inconsistent implementation in compilers limited their use. Many programmers kept their computers on US-ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar" as the answer, which should be "Nä jag har smörgåsar" meaning "No I've got sandwiches".
In Japan and Korea, still as of the 2020s, a variation of ASCII is used in which the backslash (5C hex) is rendered as ¥ (a yen sign, in Japan) or ₩ (a won sign, in Korea). This means that, for example, the file path C:\Users\Smith is shown as C:¥Users¥Smith (in Japan) or C:₩Users₩Smith (in Korea).
Eventually, as 8-, 16- and 32-bit (and later 64-bit) computers began to replace 12-, 18- and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit relatives of ASCII. In most cases these developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters.
Other national encodings include ISCII (India) and VISCII (Vietnam). Although these encodings are sometimes referred to as ASCII, true ASCII is defined strictly only by the ANSI standard.
Most early home computer systems developed their own 8-bit character sets containing line-drawing and game glyphs, and often filled in some or all of the control characters from 0 to 31 with more graphics. Kaypro CP/M computers used the "upper" 128 characters for the Greek alphabet.
The PETSCII code Commodore International used for their 8-bit systems is probably unique among post-1970 codes in being based on ASCII-1963, instead of the more common ASCII-1967, such as found on the ZX Spectrum computer. Atari 8-bit computers and Galaksija computers also used ASCII variants.
The IBM PC defined code page 437, which replaced the control characters with graphic symbols such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such as DOS supported these code pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal as one of the first extensions designed more for international languages than for block graphics. The Macintosh defined Mac OS Roman, and PostScript also defined a set; both of these contained international letters and typographic punctuation marks instead of graphics, more like modern character sets.
The ISO/IEC 8859 standard (derived from the DEC-MCS) finally provided a standard that most systems copied (at least as accurately as they copied ASCII, but with many substitutions). A popular further extension designed by Microsoft, Windows-1252 (often mislabeled as ISO-8859-1), added the typographic punctuation marks needed for traditional text printing. ISO-8859-1, Windows-1252, and the original 7-bit ASCII were the most common character encodings until 2008 when UTF-8 became more common.
ISO/IEC 4873 introduced 32 additional control codes defined in the 80–9F hexadecimal range, as part of extending the 7-bit ASCII encoding to become an 8-bit system.
Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification (using natural numbers called ''code points'') and encoding (to 8-, 16- or 32-bit binary formats, called UTF-8, UTF-16 and UTF-32).
ASCII was incorporated into the Unicode (1991) character set as the first 128 symbols, so the 7-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with 7-bit ASCII, as a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence of characters. Even more importantly, forward compatibility is ensured as software that recognizes only 7-bit ASCII characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII extensions such as ISO-8859-1) will preserve UTF-8 data unchanged.
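This compatibility is directly observable; a short Python illustration (not part of either standard):

```python
# A pure-ASCII string encodes to identical bytes in ASCII and UTF-8,
# while non-ASCII characters become multi-byte sequences in which
# every byte has the high bit set.
assert "ASCII".encode("ascii") == "ASCII".encode("utf-8")
print("é".encode("utf-8"))  # b'\xc3\xa9' -- both bytes >= 0x80
```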
* 3568 ASCII, an asteroid named after the character encoding
* Alt codes
* ASCII art
* ASCII Ribbon Campaign
* Basic Latin (Unicode block) (ASCII as a subset of Unicode)
* Extended ASCII
* HTML decimal character rendering
* Jargon File, a glossary of computer programmer slang which includes a list of common slang names for ASCII characters
* List of computer character sets
* List of Unicode characters