Han unification

Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the so-called CJK languages into a single set of unified characters. Han characters are a common feature of written Chinese (hanzi), Japanese (kanji), and Korean (hanja). Modern Chinese, Japanese, and Korean typefaces typically use regional or historical variants of a given Han character. In the formulation of Unicode, an attempt was made to unify these variants by considering them different glyphs representing the same "grapheme", or orthographic unit; hence "Han unification", with the resulting character repertoire sometimes contracted to Unihan. Unihan can also refer to the Unihan Database maintained by the Unicode Consortium, which provides information about all of the unified Han characters encoded in the Unicode Standard, including mappings to various national and industry standards, indices into standard dictionaries, encoded variants, pronunciations in various languages, and an English definition. The database is available to the public as text files[1] and via an interactive Web site.[2][3] The latter also includes representative glyphs and definitions for compound words drawn from the free Japanese EDICT and Chinese CEDICT dictionary projects (which are provided for convenience and are not a formal part of the Unicode Standard).
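The Unihan Database's text files use a simple tab-separated format: a code point, a field name (such as kDefinition or kMandarin), and a value, with comment lines beginning with "#". A minimal sketch of a parser for that format (the sample lines below follow the published format; the helper name is our own):

```python
# Minimal sketch of parsing the Unihan database's tab-separated text
# files (e.g. Unihan_Readings.txt). Data lines look like:
#   U+8349<TAB>kDefinition<TAB>grass, straw, thatch, herbs
# and comment lines start with "#".
def parse_unihan(lines):
    entries = {}
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue  # skip blank lines and comments
        codepoint, field, value = line.rstrip("\n").split("\t", 2)
        entries.setdefault(codepoint, {})[field] = value
    return entries

sample = [
    "# sample lines in the Unihan file format",
    "U+8349\tkDefinition\tgrass, straw, thatch, herbs",
    "U+8349\tkMandarin\tcǎo",
]
print(parse_unihan(sample)["U+8349"]["kDefinition"])
```

The real files group many such fields per code point across several downloadable text files, which is why the sketch accumulates fields into a per-code-point dictionary.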

Contents

1 Rationale and controversy
  1.1 Graphemes versus glyphs
  1.2 Unihan "abstract characters"
  1.3 Alternatives
  1.4 Merger of All Equivalent Characters
2 Examples of language-dependent glyphs
3 Examples of some non-unified Han ideographs
4 Ideographic Variation Database (IVD)
5 Unicode ranges
  5.1 International Ideographs Core
6 Unihan database files
7 See also
8 Notes
9 References

Rationale and controversy


The Unicode Standard details the principles of Han unification.[4][5] The Ideographic Rapporteur Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process. One possible rationale is the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideograms may approach or exceed 100,000 (while those required for ordinary literacy in any language are probably under 3,000). Version 1 of Unicode was designed to fit into 16 bits, and only 20,940 characters (32%) out of the possible 65,536 were reserved for the CJK Unified Ideographs. Unicode was later extended to 21 bits, allowing many more CJK characters (87,882 are assigned, with room for more). The article The Secret Life of Unicode, published on IBM DeveloperWorks, attempts to illustrate part of the motivation for Han unification:

The problem stems from the fact that Unicode encodes characters rather than "glyphs," which are the visual representations of the characters. There are four basic traditions for East Asian character shapes: traditional Chinese, simplified Chinese, Japanese, and Korean. While the Han root character may be the same for CJK languages, the glyphs in common use for the same characters may not be, and new characters were invented in each country. For example, the traditional Chinese glyph for "grass" uses four strokes for the "grass" radical 艹, whereas the simplified Chinese, Japanese, and Korean glyphs use three. But there is only one Unicode point for the grass character (U+8349) regardless of writing system. Another example is the ideograph for "one" (壹, 壱, or 一), which is different in Chinese, Japanese, and Korean. Many people think that the three versions should be encoded differently.
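The situation described in the quote is easy to check with the Python standard library: the three "one" ideographs occupy distinct code points, while "grass" is a single unified code point shared by every CJK language.

```python
import unicodedata

# The three "one" ideographs are separately encoded:
for ch in "壹壱一":
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+58F9, U+58F1, and U+4E00 -- three distinct code points

# "grass", by contrast, is one unified code point for all CJK languages
assert ord("草") == 0x8349
```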

In fact, the three ideographs for "one" are encoded separately in Unicode, as they are not considered national variants. The first and second are used on financial instruments to prevent tampering (they may be considered variants), while the third is the common form in all three countries. However, Han unification has also caused considerable controversy, particularly among the Japanese public, who, with the nation's literati, have a history of protesting the culling of historically and culturally significant variants.[6][7] (See Kanji § Orthographic reform and lists of kanji. Today, the list of characters officially recognized for use in proper names continues to expand at a modest pace.) In 1993, the Japan Electronic Industries Development Association (JEIDA) published a pamphlet titled "未来の文字コード体系に私達は不安をもっています" ("We are feeling anxious about the future character encoding system", JPNO 20985671), summarizing major criticism against the Han unification approach adopted by Unicode.

Graphemes versus glyphs

The Latin small "a" has widely differing glyphs that all represent concrete instances of the same abstract grapheme. Although a native reader of any language using the Latin script recognizes such differing glyphs as the same grapheme, to others they might appear to be completely unrelated.

A grapheme is the smallest abstract unit of meaning in a writing system. Any grapheme has many possible glyph expressions, but all are recognized as the same grapheme by those with reading and writing knowledge of a particular writing system. Although Unicode typically assigns characters to code points to express the graphemes within a system of writing, the Unicode Standard (section 3.4, D7) cautions:

An abstract character does not necessarily correspond to what a user thinks of as a "character" and should not be confused with a grapheme.
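The caveat can be illustrated with combining sequences, sketched here with Python's standard library: a user-perceived character such as "å" can be one precomposed code point or a sequence of two abstract characters, which normalization treats as canonically equivalent.

```python
import unicodedata

decomposed = "a\u030A"   # LATIN SMALL LETTER A + COMBINING RING ABOVE
precomposed = "\u00E5"   # LATIN SMALL LETTER A WITH RING ABOVE (å)

# Two abstract characters versus one, yet the same grapheme to a reader:
print(len(decomposed), len(precomposed))  # 2 1

# Canonical composition maps the sequence onto the single code point:
assert unicodedata.normalize("NFC", decomposed) == precomposed
```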

However, this quote refers to the fact that some graphemes are composed of several characters. So, for example, the character U+0061 a LATIN SMALL LETTER A combined with U+030A ◌̊ COMBINING RING ABOVE (i.e., the combination "å") might be understood by a user as a single grapheme while being composed of multiple Unicode abstract characters. In addition, Unicode also assigns some code points to a small number (other than for compatibility reasons) of formatting characters, whitespace characters, and other abstract characters that are not graphemes, but are instead used to control the breaks between lines, words, graphemes, and grapheme clusters.

With the unified Han ideographs, the Unicode Standard makes a departure from prior practices in assigning abstract characters not as graphemes, but according to the underlying meaning of the grapheme: what linguists sometimes call sememes. This departure therefore is not simply explained by the oft-quoted distinction between an abstract character and a glyph, but is more rooted in the difference between an abstract character assigned as a grapheme and an abstract character assigned as a sememe. In contrast, consider ASCII's unification of punctuation and diacritics, where graphemes with widely different meanings (for example, an apostrophe and a single quotation mark) are unified because the graphemes are the same. For Unihan, the characters are not unified by their appearance, but by their definition or meaning.

For a grapheme to be represented by various glyphs means that the grapheme has glyph variations that are usually determined by selecting one font or another, or by using glyph-substitution features where multiple glyphs are included in a single font. Such glyph variations are considered by Unicode a feature of rich-text protocols and not properly handled by the plain-text goals of Unicode. However, when the change from one glyph to another constitutes a change from one grapheme to another (where a glyph cannot possibly still, for example, mean the same grapheme understood as the small letter "a"), Unicode separates those into separate code points. For Unihan, the same thing is done whenever the abstract meaning changes; however, rather than speaking of the abstract meaning of a grapheme (the letter "a"), the unification of Han ideographs assigns a new code point for each different meaning, even if that meaning is expressed by distinct graphemes in different languages. Although a grapheme such as "ö" might mean something different in English (as used in the word "coördinated") than it does in German, it is still the same grapheme and can easily be unified, so that English and German can share a common abstract Latin writing system (along with Latin itself).

This example also points to another reason that "abstract character" and grapheme as an abstract unit in a written language do not necessarily map one-to-one. In English the combining diaeresis, "¨", and the "o" it modifies may be seen as two separate graphemes, whereas in languages such as Swedish, the letter "ö" may be seen as a single grapheme. Similarly, in English the dot on an "i" is understood as a part of the "i" grapheme, whereas in other languages, such as Turkish, the dot may be seen as a separate grapheme added to the dotless "ı".

To deal with the use of different graphemes for the same Unihan sememe, Unicode
has relied on several mechanisms, especially as they relate to rendering text. One has been to treat it as simply a font issue, so that different fonts might be used to render Chinese, Japanese, or Korean. Font formats such as OpenType also allow for the mapping of alternate glyphs according to language, so that a text-rendering system can look to the user's environment settings to determine which glyph to use. The problem with these approaches is that they fail to meet the goals of Unicode to define a consistent way of encoding multilingual text.[8]

So rather than treat the issue as a rich-text problem of glyph alternates, Unicode added the concept of variation selectors, first introduced in version 3.2 and supplemented in version 4.0.[9] While variation selectors are treated as combining characters, they have no associated diacritic or mark. Instead, by combining with a base character, they signal that the two-character sequence selects a variation (typically in terms of grapheme, but also in terms of underlying meaning, as in the case of a location name or other proper noun) of the base character. This is then not a selection of an alternate glyph, but the selection of a grapheme variation or a variation of the base abstract character. Such a two-character sequence, however, can easily be mapped to a separate single glyph in modern fonts. Since Unicode
has assigned 256 separate variation selectors, it is capable of assigning 256 variations for any Han ideograph. Such variations can be specific to one language or another and enable the encoding of plain text that includes such grapheme variations.

Unihan "abstract characters"

Since the Unihan standard encodes "abstract characters", not "glyphs", the graphical artifacts produced by Unicode
have been considered temporary technical hurdles and, at most, cosmetic. However, particularly in Japan, due in part to the way in which Chinese characters were incorporated into Japanese writing systems historically, the inability to specify a particular variant was considered a significant obstacle to the use of Unicode in scholarly work. For example, the unification of "grass" (explained above) means that a historical text cannot be encoded so as to preserve its peculiar orthography. Instead, the scholar would be required to locate the desired glyph in a specific typeface in order to convey the text as written, defeating the purpose of a unified character set. Unicode has responded to these needs by assigning variation selectors so that authors can select grapheme variations of particular ideographs (or even other characters).[9]

Small differences in graphical representation are also problematic when they affect legibility or belong to the wrong cultural tradition. Besides making some Unicode fonts unusable for texts involving multiple "Unihan languages", names or other orthographically sensitive terminology might be displayed incorrectly. (Proper names tend to be especially orthographically conservative; compare this to changing the spelling of one's name to suit a language reform in the US or UK.) While this may be considered primarily a graphical-representation or rendering problem to be overcome by more artful fonts, the widespread use of Unicode
would make it difficult to preserve such distinctions. The problem of one character representing semantically different concepts is also present in the Latin part of Unicode. The Unicode character for an apostrophe is the same as the character for a right single quote (’). On the other hand, the capital Latin letter "A" is not unified with the Greek letter "Α" (Alpha). This is, of course, desirable for reasons of compatibility, and deals with a much smaller alphabetic character set. While the unification aspect of Unicode is controversial in some quarters for the reasons given above, Unicode itself now encodes a vast number of seldom-used characters of a more-or-less antiquarian nature.

Some of the controversy stems from the fact that the very decision to perform Han unification was made by the initial Unicode
Consortium, which at the time was a consortium of North American companies and organizations (most of them in California),[10] but included no East Asian government representatives. The initial design goal was to create a 16-bit standard,[11] and Han unification was therefore a critical step for avoiding tens of thousands of character duplications. This 16-bit requirement was later abandoned, making the size of the character set less of an issue today. The controversy later extended to the internationally representative ISO: the initial CJK-JRG group favored a proposal (DIS 10646) for a non-unified character set, "which was thrown out in favor of unification with the Unicode Consortium's unified character set by the votes of American and European ISO members" (even though the Japanese position was unclear).[12] Endorsing the Unicode Han unification was a necessary step for the heated ISO 10646/Unicode merger.

Much of the controversy surrounding Han unification is based on the distinction between glyphs, as defined in Unicode, and the related but distinct idea of graphemes. Unicode
assigns abstract characters (graphemes), as opposed to glyphs, which are particular visual representations of a character in a specific typeface. One character may be represented by many distinct glyphs, for example a "g" or an "a", each of which may be drawn with one loop or two. Yet for a reader of Latin-script-based languages, the two variations of the "a" character are both recognized as the same grapheme. Graphemes present in national character code standards have been added to Unicode, as required by Unicode's Source Separation rule, even where they can be composed of characters already available. The national character code standards existing in CJK languages are considerably more involved, given the technological limitations under which they evolved, and so the official CJK participants in Han unification may well have been amenable to reform.

Unlike European scripts, CJK Unicode fonts, due to Han unification, have large but irregular patterns of overlap, requiring language-specific fonts. Unfortunately, language-specific fonts also make it difficult to access a variant which, as with the "grass" example, happens to appear more typically in another language style. (That is to say, it would be difficult to access "grass" with the four-stroke radical more typical of Traditional Chinese in a Japanese environment, whose fonts would typically depict the three-stroke radical.) Unihan proponents tend to favor markup languages for defining language strings, but this would not ensure the use of a specific variant in the case given, only the language-specific font more likely to depict a character as that variant. (At this point, merely stylistic differences do enter in, as a selection of Japanese and Chinese fonts is not likely to be visually compatible.)

Chinese users seem to have fewer objections to Han unification, largely because Unicode
did not attempt to unify Simplified Chinese characters with Traditional Chinese characters. (Simplified Chinese characters are an invention of the People's Republic of China, and they are used among Chinese speakers in the PRC, Singapore, and Malaysia. Traditional Chinese characters are used in Hong Kong and Taiwan (Big5), and they are, with some differences, more familiar to Korean and Japanese users.) Unicode is seen as neutral with regard to this politically charged issue and has encoded Simplified and Traditional Chinese glyphs separately (e.g., the ideograph for "discard" is 丟 U+4E1F for Traditional Chinese, Big5 #A5E1, and 丢 U+4E22 for Simplified Chinese, GB #2210). Note also that Traditional and Simplified characters are encoded separately according to Unicode's Han unification rules, because they are distinguished in pre-existing PRC character sets. Furthermore, as with other variants, Traditional to Simplified is not a one-to-one relationship.

Alternatives

There are several alternative character sets that are not encoded according to the principle of Han unification, and are thus free from its restrictions:

CNS character set
CCCII character set
TRON
Mojikyo

These region-dependent character sets are also seen as not affected by Han Unification because of their region-specific nature:

ISO/IEC 2022 (based on sequence codes to switch between Chinese, Japanese, and Korean character sets, hence without unification)
Big5 extensions
GCCS and its successor HKSCS
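The escape-sequence approach of ISO/IEC 2022 can be observed through Python's standard codecs: encoding a Han character in ISO-2022-JP emits an explicit shift into the JIS X 0208 character set rather than relying on one unified code space. A small sketch:

```python
# ISO-2022-JP switches character sets with escape sequences instead of
# using a unified code space: ESC $ B enters JIS X 0208, and ESC ( B
# shifts back to ASCII at the end of the run.
encoded = "漢".encode("iso2022_jp")
print(encoded)
assert encoded.startswith(b"\x1b$B")  # shift into JIS X 0208
assert encoded.endswith(b"\x1b(B")    # shift back to ASCII
```

The character's identity is thus defined relative to the currently selected national set, which is why no cross-language unification is needed in such encodings.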

However, none of these alternative standards has been as widely adopted as Unicode, which is now the base character set for many new standards and protocols, is internationally adopted, and is built into the architecture of operating systems (Microsoft Windows, Apple macOS, and many Unix-like systems), programming languages (Perl, Python, C#, Java, Common Lisp, APL, C, C++), libraries (IBM International Components for Unicode (ICU) along with the Pango, Graphite, Scribe, Uniscribe, and ATSUI rendering engines), font formats (TrueType and OpenType), and so on.

In March 1989, a (B)TRON-based system was adopted by the Japanese government organization "Center for Educational Computing" as the system of choice for school education, including compulsory education.[13] However, in April, a report titled "1989 National Trade Estimate Report on Foreign Trade Barriers" from the Office of the United States Trade Representative specifically listed the system as a trade barrier in Japan. The report claimed that the adoption of the TRON-based system by the Japanese government was advantageous to Japanese manufacturers, excluding US operating systems from the huge new market; specifically, the report listed MS-DOS, OS/2, and UNIX as examples. The Office of the USTR was allegedly under Microsoft's influence, as its former officer Tom Robertson was then offered a lucrative position by Microsoft.[14] While the TRON system itself was subsequently removed from the list of sanctions under Section 301 of the Trade Act of 1974 after protests by the organization in May 1989, the trade dispute caused the Ministry of International Trade and Industry to accept an opinion from Masayoshi Son to cancel the Center of Educational Computing's selection of the TRON-based system for use in education computers.[15] The incident is regarded as a symbolic event for the loss of momentum and eventual demise of the BTRON system, which led to the widespread adoption of MS-DOS in Japan and the eventual adoption of Unicode, which ships with MS-DOS's successors.

Merger of All Equivalent Characters

There has not been any push for full semantic unification of all semantically linked characters, though the idea would treat the respective users of East Asian languages the same, whether they write in Korean, Simplified Chinese, Traditional Chinese, Kyūjitai Japanese, Shinjitai
Japanese, or Vietnamese. Instead of some variants getting unique codepoints while other groups of variants have to share single codepoints, all variants could be reliably expressed only with metadata tags (e.g., CSS formatting in webpages). The burden would be on all those who use differing versions of 直, 別, 兩, and 兔, whether that difference be due to simplification, international variance, or intra-national variance. However, for some platforms (e.g., smartphones), a device may come with only one font pre-installed. The system font must make a decision about the default glyph for each codepoint, and these glyphs can differ greatly, indicating different underlying graphemes. Consequently, relying on language markup across the board is beset with two major issues. First, there are contexts where language markup is not available (code commits, plain text). Second, any solution would require every operating system to come pre-installed with many glyphs for semantically identical characters that have many variants. In addition to the standard character sets in Simplified Chinese, Traditional Chinese, Korean, Vietnamese, Kyūjitai Japanese, and Shinjitai Japanese, there also exist "ancient" forms of characters that are of interest to historians, linguists, and philologists.

Unicode's Unihan database has already drawn connections between many characters. The Unicode database already catalogs the connections between variant characters with unique codepoints. However, for characters with a shared codepoint, the reference glyph image is usually biased toward the Traditional Chinese version. Also, the decision of whether to classify pairs as semantic variants or z-variants is not always consistent or clear, despite rationalizations in the handbook.[16] So-called semantic variants of 丟 (U+4E1F) and 丢 (U+4E22) are examples that Unicode gives as differing in a significant way in their abstract shapes, while Unicode lists 佛 and 仏 as z-variants, differing only in font styling. Paradoxically, Unicode
considers 兩 and 両 to be near-identical z-variants while at the same time classifying them as significantly different semantic variants. There are also cases of some pairs of characters being simultaneously semantic variants, specialized semantic variants, and simplified variants: 個 (U+500B) and 个 (U+4E2A). There are cases of non-mutual equivalence. For example, the Unihan database entry for 亀 (U+4E80) considers 龜 (U+9F9C) to be its z-variant, but the entry for 龜 does not list 亀 as a z-variant, even though 龜 was obviously already in the database at the time that the entry for 亀 was written.

Some clerical errors led to the doubling of completely identical characters, such as 﨣 (U+FA23) and 𧺯 (U+27EAF). If your default font has glyphs encoded at both points, so that one font is used for both, they should appear identical. These cases are listed as z-variants despite having no variance at all. Intentionally duplicated characters were added to facilitate bit-for-bit round-trip conversion. Because round-trip conversion was an early selling point of Unicode, this meant that if a national standard in use unnecessarily duplicated a character, Unicode had to do the same. Unicode
calls these intentional duplications "compatibility variants", as with 漢 (U+FA9A), which calls 漢 (U+6F22) its compatibility variant. As long as your browser uses the same font for both, they should appear identical. Sometimes, as in the case of 車 with U+8ECA and U+F902, the added compatibility character lists the already-present version of 車 as both its compatibility variant and its z-variant. The compatibility-variant field overrides the z-variant field, forcing normalization under all forms, including canonical equivalence. Despite the name, compatibility variants are actually canonically equivalent and are united under any Unicode normalization scheme, not only under compatibility normalization.[a] This is similar to how the Angstrom symbol (U+212B) is canonically equivalent to the precomposed Latin Capital Letter A with Ring Above (Å, U+00C5). Much software (an editor, for example) will replace canonically equivalent characters that are discouraged (the Angstrom symbol) with the recommended equivalent (Å). Despite the name, CJK "compatibility variants" are canonically equivalent characters, not compatibility characters. 漢 (U+FA9A) was added to the database later than 漢 (U+6F22) was, and its entry informs the user of the compatibility information; 漢 (U+6F22), on the other hand, does not have this equivalence listed in its entry. Unicode
demands that all entries, once admitted, cannot change compatibility or equivalence, so that normalization rules for already-existing characters do not change. Some pairs of Traditional and Simplified characters are also considered to be semantic variants. According to Unicode's definitions, it makes sense that all simplifications (that do not result in wholly different characters being merged for their homophony) will be a form of semantic variant. Unicode classifies 丟 and 丢 as each other's respective traditional and simplified variants and also as each other's semantic variants. However, while Unicode classifies 億 (U+5104) and 亿 (U+4EBF) as each other's respective traditional and simplified variants, Unicode does not consider 億 and 亿 to be semantic variants of each other. Unicode
claims that "Ideally, there would be no pairs of z-variants in the Unicode Standard."[17] This would make it seem that the goal is to unify at least all minor variants, compatibility redundancies, and accidental redundancies, leaving the differentiation to fonts and to language tags. This conflicts with the stated goal of Unicode to take away that overhead and to allow any number of the world's scripts to appear in the same document with one encoding system. Chapter One of the handbook states: "With Unicode, the information technology industry has replaced proliferating character sets with data stability, global interoperability and data interchange, simplified software, and reduced development costs. While taking the ASCII character set as its starting point, the Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z. It provides the capacity to encode all characters used for the written languages of the world -- more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility."[18]

That leaves us with settling on one unified reference grapheme for all z-variants, which is contentious, since few outside of Japan would recognize 佛 and 仏 as equivalent. Even within Japan, the variants are on different sides of a major simplification called Shinjitai. Unicode
would effectively make the PRC's simplification of 侣 (U+4FA3) and 侶 (U+4FB6) a monumental difference by comparison. Such a plan would also eliminate the very visually distinct variations for characters like 直 (U+76F4) and 雇 (U+96C7). One would expect that all simplified characters would simultaneously be z-variants or semantic variants of their traditional counterparts, but many are neither. The strange case of characters that are simultaneously semantic variants and specialized semantic variants is easier to explain given Unicode's definition that specialized semantic variants have the same meaning only in certain contexts: languages use them differently. A pair whose characters are full drop-in replacements for each other in Japanese may not be so flexible in Chinese. Thus, any comprehensive merger of recommended codepoints would have to maintain some variants that differ only slightly in appearance, even if the meaning is identical in all contexts in one language, because in another language the two characters may not be full drop-in replacements.

Examples of language-dependent glyphs

In each row of the following table, the same character is repeated in all five columns. However, each column is marked (by the lang attribute) as being in a different language: Chinese (two varieties: simplified and traditional), Japanese, Korean, or Vietnamese. The browser should select, for each character, a glyph (from a font) suitable to the specified language. (Besides actual character variation (look for differences in stroke order, number, or direction), the typefaces may also reflect different typographical styles, as with serif and sans-serif alphabets.) This fallback glyph selection only works if you have CJK fonts installed on your system and the font selected to display this article does not include glyphs for these characters.

Code point Chinese (simplified) (zh-Hans) Chinese (traditional) (zh-Hant) Japanese (ja) Korean (ko) Vietnamese (vi-nom) English

U+4ECA 今 今 今 今 今 now

U+4EE4 令 令 令 令 令 cause/command

U+514D 免 免 免 免 免 exempt/spare

U+5165 入 入 入 入 入 enter

U+5168 全 全 全 全 全 all/total

U+5177 具 具 具 具 具 tool

U+5203 刃 刃 刃 刃 刃 knife edge

U+5316 化 化 化 化 化 transform/change

U+5916 外 外 外 外 外 outside

U+60C5 情 情 情 情 情 feeling

U+624D 才 才 才 才 才 talent

U+62B5 抵 抵 抵 抵 抵 arrive/resist

U+6B21 次 次 次 次 次 secondary/follow

U+6D77 海 海 海 海 海 sea

U+76F4 直 直 直 直 直 direct/straight

U+771F 真 真 真 真 真 true

U+795E 神 神 神 神 神 god

U+7A7A 空 空 空 空 空 empty/air

U+8005 者 者 者 者 者 one who does/-ist/-er

U+8349 草 草 草 草 草 grass

U+89D2 角 角 角 角 角 edge/horn

U+9053 道 道 道 道 道 way/path/road

U+96C7 雇 雇 雇 雇 雇 employ

U+9AA8 骨 骨 骨 骨 骨 bone
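A table like the one above can be generated mechanically. This hypothetical Python sketch (the helper name and tag list are our own, mirroring the table's columns) emits one HTML table row per code point, with the same character in every cell and only the lang attribute differing, so that the browser's language-aware font selection does the rest:

```python
# Emit one HTML table row per unified ideograph: the same code point in
# every cell, distinguished only by the lang attribute.
LANGS = ["zh-Hans", "zh-Hant", "ja", "ko", "vi"]

def glyph_row(codepoint: int) -> str:
    cells = "".join(f'<td lang="{tag}">&#x{codepoint:X};</td>' for tag in LANGS)
    return f"<tr><td>U+{codepoint:04X}</td>{cells}</tr>"

print(glyph_row(0x8349))  # the "grass" row from the table above
```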

No character variant that is exclusive to Korean or Vietnamese has received its own code point, whereas almost all Shinjitai Japanese variants and Simplified Chinese variants have unique code points and unambiguous reference glyphs in the Unicode standard.

In the twentieth century, East Asian countries made their own respective encoding standards. Within each standard, variants coexisted with unique code points, hence the unique code points in Unicode for certain sets of variants. Taking Simplified Chinese as an example, the two character variants of 內 (U+5167) and 内 (U+5185) differ in exactly the same way as do the Korean and non-Korean variants of 全 (U+5168): each pair differs only in whether it contains 入 (U+5165) or 人 (U+4EBA). Yet the variants of the first character got their own unique code points, while the two variants of the second character had to share the same code point. The justification Unicode gives is that the national standards body in the PRC made unique code points for the two variants of the first character 內/内, whereas Korea never made separate code points for the variants of 全. There is a reason for this that has nothing to do with how the domestic bodies view the characters themselves. China went through a process in the twentieth century that changed (if not simplified) several characters. During this transition, there was a need to be able to encode both variants within the same document. Korean has always used the variant of 全 with the 入 (U+5165) radical on top, so it had no reason to encode both variants; Korean-language documents made in the twentieth century had little reason to represent both versions in the same document.

The same argument for unification could be made for Latin and Cyrillic: the American English encoding standard known as ASCII
never encoded the Cyrillic А (U+0410) differently from the Latin A (U+0041), but ASCII was never intended to display both Latin and Cyrillic in the same document. Similarly, Korean encoding standards never had the aim of displaying Korean, Japanese, Chinese, Cyrillic and Ethiopic all within a single document. Almost all of the variants that the PRC developed or standardized got unique code points owing simply to the fortune of the Simplified Chinese transition carrying through into the computing age.

This privilege, however, seems to apply inconsistently. While most simplifications performed in Japan and mainland China with code points in national standards, including characters simplified differently in each country, did make it into Unicode as unique code points, 62 Shinjitai "simplified" characters with unique code points in Japan were merged with their Kyūjitai traditional equivalents, such as 海. This can cause problems for the language-tagging strategy. There is no universal tag for the traditional and "simplified" versions of Japanese as there is for Chinese. Thus, any Japanese writer wanting to display the Kyūjitai
form of 海 may have to tag the character as "Traditional Chinese" or trust that the recipient's Japanese font uses only the Kyūjitai glyphs, but tags of Traditional Chinese and Simplified Chinese may be necessary to show the two forms side by side in a Japanese textbook. This would, however, preclude one from using the same font for an entire document. There are two unique code points for 海 in Unicode, but only for "compatibility reasons": any Unicode-conformant font must display the equivalent code points of the Kyūjitai and Shinjitai versions as the same. Unofficially, a font may display 海 differently, with U+6D77 as the Shinjitai version and U+FA45 as the Kyūjitai version (which is identical to the traditional version in written Chinese and Korean).[a]

The radical 糸 (U+7CF8) is used in characters like 紅/红, with two variants, the second form being simply the cursive form. The radical components of 紅 (U+7D05) and 红 (U+7EA2) are semantically identical, and the glyphs differ only in that the latter uses a cursive version of the 糸 component. However, in mainland China the standards bodies wanted to standardize the cursive form when used in characters like 红. Because this change happened relatively recently, there was a transition period: both 紅 (U+7D05) and 红 (U+7EA2) got separate code points in the PRC's text encoding standards, so Chinese-language documents could use both versions. The two variants each received unique code points in Unicode
as well.

The case of the radical 艸 (U+8278) shows how arbitrary the state of affairs is. When used to compose characters such as 草 (U+8349), the radical is placed at the top, but it has two different forms. Traditional Chinese and Korean use a four-stroke version, in which the top of 草 looks like two separate crosses ("+ +"). Simplified Chinese, Kyūjitai Japanese and Shinjitai Japanese use a three-stroke version (艹). The PRC's text encoding bodies did not encode the two variants differently. The fact that almost every other change brought about by the PRC, no matter how minor, did warrant its own code point suggests that this exception may have been unintentional. Unicode copied the existing standards as is, preserving such irregularities.

The Unicode Consortium has recognized errors in other instances. The myriad Unicode blocks for CJK Han ideographs contain redundancies in the original standards, redundancies brought about by flawed importation of those standards, and accidental mergers that were later corrected, providing precedent for dis-unifying characters.

For native speakers, variants can be unintelligible or unacceptable in educated contexts. English speakers anywhere may understand a handwritten note saying "4P5 kg" as "495 kg", but writing the nine backwards (so that it looks like a "P") can be jarring and would be considered incorrect in any school. Likewise, to users of one CJK language reading a document with "foreign" glyphs: variants of 骨 can appear as mirror images, 者 can be missing a stroke or have an extraneous stroke, and 令 may be unreadable or confused with 今, depending on which variant of 令 is used.

Examples of some non-unified Han ideographs

For more striking variants, Unicode has encoded variant characters, making it unnecessary to switch between fonts or lang attributes. In the following table, each row compares variants that have been assigned different code points.[2] Note that for characters such as 入 (U+5165), the only way to display the two variants is to change the font (or lang attribute), as described in the previous table. However, for 內 (U+5167) there is an alternative character, 内 (U+5185), as illustrated below. For some characters, such as 兌/兑 (U+514C/U+5151), either method can be used to display the different glyphs.

Simplified | Traditional | Japanese  | Other variant | English
U+4E22 丢  | U+4E1F 丟   |           |               | to lose
U+4E24 两  | U+5169 兩   | U+4E21 両 | U+34B3 㒳     | two, both
           | U+4E58 乘   | U+4E57 乗 | U+6909 椉     | to ride
U+4EA7 产  | U+7522 產   | U+7523 産 |               | give birth
U+4FA3 侣  | U+4FB6 侶   |           |               | companion
U+5151 兑  | U+514C 兌   |           |               | to cash
U+5185 内  | U+5167 內   |           |               | inside
U+522B 别  | U+5225 別   |           |               | to leave
U+7985 禅  | U+79AA 禪   | U+7985 禅 |               | meditation (Zen)
U+7A0E 税  | U+7A05 稅   |           |               | taxes
U+7EA2 红  | U+7D05 紅   |           |               | red
U+7EAA 纪  | U+7D00 紀   |           |               | discipline
U+997F 饿  | U+9913 餓   |           |               | hungry
           | U+9AD8 高   |           | U+9AD9 髙     | high
U+9F9F 龟  | U+9F9C 龜   | U+4E80 亀 |               | tortoise

Source: MDBG Chinese-English Dictionary
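The behavior of the compatibility code point for 海 discussed earlier can be verified with Python's standard unicodedata module: U+FA45 has a canonical decomposition to U+6D77, so every Unicode normalization form folds it back to the unified code point. A minimal illustrative check:

```python
import unicodedata

# The compatibility ideograph U+FA45 (海) canonically decomposes to the
# unified U+6D77, so any normalization form folds it back -- which is why
# a conformant process cannot rely on the compatibility code point to
# preserve the Kyūjitai glyph.
kyujitai = "\uFA45"
unified = "\u6D77"

print(unicodedata.normalize("NFC", kyujitai) == unified)   # True
print(unicodedata.decomposition(kyujitai))                 # 6D77
```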

Ideographic Variation Database (IVD)

Main article: Variant form (Unicode)

To resolve issues brought about by Han unification, a Unicode Technical Standard known as the Unicode Ideographic Variation Database was created to address the problem of specifying a particular glyph in a plain-text environment.[19] By registering glyph collections in the Ideographic Variation Database (IVD), Ideographic Variation Selectors can be used to form an Ideographic Variation Sequence (IVS) that specifies or restricts the appropriate glyph in text processing in a Unicode environment.
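A minimal sketch in Python of how an IVS looks in plain text (illustrative only; whether the intended glyph actually appears depends on font support):

```python
# An Ideographic Variation Sequence (IVS) is simply a base ideograph
# followed by a variation selector from the Variation Selectors Supplement
# block (U+E0100..U+E01EF). In plain text the sequence is two code points.
base = "\u8fba"        # 辺 (U+8FBA), a character with registered IVD variants
vs17 = "\U000E0100"    # VARIATION SELECTOR-17, the first selector used by the IVD
ivs = base + vs17

print([f"U+{ord(c):04X}" for c in ivs])   # ['U+8FBA', 'U+E0100']
```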
Unicode ranges

Main article: CJK Unified Ideographs

Ideographic characters assigned by Unicode appear in the following blocks:

CJK Unified Ideographs (4E00–9FFF), also known as the URO (Unified Repertoire and Ordering)[20]
CJK Unified Ideographs Extension A (3400–4DBF)
CJK Unified Ideographs Extension B (20000–2A6DF)
CJK Unified Ideographs Extension C (2A700–2B73F)
CJK Unified Ideographs Extension D (2B740–2B81F)
CJK Unified Ideographs Extension E (2B820–2CEAF)
CJK Unified Ideographs Extension F (2CEB0–2EBEF)
CJK Compatibility Ideographs (F900–FAFF): the twelve characters at FA0E, FA0F, FA11, FA13, FA14, FA1F, FA21, FA23, FA24, FA27, FA28 and FA29 are actually "unified ideographs", not "compatibility ideographs"
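The block boundaries listed above can be checked programmatically. The helper below is hypothetical (not part of any standard API), using the ranges of the unified-ideograph blocks as given:

```python
# Hypothetical helper: classify a character into one of the CJK Unified
# Ideographs blocks listed above, by code point range.
CJK_BLOCKS = [
    ("CJK Unified Ideographs",             0x4E00,  0x9FFF),
    ("CJK Unified Ideographs Extension A", 0x3400,  0x4DBF),
    ("CJK Unified Ideographs Extension B", 0x20000, 0x2A6DF),
    ("CJK Unified Ideographs Extension C", 0x2A700, 0x2B73F),
    ("CJK Unified Ideographs Extension D", 0x2B740, 0x2B81F),
    ("CJK Unified Ideographs Extension E", 0x2B820, 0x2CEAF),
    ("CJK Unified Ideographs Extension F", 0x2CEB0, 0x2EBEF),
]

def cjk_block(ch):
    """Return the block name containing ch, or None if ch is not a unified ideograph."""
    cp = ord(ch)
    for name, lo, hi in CJK_BLOCKS:
        if lo <= cp <= hi:
            return name
    return None

print(cjk_block("海"))   # U+6D77 falls in the base URO block
```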

Unicode includes support for CJKV radicals, strokes, punctuation, marks and symbols in the following blocks:

CJK Radicals Supplement (2E80–2EFF)
CJK Strokes (31C0–31EF)
CJK Symbols and Punctuation (3000–303F)
Ideographic Description Characters (2FF0–2FFF)

Additional compatibility (discouraged use) characters appear in these blocks:

CJK Compatibility (3300–33FF)
CJK Compatibility Forms (FE30–FE4F)
CJK Compatibility Ideographs (F900–FAFF)
CJK Compatibility Ideographs Supplement (2F800–2FA1F)
Enclosed CJK Letters and Months (3200–32FF)
Enclosed Ideographic Supplement (1F200–1F2FF)
Kangxi Radicals (2F00–2FDF)

These compatibility characters (excluding the twelve unified ideographs in the CJK Compatibility Ideographs block) are included for compatibility with legacy text-handling systems and other legacy character sets. They include forms of characters for vertical text layout and rich-text characters that Unicode recommends handling through other means.

International Ideographs Core

Main article: International Ideographs Core

International Ideographs Core (IICore) is a subset of ideographs derived from the CJK Unified Ideographs tables, designed for implementation in devices with limited memory or input/output capability, and in applications where use of the complete ISO 10646 ideograph repertoire is not feasible. The current standard contains 9,810 characters.[21]

Unihan database files

The Unihan project has always made an effort to make its build database available.[1] The libUnihan project provides a normalized SQLite Unihan database and a corresponding C library.[22] All tables in this database are in fifth normal form. libUnihan is released under the LGPL, while its database, UnihanDb, is released under the MIT License.

See also

Chinese character encoding
GB 18030
Sinicization
Z-variant
List of CJK fonts
Allography
Variant Chinese character

Notes

^ a b Unicode implements a code normalization that makes it impossible to display both characters distinctly, but both can be accessed in the Unihan database.

References

^ a b "Unihan.zip". The Unicode
Unicode
Standard. Unicode
Unicode
Consortium.  ^ a b "Unihan Database Lookup". The Unicode
Unicode
Standard. Unicode Consortium.  ^ "Unihan Database Lookup: Sample lookup for 中". The Unicode Standard. Unicode
Unicode
Consortium.  ^ "Chapter 18: East Asia, Principles of Han Unification" (PDF). The Unicode
Unicode
Standard. Unicode
Unicode
Consortium.  ^ Whistler, Ken (2010-10-25). " Unicode
Unicode
Technical Note 26: On the Encoding of Latin, Greek, Cyrillic, and Han".  ^ Unicode
Unicode
Revisited Steven J. Searle; Web Master, TRON Web ^ "IVD/IVSとは - 文字情報基盤整備事業". mojikiban.ipa.go.jp.  ^ "Chapter 1: Introduction" (PDF). The Unicode
Unicode
Standard. Unicode Consortium.  ^ a b "Ideographic Variation Database". Unicode
Unicode
Consortium.  ^ "Early Years of Unicode". Unicode
Unicode
Consortium.  ^ Becker, Joseph D. (1998-08-29). " Unicode
Unicode
88" (PDF).  ^ " Unicode
Unicode
in Japan: Guide to a technical and psychological struggle". Archived from the original on 2009-06-27. CS1 maint: BOT: original-url status unknown (link) ^ 小林紀興『松下電器の果し状』1章 ^ Krikke, Jan. "The Most Popular Operating System in the World". LinuxInsider.com.  ^ 大下英治 『孫正義 起業の若き獅子』(ISBN 4-06-208718-9)pp. 285-294 ^ "UAX #38: Unicode
Unicode
Han Database (Unihan)". www.unicode.org.  ^ <https://www.unicode.org/reports/tr38/> Retrieved: Mar. 19, 2017. ^ <https://www.unicode.org/versions/Unicode10.0.0/ch01.pdf> Retrieved: Mar. 19, 2017. ^ "UTS #37: Unicode
Unicode
Ideographic Variation Database". www.unicode.org.  ^ "URO". blogs.adobe.com.  ^ "OGCIO : Download Area : International Ideographs Core (IICORE) Comparison Utility". www.ogcio.gov.hk.  ^ (陳定彞), Ding-Yi Chen. "libUnihan - A library for Unihan character database in fifth normal form". libunihan.sourceforge.net. 
