Origin and development

Unicode has the explicit aim of transcending the limitations of traditional character encodings, such as those defined by the ISO/IEC 8859 standard, which find wide usage in various countries of the world but remain largely incompatible with each other. Many traditional character encodings share a common problem: they allow bilingual computer processing (usually using Latin characters and the local script), but not multilingual computer processing (computer processing of arbitrary scripts mixed with each other).

Unicode, in intent, encodes the underlying characters—graphemes and grapheme-like units—rather than the variant glyphs (renderings) for such characters. In the case of Chinese characters, this sometimes leads to controversies over distinguishing the underlying character from its variant glyphs (see Han unification). In text processing, Unicode takes the role of providing a unique ''code point''—a number, not a glyph—for each character. In other words, Unicode represents a character in an abstract way and leaves the visual rendering (size, shape, font, or style) to other software, such as a web browser or word processor.

This simple aim becomes complicated, however, because of concessions made by Unicode's designers in the hope of encouraging a more rapid adoption of Unicode. The first 256 code points were made identical to the content of ISO/IEC 8859-1, so as to make it trivial to convert existing western text. Many essentially identical characters were encoded multiple times at different code points to preserve distinctions used by legacy encodings and therefore allow conversion from those encodings to Unicode (and back) without losing any information. For example, the "fullwidth forms" section of code points encompasses a full duplicate of the Latin alphabet, because Chinese, Japanese, and Korean (CJK) fonts contain two versions of these letters: "fullwidth" matching the width of the CJK characters, and normal width. For other examples, see duplicate characters in Unicode.
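The fullwidth duplication can be observed programmatically. As a minimal illustrative sketch (Python with the standard-library unicodedata module; not part of the standard's own text), NFKC compatibility normalization folds the fullwidth letters back onto their ordinary ASCII counterparts:

```python
import unicodedata

# Fullwidth Latin letters duplicate the ASCII letters; NFKC compatibility
# normalization maps each one back to its ordinary counterpart.
fullwidth = "\uFF35\uFF4E\uFF49\uFF43\uFF4F\uFF44\uFF45"  # "Ｕｎｉｃｏｄｅ"
print(unicodedata.normalize("NFKC", fullwidth))           # -> Unicode
print([f"U+{ord(c):04X}" for c in fullwidth])             # fullwidth code points
```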
History

Based on experiences with the Xerox Character Code Standard (XCCS) since 1980, the origins of Unicode date to 1987, when Joe Becker from Xerox, with Lee Collins and Mark Davis from Apple, started investigating the practicalities of creating a universal character set. With additional input from Peter Fenwick and Dave Opstad, Joe Becker published a draft proposal for an "international/multilingual text character encoding system in August 1988, tentatively called Unicode". He explained that "[t]he name 'Unicode' is intended to suggest a unique, unified, universal encoding". In this document, entitled ''Unicode 88'', Becker outlined a 16-bit character model:
Unicode is intended to address the need for a workable, reliable world text encoding. Unicode could be roughly described as "wide-body ASCII" that has been stretched to 16 bits to encompass the characters of all the world's living languages. In a properly engineered design, 16 bits per character are more than sufficient for this purpose.

His original 16-bit design was based on the assumption that only those scripts and characters in modern use would need to be encoded:
Unicode gives higher priority to ensuring utility for the future than to preserving past antiquities. Unicode aims in the first instance at the characters published in modern text (e.g. in the union of all newspapers and magazines printed in the world in 1988), whose number is undoubtedly far below 2¹⁴ = 16,384. Beyond those modern-use characters, all others may be defined to be obsolete or rare; these are better candidates for private-use registration than for congesting the public list of generally useful Unicodes.

In early 1989, the Unicode working group expanded to include Ken Whistler and Mike Kernaghan of Metaphor, Karen Smith-Yoshimura and Joan Aliprand of the Research Libraries Group (RLG), and Glenn Wright of Sun Microsystems; in 1990, Michel Suignard and Asmus Freytag from Microsoft and Rick McGowan of NeXT joined the group. By the end of 1990, most of the work on mapping existing character encoding standards had been completed, and a final review draft of Unicode was ready.

The Unicode Consortium was incorporated in California on 3 January 1991, and in October 1991 the first volume of the Unicode standard was published. The second volume, covering Han ideographs, was published in June 1992. In 1996, a surrogate character mechanism was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. This increased the Unicode codespace to over a million code points, which allowed for the encoding of many historic scripts (e.g., Egyptian hieroglyphs) and thousands of rarely used or obsolete characters that had not been anticipated as needing encoding. Among the characters not originally intended for Unicode are rarely used kanji and Chinese characters, many of which are part of personal and place names; though rarely used, they are much more essential than envisioned in the original architecture of Unicode. The Microsoft TrueType specification version 1.0 from 1992 used the name ''Apple Unicode'' instead of ''Unicode'' for the Platform ID in the naming table.
Unicode Consortium

The Unicode Consortium is a nonprofit organization that coordinates Unicode's development. Full members include most of the main computer software and hardware companies with any interest in text-processing standards, including Adobe, Apple, Facebook, Google, IBM, Microsoft, Netflix, and SAP. Over the years several countries or government agencies have been members of the Unicode Consortium; presently only the Ministry of Endowments and Religious Affairs (Oman) is a full member with voting rights. The Consortium has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments.
Scripts covered

Unicode covers almost all scripts in current use today. A total of 154 scripts are included in the latest version of Unicode (covering alphabets, abugidas and syllabaries), although there are still scripts that are not yet encoded, particularly those mainly used in historical, liturgical, and academic contexts. Further additions of characters to the already encoded scripts, as well as symbols, in particular for mathematics and music (in the form of notes and rhythmic symbols), also occur. The Unicode Roadmap Committee (Michael Everson, Rick McGowan, Ken Whistler, V.S. Umamaheswaran) maintains the list of scripts that are candidates or potential candidates for encoding, with their tentative code block assignments, on the Unicode Roadmap page of the Unicode Consortium website.
Versions

Unicode is developed in conjunction with the International Organization for Standardization and shares the character repertoire with ISO/IEC 10646: the Universal Character Set. Unicode and ISO/IEC 10646 function equivalently as character encodings, but ''The Unicode Standard'' contains much more information for implementers, covering—in depth—topics such as bitwise encoding and rendering. The Unicode Standard enumerates a multitude of character properties, including those needed for supporting bidirectional text. The two standards do use slightly different terminology.

The Unicode Consortium first published ''The Unicode Standard'' in 1991 (version 1.0), and has published new versions on a regular basis since then. The latest version, and thus the only currently valid one, is version 13.0, released in March 2020 and fully published at the consortium's website. In April 2020, the planned version 14.0 was postponed by six months due to the COVID-19 pandemic, from its initial release target of March 2021 to September 2021; it will add at least five new scripts, as well as 37 new emoji characters. As a formal and complete book publication (including the code charts), the latest version was version 5.0 (2006); since version 5.2, the core specification of the standard has been published as a print-on-demand paperback. The entire text of each version of the standard, including the core specification, standard annexes and code charts, is freely available in PDF format on the Unicode website.

Thus far, the following major and minor versions of the Unicode standard have been published. Update versions, which do not include any changes to character repertoire, are signified by the third number (e.g., "version 4.0.1") and are omitted in the table below.
Architecture and terminology

The Unicode Standard defines a ''codespace'': a set of numerical values ranging from 0 through 10FFFF₁₆, called ''code points'' and denoted U+0000 through U+10FFFF ("U+" plus the code point value in hexadecimal, prepended with leading zeros as necessary to result in a minimum of four digits; e.g., U+00F7 for the division sign ÷, versus U+13254 for the Egyptian hieroglyph designating a reed shelter or a winding wall). Of these 2¹⁶ + 2²⁰ defined code points, the code points from U+D800 through U+DFFF, which are used to encode surrogate pairs in UTF-16, are reserved by the Standard and may not be used to encode valid characters, resulting in a net total of 2¹⁶ − 2¹¹ + 2²⁰ = 1,112,064 possible code points corresponding to valid Unicode characters. Not all of these code points necessarily correspond to visible characters; several, for example, are assigned to control codes such as carriage return.
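The arithmetic above can be checked directly; the following Python sketch (illustrative only) reproduces the count of valid code points and the "U+" notation:

```python
# 2**16 code points in the BMP plus 2**20 in the supplementary planes,
# minus the 2**11 surrogate code points U+D800..U+DFFF:
total = 2**16 + 2**20
surrogates = 0xE000 - 0xD800          # 2048 = 2**11
print(total - surrogates)             # -> 1112064 valid code points

# "U+" notation is the hexadecimal code point, zero-padded to 4 digits:
print(f"U+{ord('÷'):04X}")            # -> U+00F7 (division sign)
```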
Code planes and blocks

The Unicode codespace is divided into seventeen ''planes'', numbered 0 to 16. Plane 0 is the ''Basic Multilingual Plane'' (BMP), which contains the most commonly used characters; Planes 1 through 16 are the ''supplementary planes''. All code points in the BMP are accessed as a single code unit in UTF-16 encoding and can be encoded in one, two or three bytes in UTF-8. Code points in the supplementary planes are accessed as surrogate pairs in UTF-16 and encoded in four bytes in UTF-8. Within each plane, characters are allocated within named ''blocks'' of related characters. Although blocks are an arbitrary size, they are always a multiple of 16 code points and often a multiple of 128 code points. Characters required for a given script may be spread out over several different blocks.
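For example (a minimal Python sketch, assuming nothing beyond the standard codecs), a BMP character occupies one 16-bit code unit in UTF-16 while a supplementary-plane character needs a surrogate pair:

```python
bmp_char = "\u00F7"        # U+00F7 DIVISION SIGN, Plane 0 (BMP)
supp_char = "\U00013254"   # U+13254, an Egyptian hieroglyph in Plane 1

print(len(bmp_char.encode("utf-16-le")))   # -> 2: one 16-bit code unit
print(len(supp_char.encode("utf-16-le")))  # -> 4: a surrogate pair
print(len(bmp_char.encode("utf-8")))       # -> 2 bytes in UTF-8
print(len(supp_char.encode("utf-8")))      # -> 4 bytes in UTF-8
```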
General Category property

Each code point has a single General Category property. The major categories are denoted: Letter, Mark, Number, Punctuation, Symbol, Separator and Other. Within these categories, there are subdivisions. In most cases other properties must be used to sufficiently specify the characteristics of a code point.

Code points in the range U+D800–U+DBFF (1,024 code points) are known as high-surrogate code points, and code points in the range U+DC00–U+DFFF (1,024 code points) are known as low-surrogate code points. A high-surrogate code point followed by a low-surrogate code point forms a surrogate pair in UTF-16 to represent code points greater than U+FFFF. These code points otherwise cannot be used (this rule is often ignored in practice, especially when not using UTF-16).

A small set of code points are guaranteed never to be used for encoding characters, although applications may make use of these code points internally if they wish. There are sixty-six of these noncharacters: U+FDD0–U+FDEF and any code point ending in the value FFFE or FFFF (i.e., U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, ... U+10FFFE, U+10FFFF). The set of noncharacters is stable, and no new noncharacters will ever be defined. Like surrogates, the rule that these cannot be used is often ignored, although the operation of the byte order mark assumes that U+FFFE will never be the first code point in a text. Excluding surrogates and noncharacters leaves 1,111,998 code points available for use.

Private-use code points are considered to be assigned characters, but they have no interpretation specified by the Unicode standard, so any interchange of such characters requires an agreement between sender and receiver on their interpretation. There are three private-use areas in the Unicode codespace:
* Private Use Area: U+E000–U+F8FF (6,400 characters),
* Supplementary Private Use Area-A: U+F0000–U+FFFFD (65,534 characters),
* Supplementary Private Use Area-B: U+100000–U+10FFFD (65,534 characters).

Graphic characters are characters defined by Unicode to have particular semantics, and either have a visible glyph shape or represent a visible space. As of Unicode 13.0 there are 143,696 graphic characters.

Format characters are characters that do not have a visible appearance, but may have an effect on the appearance or behavior of neighboring characters. For example, U+200C ZERO WIDTH NON-JOINER and U+200D ZERO WIDTH JOINER may be used to change the default shaping behavior of adjacent characters (e.g., to inhibit ligatures or request ligature formation). There are 163 format characters in Unicode 13.0.

Sixty-five code points (U+0000–U+001F and U+007F–U+009F) are reserved as control codes, and correspond to the C0 and C1 control codes defined in ISO/IEC 6429. U+0009 (Tab), U+000A (Line Feed), and U+000D (Carriage Return) are widely used in Unicode-encoded texts. In practice the C1 code points are often improperly translated (mojibake) as the legacy Windows-1252 characters used by some English and Western European texts.

Graphic characters, format characters, control code characters, and private use characters are known collectively as ''assigned characters''. Reserved code points are those code points which are available for use, but are not yet assigned. As of Unicode 13.0 there are 830,606 reserved code points.
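In practice, the General Category is exposed by most Unicode libraries; a short Python sketch (illustrative, using the standard unicodedata module):

```python
import unicodedata

# General Category of a few code points: letters, digit, punctuation, space.
for ch in ("A", "a", "5", "?", " "):
    print(repr(ch), unicodedata.category(ch))   # Lu, Ll, Nd, Po, Zs

# Surrogate code points carry category "Cs" and cannot be encoded in UTF-8:
print(unicodedata.category("\ud800"))           # -> Cs
try:
    "\ud800".encode("utf-8")
except UnicodeEncodeError as err:
    print("not encodable:", err.reason)         # surrogates not allowed
```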
Abstract characters

The set of graphic and format characters defined by Unicode does not correspond directly to the repertoire of ''abstract characters'' that is representable under Unicode. Unicode encodes characters by associating an abstract character with a particular code point. However, not all abstract characters are encoded as a single Unicode character, and some abstract characters may be represented in Unicode by a sequence of two or more characters. For example, a Latin small letter "i" with an ogonek, a dot above, and an acute accent, which is required in Lithuanian, is represented by the character sequence U+012F, U+0307, U+0301. Unicode maintains a list of uniquely named character sequences for abstract characters that are not directly encoded in Unicode.

All graphic, format, and private use characters have a unique and immutable name by which they may be identified. This immutability has been guaranteed since Unicode version 2.0 by the Name Stability policy. In cases where the name is seriously defective and misleading, or has a serious typographical error, a formal alias may be defined, and applications are encouraged to use the formal alias in place of the official character name. For example, U+A015 YI SYLLABLE WU has the formal alias YI SYLLABLE ITERATION MARK, and U+FE18 PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET has the formal alias PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRACKET.
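The Lithuanian example can be inspected character by character; a minimal Python sketch using the immutable character names discussed above:

```python
import unicodedata

# One abstract character ("i" with ogonek, dot above and acute),
# represented by a sequence of three encoded characters:
seq = "\u012F\u0307\u0301"
for ch in seq:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+012F  LATIN SMALL LETTER I WITH OGONEK
# U+0307  COMBINING DOT ABOVE
# U+0301  COMBINING ACUTE ACCENT
```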
Ready-made versus composite characters

Unicode includes a mechanism for modifying characters that greatly extends the supported glyph repertoire. This covers the use of combining diacritical marks that may be added after the base character by the user. Multiple combining diacritics may be simultaneously applied to the same character. Unicode also contains precomposed versions of most letter/diacritic combinations in normal use. These make conversion to and from legacy encodings simpler, and allow applications to use Unicode as an internal text format without having to implement combining characters. For example, ''é'' can be represented in Unicode as U+0065 (LATIN SMALL LETTER E) followed by U+0301 (COMBINING ACUTE ACCENT), but it can also be represented as the precomposed character U+00E9 (LATIN SMALL LETTER E WITH ACUTE). Thus, in many cases, users have multiple ways of encoding the same character. To deal with this, Unicode provides the mechanism of canonical equivalence.

An example of this arises with Hangul, the Korean alphabet. Unicode provides a mechanism for composing Hangul syllables from their individual subcomponents, known as Hangul Jamo. However, it also provides 11,172 combinations of precomposed syllables made from the most common jamo.

The CJK characters currently have codes only for their precomposed form. Still, most of those characters comprise simpler elements (called radicals), so in principle Unicode could have decomposed them as it did with Hangul. This would have greatly reduced the number of required code points, while allowing the display of virtually every conceivable character (which might do away with some of the problems caused by Han unification). A similar idea is used by some input methods, such as Cangjie and Wubi. However, attempts to do this for character encoding have stumbled over the fact that Chinese characters do not decompose as simply or as regularly as Hangul does. A set of radicals was provided in Unicode 3.0 (CJK radicals between U+2E80 and U+2EFF, KangXi radicals in U+2F00 to U+2FDF, and ideographic description characters from U+2FF0 to U+2FFB), but the Unicode standard (ch. 12.2 of Unicode 5.2) warns against using ideographic description sequences as an alternate representation for previously encoded characters.
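Canonical equivalence is normally handled through normalization. A minimal Python sketch (illustrative only) showing that the precomposed and decomposed forms of é compare equal only after normalization, and that Hangul syllables decompose into jamo:

```python
import unicodedata

precomposed = "\u00E9"     # é as a single character, U+00E9
decomposed = "e\u0301"     # e followed by COMBINING ACUTE ACCENT

print(precomposed == decomposed)                                # -> False
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # -> True

# A precomposed Hangul syllable decomposes into its constituent jamo:
print([f"U+{ord(c):04X}" for c in unicodedata.normalize("NFD", "\uD55C")])
# -> ['U+1112', 'U+1161', 'U+11AB'] for 한
```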
Ligatures

Many scripts, including Arabic and Devanāgarī, have special orthographic rules that require certain combinations of letterforms to be combined into special ligature forms. The rules governing ligature formation can be quite complex, requiring special script-shaping technologies such as ACE (Arabic Calligraphic Engine, by DecoType in the 1980s, used to generate all the Arabic examples in the printed editions of the Unicode Standard), which became the proof of concept for OpenType (by Adobe and Microsoft), Graphite (by SIL International), and AAT (Apple Advanced Typography, by Apple). Instructions are also embedded in fonts to tell the operating system how to properly output different character sequences.

A simple solution to the placement of combining marks or diacritics is assigning the marks a width of zero and placing the glyph itself to the left or right of the left sidebearing (depending on the direction of the script they are intended to be used with). A mark handled this way will appear over whatever character precedes it, but will not adjust its position relative to the width or height of the base glyph; it may be visually awkward and it may overlap some glyphs. Real stacking is impossible, but can be approximated in limited cases (for example, Thai top-combining vowels and tone marks can just be at different heights to start with). Generally this approach is only effective in monospaced fonts, but may be used as a fallback rendering method when more complex methods fail.
Standardized subsets

Several subsets of Unicode are standardized. Microsoft Windows since Windows NT 4.0 supports WGL-4, with 657 characters, which is considered to support all contemporary European languages using the Latin, Greek, or Cyrillic script. Other standardized subsets of Unicode include the Multilingual European Subsets: MES-1 (Latin scripts only; 335 characters), MES-2 (Latin, Greek and Cyrillic; 1,062 characters) and MES-3A & MES-3B (two larger subsets, not shown here). Note that MES-2 includes every character in MES-1 and WGL-4.

Rendering software which cannot process a Unicode character appropriately often displays it as an open rectangle, or as the Unicode "replacement character" (U+FFFD, �), to indicate the position of the unrecognized character. Some systems have made attempts to provide more information about such characters. Apple's Last Resort font will display a substitute glyph indicating the Unicode range of the character, and SIL International's Unicode Fallback font will display a box showing the hexadecimal scalar value of the character.
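Decoders behave analogously: when a byte sequence is invalid for the declared encoding, many substitute U+FFFD rather than fail. A small Python illustration:

```python
# 0xE9 is 'é' in Latin-1 but an incomplete sequence in UTF-8, so a
# lenient decoder substitutes U+FFFD, the replacement character:
data = b"caf\xe9"
print(data.decode("utf-8", errors="replace"))   # -> 'caf�'
```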
Mapping and encodings

Several mechanisms have been specified for storing a series of code points as a series of bytes. Unicode defines two mapping methods: the ''Unicode Transformation Format'' (UTF) encodings, and the ''Universal Coded Character Set'' (UCS) encodings. An encoding maps (possibly a subset of) the range of Unicode ''code points'' to sequences of values in some fixed-size range, termed ''code units''. All UTF encodings map code points to a unique sequence of bytes. The numbers in the names of the encodings indicate the number of bits per code unit (for UTF encodings) or the number of bytes per code unit (for UCS encodings and UTF-1). UTF-8 and UTF-16 are the most commonly used encodings. UCS-2 is an obsolete subset of UTF-16; UCS-4 and UTF-32 are functionally equivalent. UTF encodings include:
* UTF-1, a retired predecessor of UTF-8, maximizes compatibility with ISO/IEC 2022; no longer part of ''The Unicode Standard''
* UTF-7, an obsolete 7-bit encoding sometimes used in e-mail (not part of ''The Unicode Standard'', but only documented as an informational RFC, i.e., not on the Internet Standards Track)
* UTF-8, uses one to four bytes for each code point, maximizes compatibility with ASCII
* UTF-EBCDIC, similar to UTF-8 but designed for compatibility with EBCDIC (not part of ''The Unicode Standard'')
* UTF-16, uses one or two 16-bit code units per code point, cannot encode surrogates
* UTF-32, uses one 32-bit code unit per code point

UTF-8 uses one to four bytes per code point and, being compact for Latin scripts and ASCII-compatible, provides the ''de facto'' standard encoding for interchange of Unicode text. It is used by FreeBSD and most recent Linux distributions as a direct replacement for legacy encodings in general text handling.

The UCS-2 and UTF-16 encodings specify the Unicode Byte Order Mark (BOM) for use at the beginnings of text files, which may be used for byte ordering (endianness) detection. The BOM, code point U+FEFF, has the important property of unambiguity on byte reorder, regardless of the Unicode encoding used; U+FFFE (the result of byte-swapping U+FEFF) does not equate to a legal character, and U+FEFF in places other than the beginning of text conveys the zero-width non-break space (a character with no appearance and no effect other than preventing the formation of ligatures). The same character converted to UTF-8 becomes the byte sequence EF BB BF.
The Unicode Standard allows that the BOM "can serve as signature for UTF-8 encoded text where the character set is unmarked". Some software developers have adopted it for other encodings, including UTF-8, in an attempt to distinguish UTF-8 from local 8-bit code pages. However, RFC 3629, the UTF-8 standard, recommends that byte order marks be forbidden in protocols using UTF-8, but discusses the cases where this may not be possible. In addition, the large restriction on possible patterns in UTF-8 (for instance there cannot be any lone bytes with the high bit set) means that it should be possible to distinguish UTF-8 from other character encodings without relying on the BOM.

In UTF-32 and UCS-4, one 32-bit code unit serves as a fairly direct representation of any character's code point (although the endianness, which varies across different platforms, affects how the code unit manifests as a byte sequence). In the other encodings, each code point may be represented by a variable number of code units. UTF-32 is widely used as an internal representation of text in programs (as opposed to stored or transmitted text), since every Unix operating system that uses the GCC compilers to generate software uses it as the standard "wide character" encoding. Some programming languages, such as Seed7, use UTF-32 as the internal representation for strings and characters. Recent versions of the Python programming language (beginning with 2.2) may also be configured to use UTF-32 as the representation for Unicode strings, effectively disseminating such encoding in software coded in high-level languages.

Punycode, another encoding form, enables the encoding of Unicode strings into the limited character set supported by the ASCII-based Domain Name System (DNS). The encoding is used as part of IDNA, which is a system enabling the use of Internationalized Domain Names in all scripts that are supported by Unicode. Earlier and now historical proposals include UTF-5 and UTF-6.

GB 18030 is another encoding form for Unicode, from the Standardization Administration of China. It is the official character set of the People's Republic of China (PRC). BOCU-1 (Binary Ordered Compression for Unicode) and SCSU (Standard Compression Scheme for Unicode) are Unicode compression schemes. The April Fools' Day RFC of 2005 specified two parody UTF encodings, UTF-9 and UTF-18.
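The encoding forms, the BOM bytes, and Punycode can all be demonstrated with the standard Python codecs (an illustrative sketch):

```python
text = "\u00F7"                       # U+00F7 DIVISION SIGN

print(text.encode("utf-8"))           # -> b'\xc3\xb7' (two bytes)
print(text.encode("utf-16"))          # -> b'\xff\xfe\xf7\x00' (BOM + code unit)
print(text.encode("utf-32-be"))       # -> b'\x00\x00\x00\xf7' (one 32-bit unit)

# The BOM U+FEFF serialized in UTF-8 is the byte sequence EF BB BF:
print("\ufeff".encode("utf-8"))       # -> b'\xef\xbb\xbf'

# Punycode, via the IDNA mechanism, maps Unicode labels into ASCII:
print("b\u00fccher.example".encode("idna"))   # -> b'xn--bcher-kva.example'
```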
Operating systems

Unicode has become the dominant scheme for internal processing and storage of text. Although a great deal of text is still stored in legacy encodings, Unicode is used almost exclusively for building new information processing systems. Early adopters tended to use UCS-2 (the fixed-width two-byte precursor to UTF-16) and later moved to UTF-16 (the variable-width current standard), as this was the least disruptive way to add support for non-BMP characters. The best known such system is Windows NT (and its descendants, Windows 2000, Windows XP, Windows Vista, Windows 7, Windows 8 and Windows 10), which uses UTF-16 as the sole internal character encoding. The Java and .NET bytecode environments, macOS, and KDE also use it for internal representation. Partial support for Unicode can be installed on Windows 9x through the Microsoft Layer for Unicode.

UTF-8 (originally developed for Plan 9 from Bell Labs) has become the main storage encoding on most Unix-like operating systems (though others are also used by some libraries) because it is a relatively easy replacement for traditional extended ASCII character sets. UTF-8 is also the most common Unicode encoding used in HTML documents on the World Wide Web.

Multilingual text-rendering engines which use Unicode include Uniscribe and DirectWrite for Microsoft Windows, ATSUI and Core Text for macOS, and Pango for GTK+ and the GNOME desktop.
Input methods

Because keyboard layouts cannot have simple key combinations for all characters, several operating systems provide alternative input methods that allow access to the entire repertoire. ISO/IEC 14755, which standardises methods for entering Unicode characters from their code points, specifies several methods. There is the ''Basic method'', where a ''beginning sequence'' is followed by the hexadecimal representation of the code point and the ''ending sequence''. There is also a ''screen-selection entry method'', where the characters are listed in a table on a screen, such as with a character map program.

Online tools for finding the code point for a known character include Unicode Lookup by Jonathan Hedley and Shapecatcher by Benjamin Milde. In Unicode Lookup, one enters a search key (e.g. "fractions"), and a list of corresponding characters with their code points is returned. In Shapecatcher, based on shape context, one draws the character in a box, and a list of characters approximating the drawing, with their code points, is returned.
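The Basic method amounts to converting a hexadecimal code point into the corresponding character. A minimal sketch in Python (the helper name from_hex is hypothetical, introduced only for illustration):

```python
def from_hex(code: str) -> str:
    """Return the character for a hexadecimal code point, e.g. '00F7'."""
    return chr(int(code, 16))

print(from_hex("00F7"))    # -> ÷
print(from_hex("13254"))   # -> the Egyptian hieroglyph at U+13254
```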
Web

All W3C recommendations have used Unicode as their ''document character set'' since HTML 4.0. Web browsers have supported Unicode, especially UTF-8, for many years. There used to be display problems resulting primarily from font-related issues; e.g., versions 6 and older of Microsoft Internet Explorer did not render many code points unless explicitly told to use a font that contains them.

Although syntax rules may affect the order in which characters are allowed to appear, XML (including XHTML) documents, by definition, comprise characters from most of the Unicode code points, with the exception of:
* most of the C0 control codes,
* the permanently unassigned code points D800–DFFF,
* FFFE or FFFF.

HTML characters manifest either directly as bytes according to the document's encoding, if the encoding supports them, or users may write them as numeric character references based on the character's Unicode code point. For example, the references &#916;, &#1049;, &#1511;, &#1605;, &#3671;, &#12354;, &#21494;, &#33865;, and &#47568; (or the same numeric values expressed in hexadecimal, with &#x as the prefix) should display on all browsers as Δ, Й, ק ,م, ๗, あ, 叶, 葉, and 말.

When specifying URIs, for example as URLs in HTTP requests, non-ASCII characters must be percent-encoded.
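Both mechanisms are easy to reproduce; a short Python sketch using the standard html and urllib modules (illustrative only):

```python
import html
import urllib.parse

# Numeric character references, in decimal and hexadecimal form:
print(html.unescape("&#916; &#x394;"))   # -> 'Δ Δ'

# Percent-encoding of a non-ASCII character (as its UTF-8 bytes):
print(urllib.parse.quote("\uB9D0"))      # -> '%EB%A7%90' for 말
```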
Fonts

Unicode is not in principle concerned with fonts ''per se'', seeing them as implementation choices. Any given character may have many allographs, from the more common bold, italic and base letterforms to complex decorative styles. A font is "Unicode compliant" if the glyphs in the font can be accessed using code points defined in the Unicode standard. The standard does not specify a minimum number of characters that must be included in the font; some fonts have quite a small repertoire.

Free and retail fonts based on Unicode are widely available, since TrueType and OpenType support Unicode. These font formats map Unicode code points to glyphs, but a TrueType font is restricted to 65,535 glyphs. Thousands of fonts exist on the market, but fewer than a dozen fonts—sometimes described as "pan-Unicode" fonts—attempt to support the majority of Unicode's character repertoire. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. Several reasons justify this approach: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to demand resources in computing environments; and operating systems and applications show increasing intelligence in regard to obtaining glyph information from separate font files as needed, i.e., font substitution. Furthermore, designing a consistent set of rendering instructions for tens of thousands of glyphs constitutes a monumental task; such a venture passes the point of diminishing returns for most typefaces.
Newlines

Unicode partially addresses the newline problem that occurs when trying to read a text file on different platforms. Unicode defines a large number of characters that conforming applications should recognize as line terminators. In terms of the newline, Unicode introduced U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR. This was an attempt to provide a Unicode solution to encoding paragraphs and lines semantically, potentially replacing all of the various platform solutions. In doing so, Unicode does provide a way around the historical platform-dependent solutions. Nonetheless, few if any Unicode solutions have adopted these Unicode line and paragraph separators as the sole canonical line ending characters. However, a common approach to solving this issue is through newline normalization. This is achieved with the Cocoa text system in Mac OS X and also with W3C XML and HTML recommendations. In this approach, every possible newline character is converted internally to a common newline (which one does not really matter, since it is an internal operation just for rendering). In other words, the text system can correctly treat the character as a newline, regardless of the input's actual encoding.
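A normalization pass of this kind is straightforward; for instance, Python's str.splitlines() already recognizes the Unicode line terminators, including U+2028 and U+2029 (a minimal sketch, not the Cocoa or W3C implementation):

```python
# Every recognized line terminator, including U+2028 LINE SEPARATOR and
# U+2029 PARAGRAPH SEPARATOR, is folded to a single canonical '\n':
text = "one\r\ntwo\u2028three\u2029four"
print("\n".join(text.splitlines()))
```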
Philosophical and completeness criticisms

Han unification (the identification of forms in the East Asian languages which one can treat as stylistic variations of the same historical character) has become one of the most controversial aspects of Unicode, despite the presence of a majority of experts from all three regions in the Ideographic Research Group (IRG), which advises the Consortium and ISO on additions to the repertoire and on Han unification. Unicode has been criticized for failing to separately encode older and alternative forms of kanji which, critics argue, complicates the processing of ancient Japanese and uncommon Japanese names. This is often due to the fact that Unicode encodes characters rather than glyphs (the visual representations of the basic character that often vary from one language to another). Unification of glyphs leads to the perception that the languages themselves, not just the basic character representation, are being merged (see "The secret life of Unicode: A peek at Unicode's soft underbelly").
Mapping to legacy character sets

Unicode was designed to provide code-point-by-code-point round-trip format conversion to and from any preexisting character encodings, so that text files in older character sets can be converted to Unicode and then back and get back the same file, without employing context-dependent interpretation. That has meant that inconsistent legacy architectures, such as combining diacritics and precomposed characters, both exist in Unicode, giving more than one method of representing some text. This is most pronounced in the three different encoding forms for Korean Hangul. Since version 3.0, any precomposed characters that can be represented by a combining sequence of already existing characters can no longer be added to the standard, in order to preserve interoperability between software using different versions of Unicode.

Injective mappings must be provided between characters in existing legacy character sets and characters in Unicode to facilitate conversion to Unicode and allow interoperability with legacy software. Lack of consistency in various mappings between earlier Japanese encodings such as Shift-JIS or EUC-JP and Unicode led to round-trip format conversion mismatches, particularly the mapping of the character JIS X 0208 '～' (1-33, WAVE DASH), heavily used in legacy database data, to either U+FF5E FULLWIDTH TILDE (in Microsoft Windows) or U+301C WAVE DASH (other vendors).

Some Japanese computer programmers objected to Unicode because it requires them to separate the use of U+005C REVERSE SOLIDUS (backslash) and U+00A5 YEN SIGN, which was mapped to 0x5C in JIS X 0201, and a lot of legacy code exists with this usage. (This encoding also replaces the tilde '~' at 0x7E with an overline '‾'.) The separation of these characters exists in ISO 8859-1, from long before Unicode.
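The wave dash mismatch is still visible in the vendor mappings shipped with common codec libraries; a Python illustration (the shift_jis and cp932 codecs implement the non-Microsoft and Microsoft mappings, respectively):

```python
# The same JIS X 0208 byte sequence decodes to two different code points
# depending on which vendor mapping the codec implements:
wave_dash = b"\x81\x60"    # JIS X 0208 row 1, cell 33 (WAVE DASH)

print(f"U+{ord(wave_dash.decode('shift_jis')):04X}")  # -> U+301C (WAVE DASH)
print(f"U+{ord(wave_dash.decode('cp932')):04X}")      # -> U+FF5E (FULLWIDTH TILDE)
```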
Indic scripts

Indic scripts such as Tamil and Devanagari are each allocated only 128 code points, matching the ISCII standard. The correct rendering of Unicode Indic text requires transforming the stored logical order characters into visual order and the forming of ligatures (also known as conjuncts) out of components. Some local scholars argued in favor of assignments of Unicode code points to these ligatures, going against the practice for other writing systems, though Unicode contains some Arabic and other ligatures for backward compatibility purposes only. Encoding of any new ligatures in Unicode will not happen, in part because the set of ligatures is font-dependent, and Unicode is an encoding independent of font variations. The same kind of issue arose for the Tibetan script in 2003, when the Standardization Administration of China proposed encoding 956 precomposed Tibetan syllables, but these were rejected for encoding by the relevant ISO committee (ISO/IEC JTC 1/SC 2).

Thai alphabet support has been criticized for its ordering of Thai characters. The vowels เ, แ, โ, ใ, ไ that are written to the left of the preceding consonant are in visual order instead of phonetic order, unlike the Unicode representations of other Indic scripts. This complication is due to Unicode inheriting the Thai Industrial Standard 620 (TIS-620), which worked in the same way, and was the way in which Thai had always been written on keyboards. This ordering problem complicates the Unicode collation process slightly, requiring table lookups to reorder Thai characters for collation. Even if Unicode had adopted encoding according to spoken order, it would still be problematic to collate words in dictionary order. For example, the word แสดง "perform" starts with a consonant cluster "สด" (with an inherent vowel for the consonant "ส"); the vowel แ- would, in spoken order, come after the ด, but in a dictionary, the word is collated as it is written, with the vowel following the ส.
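The visual-order storage is easy to verify: in the syllable เก, the left-written vowel is stored before the consonant it follows phonetically (a minimal Python sketch):

```python
# Thai SARA E (written to the left of the consonant) precedes KO KAI
# in the stored code point sequence:
word = "\u0E40\u0E01"   # เก
print([f"U+{ord(c):04X}" for c in word])   # -> ['U+0E40', 'U+0E01']
```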
Combining characters

Characters with diacritical marks can generally be represented either as a single precomposed character or as a decomposed sequence of a base letter plus one or more non-spacing marks. For example, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be rendered identically, both appearing as an e with a macron and acute accent, but in practice, their appearance may vary depending upon what rendering engine and fonts are being used to display the characters. Similarly, underdots, as needed in the romanization of Indic languages, will often be placed incorrectly. Unicode characters that map to precomposed glyphs can be used in many cases, thus avoiding the problem, but where no precomposed character has been encoded, the problem can often be solved by using a specialist Unicode font such as Charis SIL that uses Graphite, OpenType, or AAT technologies for advanced rendering features.
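As with é earlier, the two encodings of ḗ are canonically equivalent and compare equal after normalization; a short Python sketch:

```python
import unicodedata

single = "\u1E17"           # precomposed e with macron and acute
sequence = "e\u0304\u0301"  # e + combining macron + combining acute

print(single == sequence)                                # -> False
print(unicodedata.normalize("NFC", sequence) == single)  # -> True
```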
Anomalies

The Unicode standard has imposed rules intended to guarantee stability. Depending on the strictness of a rule, a change can be prohibited or allowed. For example, a "name" given to a code point cannot and will not change. But a "script" property is more flexible, by Unicode's own rules. In version 2.0, Unicode changed many code point "names" from version 1; at the same moment, Unicode stated that from then on, a name assigned to a code point would never change again. This implies that when mistakes are published, these mistakes cannot be corrected, even if they are trivial (as happened in one instance with the spelling BRAKCET for BRACKET in a character name). In 2006 a list of anomalies in character names was first published, and, as of April 2017, there were 94 characters with identified issues, for example:
* U+2118 SCRIPT CAPITAL P: This is a small letter. The capital is U+1D4AB MATHEMATICAL SCRIPT CAPITAL P.
* U+034F COMBINING GRAPHEME JOINER: Does not join graphemes.
* U+A015 YI SYLLABLE WU: This is not a Yi syllable, but a Yi iteration mark.
* U+FE18 PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET: ''bracket'' is spelled incorrectly.
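The frozen, defective name is what libraries report; for example, in Python (illustrative):

```python
import unicodedata

# U+2118 is a small letter, but its immutable name still says "capital":
print(unicodedata.name("\u2118"))   # -> 'SCRIPT CAPITAL P'
```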
See also
* Comparison of Unicode encodings
* Religious and political symbols in Unicode
* International Components for Unicode (ICU), now as ICU-TC a part of Unicode
* List of binary codes
* List of Unicode characters
* List of XML and HTML character entity references
* Open-source Unicode typefaces
* Standards related to Unicode
* Unicode symbols
* Universal Coded Character Set
* Lotus Multi-Byte Character Set (LMBCS), a parallel development with similar intentions
Further reading
* ''The Unicode Standard, Version 3.0'', The Unicode Consortium, Addison-Wesley Longman, Inc., April 2000.
* ''The Unicode Standard, Version 4.0'', The Unicode Consortium, Addison-Wesley Professional, 27 August 2003.
* ''The Unicode Standard, Version 5.0, Fifth Edition'', The Unicode Consortium, Addison-Wesley Professional, 27 October 2006.
* Julie D. Allen. ''The Unicode Standard, Version 6.0'', The Unicode Consortium, Mountain View, 2011.