Translation Memory
Translation Memory
A translation memory (TM) is a database that stores previously translated "segments", which can be sentences, paragraphs, or sentence-like units such as headings, titles, or elements in a list, in order to aid human translators. The translation memory stores the source text and its corresponding translation in language pairs called "translation units". Individual words are handled by terminology bases and are not within the domain of TM. Software programs that use translation memories are sometimes known as translation memory managers (TMMs) or translation memory systems (TM systems, not to be confused with translation management systems (TMS), a different type of software focused on managing the process of translation). Translation memories are typically used in conjunction with a dedicated computer-assisted translation (CAT) tool, a word-processing program, terminology management systems, multilingual dictionaries, or even raw machine-translation output. Research indicat ...
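The translation-unit idea above can be sketched as a small in-memory store with exact and fuzzy segment lookup. The class and method names are illustrative, not the API of any real TM system (which would also handle formatting tags, metadata, and large-scale indexing); the fuzzy score here is simply `difflib`'s similarity ratio.

```python
import difflib

class TranslationMemory:
    """Toy store of "translation units": source segment -> target segment."""

    def __init__(self):
        self.units = {}  # source segment -> target segment

    def add_unit(self, source, target):
        self.units[source] = target

    def lookup(self, query, threshold=0.75):
        """Return (match_ratio, stored_source, target) for the best match
        at or above the threshold, else (0.0, None, None)."""
        best = (0.0, None, None)
        for src, tgt in self.units.items():
            ratio = difflib.SequenceMatcher(None, query, src).ratio()
            if ratio > best[0]:
                best = (ratio, src, tgt)
        return best if best[0] >= threshold else (0.0, None, None)

tm = TranslationMemory()
tm.add_unit("Save the file.", "Enregistrez le fichier.")
# An exact match scores 1.0; a near-match ("fuzzy match") scores lower.
print(tm.lookup("Save the file."))
print(tm.lookup("Save the files."))
```

Real CAT tools surface such fuzzy matches to the translator with the match percentage, so a near-identical past segment can be reused with minimal editing.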



Translation
Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. The English language draws a terminological distinction (which does not exist in every language) between ''translating'' (a written text) and ''interpreting'' (oral or signed communication between users of different languages); under this distinction, translation can begin only after the appearance of writing within a language community. A translator always risks inadvertently introducing source-language words, grammar, or syntax into the target-language rendering. On the other hand, such "spill-overs" have sometimes imported useful source-language calques and loanwords that have enriched target languages. Translators, including early translators of sacred texts, have helped shape the very l ...



Gengo
Gengo was a web-based translation platform headquartered in Tokyo.

History
Gengo was founded in 2008 by Matthew Romaine and Robert Laing. Prior to starting Gengo, Romaine was an audio research engineer and translator with Sony Corporation, and Laing headed Moresided, a UK-based design agency. Romaine conceived of Gengo from his experience of being asked to translate documents between Japanese and English at Sony, despite originally being hired as an engineer. Prior to its early 2012 rebranding, the company was known as "myGengo." In April 2010, the company launched its API, allowing developers to integrate Gengo's translation platform into third-party applications, web sites, and widgets. Romaine initially served as CTO of the company. He replaced fellow co-founder Robert Laing as CEO in 2015. In March 2018, the company launched Gengo AI, an on-demand platform that provides crowdsourced multilingual training data to machine learning developers. In January 2019, Ge ...


Comparison Of Computer-assisted Translation Tools
A number of computer-assisted translation software packages and websites exist for various platforms and access types. According to a 2006 survey undertaken by Imperial College of 874 translation professionals from 54 countries, primary tool usage was reported as follows: Trados (35%), Wordfast (17%), Déjà Vu (16%), SDL Trados 2006 (15%), SDLX (4%), (3%), OmegaT (3%), others (7%). The list below includes only some of the existing available software and website platforms.

See also
* Machine translation
* Comparison of machine translation applications

Computer-aided translation (CAT), also referred to as computer-assisted translation or computer-aided human translation (CAHT), is the use of software to assist a human translator in the translation process. The translation is created by a huma ...


Translate Toolkit
The Translate Toolkit is a localization and translation toolkit. It provides a set of tools for working with localization file formats and files that might need localization. The toolkit also provides an API on which to develop other localization tools. The toolkit is written in the Python programming language. It is free software, originally developed and released by Translate.org.za in 2002, and is now maintained by Translate.org.za and community developers. The Translate Toolkit uses Enchant as its spellchecker.

History
The toolkit was originally developed as mozpotools by David Fraser for Translate.org.za. Translate.org.za had focused on translating KDE, which used Gettext PO files for localization. With an internal change to focus on end-user, cross-platform, OSS software, the organisation decided to localize the Mozilla Suite. This required using new tools and new formats that were not as rich as Gettext PO. Thus mozpotools was created to convert the Mozilla DTD and .prop ...



GNU Gettext
In computing, gettext is an internationalization and localization (i18n and l10n) system commonly used for writing multilingual programs on Unix-like computer operating systems. One of the main benefits of gettext is that it separates programming from translating. The most commonly used implementation of gettext is GNU gettext, released by the GNU Project in 1995. The runtime library is libintl. gettext provides an option to use different strings for any number of plural forms of nouns, but this feature has no support for grammatical gender.

History
Initially, POSIX provided no means of localizing messages. Two proposals were raised in the late 1980s, the 1988 Uniforum gettext and the 1989 X/Open catgets (XPG-3 § 5). Sun Microsystems implemented the first gettext in 1993. The Unix and POSIX developers never really agreed on what kind of interface to use (the other option is the X/Open catgets), so many C libraries, including glibc, implemented both. Whether gettext should be ...
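The plural-forms feature mentioned above can be demonstrated with Python's standard-library gettext module, which mirrors the C API. With no message catalog installed, NullTranslations falls back to the English two-form rule (singular when n == 1, plural otherwise); a real catalog can define any number of plural forms per language.

```python
import gettext

# No catalog loaded, so ngettext uses the English two-form fallback.
t = gettext.NullTranslations()
for n in (1, 2):
    print(t.ngettext("%d file", "%d files", n) % n)
# prints:
# 1 file
# 2 files
```

With a compiled .mo catalog for a language such as Polish or Arabic, the same `ngettext` call would select among three or more plural forms according to the catalog's Plural-Forms rule.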






XLIFF
XLIFF (XML Localization Interchange File Format) is an XML-based bitext format created to standardize the way localizable data are passed between and among tools during a localization process, and to serve as a common format for CAT tool exchange. The XLIFF Technical Committee (TC) first convened at OASIS in December 2001 (first meeting in January 2002), but the first fully ratified version of XLIFF appeared as XLIFF Version 1.2 in February 2008. Its current specification is v2.1, released on 2018-02-13, which is backwards compatible with v2.0, released on 2014-08-05. The specification is aimed at the localization industry. It specifies elements and attributes to store content extracted from various original file formats and its corresponding translation. The goal was to separate the localization skills from the engineering skills related to specific formats such as HTML. XLIFF is part of the Open Architecture for XML Authoring and Localization (OAXAL) reference architecture. XLIFF 2.0 and hi ...
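A hand-written, minimal XLIFF 2.0-style document looks like the following: one file, one unit, one segment pairing a source with its target. The element and attribute names follow the XLIFF 2.0 specification as commonly cited, but treat this as an unvalidated illustrative sketch, not a complete XLIFF file.

```python
import xml.etree.ElementTree as ET

doc = """\
<xliff xmlns="urn:oasis:names:tc:xliff:document:2.0"
       version="2.0" srcLang="en" trgLang="fr">
  <file id="f1">
    <unit id="u1">
      <segment>
        <source>Save the file.</source>
        <target>Enregistrez le fichier.</target>
      </segment>
    </unit>
  </file>
</xliff>
"""

# Any namespace-aware XML parser can extract the bitext pairs.
root = ET.fromstring(doc)
ns = {"x": "urn:oasis:names:tc:xliff:document:2.0"}
for seg in root.iterfind(".//x:segment", ns):
    print(seg.find("x:source", ns).text, "->", seg.find("x:target", ns).text)
```

Because the format is plain namespaced XML, this is how CAT tools round-trip content: the extracting tool fills in `<source>`, the translator's tool fills in `<target>`, and a merging step writes the translation back into the original file format.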


Segmentation Rules EXchange
Segmentation Rules eXchange or SRX is an XML-based standard that was maintained by the Localization Industry Standards Association until it became insolvent in 2011, and then by the Globalization and Localization Association (GALA). SRX provides a common way to describe how to segment text for translation and other language-related processes. It was created when it was realized that TMX was less useful than expected in certain instances because of differences in how tools segment text. SRX is intended to enhance the TMX standard so that translation memory (TM) data exchanged between applications can be used more effectively. Having available the segmentation rules that were used when a TM was created increases the usefulness of the TM data.

Implementation difficulties
SRX makes use of the ICU regular expression syntax, but not all programming languages support all ICU expressions, making SRX difficult or impossible to implement in some languages. Java is an example of this. Versi ...
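SRX expresses segmentation as an ordered list of rules, each a before/after regular-expression pair marked "break" or "no-break", where the first rule that matches at a position decides. The sketch below illustrates that idea in plain Python; it is not an SRX parser, and the two rules are invented examples.

```python
import re

RULES = [
    # no-break: don't split after common abbreviations
    (False, re.compile(r"\b(?:Mr|Dr|vs)\.$"), re.compile(r"\s")),
    # break: split after sentence-final punctuation followed by a capital
    (True, re.compile(r"[.!?]$"), re.compile(r"\s+[A-Z]")),
]

def segment(text):
    segments, start = [], 0
    for i in range(1, len(text)):
        before, after = text[:i], text[i:]
        for is_break, before_re, after_re in RULES:
            if before_re.search(before) and after_re.match(after):
                if is_break:
                    segments.append(text[start:i].strip())
                    start = i
                break  # first matching rule wins, as in SRX
    segments.append(text[start:].strip())
    return segments

print(segment("Mr. Smith arrived. He sat down."))
# prints: ['Mr. Smith arrived.', 'He sat down.']
```

The ordering matters: because the abbreviation rule precedes the break rule, "Mr." does not end a segment even though it matches the sentence-final-punctuation pattern. Exchanging such rule lists alongside a TM is exactly what lets two tools reproduce the same segment boundaries.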


Universal Terminology EXchange
UTX (Universal Terminology eXchange) is a simple glossary format developed by AAMT (the Asia-Pacific Association for Machine Translation). It is a tab-separated text format that contains minimal information, such as a source-language entry, a target-language entry, and a part-of-speech entry. UTX is intended to facilitate rapid creation and quick exchange of human-readable and machine-readable glossaries. Initially, UTX was created to absorb the differences between various user-dictionary formats for machine translation. The scope of the format was later expanded to include other purposes, such as glossaries for human translation, natural language processing, thesauri, text-to-speech, and input methods. UTX can be used to improve the efficiency of localization for open-source projects.

UTX Converter
UTX Converter was developed as an open-source project by AAMT and is available for free. It has the following functions:
* Functions for UTX
** The format check of a U ...
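Since UTX is tab-separated text, a few lines of code suffice to read it. The header line and exact column layout below are illustrative assumptions; consult the AAMT UTX specification for the real header fields and optional columns.

```python
# Illustrative UTX-style glossary: header line, then one entry per line
# with source term, target term, and part of speech, tab-separated.
glossary_text = """\
#UTX 1.20; en/ja;
translation memory\t翻訳メモリ\tnoun
segment\tセグメント\tnoun
"""

def parse_glossary(text):
    entries = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip the header and comments
        src, tgt, pos = line.split("\t")
        entries.append({"src": src, "tgt": tgt, "pos": pos})
    return entries

for entry in parse_glossary(glossary_text):
    print(entry["src"], "->", entry["tgt"], f"({entry['pos']})")
```

This minimalism is the point of the format: a glossary that a translator can edit in a spreadsheet is the same file a machine-translation system can load as a user dictionary.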


ISO 12620
Linguistic categories include:
* Lexical category, a part of speech such as ''noun'', ''preposition'', etc.
* Syntactic category, a similar concept which can also include phrasal categories
* Grammatical category, a grammatical feature such as ''tense'', ''gender'', etc.
The definition of linguistic categories is a major concern of linguistic theory, and thus the definition and naming of categories varies across different theoretical frameworks and grammatical traditions for different languages. The operationalization of linguistic categories in lexicography, computational linguistics, natural language processing, corpus linguistics, and terminology management typically requires resource-, problem- or application-specific definitions of linguistic categories. In cognitive linguistics it has been argued that linguistic categories have a prototype structure like that of the categories of common words in a language. John R. Taylor (1995), ''Linguistic Categorization: Prototypes in Ling ...




Localization Industry Standards Association
Localization Industry Standards Association or LISA was a Swiss-based trade body concerning the translation of computer software (and associated materials) into multiple natural languages, which existed from 1990 to February 2011. It counted among its members most of the large information technology companies of the period, including Adobe, Cisco, Hewlett-Packard, IBM, McAfee, Nokia, Novell and Xerox. LISA played a significant role in representing its partners at the International Organization for Standardization (ISO), and the TermBase eXchange (TBX) standard developed by LISA was submitted to ISO in 2007 and became ISO 30042:2008. LISA also had a presence at the W3C. A number of the LISA standards are used by the OASIS Open Architecture for XML Authoring and Localization framework. LISA shut down on 28 February 2011, and its website went offline shortly afterwards. In the wake of the closure of LISA, the European Telecommunications Standards Institute started an Industry Spec ...


TermBase EXchange
TermBase eXchange (TBX) is an international standard (ISO 30042:2019) for the representation of structured concept-oriented terminological data, copublished by ISO and the Localization Industry Standards Association (LISA). (References: ISO 30042:2008, Systems to manage terminology, knowledge and content -- TermBase eXchange (TBX), International Organization for Standardization, http://www.iso.org/iso/catalogue_detail.htm?csnumber=45797; "LISA OSCAR Standards", GALA website, http://www.gala-global.org/lisa-oscar-standards; "TermBase eXchange", https://www.gala-global.org/sites/default/files/migrated-pages/docs/tbx_oscar_0.pdf.) Originally released in 2002 by LISA's OSCAR special interest group, TBX was adopted by ISO TC 37 in 2008. In 2019, ISO 30042:2008 was withdrawn and revised. It is currently available as an ISO standard and as an open industry standard, available at no charge. TBX defines an XML format for the exchange of terminology data, and is "an industry standard for terminology exchange". Se ...