Text-to-speech

Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The reverse process is speech recognition.

Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.

The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s.

A text-to-speech system (or "engine") is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words. This process is often called ''text normalization'', ''pre-processing'', or ''tokenization''. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called ''text-to-phoneme'' or ''grapheme-to-phoneme'' conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end, often referred to as the ''synthesizer'', then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the ''target prosody'' (pitch contour, phoneme durations), which is then imposed on the output speech.
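
This front-end/back-end split can be illustrated with a minimal sketch in Python. The abbreviation table, the toy lexicon, and the `synthesize` stub are hypothetical simplifications for illustration, not the interface of any real engine.

```python
# Minimal sketch of the TTS front-end/back-end split described above.
# The abbreviation table, lexicon, and back-end stub are hypothetical toys.

ABBREVIATIONS = {"dr.": "doctor", "st.": "street"}   # toy text-normalization table
LEXICON = {"doctor": ["D", "AA1", "K", "T", "ER0"],  # toy grapheme-to-phoneme lexicon
           "smith": ["S", "M", "IH1", "TH"]}

def normalize(text: str) -> list[str]:
    """Front-end step 1: expand abbreviations, lowercase, split into words."""
    return [ABBREVIATIONS.get(w, w) for w in text.lower().split()]

def to_phonemes(words: list[str]) -> list[list[str]]:
    """Front-end step 2: look up each word's phonetic transcription."""
    return [LEXICON.get(w, list(w.upper())) for w in words]  # crude letter fallback

def synthesize(phonemes: list[list[str]]) -> bytes:
    """Back-end stub: a real synthesizer would turn phonemes (plus prosody) into audio."""
    return b""  # placeholder waveform

if __name__ == "__main__":
    utterance = to_phonemes(normalize("Dr. Smith"))
    print(utterance)  # [['D','AA1','K','T','ER0'], ['S','M','IH1','TH']]
```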


History

Long before the invention of electronic signal processing, some people tried to build machines to emulate human speech. Some early legends of the existence of "Brazen Heads" involved Pope Silvester II (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294).

In 1779 the German-Danish scientist Christian Gottlieb Kratzenstein won the first prize in a competition announced by the Russian Imperial Academy of Sciences and Arts for models he built of the human vocal tract that could produce the five long vowel sounds (in International Phonetic Alphabet notation: , , , and ) (History and Development of Speech Synthesis, Helsinki University of Technology, retrieved on November 4, 2006). There followed the bellows-operated "acoustic-mechanical speech machine" of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, and in 1846, Joseph Faber exhibited the "Euphonia". In 1923 Paget resurrected Wheatstone's design.

In the 1930s Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tones and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice synthesizer called The Voder (Voice Demonstrator), which he exhibited at the 1939 New York World's Fair.

Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern playback in the late 1940s and completed it in 1950. There were several different versions of this hardware device; only one currently survives. The machine converts pictures of the acoustic patterns of speech in the form of a spectrogram back into sound. Using this device, Alvin Liberman and colleagues discovered acoustic cues for the perception of phonetic segments (consonants and vowels).


Electronic devices

The first computer-based speech-synthesis systems originated in the late 1950s. Noriko Umeda ''et al.'' developed the first general English text-to-speech system in 1968, at the Electrotechnical Laboratory in Japan. In 1961, physicist John Larry Kelly, Jr. and his colleague Louis Gerstman used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer (vocoder) recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel ''2001: A Space Odyssey'', where the HAL 9000 computer sings the same song as astronaut Dave Bowman puts it to sleep. Despite the success of purely electronic speech synthesis, research into mechanical speech synthesizers continues.

Linear predictive coding (LPC), a form of speech coding, began development with the work of Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. Further developments in LPC technology were made by Bishnu S. Atal and Manfred R. Schroeder at Bell Labs during the 1970s. LPC was later the basis for early speech-synthesizer chips, such as the Texas Instruments LPC Speech Chips used in the Speak & Spell toys from 1978.

In 1975, Fumitada Itakura developed the line spectral pairs (LSP) method for high-compression speech coding, while at NTT. From 1975 to 1981, Itakura studied problems in speech analysis and synthesis based on the LSP method. In 1980, his team developed an LSP-based speech-synthesizer chip. LSP is an important technology for speech synthesis and coding, and in the 1990s it was adopted by almost all international speech-coding standards as an essential component, contributing to the enhancement of digital speech communication over mobile channels and the internet.

In 1975, MUSA was released, and was one of the first speech synthesis systems. It consisted of stand-alone computer hardware and specialized software that enabled it to read Italian. A second version, released in 1978, was also able to sing Italian in an "a cappella" style. Dominant systems in the 1980s and 1990s were the DECtalk system, based largely on the work of Dennis Klatt at MIT, and the Bell Labs system; the latter was one of the first multilingual language-independent systems, making extensive use of natural language processing methods.

Handheld electronics featuring speech synthesis began emerging in the 1970s. One of the first was the Telesensory Systems Inc. (TSI) ''Speech+'' portable calculator for the blind in 1976. Other devices had primarily educational purposes, such as the Speak & Spell toy produced by Texas Instruments in 1978. Fidelity released a speaking version of its electronic chess computer in 1979. The first video game to feature speech synthesis was the 1980 shoot 'em up arcade game ''Stratovox'' (known in Japan as ''Speak & Rescue''), from Sun Electronics. The first personal computer game with speech synthesis was ''Manbiki Shoujo'' (''Shoplifting Girl''), released in 1980 for the PET 2001, for which the game's developer, Hiroshi Suzuki, developed a "zero cross" programming technique to produce a synthesized speech waveform. Another early example, the arcade version of ''Berzerk'', also dates from 1980. The Milton Bradley Company produced the first multi-player electronic game using voice synthesis, ''Milton'', in the same year.

Early electronic speech synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but output from contemporary speech synthesis systems remains clearly distinguishable from actual human speech. Synthesized voices typically sounded male until 1990, when Ann Syrdal, at AT&T Bell Laboratories, created a female voice. Kurzweil predicted in 2005 that as the cost-performance ratio caused speech synthesizers to become cheaper and more accessible, more people would benefit from the use of text-to-speech programs.


Synthesizer technologies

The most important qualities of a speech synthesis system are ''naturalness'' and ''intelligibility''. Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible, and speech synthesis systems usually try to maximize both characteristics. The two primary technologies for generating synthetic speech waveforms are ''concatenative synthesis'' and ''formant synthesis''. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used.


Concatenation synthesis

Concatenative synthesis is based on the concatenation (stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis.


Unit selection synthesis

Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode, with some manual correction afterward, using visual representations such as the waveform and spectrogram. An index of the units in the speech database is then created based on the segmentation and acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. At run time, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree.

Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech. Also, unit-selection algorithms have been known to select segments from a place that results in less-than-ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database. Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems.
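
Unit selection is commonly framed as a search for the unit sequence minimizing a weighted sum of a ''target cost'' (how well a candidate unit matches the desired specification) and a ''join cost'' (how smoothly adjacent units concatenate). The following dynamic-programming sketch illustrates the idea; the unit features and cost weights are illustrative assumptions rather than any particular system's design.

```python
# Sketch of Viterbi-style unit selection over candidate units per target position.
# Unit features and cost weights are illustrative, not from a real system.
from dataclasses import dataclass

@dataclass
class Unit:
    phone: str       # phone label of the stored unit
    pitch: float     # mean F0 in Hz
    duration: float  # duration in seconds

def target_cost(u: Unit, spec: Unit) -> float:
    """How poorly a candidate matches the desired (target) specification."""
    return abs(u.pitch - spec.pitch) / 100 + abs(u.duration - spec.duration) * 10

def join_cost(a: Unit, b: Unit) -> float:
    """How audible the concatenation of a followed by b would be."""
    return abs(a.pitch - b.pitch) / 50  # pitch mismatch at the join

def select(targets: list[Unit], candidates: list[list[Unit]]) -> list[Unit]:
    """Dynamic programming over the candidate lattice (one candidate list per target)."""
    best = [(target_cost(u, targets[0]), [u]) for u in candidates[0]]
    for t, cands in zip(targets[1:], candidates[1:]):
        best = [min(((c + join_cost(path[-1], u) + target_cost(u, t), path + [u])
                     for c, path in best), key=lambda x: x[0])
                for u in cands]
    return min(best, key=lambda x: x[0])[1]
```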


Diphone synthesis

Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA or MBROLA, or more recent techniques such as pitch modification in the source domain using the discrete cosine transform. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because there are a number of freely available software implementations. An early example of diphone synthesis is a teaching robot, Leachim, that was invented by Michael J. Freeman. Leachim contained information regarding class curricula and certain biographical information about the students whom it was programmed to teach. It was tested in a fourth grade classroom in the Bronx, New York.
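
As a rough illustration of the kind of signal processing involved, the sketch below implements a heavily simplified pitch-synchronous overlap-add in the spirit of PSOLA: pitch periods are excised around given epoch marks, windowed, and overlap-added at a new spacing to shift the pitch. Real implementations must also detect the epochs, modify duration, and handle many edge cases omitted here.

```python
# Simplified PSOLA-style pitch modification: shift pitch by re-spacing
# windowed pitch periods. Epoch marks are assumed given; real systems
# must detect them (e.g. from the linear prediction residual).
import numpy as np

def psola_pitch_shift(signal: np.ndarray, epochs: list[int], factor: float) -> np.ndarray:
    """factor > 1 raises pitch (periods packed closer together), < 1 lowers it."""
    out = np.zeros(int(len(signal) / min(factor, 1.0)) + 4096)
    out_pos = epochs[0]
    for i in range(1, len(epochs) - 1):
        period = epochs[i + 1] - epochs[i]           # local pitch period in samples
        start, end = epochs[i] - period, epochs[i] + period
        if start < 0 or end > len(signal):
            continue
        grain = signal[start:end] * np.hanning(2 * period)  # two-period window
        out[out_pos:out_pos + 2 * period] += grain          # overlap-add
        out_pos += int(period / factor)                     # new period spacing
    return out[:out_pos]
```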


Domain-specific synthesis

Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports. The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings. Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. The blending of words within naturally spoken language, however, can still cause problems unless the many variations are taken into account. For example, in non-rhotic dialects of English the ''"r"'' in words like ''"clear"'' is usually only pronounced when the following word has a vowel as its first letter (e.g. ''"clear out"''). Likewise in French, many final consonants are no longer silent if followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be context-sensitive.
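
A talking clock is the canonical case: a small fixed inventory of recordings covers every utterance in the domain. The sketch below shows the slot-filling logic; the clip file names are hypothetical.

```python
# Domain-specific synthesis for a talking clock: every possible utterance
# is assembled from a small fixed inventory of recordings. The file names
# are hypothetical placeholders.
def clock_clips(hour: int, minute: int) -> list[str]:
    clips = ["the_time_is.wav", f"hour_{hour % 12 or 12}.wav"]
    if minute == 0:
        clips.append("oclock.wav")
    elif minute < 10:
        clips += ["oh.wav", f"num_{minute}.wav"]   # "two oh five"
    else:
        clips.append(f"num_{minute}.wav")
    return clips

print(clock_clips(14, 5))
# ['the_time_is.wav', 'hour_2.wav', 'oh.wav', 'num_5.wav']
```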


Formant synthesis

Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using additive synthesis and an acoustic model (physical modelling synthesis). Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called ''rules-based synthesis''; however, many concatenative systems also have rules-based components.

Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in embedded systems, where memory and microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice. Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the Texas Instruments toy Speak & Spell, in early 1980s Sega arcade machines, and in many Atari, Inc. arcade games using the TMS5220 LPC chips. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.
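
The core source-filter idea can be demonstrated compactly: a periodic impulse train (standing in for the glottal source) is passed through a cascade of second-order resonators, one per formant. The formant frequencies and bandwidths below roughly approximate a steady /a/-like vowel and are illustrative; a full formant synthesizer such as Klatt's varies dozens of parameters frame by frame.

```python
# Toy formant synthesizer: an impulse train (glottal source) filtered by
# a cascade of second-order resonators, one per formant. Formant values
# approximate an /a/-like vowel and are illustrative only.
import numpy as np
from scipy.signal import lfilter

fs = 16000                                         # sample rate in Hz
f0 = 120                                           # fundamental frequency in Hz
formants = [(730, 90), (1090, 110), (2440, 170)]   # (frequency, bandwidth) pairs

# Glottal source: one impulse per pitch period, one second of audio.
source = np.zeros(fs)
source[::fs // f0] = 1.0

# Cascade of digital resonators (second-order sections).
signal = source
for f, bw in formants:
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * f / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]       # resonator poles
    b = [sum(a)]                                   # normalize to unity gain at DC
    signal = lfilter(b, a, signal)

signal /= np.abs(signal).max()                     # normalize to [-1, 1]
```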


Articulatory synthesis

Articulatory synthesis refers to computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues.

Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the University of Calgary, where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model". More recent synthesizers, developed by Jorge C. Lucero and colleagues, incorporate models of vocal fold biomechanics, glottal aerodynamics and acoustic wave propagation in the bronchi, trachea, nasal and oral cavities, and thus constitute full systems of physics-based speech simulation.
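
The transmission-line idea mentioned above can be sketched with a toy Kelly-Lochbaum waveguide, in which forward- and backward-traveling waves are scattered at the junctions between tube sections of different cross-sectional area. The area function, boundary reflection values, and excitation here are illustrative toys, and sign conventions vary between formulations.

```python
# Toy Kelly-Lochbaum waveguide vocal tract: forward and backward waves
# scattered at junctions between tube sections of different area.
# Areas, boundary reflections, and the source are illustrative only.
import numpy as np

AREAS = np.array([1.0, 1.2, 2.0, 3.0, 2.5])                  # toy area function
K = (AREAS[:-1] - AREAS[1:]) / (AREAS[:-1] + AREAS[1:])      # junction reflections
GLOTTIS_R, LIP_R = 0.9, -0.9                                 # boundary reflections

def tract(source: np.ndarray) -> np.ndarray:
    """Filter a glottal source through the waveguide; return the lip output."""
    n = len(AREAS)
    fwd, bwd = np.zeros(n), np.zeros(n)
    out = np.zeros(len(source))
    for t, s in enumerate(source):
        new_fwd, new_bwd = np.zeros(n), np.zeros(n)
        new_fwd[0] = s + GLOTTIS_R * bwd[0]                  # glottal end
        for j in range(n - 1):                               # scattering junctions
            new_fwd[j + 1] = (1 + K[j]) * fwd[j] - K[j] * bwd[j + 1]
            new_bwd[j] = K[j] * fwd[j] + (1 - K[j]) * bwd[j + 1]
        new_bwd[-1] = LIP_R * fwd[-1]                        # lip end reflection
        out[t] = (1 + LIP_R) * fwd[-1]                       # radiated signal
        fwd, bwd = new_fwd, new_bwd
    return out

# Example: impulse-train excitation at 100 Hz, 8 kHz sample rate.
src = np.zeros(8000)
src[::80] = 1.0
audio = tract(src)
```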


HMM-based synthesis

HMM-based synthesis is a synthesis method based on hidden Markov models, also called statistical parametric synthesis. In this system, the frequency spectrum (vocal tract), fundamental frequency (voice source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from the HMMs themselves based on the maximum likelihood criterion.
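
In outline, each phone is modeled by a few HMM states whose output distributions cover spectral and excitation parameters, together with explicit duration models; synthesis chooses a state sequence and generates the maximum-likelihood parameter trajectory, which a vocoder then renders as audio. The sketch below is a drastic simplification (state means only, with no dynamic features or vocoder), and all numbers are illustrative.

```python
# Heavily simplified statistical parametric synthesis: each HMM state
# contributes its mean parameter vector for its predicted duration.
# Real systems use dynamic (delta) features to smooth the trajectory
# and a vocoder to render audio; both are omitted here.
import numpy as np

# Illustrative 2-state model of one phone: (mean parameter vector, mean duration in frames)
phone_model = [
    (np.array([500.0, 1500.0]), 4),   # state 1: formant-like parameters, 4 frames
    (np.array([600.0, 1400.0]), 6),   # state 2: 6 frames
]

def generate_trajectory(states) -> np.ndarray:
    """Stack each state's mean vector for its expected duration."""
    frames = [np.tile(mean, (dur, 1)) for mean, dur in states]
    return np.vstack(frames)          # shape: (total_frames, n_params)

traj = generate_trajectory(phone_model)
print(traj.shape)  # (10, 2)
```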


Sinewave synthesis

Sinewave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure tone whistles.
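
Because sinewave speech is literally a sum of a few sinusoids following the formant tracks, it can be sketched in a few lines; the constant tracks below stand in for the time-varying tracks that a real analysis stage would extract from a recording.

```python
# Sinewave speech in miniature: sum one sinusoid per formant track.
# Real sinewave synthesis uses time-varying tracks measured from a
# recording; constant tracks are used here for brevity.
import numpy as np

fs, dur = 16000, 1.0
t = np.arange(int(fs * dur)) / fs
tracks = [(730, 0.5), (1090, 0.3), (2440, 0.2)]  # (frequency in Hz, amplitude)

speech = sum(a * np.sin(2 * np.pi * f * t) for f, a in tracks)
speech /= np.abs(speech).max()                   # normalize to [-1, 1]
```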


Deep learning-based synthesis

Deep learning speech synthesis uses deep neural networks (DNNs) to produce artificial speech from text (text-to-speech) or spectra (vocoder). The deep neural networks are trained using a large amount of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text. DNN-based speech synthesizers are approaching the naturalness of the human voice. Disadvantages of the method include low robustness when training data are insufficient, lack of controllability, and low performance in auto-regressive models. Some of these limitations (such as lack of controllability) may be addressed by future research. Currently, Tacotron 2 + WaveGlow requires only a few dozen hours of recorded speech as training material to produce a very high quality voice. However, for tonal languages, such as Chinese or Taiwanese, different levels of tone sandhi are required, and the output of the speech synthesizer may contain tone sandhi errors.
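
Such systems are typically organized as a two-stage pipeline: an acoustic model (e.g. Tacotron 2) predicts a mel spectrogram from text, and a neural vocoder (e.g. WaveGlow) converts the spectrogram to a waveform. The sketch below shows only this data flow; `acoustic_model` and `vocoder` are placeholder stubs, not real library calls.

```python
# Abstract two-stage neural TTS pipeline: text -> mel spectrogram -> waveform.
# `acoustic_model` and `vocoder` are placeholders for trained networks
# (e.g. Tacotron 2 and WaveGlow); no real library API is assumed.
import numpy as np

def acoustic_model(text: str) -> np.ndarray:
    """Placeholder: a trained model would predict a mel spectrogram here."""
    return np.zeros((80, 100))            # 80 mel bands x 100 frames

def vocoder(mel: np.ndarray) -> np.ndarray:
    """Placeholder: a neural vocoder would invert the spectrogram to audio."""
    return np.zeros(mel.shape[1] * 256)   # 256 samples per frame (hop size)

audio = vocoder(acoustic_model("Hello world"))
```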


Audio deepfakes


Challenges


Text normalization challenges

The process of normalizing text is rarely straightforward. Texts are full of heteronyms, numbers, and abbreviations that all require expansion into a phonetic representation. There are many spellings in English which are pronounced differently based on context. For example, "My latest project is to learn how to better project my voice" contains two pronunciations of "project".

Most text-to-speech (TTS) systems do not generate semantic representations of their input texts, as processes for doing so are unreliable, poorly understood, and computationally ineffective. As a result, various heuristic techniques are used to guess the proper way to disambiguate homographs, like examining neighboring words and using statistics about frequency of occurrence. Recently, TTS systems have begun to use HMMs (discussed above) to generate "parts of speech" to aid in disambiguating homographs. This technique is quite successful for many cases, such as whether "read" should be pronounced as "red", implying past tense, or as "reed", implying present tense. Typical error rates when using HMMs in this fashion are usually below five percent. These techniques also work well for most European languages, although access to the required training corpora is frequently difficult in these languages.

Deciding how to convert numbers is another problem that TTS systems have to address. It is a simple programming challenge to convert a number into words (at least in English), like "1325" becoming "one thousand three hundred twenty-five". However, numbers occur in many different contexts; "1325" may also be read as "one three two five", "thirteen twenty-five" or "thirteen hundred and twenty five". A TTS system can often infer how to expand a number based on surrounding words, numbers, and punctuation, and sometimes the system provides a way to specify the context if it is ambiguous. Roman numerals can also be read differently depending on context. For example, "Henry VIII" reads as "Henry the Eighth", while "Chapter VIII" reads as "Chapter Eight".

Similarly, abbreviations can be ambiguous. For example, the abbreviation "in" for "inches" must be differentiated from the word "in", and the address "12 St John St." uses the same abbreviation for both "Saint" and "Street". TTS systems with intelligent front ends can make educated guesses about ambiguous abbreviations, while others provide the same result in all cases, resulting in nonsensical (and sometimes comical) outputs, such as "Ulysses S. Grant" being rendered as "Ulysses South Grant".

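Producing the cardinal reading is indeed the easy part, as the minimal converter below suggests; choosing among the competing readings ("one three two five", "thirteen twenty-five", and so on) is the context-dependent part, which this sketch does not attempt.

```python
# Cardinal number expansion, the "easy" part of text normalization.
# Choosing between readings ("thirteen twenty-five" for a year,
# digit-by-digit for a code) requires context and is not handled here.
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def number_to_words(n: int) -> str:
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else "")
    if n < 1000:
        rest = " " + number_to_words(n % 100) if n % 100 else ""
        return ONES[n // 100] + " hundred" + rest
    rest = " " + number_to_words(n % 1000) if n % 1000 else ""
    return number_to_words(n // 1000) + " thousand" + rest

print(number_to_words(1325))  # one thousand three hundred twenty-five
```
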

Text-to-phoneme challenges

Speech synthesis systems use two basic approaches to determine the pronunciation of a word based on its spelling, a process which is often called text-to-phoneme or grapheme-to-phoneme conversion (''phoneme'' is the term used by linguists to describe distinctive sounds in a language). The simplest approach to text-to-phoneme conversion is the dictionary-based approach, where a large dictionary containing all the words of a language and their correct pronunciations is stored by the program. Determining the correct pronunciation of each word is a matter of looking up each word in the dictionary and replacing the spelling with the pronunciation specified in the dictionary. The other approach is rule-based, in which pronunciation rules are applied to words to determine their pronunciations based on their spellings. This is similar to the "sounding out", or synthetic phonics, approach to learning reading.

Each approach has advantages and drawbacks. The dictionary-based approach is quick and accurate, but completely fails if it is given a word which is not in its dictionary. As dictionary size grows, so too does the memory space requirement of the synthesis system. On the other hand, the rule-based approach works on any input, but the complexity of the rules grows substantially as the system takes into account irregular spellings or pronunciations. (Consider that the word "of" is very common in English, yet is the only word in which the letter "f" is pronounced [v].) As a result, nearly all speech synthesis systems use a combination of these approaches.

Languages with a phonemic orthography have a very regular writing system, and the prediction of the pronunciation of words based on their spellings is quite successful. Speech synthesis systems for such languages often use the rule-based method extensively, resorting to dictionaries only for those few words, like foreign names and loanwords, whose pronunciations are not obvious from their spellings. On the other hand, speech synthesis systems for languages like English, which have extremely irregular spelling systems, are more likely to rely on dictionaries, and to use rule-based methods only for unusual words, or words that aren't in their dictionaries.
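
The hybrid strategy described above (dictionary lookup first, letter-to-sound rules as a fallback) reduces to a few lines of control flow. The lexicon entries and letter rules below are illustrative toys, not a real pronunciation lexicon or rule set.

```python
# Hybrid grapheme-to-phoneme conversion: dictionary lookup first,
# then naive letter-to-sound rules as a fallback. Entries and rules
# are illustrative toys, not a real lexicon or rule set.
LEXICON = {"of": ["AH0", "V"], "read": ["R", "IY1", "D"]}
LETTER_RULES = {"a": "AE", "b": "B", "c": "K", "d": "D", "e": "EH",
                "f": "F", "o": "AO", "r": "R", "t": "T"}  # (incomplete)

def grapheme_to_phoneme(word: str) -> list[str]:
    word = word.lower()
    if word in LEXICON:                  # fast and accurate, but incomplete
        return LEXICON[word]
    return [LETTER_RULES.get(ch, ch.upper()) for ch in word]  # rule fallback

print(grapheme_to_phoneme("of"))   # ['AH0', 'V']  (irregular: from dictionary)
print(grapheme_to_phoneme("bat"))  # ['B', 'AE', 'T']  (regular: from rules)
```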


Evaluation challenges

The consistent evaluation of speech synthesis systems may be difficult because of a lack of universally agreed objective evaluation criteria. Different organizations often use different speech data. The quality of speech synthesis systems also depends on the quality of the production technique (which may involve analogue or digital recording) and on the facilities used to replay the speech. Evaluating speech synthesis systems has therefore often been compromised by differences between production techniques and replay facilities. Since 2005, however, some researchers have started to evaluate speech synthesis systems using a common speech dataset.


Prosodics and emotional content

A study in the journal ''Speech Communication'' by Amy Drahota and colleagues at the University of Portsmouth, UK, reported that listeners to voice recordings could determine, at better than chance levels, whether or not the speaker was smiling. It was suggested that identification of the vocal features that signal emotional content may be used to help make synthesized speech sound more natural. One of the related issues is modification of the pitch contour of the sentence, depending upon whether it is an affirmative, interrogative or exclamatory sentence. One of the techniques for pitch modification uses the discrete cosine transform in the source domain (linear prediction residual). Such pitch-synchronous pitch modification techniques need a priori pitch marking of the synthesis speech database, using techniques such as epoch extraction with a dynamic plosion index applied to the integrated linear prediction residual of the voiced regions of speech.


Dedicated hardware

* Icophone
* General Instrument SP0256-AL2
* National Semiconductor DT1050 Digitalker (Mozer – Forrest Mozer)
* Texas Instruments LPC Speech Chips (EE Times. "TI will exit dedicated speech-synthesis chips, transfer products to Sensory." June 14, 2001.)


Hardware and software systems

Popular systems offering speech synthesis as a built-in capability.


Texas Instruments

In the early 1980s, TI was known as a pioneer in speech synthesis, and a highly popular plug-in speech synthesizer module was available for the TI-99/4 and 4A. Speech synthesizers were offered free with the purchase of a number of cartridges and were used by many TI-written video games (titles offered with speech during this promotion included ''Alpiner'' and ''Parsec''). The synthesizer uses a variant of linear predictive coding and has a small built-in vocabulary. The original intent was to release small cartridges that plugged directly into the synthesizer unit to increase the device's built-in vocabulary, but the success of software text-to-speech in the Terminal Emulator II cartridge canceled that plan.


Mattel

The Mattel Intellivision game console offered the Intellivoice Voice Synthesis module in 1982. It included the General Instrument SP0256 Narrator speech synthesizer chip on a removable cartridge. The Narrator had 2 kB of read-only memory (ROM), which was used to store a database of generic words that could be combined to make phrases in Intellivision games. Since the Narrator chip could also accept speech data from external memory, any additional words or phrases needed could be stored inside the cartridge itself. The data consisted of strings of analog-filter coefficients that modified the behavior of the chip's synthetic vocal-tract model, rather than simple digitized samples.


SAM

Also released in 1982, Software Automatic Mouth (SAM) was the first commercial all-software voice synthesis program. It was later used as the basis for MacInTalk. The program was available for non-Macintosh Apple computers (including the Apple II and the Lisa), various Atari models, and the Commodore 64. The Apple version worked best with additional hardware containing DACs, although it could instead use the computer's one-bit audio output (with the addition of much distortion) if the card was not present. The Atari version made use of the embedded POKEY audio chip. Speech playback on the Atari normally disabled interrupt requests and shut down the ANTIC chip during vocal output; the audible output is extremely distorted speech when the screen is on. The Commodore 64 version made use of the 64's embedded SID audio chip.


Atari

Arguably, the first speech system integrated into an operating system was that of the 1400XL/1450XL personal computers designed by Atari, Inc. using the Votrax SC01 chip in 1983. The 1400XL/1450XL computers used a finite state machine to enable World English Spelling text-to-speech synthesis. Unfortunately, the 1400XL/1450XL personal computers never shipped in quantity. The Atari ST computers were sold with "stspeech.tos" on floppy disk.


Apple

The first speech system integrated into an operating system that shipped in quantity was Apple Computer's MacInTalk. The software was licensed from third-party developers Joseph Katz and Mark Barton (later, SoftVoice, Inc.) and was featured during the 1984 introduction of the Macintosh computer. This January demo required 512 kilobytes of RAM and so could not run in the 128 kilobytes the first Mac actually shipped with; the demo was instead accomplished with a prototype 512k Mac, although those in attendance were not told of this, and the synthesis demo created considerable excitement for the Macintosh. In the early 1990s Apple expanded its capabilities, offering system-wide text-to-speech support. With the introduction of faster PowerPC-based computers it included higher-quality voice sampling. Apple also introduced speech recognition into its systems, which provided a fluid command set. More recently, Apple has added sample-based voices. Starting as a curiosity, the speech system of the Apple Macintosh has evolved into a fully supported program, PlainTalk, for people with vision problems. VoiceOver was featured for the first time in 2005 in Mac OS X Tiger (10.4). During 10.4 (Tiger) and the first releases of 10.5 (Leopard) there was only one standard voice shipping with Mac OS X. Starting with 10.6 (Snow Leopard), the user can choose from a wide range of voices. VoiceOver voices take realistic-sounding breaths between sentences and offer improved clarity at high read rates over PlainTalk. Mac OS X also includes say, a command-line application that converts text to audible speech. The AppleScript Standard Additions include a say verb that allows a script to use any of the installed voices and to control the pitch, speaking rate and modulation of the spoken text.
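As an illustration, the say utility can be driven from any scripting language; a minimal Python sketch follows. The voice name and rate shown are examples only, and the set of installed voices varies by system.

```python
# Minimal sketch: driving the macOS `say` utility from Python.
import subprocess

subprocess.run([
    "say",
    "-v", "Alex",     # choose one of the installed voices
    "-r", "180",      # speaking rate in words per minute
    "The quick brown fox jumps over the lazy dog.",
], check=True)
```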


Amazon

Amazon's text-to-speech is used in Alexa and offered as Software as a Service in AWS (from 2017).
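The AWS service (Amazon Polly) exposes a simple synthesis API. The following is a hedged Python sketch using boto3, assuming AWS credentials are already configured in the environment; the voice name is one of the service's stock examples.

```python
# Sketch: requesting synthesized speech from Amazon Polly via boto3.
import boto3

polly = boto3.client("polly")
response = polly.synthesize_speech(
    Text="Hello from the cloud.",
    OutputFormat="mp3",
    VoiceId="Joanna",      # one of the service's stock voices
)
# The audio arrives as a stream; write it out to a playable file.
with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```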


AmigaOS

The second operating system to feature advanced speech synthesis capabilities was AmigaOS, introduced in 1985. The voice synthesis was licensed by Commodore International from SoftVoice, Inc., which also developed the original MacInTalk text-to-speech system. It featured a complete system of voice emulation for American English, with both male and female voices and "stress" indicator markers, made possible through the Amiga's audio chipset. The synthesis system was divided into a translator library, which converted unrestricted English text into a standard set of phonetic codes, and a narrator device, which implemented a formant model of speech generation. AmigaOS also featured a high-level "Speak Handler", which allowed command-line users to redirect text output to speech. Speech synthesis was occasionally used in third-party programs, particularly word processors and educational software. The synthesis software remained largely unchanged from the first AmigaOS release, and Commodore eventually removed speech synthesis support from AmigaOS 2.1 onward. Despite the American English phoneme limitation, an unofficial version with multilingual speech synthesis was developed, making use of an enhanced version of the translator library that could translate a number of languages, given a set of rules for each language.


Microsoft Windows

Modern Windows desktop systems can use SAPI 4 and SAPI 5 components to support speech synthesis and speech recognition. SAPI 4.0 was available as an optional add-on for Windows 95 and Windows 98. Windows 2000 added Narrator, a text-to-speech utility for people who have visual impairment. Third-party programs such as JAWS for Windows, Window-Eyes, Non-visual Desktop Access, Supernova and System Access can perform various text-to-speech tasks such as reading text aloud from a specified website, email account, text document, the Windows clipboard, the user's keyboard typing, and so on. Not all programs can use speech synthesis directly; some can use plug-ins, extensions or add-ons to read text aloud, and third-party programs are available that can read text from the system clipboard. Microsoft Speech Server is a server-based package for voice synthesis and recognition, designed for network use with web applications and call centers.
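For illustration, SAPI 5 voices can be reached through COM automation. The following minimal Python sketch uses the pywin32 package and assumes a Windows system with at least one SAPI 5 voice installed.

```python
# Minimal sketch: speaking through the SAPI 5 COM interface from Python.
import win32com.client

voice = win32com.client.Dispatch("SAPI.SpVoice")
voice.Rate = 0   # speaking rate, from -10 (slowest) to 10 (fastest)
voice.Speak("Text-to-speech through SAPI five.")
```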


Votrax

From 1971 to 1996, Votrax produced a number of commercial speech synthesizer components. A Votrax synthesizer was included in the first-generation Kurzweil Reading Machine for the Blind.


Text-to-speech systems

Text-to-speech (TTS) refers to the ability of computers to read text aloud. A TTS engine converts written text to a phonemic representation, then converts the phonemic representation to waveforms that can be output as sound. TTS engines with different languages, dialects and specialized vocabularies are available through third-party publishers.
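As an example of such a third-party engine wrapper, the Python package pyttsx3 exposes this pipeline behind a single API, delegating to SAPI 5 on Windows, NSSpeechSynthesizer on macOS and eSpeak on Linux. This is a minimal sketch, not an endorsement of any particular engine.

```python
# Sketch: text in, waveform out, via a cross-platform engine wrapper.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)   # words per minute
engine.say("Text in, waveform out.")
engine.runAndWait()               # block until the utterance finishes
```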


Android

Version 1.6 of Android added support for speech synthesis (TTS).


Internet

Currently, there are a number of applications, plugins and gadgets that can read messages directly from an e-mail client and web pages from a web browser or Google Toolbar. Some specialized software can narrate RSS feeds. On one hand, online RSS narrators simplify information delivery by allowing users to listen to their favourite news sources and convert them to podcasts. On the other hand, online RSS readers are available on almost any personal computer connected to the Internet. Users can download the generated audio files to portable devices, e.g. with the help of a podcast receiver, and listen to them while walking, jogging or commuting to work. A growing field in Internet-based TTS is web-based assistive technology, e.g. 'Browsealoud' from a UK company and Readspeaker. It can deliver TTS functionality to anyone (for reasons of accessibility, convenience, entertainment or information) with access to a web browser. The non-profit project Pediaphon was created in 2006 to provide a similar web-based TTS interface to Wikipedia. Other work is being done in the context of the W3C through the W3C Audio Incubator Group with the involvement of the BBC and Google Inc.


Open source

Some open-source software systems are available, such as:
* RHVoice, with support for multiple languages.
* Festival Speech Synthesis System, which uses diphone-based synthesis as well as more modern and better-sounding techniques.
* eSpeak, which supports a broad range of languages (see the command-line sketch below).
* gnuspeech, from the Free Software Foundation, which uses articulatory synthesis.
* MaryTTS, web-based and open source.
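As a concrete example, eSpeak is usually driven from the command line; the following Python sketch shells out to it, assuming the espeak binary is installed and on the PATH.

```python
# Sketch: invoking the open-source eSpeak synthesizer from Python.
import subprocess

subprocess.run([
    "espeak",
    "-v", "en",        # language/voice
    "-s", "150",       # speed in words per minute
    "-w", "out.wav",   # write a WAV file instead of playing audio
    "Hello from eSpeak.",
], check=True)
```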


Others

* Following the commercial failure of the hardware-based Intellivoice, gaming developers used software synthesis sparingly in later games. Earlier systems from Atari, such as the Atari 5200 (Baseball) and the Atari 2600 (Quadrun and Open Sesame), also had games utilizing software synthesis.
* Some e-book readers, such as the Amazon Kindle, Samsung E6, PocketBook eReader Pro, enTourage eDGe, and the Bebook Neo.
* The BBC Micro incorporated the Texas Instruments TMS5220 speech synthesis chip.
* Some models of Texas Instruments home computers produced in 1979 and 1981 (TI-99/4 and TI-99/4A) were capable of text-to-phoneme synthesis or reciting complete words and phrases (text-to-dictionary), using a very popular Speech Synthesizer peripheral. TI used a proprietary codec to embed complete spoken phrases into applications, primarily video games.
* IBM's OS/2 Warp 4 included VoiceType, a precursor to IBM ViaVoice.
* GPS navigation units produced by Garmin, Magellan, TomTom and others use speech synthesis for automobile navigation.
* Yamaha produced a music synthesizer in 1999, the Yamaha FS1R, which included a formant synthesis capability. Sequences of up to 512 individual vowel and consonant formants could be stored and replayed, allowing short vocal phrases to be synthesized.


Digital sound-alikes

At the 2018 Conference on Neural Information Processing Systems (NeurIPS), researchers from Google presented the work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis', which transfers learning from speaker verification to achieve text-to-speech synthesis that can be made to sound almost like anybody from a speech sample of only 5 seconds. Researchers from Baidu Research also presented a voice cloning system with similar aims at the 2018 NeurIPS conference, though the result is rather unconvincing. By 2019 digital sound-alikes had found their way into the hands of criminals, as Symantec researchers knew of three cases where digital sound-alike technology had been used for crime. This adds to the strain on the disinformation situation, coupled with the facts that:
* Human image synthesis since the early 2000s has improved to the point that humans cannot reliably tell a real human imaged with a real camera from a simulation of a human imaged with a simulation of a camera.
* 2D video forgery techniques were presented in 2016 that allow near real-time counterfeiting of facial expressions in existing 2D video.
* At SIGGRAPH 2017, an audio-driven digital look-alike of the upper torso of Barack Obama was presented by researchers from the University of Washington. It was driven only by a voice track as source data for the animation, after a training phase to acquire lip sync and wider facial information from training material consisting of 2D videos with audio had been completed.
In March 2020, a freeware web application called 15.ai that generates high-quality voices from an assortment of fictional characters from a variety of media sources was released. Initial characters included GLaDOS from ''Portal'', Twilight Sparkle and Fluttershy from the show ''My Little Pony: Friendship Is Magic'', and the Tenth Doctor from ''Doctor Who''.


Speech synthesis markup languages

A number of markup languages have been established for the rendition of text as speech in an XML-compliant format. The most recent is Speech Synthesis Markup Language (SSML), which became a W3C recommendation in 2004. Older speech synthesis markup languages include Java Speech Markup Language (JSML) and SABLE. Although each of these was proposed as a standard, none of them have been widely adopted. Speech synthesis markup languages are distinguished from dialogue markup languages. VoiceXML, for example, includes tags related to speech recognition, dialogue management and touchtone dialing, in addition to text-to-speech markup.
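To give a flavour of the format, here is a short illustrative SSML fragment; the elements shown (prosody, break, emphasis) are part of the W3C recommendation, though the degree of support varies between synthesizers.

```xml
<?xml version="1.0"?>
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  Plain text is spoken normally,
  <break time="500ms"/>
  <prosody rate="slow" pitch="low">this clause is slow and low-pitched,</prosody>
  and this word is <emphasis level="strong">emphasized</emphasis>.
</speak>
```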


Applications

Speech synthesis has long been a vital assistive technology tool, and its application in this area is significant and widespread. It allows environmental barriers to be removed for people with a wide range of disabilities. The longest-standing application has been in screen readers for people with visual impairment, but text-to-speech systems are now commonly used by people with dyslexia and other reading disabilities as well as by pre-literate children. They are also frequently employed to aid those with severe speech impairment, usually through a dedicated voice output communication aid. Work to personalize a synthetic voice to better match a person's personality or historical voice is becoming available. A noted application of speech synthesis was the Kurzweil Reading Machine for the Blind, which incorporated text-to-phonetics software based on work from Haskins Laboratories and a black-box synthesizer built by Votrax. Speech synthesis techniques are also used in entertainment productions such as games and animations. In 2007, Animo Limited announced the development of a software application package based on its speech synthesis software FineSpeech, explicitly geared towards customers in the entertainment industries, able to generate narration and lines of dialogue according to user specifications. The application reached maturity in 2008, when NEC Biglobe announced a web service that allows users to create phrases from the voices of characters from the Japanese anime series ''Code Geass: Lelouch of the Rebellion R2''. In recent years, text-to-speech for disability and impaired communication aids has become widely available. Text-to-speech is also finding new applications; for example, speech synthesis combined with speech recognition allows for interaction with mobile devices via natural language processing interfaces. Text-to-speech is also used in second language acquisition. Voki, for instance, is an educational tool created by Oddcast that allows users to create talking avatars with different accents, which can be emailed, embedded on websites or shared on social media. Another area of application is AI video creation with talking heads: tools like Elai.io allow users to create video content with AI avatars that speak using text-to-speech technology. In addition, speech synthesis is a valuable computational aid for the analysis and assessment of speech disorders. A voice quality synthesizer developed by Jorge C. Lucero et al. at the University of Brasília simulates the physics of phonation and includes models of vocal frequency jitter and tremor, airflow noise and laryngeal asymmetries. The synthesizer has been used to mimic the timbre of dysphonic speakers with controlled levels of roughness, breathiness and strain.



External links

* Simulated singing with the singing robot Pavarobotti, or a description from the BBC on how the robot synthesized the singing.