Speech synthesis is the artificial production of human speech. A
computer system used for this purpose is called a speech computer or
speech synthesizer, and can be implemented in software or hardware
products. A text-to-speech (TTS) system converts normal language text
into speech; other systems render symbolic linguistic representations
like phonetic transcriptions into speech.
Synthesized speech can be created by concatenating pieces of recorded
speech that are stored in a database. Systems differ in the size of
the stored speech units; a system that stores phones or diphones
provides the largest output range, but may lack clarity. For specific
usage domains, the storage of entire words or sentences allows for
high-quality output. Alternatively, a synthesizer can incorporate a
model of the vocal tract and other human voice characteristics to
create a completely "synthetic" voice output.
The quality of a speech synthesizer is judged by its similarity to the
human voice and by its ability to be understood clearly. An
intelligible text-to-speech program allows people with visual
impairments or reading disabilities to listen to written words on a
home computer. Many computer operating systems have included speech
synthesizers since the early 1990s.
[Diagram: overview of a typical TTS system]
[Audio sample: a synthetic voice announcing an arriving train in Sweden]
[Audio sample: Microsoft Windows XP's default speech synthesizer voice, Microsoft Sam, saying "The quick brown fox jumps over the lazy dog 1,234,567,890 times", followed by a demonstration of a glitch that occurs when the words "SOI/SOY" are entered]
A text-to-speech system (or "engine") is composed of two parts: a
front-end and a back-end. The front-end has two major tasks. First, it
converts raw text containing symbols like numbers and abbreviations
into the equivalent of written-out words. This process is often called
text normalization, pre-processing, or tokenization. The front-end
then assigns phonetic transcriptions to each word, and divides and
marks the text into prosodic units, like phrases, clauses, and
sentences. The process of assigning phonetic transcriptions to words
is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic
transcriptions and prosody information together make up the symbolic
linguistic representation that is output by the front-end. The
back-end—often referred to as the synthesizer—then converts the
symbolic linguistic representation into sound. In certain systems,
this part includes the computation of the target prosody (pitch
contour, phoneme durations), which is then imposed on the output speech.
History
Long before the invention of electronic signal processing, some people
tried to build machines to emulate human speech. Some early legends of
the existence of "Brazen Heads" involved Pope Silvester II (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294).
In 1779 the German-Danish scientist Christian Gottlieb Kratzenstein
won the first prize in a competition announced by the Russian Imperial
Academy of Sciences and Arts for models he built of the human vocal
tract that could produce the five long vowel sounds (in International
Phonetic Alphabet notation: [aː], [eː], [iː], [oː] and [uː]).
There followed the bellows-operated "acoustic-mechanical speech machine" of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips,
enabling it to produce consonants as well as vowels. In 1837, Charles
Wheatstone produced a "speaking machine" based on von Kempelen's
design, and in 1846, Joseph Faber exhibited the "Euphonia". In 1923
Paget resurrected Wheatstone's design.
In the 1930s
Bell Labs developed the vocoder, which automatically
analyzed speech into its fundamental tones and resonances. From his
work on the vocoder,
Homer Dudley developed a keyboard-operated voice synthesizer called The Voder (Voice Demonstrator), which he
exhibited at the 1939 New York World's Fair.
Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern playback in the late 1940s and completed it in 1950.
There were several different versions of this hardware device; only
one currently survives. The machine converts pictures of the acoustic
patterns of speech in the form of a spectrogram back into sound. Using this device, Alvin Liberman and colleagues discovered acoustic cues
for the perception of phonetic segments (consonants and vowels).
In 1975 MUSA was released, and was one of the first speech synthesis systems. It consisted of stand-alone computer hardware and specialized software that enabled it to read Italian. A second version, released in 1978, was also able to sing Italian in an "a cappella" style.
Dominant systems in the 1980s and 1990s were the
DECtalk system, based
largely on the work of Dennis Klatt at MIT, and the Bell Labs
system; the latter was one of the first multilingual language-independent systems, making extensive use of natural language processing methods.
Early electronic speech-synthesizers sounded robotic and were often
barely intelligible. The quality of synthesized speech has steadily
improved, but as of 2016 output from contemporary speech synthesis systems remains clearly distinguishable from actual human speech.
Kurzweil predicted in 2005 that as the cost-performance ratio caused
speech synthesizers to become cheaper and more accessible, more people
would benefit from the use of text-to-speech programs.
[Image: computer and speech synthesiser housing used by Stephen Hawking]
The first computer-based speech-synthesis systems originated in the
late 1950s. Noriko Umeda et al. developed the first general English
text-to-speech system in 1968 at the Electrotechnical Laboratory,
Japan. In 1961 physicist
John Larry Kelly, Jr. and his colleague Louis Gerstman used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer (vocoder) recreated the
song "Daisy Bell", with musical accompaniment from Max Mathews.
Arthur C. Clarke was visiting his friend and colleague
John Pierce at the
Bell Labs Murray Hill facility. Clarke was so
impressed by the demonstration that he used it in the climactic scene
of his screenplay for his novel 2001: A Space Odyssey, where the
HAL 9000 computer sings the same song as astronaut Dave Bowman puts it
to sleep. Despite the success of purely electronic speech
synthesis, research into mechanical speech-synthesizers continues.
Electronic devices
Handheld electronics featuring speech synthesis began emerging in the
1970s. One of the first was the
Telesensory Systems Inc. (TSI) Speech+
portable calculator for the blind in 1976. Other devices had
primarily educational purposes, such as the Speak & Spell toy produced by Texas Instruments in 1978. Fidelity released a
speaking version of its electronic chess computer in 1979. The
first video game to feature speech synthesis was the 1980 shoot 'em up
Stratovox (known in Japan as Speak & Rescue), from
Sun Electronics. The first personal computer game with speech
synthesis was Manbiki Shoujo (Shoplifting Girl), released in 1980 for
the PET 2001, for which the game's developer, Hiroshi Suzuki,
developed a "zero cross" programming technique to produce a
synthesized speech waveform. Another early example, the arcade
version of Berzerk, also dates from 1980. The Milton Bradley Company
produced the first multi-player electronic game using voice synthesis,
Milton, in the same year.
Synthesizer technologies
The most important qualities of a speech synthesis system are
naturalness and intelligibility. Naturalness describes how closely
the output sounds like human speech, while intelligibility is the ease
with which the output is understood. The ideal speech synthesizer is
both natural and intelligible.
Speech synthesis systems usually try to
maximize both characteristics.
The two primary technologies generating synthetic speech waveforms are
concatenative synthesis and formant synthesis. Each technology has
strengths and weaknesses, and the intended uses of a synthesis system
will typically determine which approach is used.
Concatenative synthesis
Main article: Concatenative synthesis
Concatenative synthesis is based on the concatenation (or stringing
together) of segments of recorded speech. Generally, concatenative
synthesis produces the most natural-sounding synthesized speech.
However, differences between natural variations in speech and the
nature of the automated techniques for segmenting the waveforms
sometimes result in audible glitches in the output. There are three
main sub-types of concatenative synthesis.
Unit selection synthesis
Unit selection synthesis uses large databases of recorded speech.
During database creation, each recorded utterance is segmented into
some or all of the following: individual phones, diphones,
half-phones, syllables, morphemes, words, phrases, and sentences.
Typically, the division into segments is done using a specially
modified speech recognizer set to a "forced alignment" mode with some
manual correction afterward, using visual representations such as the
waveform and spectrogram. An index of the units in the speech
database is then created based on the segmentation and acoustic
parameters like the fundamental frequency (pitch), duration, position
in the syllable, and neighboring phones. At run time, the desired
target utterance is created by determining the best chain of candidate
units from the database (unit selection). This process is typically
achieved using a specially weighted decision tree.
Unit selection provides the greatest naturalness, because it applies
only a small amount of digital signal processing (DSP) to the recorded
speech. DSP often makes recorded speech sound less natural, although
some systems use a small amount of signal processing at the point of
concatenation to smooth the waveform. The output from the best
unit-selection systems is often indistinguishable from real human
voices, especially in contexts for which the TTS system has been
tuned. However, maximum naturalness typically requires unit-selection
speech databases to be very large, in some systems ranging into the
gigabytes of recorded data, representing dozens of hours of
speech. Also, unit selection algorithms have been known to select
segments from a place that results in less than ideal synthesis (e.g.
minor words become unclear) even when a better choice exists in the
database. Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems.
Diphone synthesis
Diphone synthesis uses a minimal speech database containing all the
diphones (sound-to-sound transitions) occurring in a language. The
number of diphones depends on the phonotactics of the language: for
example, Spanish has about 800 diphones, and German about 2500. In
diphone synthesis, only one example of each diphone is contained in
the speech database. At runtime, the target prosody of a sentence is
superimposed on these minimal units by means of digital signal
processing techniques such as linear predictive coding, PSOLA or MBROLA, or more recent techniques such as pitch modification in the source domain using discrete cosine transform. Diphone
synthesis suffers from the sonic glitches of concatenative synthesis
and the robotic-sounding nature of formant synthesis, and has few of
the advantages of either approach other than small size. As such, its
use in commercial applications is declining, although
it continues to be used in research because there are a number of
freely available software implementations.
Domain-specific synthesis
Domain-specific synthesis concatenates prerecorded words and phrases
to create complete utterances. It is used in applications where the
variety of texts the system will output is limited to a particular
domain, like transit schedule announcements or weather reports.
The technology is very simple to implement, and has been in commercial
use for a long time, in devices like talking clocks and calculators.
The level of naturalness of these systems can be very high because the
variety of sentence types is limited, and they closely match the
prosody and intonation of the original recordings.
Because these systems are limited by the words and phrases in their
databases, they are not general-purpose and can only synthesize the
combinations of words and phrases with which they have been
preprogrammed. The blending of words within naturally spoken language
however can still cause problems unless the many variations are taken
into account. For example, in non-rhotic dialects of English the "r"
in words like "clear" /ˈklɪə/ is usually only pronounced when the
following word has a vowel as its first letter (e.g. "clear out" is
realized as /ˌklɪəɾˈʌʊt/). Likewise in French, many final consonants are no longer silent if followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be
reproduced by a simple word-concatenation system, which would require
additional complexity to be context-sensitive.
Formant synthesis
Formant synthesis does not use human speech samples at runtime.
Instead, the synthesized speech output is created using additive
synthesis and an acoustic model (physical modelling synthesis).
Parameters such as fundamental frequency, voicing, and noise levels
are varied over time to create a waveform of artificial speech. This
method is sometimes called rules-based synthesis; however, many
concatenative systems also have rules-based components. Many systems
based on formant synthesis technology generate artificial,
robotic-sounding speech that would never be mistaken for human speech.
However, maximum naturalness is not always the goal of a speech
synthesis system, and formant synthesis systems have advantages over
concatenative systems. Formant-synthesized speech can be reliably
intelligible, even at very high speeds, avoiding the acoustic glitches
that commonly plague concatenative systems. High-speed synthesized
speech is used by the visually impaired to quickly navigate computers
using a screen reader.
Formant synthesizers are usually smaller
programs than concatenative systems because they do not have a
database of speech samples. They can therefore be used in embedded
systems, where memory and microprocessor power are especially limited.
Because formant-based systems have complete control of all aspects of
the output speech, a wide variety of prosodies and intonations can be
output, conveying not just questions and statements, but a variety of
emotions and tones of voice.
Examples of non-real-time but highly accurate intonation control in
formant synthesis include the work done in the late 1970s for the
Texas Instruments toy Speak & Spell, and in the early 1980s Sega
arcade machines and in many
Atari, Inc. arcade games using the
TMS5220 LPC Chips. Creating proper intonation for these projects was
painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.
Formant synthesis was implemented in hardware in the Yamaha FS1R synthesizer, but the speech aspect of formants was never realized in the synth. It was capable of short, several-second formant sequences which could speak a single phrase, but since the MIDI control interface was so restrictive, live speech was an impossibility.
Articulatory synthesis
Articulatory synthesis refers to computational techniques for
synthesizing speech based on models of the human vocal tract and the
articulation processes occurring there. The first articulatory
synthesizer regularly used for laboratory experiments was developed at
Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and
Paul Mermelstein. This synthesizer, known as ASY, was based on vocal
tract models developed at
Bell Laboratories in the 1960s and 1970s by
Paul Mermelstein, Cecil Coker, and colleagues.
Until recently, articulatory synthesis models have not been
incorporated into commercial speech synthesis systems. A notable
exception is the NeXT-based system originally developed and marketed
by Trillium Sound Research, a spin-off company of the University of
Calgary, where much of the original research was conducted. Following
the demise of the various incarnations of
NeXT (started by Steve Jobs
in the late 1980s and merged with
Apple Computer in 1997), the
Trillium software was published under the GNU General Public License,
with work continuing as gnuspeech. The system, first marketed in 1994,
provides full articulatory-based text-to-speech conversion using a
waveguide or transmission-line analog of the human oral and nasal
tracts controlled by Carré's "distinctive region model".
More recent synthesizers, developed by Jorge C. Lucero and colleagues,
incorporate models of vocal fold biomechanics, glottal aerodynamics
and acoustic wave propagation in the bronchi, trachea, and nasal and oral cavities, and thus constitute full systems of physics-based speech simulation.
HMM-based synthesis
HMM-based synthesis is a synthesis method based on hidden Markov models, also called Statistical Parametric Synthesis. In this system, the frequency spectrum (vocal tract), fundamental frequency (voice source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from HMMs themselves based on the maximum likelihood criterion.
Sinewave synthesis
Sinewave synthesis is a technique for synthesizing speech by replacing
the formants (main bands of energy) with pure tone whistles.
Challenges
Text normalization challenges
The process of normalizing text is rarely straightforward. Texts are
full of heteronyms, numbers, and abbreviations that all require
expansion into a phonetic representation. There are many spellings in
English which are pronounced differently based on context. For
example, "My latest project is to learn how to better project my
voice" contains two pronunciations of "project".
Most text-to-speech (TTS) systems do not generate semantic
representations of their input texts, as processes for doing so are
unreliable, poorly understood, and computationally ineffective. As a
result, various heuristic techniques are used to guess the proper way
to disambiguate homographs, like examining neighboring words and using
statistics about frequency of occurrence.
Recently TTS systems have begun to use HMMs (discussed above) to
generate "parts of speech" to aid in disambiguating homographs. This
technique is quite successful for many cases such as whether "read"
should be pronounced as "red" implying past tense, or as "reed"
implying present tense. Typical error rates when using HMMs in this
fashion are usually below five percent. These techniques also work
well for most European languages, although access to required training
corpora is frequently difficult in these languages.
Deciding how to convert numbers is another problem that TTS systems
have to address. It is a simple programming challenge to convert a
number into words (at least in English), like "1325" becoming "one
thousand three hundred twenty-five." However, numbers occur in many
different contexts; "1325" may also be read as "one three two five",
"thirteen twenty-five" or "thirteen hundred and twenty five". A TTS
system can often infer how to expand a number based on surrounding
words, numbers, and punctuation, and sometimes the system provides a
way to specify the context if it is ambiguous. Roman numerals can
also be read differently depending on context. For example, "Henry VIII" reads as "Henry the Eighth", while "Chapter VIII" reads as "Chapter Eight".
Similarly, abbreviations can be ambiguous. For example, the
abbreviation "in" for "inches" must be differentiated from the word
"in", and the address "12 St John St." uses the same abbreviation for
both "Saint" and "Street". TTS systems with intelligent front ends can
make educated guesses about ambiguous abbreviations, while others
provide the same result in all cases, resulting in nonsensical (and sometimes comical) outputs, such as "co-operation" being rendered as "company operation".
Text-to-phoneme challenges
Speech synthesis systems use two basic approaches to determine the
pronunciation of a word based on its spelling, a process which is
often called text-to-phoneme or grapheme-to-phoneme conversion
(phoneme is the term used by linguists to describe distinctive sounds
in a language). The simplest approach to text-to-phoneme conversion is
the dictionary-based approach, where a large dictionary containing all
the words of a language and their correct pronunciations is stored by
the program. Determining the correct pronunciation of each word is a
matter of looking up each word in the dictionary and replacing the
spelling with the pronunciation specified in the dictionary. The other
approach is rule-based, in which pronunciation rules are applied to
words to determine their pronunciations based on their spellings. This is similar to the "sounding out", or synthetic phonics, approach to learning reading.
Each approach has advantages and drawbacks. The dictionary-based
approach is quick and accurate, but completely fails if it is given a
word which is not in its dictionary. As dictionary size grows, so too do the memory space requirements of the synthesis system. On the
other hand, the rule-based approach works on any input, but the
complexity of the rules grows substantially as the system takes into
account irregular spellings or pronunciations. (Consider that the word
"of" is very common in English, yet is the only word in which the
letter "f" is pronounced [v].) As a result, nearly all speech
synthesis systems use a combination of these approaches.
Languages with a phonemic orthography have a very regular writing
system, and the prediction of the pronunciation of words based on
their spellings is quite successful.
Speech synthesis systems for such
languages often use the rule-based method extensively, resorting to
dictionaries only for those few words, like foreign names and
borrowings, whose pronunciations are not obvious from their spellings.
On the other hand, speech synthesis systems for languages like
English, which have extremely irregular spelling systems, are more
likely to rely on dictionaries, and to use rule-based methods only for
unusual words, or words that aren't in their dictionaries.
Evaluation challenges
The consistent evaluation of speech synthesis systems may be difficult
because of a lack of universally agreed objective evaluation criteria.
Different organizations often use different speech data. The quality
of speech synthesis systems also depends on the quality of the
production technique (which may involve analogue or digital recording)
and on the facilities used to replay the speech. Evaluating speech
synthesis systems has therefore often been compromised by differences
between production techniques and replay facilities.
Since 2005, however, some researchers have started to evaluate speech
synthesis systems using a common speech dataset.
Prosodics and emotional content
See also: Prosody (linguistics)
A study in the journal
Speech Communication by Amy Drahota and
colleagues at the University of Portsmouth, UK, reported that
listeners to voice recordings could determine, at better than chance
levels, whether or not the speaker was smiling. It was
suggested that identification of the vocal features that signal
emotional content may be used to help make synthesized speech sound
more natural. One of the related issues is modification of the pitch
contour of the sentence, depending upon whether it is an affirmative,
interrogative or exclamatory sentence. One of the techniques for pitch
modification uses discrete cosine transform in the source domain
(linear prediction residual). Such pitch synchronous pitch
modification techniques need a priori pitch marking of the synthesis
speech database using techniques such as epoch extraction using
dynamic plosion index applied on the integrated linear prediction
residual of the voiced regions of speech.
Dedicated hardware
Early technology (no longer available):
Votrax SC-01A (analog formant)
Votrax SC-02 / SSI-263 / "Artic 263"
General Instrument SP0256-AL2 (CTS256A-AL2)
National Semiconductor DT1050 Digitalker (Mozer – Forrest Mozer)
Silicon Systems SSI 263 (analog formant)
Texas Instruments LPC speech chips
Texas Instruments MSP50C6XX ("modern, human-sounding text to speech on a chip"; sold to Sensory, Inc. in 2001)
Hitachi HD38880BP (used in SNK's 1981 arcade game Vanguard)
Current (as of 2013):
Magnevation SpeakJet (www.speechchips.com) TTS256, for hobby and experimenter use
Epson S1V30120F01A100 (www.epson.com) IC, DECtalk-based voice, robotic
Textspeak TTS-EM (www.textspeak.com) ICs, modules and industrial enclosures in 24 languages, human-sounding
Hardware and software systems
Popular systems offering speech synthesis as a built-in capability.
The Mattel Intellivision game console offered the Intellivoice Voice Synthesis module in 1982. It included the SP0256 Narrator speech
synthesizer chip on a removable cartridge. The Narrator had 2kB of
Read-Only Memory (ROM), and this was utilized to store a database of
generic words that could be combined to make phrases in Intellivision
games. Since the Orator chip could also accept speech data from
external memory, any additional words or phrases needed could be
stored inside the cartridge itself. The data consisted of strings of
analog-filter coefficients to modify the behavior of the chip's
synthetic vocal-tract model, rather than simple digitized samples.
Also released in 1982,
Software Automatic Mouth was the first
commercial all-software voice synthesis program. It was later used as
the basis for MacinTalk. The program was available for non-Macintosh
Apple computers (including the Apple II, and the Lisa), various Atari
models and the Commodore 64. The Apple version preferred additional
hardware that contained DACs, although it could instead use the
computer's one-bit audio output (with the addition of much distortion)
if the card was not present. The Atari made use of the embedded POKEY audio chip. Speech playback on the Atari normally disabled interrupt
requests and shut down the ANTIC chip during vocal output. The audible
output is extremely distorted speech when the screen is on. The
Commodore 64 made use of the 64's embedded SID audio chip.
Arguably, the first speech system integrated into an operating system
was the 1400XL/1450XL personal computers designed by
Atari, Inc. using
the Votrax SC01 chip in 1983. The 1400XL/1450XL computers used a
Finite State Machine to enable World English Spelling text-to-speech synthesis. Unfortunately, the 1400XL/1450XL personal computers
never shipped in quantity.
Atari ST computers were sold with "stspeech.tos" on floppy disk.
The first speech system integrated into an operating system that
shipped in quantity was Apple Computer's MacInTalk. The software was
licensed from 3rd party developers Joseph Katz and Mark Barton (later,
SoftVoice, Inc.) and was featured during the 1984 introduction of the
Macintosh computer. This January demo required 512 kilobytes of RAM
memory. As a result, it could not run in the 128 kilobytes of RAM the
first Mac actually shipped with. So, the demo was accomplished
with a prototype 512k Mac, although those in attendance were not told
of this and the synthesis demo created considerable excitement for the
Macintosh. In the early 1990s Apple expanded its capabilities offering
system wide text-to-speech support. With the introduction of faster
PowerPC-based computers they included higher quality voice sampling.
Apple also introduced speech recognition into its systems which
provided a fluid command set. More recently, Apple has added
sample-based voices. Starting as a curiosity, the speech system of
Macintosh has evolved into a fully supported program, PlainTalk,
for people with vision problems.
VoiceOver was featured for the first time in 2005 in Mac OS X Tiger (10.4). During 10.4 (Tiger) and the first
releases of 10.5 (Leopard) there was only one standard voice shipping
with Mac OS X. Starting with 10.6 (Snow Leopard), the user can choose from a wide range of voices.
VoiceOver voices feature
the taking of realistic-sounding breaths between sentences, as well as
improved clarity at high read rates over PlainTalk. Mac OS X also
includes say, a command-line based application that converts text to
audible speech. The
AppleScript Standard Additions includes a say verb
that allows a script to use any of the installed voices and to control
the pitch, speaking rate and modulation of the spoken text.
The Apple iOS operating system used on the iPhone, iPad and iPod Touch uses VoiceOver speech synthesis for accessibility. Some third-party applications also provide speech synthesis to facilitate
navigating, reading web pages or translating text.
The second operating system to feature advanced speech synthesis
capabilities was AmigaOS, introduced in 1985. The voice synthesis was licensed by Commodore International from SoftVoice, Inc., who also developed the original MacinTalk text-to-speech system. It featured a
complete system of voice emulation for American English, with both
male and female voices and "stress" indicator markers, made possible
through the Amiga's audio chipset. The synthesis system was
divided into a translator library which converted unrestricted English
text into a standard set of phonetic codes and a narrator device which
implemented a formant model of speech generation. AmigaOS also featured a high-level "Speak Handler", which allowed command-line users to redirect text output to speech.
Speech synthesis was
occasionally used in third-party programs, particularly word
processors and educational software. The synthesis software remained
largely unchanged from the first
AmigaOS release and Commodore
eventually removed speech synthesis support from
AmigaOS 2.1 onward.
Despite the American English phoneme limitation, an unofficial version
with multilingual speech synthesis was developed. This made use of an
enhanced version of the translator library which could translate a
number of languages, given a set of rules for each language.
Microsoft Windows
See also: Microsoft Agent
Modern Windows desktop systems can use SAPI 4 and SAPI 5 components to support speech synthesis and speech recognition. SAPI 4.0 was available as an optional add-on for Windows 95 and Windows 98. Windows 2000 added Narrator, a text-to-speech utility for people who have
visual impairment. Third-party programs such as JAWS for Windows,
Window-Eyes, Non-visual Desktop Access, Supernova and System Access
can perform various text-to-speech tasks such as reading text aloud
from a specified website, email account, text document, the Windows
clipboard, the user's keyboard typing, etc. Not all programs can use
speech synthesis directly. Some programs can use plug-ins,
extensions or add-ons to read text aloud. Third-party programs are
available that can read text from the system clipboard.
Microsoft Speech Server is a server-based package for voice synthesis
and recognition. It is designed for network use with web applications
and call centers.
Texas Instruments TI-99/4A
In the early 1980s, TI was known as a pioneer in speech synthesis, and
a highly popular plug-in speech synthesizer module was available for
the TI-99/4 and 4A.
Speech synthesizers were offered free with the
purchase of a number of cartridges and were used by many TI-written
video games (notable titles offered with speech during this promotion
were Alpiner and Parsec). The synthesizer uses a variant of linear
predictive coding and has a small in-built vocabulary. The original
intent was to release small cartridges that plugged directly into the synthesizer unit, which would increase the device's built-in
vocabulary. However, the success of software text-to-speech in the
Terminal Emulator II cartridge cancelled that plan.
Text-to-speech systems
Text-to-speech (TTS) refers to the ability of computers to read text aloud. A TTS engine converts written text to a phonemic representation, then converts the phonemic representation to waveforms that can be output as sound. TTS engines with different languages, dialects and specialized vocabularies are available through third-party publishers.
Version 1.6 of Android added support for speech synthesis (TTS).
Currently, there are a number of applications, plugins and gadgets
that can read messages directly from an e-mail client and web pages
from a web browser or
Google Toolbar, such as Text to Voice, which is
an add-on to Firefox. Some specialized software can narrate RSS-feeds.
On one hand, online RSS-narrators simplify information delivery by
allowing users to listen to their favourite news sources and to
convert them to podcasts. On the other hand, on-line RSS-readers are
available on almost any PC connected to the Internet. Users can
download generated audio files to portable devices, e.g. with the help of a podcast receiver, and listen to them while walking, jogging or
commuting to work.
A growing field in Internet based TTS is web-based assistive
technology, e.g. 'Browsealoud' from a UK company and Readspeaker. It
can deliver TTS functionality to anyone (for reasons of accessibility,
convenience, entertainment or information) with access to a web
browser. The non-profit project Pediaphon was created in 2006 to provide a similar web-based TTS interface to Wikipedia.
Other work is being done in the context of the W3C through the W3C Audio Incubator Group with the involvement of The BBC and Google Inc.
Open source
Systems that operate on free and open source software systems including Linux are various, and include open-source programs such as the Festival Speech Synthesis System, which uses diphone-based synthesis as well as more modern and better-sounding techniques; eSpeak, which supports a broad range of languages; and gnuspeech, which uses articulatory synthesis from the Free Software Foundation.
Following the commercial failure of the hardware-based Intellivoice,
gaming developers sparingly used software synthesis in later games. A
famous example is the introductory narration of Nintendo's Super
Metroid game for the Super Nintendo Entertainment System. Earlier
systems from Atari, such as the Atari 5200 (Baseball) and the Atari 2600 (Quadrun and Open Sesame), also had games utilizing software synthesis.
Some e-book readers include speech synthesis, such as the Amazon Kindle, Samsung E6, PocketBook eReader Pro, enTourage eDGe, and the Bebook Neo.
The BBC Micro incorporated the Texas Instruments TMS5220 speech synthesis chip.
Some models of Texas Instruments home computers produced in 1979 and 1981 (the Texas Instruments TI-99/4 and TI-99/4A) were capable of text-to-phoneme synthesis or reciting complete words and phrases (text-to-dictionary), using a very popular Speech Synthesizer peripheral. TI used a proprietary codec to embed complete spoken phrases into applications, primarily video games.
OS/2 Warp 4 included VoiceType, a precursor to IBM ViaVoice.
GPS Navigation units produced by Garmin, Magellan,
TomTom and others
use speech synthesis for automobile navigation.
Yamaha produced a music synthesizer in 1999, the Yamaha FS1R, which included a formant synthesis capability. Sequences of up to 512
individual vowel and consonant formants could be stored and replayed,
allowing short vocal phrases to be synthesized.
Speech Notepad is a corpus-based Taiwanese concatenation text-to-speech system for Microsoft Windows XP/Win7. There are three major components in the software: a Taiwanese tone group parser, a speech engine, and a speech synthesizer. The system is installed directly on the PC and operates independently, without linking to the MS Speech or IBM TTS engines. The graphical user interface includes functions such as Romanized Taiwanese or traditional Chinese input, a synchronous voice dictionary, Taiwanese index word searching using Chinese/English, speech output for external application programs and web browsers, and audiobook creation for people with reading disabilities.
Digital sound-alikes
With the 2016 introduction of the Adobe Voco audio editing and generating software prototype, slated to be part of Adobe Creative Suite, and the similarly enabled DeepMind WaveNet, a deep neural network based audio synthesis software from Google, speech synthesis is verging on being completely indistinguishable from a real human's voice.
Adobe Voco takes approximately 20 minutes of the desired target's
speech, and after that it can generate a sound-alike voice, even with phonemes that were not present in the training material. The software obviously poses ethical concerns, as it allows one to steal other people's voices and manipulate them to say anything desired.
This adds to the stress on the disinformation situation, coupled with the facts that:
Human image synthesis since the early 2000s has improved beyond the point where humans are unable to tell a real human imaged with a real camera from a simulation of a human imaged with a simulation of a camera, and
2D video forgery techniques were presented in 2016 that allow near real-time counterfeiting of facial expressions in existing 2D video.
Speech synthesis markup languages
A number of markup languages have been established for the rendition
of text as speech in an XML-compliant format. The most recent is
Speech Synthesis Markup
Language (SSML), which became a W3C
recommendation in 2004. Older speech synthesis markup languages include Java Speech Markup Language (JSML) and SABLE. Although each of these was proposed as a standard, none of them has been widely adopted.
Speech synthesis markup languages are distinguished from dialogue
markup languages. VoiceXML, for example, includes tags related to
speech recognition, dialogue management and touchtone dialing, in
addition to text-to-speech markup.
Applications
Speech synthesis has long been a vital assistive technology tool and
its application in this area is significant and widespread. It allows
environmental barriers to be removed for people with a wide range of
disabilities. The longest application has been in the use of screen
readers for people with visual impairment, but text-to-speech systems
are now commonly used by people with dyslexia and other reading
difficulties as well as by pre-literate children. They are also
frequently employed to aid those with severe speech impairment usually
through a dedicated voice output communication aid.
Speech synthesis techniques are also used in entertainment productions
such as games and animations. In 2007, Animo Limited announced the
development of a software application package based on its speech
synthesis software FineSpeech, explicitly geared towards customers in
the entertainment industries, able to generate narration and lines of
dialogue according to user specifications. The application reached
maturity in 2008, when NEC
Biglobe announced a web service that allows
users to create phrases from the voices of Code Geass: Lelouch of the
Rebellion R2 characters.
In recent years, text-to-speech for disability and handicapped communication aids has become widely deployed in mass transit. Text-to-speech is also finding new applications outside the disability market. For example, speech synthesis, combined with speech
recognition, allows for interaction with mobile devices via natural
language processing interfaces.
Text-to-speech is also used in second language acquisition. Voki, for
instance, is an educational tool created by Oddcast that allows users
to create their own talking avatar, using different accents. They can
be emailed, embedded on websites or shared on social media.
In addition, speech synthesis is a valuable computational aid for the
analysis and assessment of speech disorders. A voice quality
synthesizer, developed by Jorge C. Lucero et al. at University of
Brasilia, simulates the physics of phonation and includes models of
vocal frequency jitter and tremor, airflow noise and laryngeal
asymmetries. The synthesizer has been used to mimic the timbre of dysphonic speakers with controlled levels of roughness, breathiness and strain.
Multiple companies offer TTS APIs to their customers to accelerate
development of new applications utilizing TTS technology. Companies
offering TTS APIs include AT&T, CereProc, DIOTEK, IVONA,
Neospeech, Readspeaker, SYNVO, YAKiToMe! and CPqD. For mobile app development, the Android operating system has offered a text-to-speech API for a long time. More recently, with iOS 7, Apple started offering an API for text-to-speech.
[Image: Stephen Hawking was one of the most famous people to use a speech computer to communicate]
See also
Chinese speech synthesis
Comparison of screen readers
Comparison of speech synthesizers
Silent speech interface
Text to speech in digital television
External links
Speech synthesis at Curlie (based on DMOZ)
MARY Web Client (German Research Centre for Artificial Intelligence)
Dennis Klatt's History of Speech Synthesis
Simulated singing with the singing robot Pavarobotti, with a description from the BBC of how the robot synthesized the singing
Chrome TTS Demo