Echo suppression and echo cancellation are methods used in telephony to improve voice quality by preventing echo from being created or removing it after it is already present. In addition to improving subjective audio quality, echo suppression increases the capacity achieved through silence suppression by preventing echo from traveling across a network. Echo suppressors were developed in the 1950s in response to the first use of satellites for telecommunications, but they have since been largely supplanted by better-performing echo cancellers.
Echo suppression and cancellation methods are commonly called acoustic echo suppression (AES) and acoustic echo cancellation (AEC), and more rarely line echo cancellation (LEC). In some cases these terms are more precise, as there are various types and causes of echo with unique characteristics. Acoustic echo (sound from a loudspeaker being reflected and picked up by a microphone) can vary substantially over time, while line echo (electrical reflections caused by, e.g., coupling between the sending and receiving wires or impedance mismatches) varies much less. In practice, however, the same techniques are used to treat all types of echo, so an acoustic echo canceller can cancel line echo as well as acoustic echo. AEC in particular is commonly used to refer to echo cancellers in general, regardless of whether they were intended for acoustic echo, line echo, or both.
Although echo suppressors and echo cancellers have similar goals—preventing a speaking individual from hearing an echo of their own voice—the methods they use differ: an echo suppressor detects which direction of the conversation is active and attenuates the other, while an echo canceller synthesizes an estimate of the echo and subtracts it from the return path.
In telephony, "echo" is very much like what one would experience yelling in a canyon: a reflected copy of one's own voice, heard some time later as a delayed version of the original. If the delay is very small (tens of milliseconds or less), the phenomenon is called sidetone; while not objectionable to humans, it can interfere with communication between data modems. At a slightly longer delay, around 50 milliseconds, humans cannot hear the echo as a distinct sound, but instead hear a chorus effect that sounds like talking in a tunnel or cave. If the delay is significant (more than a few hundred milliseconds), the echo is heard as a distinct repetition and is considered annoying.
In the earlier days of telecommunications, echo suppression was used to reduce the objectionable nature of echoes to human users. One person speaks while the other listens, and they speak back and forth. An echo suppressor attempts to determine which is the primary direction and allows that channel to pass; in the reverse channel, it inserts attenuation to block, or "suppress," any signal on the assumption that the signal is echo. Naturally, such a device is not perfect. There are cases where both ends are active at once, and cases where one end replies faster than the suppressor can switch directions, forcing it to choose between keeping the echo attenuated and letting the remote talker reply without attenuation.
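The switching behaviour described above can be sketched as a per-frame energy comparison. This is a minimal illustration under assumed constants, not a production design; the function name and attenuation value are invented:

```python
import numpy as np

def suppress(near_frame, far_frame, attenuation_db=30.0):
    """Crude half-duplex echo suppressor: compare the short-term energy of
    the two directions and attenuate the weaker one, on the assumption that
    it carries only echo. A real suppressor would add hangover timers and
    hysteresis to avoid rapid switching; those are omitted here."""
    gain = 10.0 ** (-attenuation_db / 20.0)
    if np.mean(far_frame ** 2) > np.mean(near_frame ** 2):
        # Far end is talking: suppress the return path (the presumed echo).
        return near_frame * gain, far_frame
    # Near end is talking: suppress the forward path instead.
    return near_frame, far_frame * gain
```

Because the decision is all-or-nothing per frame, this sketch also exhibits the double-talk weakness discussed below: whichever direction is quieter is attenuated even if it carries genuine speech.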
While effective, this approach leads to several problems: when both parties speak at once (double-talk), the suppressor must either pass the echo or attenuate one talker's speech, and because it takes time to switch directions, the start of a quick reply can be clipped off.
These effects may be frustrating for both parties to a call, although the suppressor effectively deals with echo.
In response to this, AT&T Bell Labs developed echo canceller theory in the early 1960s, which led to laboratory echo cancellers in the late 1960s and commercial echo cancellers in the 1980s.
The concept of an echo canceller is to synthesize an estimate of the echo from the talker's signal, and subtract that synthesis from the return path instead of switching attenuation into/out of the path. This technique requires adaptive signal processing to generate a signal accurate enough to effectively cancel the echo, where the echo can differ from the original due to various kinds of degradation along the way.
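As a minimal illustration of synthesize-and-subtract, assume a short, invented FIR echo path. With a perfect model the echo cancels exactly; a slightly stale model leaves a residual, which is why real cancellers must adapt continuously:

```python
import numpy as np

# All signals and the echo paths below are invented for illustration.
rng = np.random.default_rng(0)
far = rng.standard_normal(8000)                  # far-end talker signal
true_path = np.array([0.0, 0.5, 0.3, 0.1])       # hypothetical echo path
echo = np.convolve(far, true_path)[:8000]        # echo on the return path

# Canceller synthesizes its estimate of the echo and subtracts it.
model = np.array([0.0, 0.5, 0.3, 0.1])           # canceller's model of the path
residual_perfect = echo - np.convolve(far, model)[:8000]

# If the real path drifts while the model stays fixed, echo leaks through.
drifted = np.array([0.0, 0.45, 0.33, 0.1])
echo2 = np.convolve(far, drifted)[:8000]
residual_stale = echo2 - np.convolve(far, model)[:8000]
```

The perfect-model residual is exactly zero, while the stale model leaves audible echo energy behind, motivating the adaptive filtering described next.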
Rapid advances in the implementation of digital signal processing allowed echo cancellers to be made smaller and more cost-effective. In the 1990s, echo cancellers were implemented within voice switches for the first time (in the Northern Telecom DMS-250) rather than as standalone devices. The integration of echo cancellation directly into the switch meant that echo cancellers could be reliably turned on or off on a call-by-call basis, removing the need for separate trunk groups for voice and data calls. Today's telephony technology often employs echo cancellers in small or handheld communications devices via a software voice engine, which cancels either acoustic echo or the residual echo introduced by a far-end PSTN gateway system; such systems typically cancel echo reflections with up to 64 milliseconds of delay.
Since their invention at AT&T Bell Labs, echo cancellation algorithms have been continually improved and honed. Like all echo cancelling processes, these first algorithms were designed to anticipate the signal that would inevitably re-enter the transmission path and cancel it out.
The acoustic echo cancellation (AEC) process works as follows:
1. The received far-end signal is used as a reference.
2. The far-end signal is played through the loudspeaker, altered by the acoustics of the room, and picked up by the microphone along with any near-end speech.
3. The canceller filters the far-end reference so that it resembles the incoming echo, then subtracts the result from the microphone signal, ideally leaving only the near-end speech.
The primary challenge for an echo canceller is determining the nature of the filtering to be applied to the far-end signal such that it resembles the resultant near-end signal. The filter is essentially a model of the loudspeaker, the microphone, and the room's acoustical attributes. Echo cancellers must be adaptive because the characteristics of the near end's loudspeaker and microphone are generally not known in advance. The acoustical attributes of the near end's room are also not generally known in advance and may change (e.g., if the microphone is moved relative to the loudspeaker, or if individuals walk around the room and alter the acoustic reflections). Using the far-end signal as the stimulus, modern systems employ an adaptive filter and can 'converge' from nothing to 55 dB of cancellation in around 200 ms.
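The adaptive filtering described above can be sketched with the normalized least-mean-squares (NLMS) algorithm, a common choice for echo cancellation. The text does not name a specific algorithm, and the filter length, step size, and echo path below are invented for illustration:

```python
import numpy as np

def nlms_aec(far, mic, taps=64, mu=0.5, eps=1e-8):
    """Adaptive echo canceller sketch using NLMS. `far` is the far-end
    reference signal, `mic` the near-end microphone signal containing the
    echo. Returns the residual sent up the return path."""
    w = np.zeros(taps)                        # adaptive FIR model of the echo path
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far[n - taps:n][::-1]             # most recent reference samples
        y_hat = w @ x                         # synthesized echo estimate
        e = mic[n] - y_hat                    # residual after subtraction
        w += mu * e * x / (x @ x + eps)       # normalized LMS weight update
        out[n] = e
    return out

# Simulate a pure-echo microphone signal through an invented echo path.
rng = np.random.default_rng(1)
far = rng.standard_normal(16000)
path = np.zeros(32)
path[5], path[12] = 0.6, 0.3                  # hypothetical room/line response
mic = np.convolve(far, path)[:16000]
res = nlms_aec(far, mic)
```

With a stationary path and a white-noise reference, the filter converges and the residual in the later part of the signal is far below the original echo level.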
Until recently, echo cancellation needed to apply only to the voice bandwidth of telephone circuits. PSTN calls transmit frequencies between 300 Hz and 3 kHz, the range required for human speech intelligibility. Videoconferencing is one area where full-bandwidth audio is transmitted and received; in this case, specialised products are employed to perform echo cancellation.
Echo suppression may have the side effect of removing valid signals from the transmission. This can cause audible signal loss that is called "clipping" in telephony, although the effect is more like a squelch than amplitude clipping. In an ideal situation, then, echo cancellation alone will be used. However, this is insufficient in many applications, notably software phones on networks with long delay and meager throughput. Here, echo cancellation and suppression can work in conjunction to achieve acceptable performance.
Echo is measured as echo return loss (ERL): the difference in signal strength, expressed in decibels, between the original far-end signal and the echo of that signal transmitted back as the output of the near end. In other words, ERL is the amount of signal loss applied to the far-end signal by the time it returns as echo. High values mean the echo is very weak, while low or negative values mean the echo is very strong (a negative value would mean the echo is stronger than the original signal, which, if left unchecked, would cause feedback).
The performance of an echo canceller is measured in echo return loss enhancement (ERLE), which is the amount of additional signal loss applied by the echo canceller. Most echo cancellers are able to apply 18 to 35 dB ERLE.
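ERL and ERLE as defined above are simple power ratios in decibels. A small sketch with invented signal levels:

```python
import numpy as np

def db_loss(reference, signal):
    """Loss of `signal` relative to `reference`, in dB (power ratio)."""
    return 10.0 * np.log10(np.mean(reference ** 2) / np.mean(signal ** 2))

# Hypothetical levels: far-end signal, its echo, and the canceller's residual.
far = np.ones(100)          # reference level
echo = 0.1 * far            # echo 20 dB below the original
residual = 0.005 * far      # echo remaining after cancellation

erl = db_loss(far, echo)         # echo return loss: about 20 dB
erle = db_loss(echo, residual)   # enhancement from the canceller: about 26 dB
```

Here the echo returns 20 dB weaker than the original (ERL), and the canceller removes a further ~26 dB (ERLE), within the 18 to 35 dB range quoted above.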
Voice messaging and voice response systems that accept speech for caller input use echo cancellation while speech prompts are played, to prevent the system's own speech recognition from falsely recognizing the echoed prompts.
Examples of echo are found in everyday surroundings such as hands-free car kits, telephones in speakerphone mode, and conference-room systems in which a loudspeaker's output is picked up by a nearby microphone.
In most of these cases, direct sound from the loudspeaker (not the person at the far end, otherwise referred to as the talker) enters the microphone almost unaltered. The difficulties in cancelling echo stem from the alteration of the original sound by the ambient space: certain frequencies are absorbed by soft furnishings, and different frequencies are reflected at varying strengths.
In modern times, the main use of an AES (over an AEC) lies in the VoIP sector. This is primarily because AECs require a fast processor, usually in the form of a digital signal processor (DSP). For the PC market, and especially for the embedded VoIP market, this cost in MHz comes at a premium. This said, many (embedded) VoIP solutions do have a fully functional AEC.
Echo control on voice-frequency data calls that use dial-up modems may cause data corruption. Some telephone devices disable echo suppression or echo cancellation when they detect the 2100 or 2225 Hz "answer" tones associated with such calls, in accordance with ITU-T recommendation G.164 or G.165.
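Detecting a single tone such as the 2100 Hz answer tone is often done with the Goertzel algorithm, which computes signal power at one frequency cheaply. This is an illustrative sketch; the frame length and sample rate are assumptions, and G.164/G.165 specify further conditions (level, duration, phase reversal) not modeled here:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Goertzel algorithm: power of `samples` at the DFT bin nearest `freq`."""
    n = len(samples)
    k = round(n * freq / sample_rate)         # nearest frequency bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2      # second-order resonator update
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

fs = 8000                                     # assumed PSTN sample rate
tone = [math.sin(2 * math.pi * 2100 * n / fs) for n in range(205)]
```

For a 2100 Hz input, the power at the 2100 Hz bin dwarfs the power at an unrelated bin, which is the basis for deciding to disable echo control on the call.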
In the 1990s, most echo cancellation was done inside modems of type V.32 and later. In voiceband modems, this allowed using the same frequencies in both directions simultaneously, greatly increasing the data rate. As part of connection negotiation, each modem sent line probe signals, measured the echoes, and set up its delay lines. Echoes in this case did not include long echoes caused by acoustic coupling, but did include short echoes caused by impedance mismatches in the 2-wire local loop to the telephone exchange.
After the turn of the century, DSL modems also made extensive use of automated echo cancellation. Although they used separate incoming and outgoing frequencies, these frequencies were beyond the voiceband for which the cables were designed, and often suffered attenuation distortion due to bridge taps and incomplete impedance matching. This frequently produced deep, narrow frequency gaps that echo cancellation could not make usable; these were detected and mapped out during connection negotiation.