Wave field synthesis (WFS) is a spatial audio rendering technique, characterized by the creation of virtual acoustic environments. It produces ''artificial'' wavefronts synthesized by a large number of individually driven loudspeakers. Such wavefronts seem to originate from a virtual starting point, the ''virtual source'' or ''notional source''. Contrary to traditional spatialization techniques such as stereo or surround sound, the localization of virtual sources in WFS does not depend on or change with the listener's position.


Physical fundamentals

WFS is based on the Huygens–Fresnel principle, which states that any wavefront can be regarded as a superposition of elementary spherical waves. Therefore, any wavefront can be synthesized from such elementary waves. In practice, a computer controls a large array of individual loudspeakers and actuates each one at exactly the time when the desired virtual wavefront would pass through it. The basic procedure was developed in 1988 by Professor A.J. Berkhout at the Delft University of Technology.

Its mathematical basis is the Kirchhoff–Helmholtz integral, which states that the sound pressure within a source-free volume is completely determined if sound pressure and particle velocity are known at all points on its surface:

:P(\omega,z)=\iint_{\partial V} \left(G(\omega,z \mid z') \frac{\partial P(\omega,z')}{\partial n}- P(\omega,z') \frac{\partial G(\omega,z \mid z')}{\partial n} \right)dz'

Therefore, any sound field can be reconstructed if sound pressure and acoustic particle velocity are restored on all points of the surface of its volume. This approach is the underlying principle of ''holophony''.

For reproduction, the entire surface of the volume would have to be covered with closely spaced loudspeakers, each individually driven with its own signal. Moreover, the listening area would have to be anechoic, in order to avoid sound reflections that would violate the source-free volume assumption. In practice, this is hardly feasible.

Because our acoustic perception is most exact in the horizontal plane, practical approaches generally reduce the problem to a horizontal loudspeaker line, circle or rectangle around the listener. The origin of the synthesized wavefront can be at any point on the horizontal plane of the loudspeakers. For sources behind the loudspeakers, the array produces convex wavefronts. Sources in front of the speakers can be rendered by concave wavefronts that focus in the virtual source and diverge again. Hence the reproduction inside the volume is incomplete: it breaks down if the listener sits between the speakers and an inner (focused) virtual source.

The origin represents the virtual acoustic source, which approximates an acoustic source at the same position. Unlike conventional (stereo) reproduction, the perceived position of the virtual sources is independent of the listener's position, allowing the listener to move, or giving an entire audience a consistent perception of the audio source location.
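The time-of-actuation idea described above can be sketched as a simple delay-and-sum computation. All parameters here (array size, spacing, source position) are illustrative, and the 1/√r amplitude weighting is a common simplification of the full WFS driving function, which also includes a frequency-dependent pre-filter that is omitted:

```python
import numpy as np

# Illustrative setup: a linear array of 32 loudspeakers, 12 cm apart,
# and a virtual source 1.5 m behind the array line.
c = 343.0                        # speed of sound in air, m/s
n_speakers = 32
dx = 0.12                        # loudspeaker spacing, m
x = (np.arange(n_speakers) - (n_speakers - 1) / 2) * dx  # speaker x-positions
source = np.array([0.4, -1.5])   # virtual source: 0.4 m along, 1.5 m behind

# Distance from the virtual source to each loudspeaker
r = np.hypot(x - source[0], 0.0 - source[1])

# Each speaker plays the source signal delayed by the travel time r/c,
# so its elementary wave arrives in phase with the desired wavefront,
# and attenuated by 1/sqrt(r) (simplified amplitude weighting).
delays = r / c                   # seconds
gains = 1.0 / np.sqrt(r)
```

The speaker nearest the virtual source fires first and loudest; speakers further away fire progressively later, so the superposed elementary waves form the desired convex wavefront.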


Procedural advantages

A sound field with a very stable position of the acoustic sources can be established using wave field synthesis. In principle, it is possible to establish a virtual copy of a genuine sound field that is indistinguishable from the real sound. Changes of the listener's position in the rendition area produce the same impression as a corresponding change of location in the recording room. Listeners are no longer confined to a sweet spot within the room.

The Moving Picture Experts Group standardized the object-oriented transmission standard MPEG-4, which allows separate transmission of content (the dry recorded audio signal) and form (the impulse response or the acoustic model). Each virtual acoustic source needs its own (mono) audio channel. The spatial sound field in the recording room consists of the direct wave of the acoustic source and a spatially distributed pattern of mirror acoustic sources caused by reflections from the room surfaces. Reducing that spatial mirror-source distribution onto a few transmission channels causes a significant loss of spatial information; this distribution can instead be synthesized much more accurately on the rendition side.

Compared to conventional channel-oriented rendition procedures, WFS provides a clear advantage: virtual acoustic sources guided by the signal content of the associated channels can be positioned far beyond the conventional rendition area. This reduces the influence of the listener's position, because the relative changes in angles and levels are clearly smaller than with conventional loudspeakers located within the rendition area. This extends the sweet spot considerably; it can now cover nearly the entire rendition area. WFS is thus not only compatible with conventional channel-oriented methods but can potentially improve their reproduction.


Challenges


Sensitivity to room acoustics

Since WFS attempts to simulate the acoustic characteristics of the recording space, the acoustics of the rendition area must be suppressed. One possible solution is to use acoustic damping, or to otherwise arrange the walls in an absorbing, non-reflective configuration. A second possibility is playback within the near field; for this to work effectively, the loudspeakers must couple very closely to the hearing zone, or the diaphragm surface must be very large. In some cases, the most perceptible difference compared to the original sound field is the reduction of the sound field to two dimensions along the horizontal of the loudspeaker lines. This is particularly noticeable for the reproduction of ambience, since suppressing the acoustics of the rendition area also works against the playback of natural ambient sources.


Aliasing

Spatial aliasing causes undesirable distortions in the form of position-dependent, narrow-band break-downs in the frequency response within the rendition range. The aliasing frequency depends on the angle of the virtual acoustic source and on the angle of the listener relative to the loudspeaker arrangement:

:f_{\mathrm{al}}=\frac{c}{\Delta x\,(\sin\alpha + \sin\beta)}

where c is the speed of sound, \Delta x the spacing between adjacent loudspeakers, and \alpha and \beta the angles of the virtual source and the listener relative to the array. For aliasing-free rendition over the entire audio range, an emitter spacing below 2 cm would be necessary. Fortunately, the ear is not particularly sensitive to spatial aliasing, so an emitter spacing of 10–15 cm is generally sufficient.
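As a small numerical illustration, the dependence of the aliasing frequency on spacing and angles can be computed directly; this sketch assumes the commonly cited form f_al = c / (Δx · (sin α + sin β)), and the function name and parameters are illustrative:

```python
import math

def aliasing_frequency(dx, alpha_deg, beta_deg, c=343.0):
    """Spatial aliasing frequency for a linear WFS loudspeaker array.

    dx: loudspeaker spacing in metres; alpha_deg/beta_deg: angles (degrees)
    of the virtual source and the listener relative to the array normal.
    Assumes f_al = c / (dx * (sin(alpha) + sin(beta))); when both angles
    are zero there is no spatial aliasing limit (infinite frequency).
    """
    denom = dx * (math.sin(math.radians(alpha_deg)) + math.sin(math.radians(beta_deg)))
    return float("inf") if denom == 0 else c / denom

# Worst case (both angles at 90 degrees) for a typical 12 cm spacing:
# aliasing sets in around 1.4 kHz; halving the spacing doubles that limit.
f_worst = aliasing_frequency(0.12, 90, 90)
```

This makes the trade-off in the text concrete: shrinking the spacing pushes the aliasing frequency up, but only spacings of a few centimetres approach full audio bandwidth in the worst-case geometry.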


Truncation effect

Another cause of disturbance of the spherical wavefront is the ''truncation effect''. Because the resulting wavefront is a composite of elementary waves, a sudden change of pressure can occur where the speaker row ends and no further speakers deliver elementary waves. This causes a 'shadow-wave' effect. For virtual acoustic sources placed in front of the loudspeaker arrangement, this pressure change hurries ahead of the actual wavefront, whereby it becomes clearly audible.

In signal processing terms, this is spectral leakage in the spatial domain, caused by applying a rectangular window function to what would otherwise be an infinite array of speakers. The shadow wave can be reduced by lowering the volume of the outer loudspeakers, which corresponds to using a window function that tapers off instead of being truncated.


High cost

A further, resultant problem is high cost: a large number of individual transducers must be positioned very close together. Reducing the number of transducers by increasing their spacing introduces spatial aliasing artifacts, while reducing their number at a given spacing shrinks the emitter field and limits the representation range; outside its borders, no virtual acoustic sources can be produced.


Research and market maturity

Early development of WFS began in 1988 at Delft University of Technology. Further work was carried out from January 2001 to June 2003 in the context of the CARROUSO project, funded by the European Union and involving ten institutes. The WFS sound system IOSONO was developed by the Fraunhofer Institute for Digital Media Technology (IDMT) at the Technical University of Ilmenau in 2004.

The first live WFS transmission took place in July 2008, recreating an organ recital at Cologne Cathedral in lecture hall 104 of the Technical University of Berlin. The room contains the world's largest speaker system, with 2700 loudspeakers on 832 independent channels. Research trends in wave field synthesis include the use of psychoacoustics to reduce the necessary number of loudspeakers, and the implementation of complicated sound radiation properties so that, for example, a virtual grand piano sounds as grand as in real life.


See also

* Angular spectrum method
* Ambisonics, a related spatial audio technique
* Fourier optics
* Holophones, sound projectors
* Light field, analog for light
* Wave field


References


Further reading

* Berkhout, A.J.: A Holographic Approach to Acoustic Control, J. Audio Eng. Soc., vol. 36, December 1988, pp. 977–995
* Berkhout, A.J.; De Vries, D.; Vogel, P.: Acoustic Control by Wave Field Synthesis, J. Acoust. Soc. Am., vol. 93, May 1993, pp. 2764–2778
* Wave Field Synthesis: A brief overview
* What is Wave Field Synthesis?
* The Theory of Wave Field Synthesis Revisited
* Wave Field Synthesis – A Promising Spatial Audio Rendering Concept
* {{cite thesis , url=https://ediss.sub.uni-hamburg.de/volltexte/2016/7939/pdf/Dissertation.pdf , title=Implementation of the Radiation Characteristics of Musical Instruments in Wave Field Synthesis Applications , type=Thesis , date=2015 , first=Tim , last=Ziemer , publisher=University of Hamburg


External links


* Photo of wave field synthesis installation
* Perceptual Differences Between Wavefield Synthesis and Stereophony, by Helmut Wittek
* Inclusion of the playback room properties into the synthesis for WFS – Holophony
* Wave Field Synthesis – A Promising Spatial Audio Rendering Concept, by Günther Theile (IRT)
* Wave Field Synthesis at the University of Erlangen-Nuremberg
* Wavefield generator built by HOLOPLOT Germany
* The Theory of Wave Field Synthesis Revisited, by S. Spors, R. Rabenstein, and J. Ahrens, 124th AES Convention, May 2008
* Sound Reproduction by Wave Field Synthesis (Thesis, 1997), by Edwin Verheijen

Sound production technology