The MPEG-4 standard defines an object-oriented transmission format that allows content (the dry recorded audio signal) and form (the impulse response or the acoustic model) to be transmitted separately. Each virtual acoustic source needs its own (mono) audio channel. The spatial sound field in the recording room consists of the direct wave of the acoustic source and a spatially distributed pattern of mirror acoustic sources, caused by reflections from the recording-room surfaces. Reducing that spatial mirror-source distribution to a few transmission channels inevitably causes a significant loss of spatial information. This spatial distribution can be synthesized much more accurately on the rendering side.
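The mirror-source idea mentioned above can be sketched in a few lines. The following is my own illustration, assuming a rectangular room: each wall reflection is modelled as a "mirror" copy of the source, mirrored across that wall, which is why transmitting the dry signal plus a room model lets the rendering side re-synthesize the whole spatial distribution instead of squeezing it into a few channels.

```python
# Minimal sketch (my own illustration, not from the text) of first-order
# image sources of a 2-D point source in a rectangular room. Higher-order
# reflections would be obtained by mirroring the image sources again.

def first_order_image_sources(src, room):
    """First-order image sources of a point source in a rectangular room.

    src  -- (x, y) position of the source
    room -- (width, depth); walls at x = 0, x = width, y = 0, y = depth
    """
    x, y = src
    w, d = room
    return [
        (-x, y),          # mirrored across wall x = 0
        (2 * w - x, y),   # mirrored across wall x = w
        (x, -y),          # mirrored across wall y = 0
        (x, 2 * d - y),   # mirrored across wall y = d
    ]

# source at (1, 2) in a 4 m x 6 m room
images = first_order_image_sources((1.0, 2.0), (4.0, 6.0))
```

Each image source radiates a delayed, attenuated copy of the dry signal, which is exactly the spatial pattern a few downmix channels cannot preserve.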
The systematic evaluation will take place in the Wave field synthesis studio at the Erich Thienhaus Institute of the University of Music Detmold, where all systems under test will be installed in parallel (photo on the right, with height speakers added temporarily for the recording). The study will be supervised by Dr.-Ing. Günther Theile and Dr.-Ing. Malte Kob.
It shows a loudspeaker line (1a) according to the principle of wave field synthesis, arranged as a ring around the listeners. The listeners within the spectator area (1b) see the image of the acoustic source (1d) on the screen (1c). For the listener position (1e) within the spectator area, the depicted virtual acoustic source (1f) can be produced as a reference, at which the auditory impression agrees excellently with the visual representation. For all other listeners within the spectator area, the image position differs from the acoustic position.
A listener some rows in front (1g) hears the source, shown at the front on the screen, coming from behind. Moreover, the concave wave fronts between the virtual source and the listener lead to incorrect localization. Thus, the range between the screen and the virtual source is not usable in practice. For spectator (1h) the acoustic source is visible at the front right; however, he hears it directly beside him. Spectator (1i) hears the acoustic source at an intolerably low level. Most of the acoustic perceptions do not agree with the visual ones, which makes the overall perception completely implausible.
In consequence, virtual sound sources placed very close to the listener, one of the most impressive advantages of wave field synthesis, cannot be used in practice whenever a visual depiction of the source is attached.
The acoustic perception of a virtual source inside the spectator area does not differ, in principle, from the acoustic perception of a real acoustic source at that point. The problem, however, results from the fact that the optically perceived source position given by the image cannot be reproduced at the same place for every listener. Three-dimensional projection will not change that in the future either. However, the problem is solvable to a certain degree. One possible solution is described in the application.
The reproduction of audio events is clearly improved by wave field synthesis, because virtual sources are much more stable in position than phantom sources. Their position also no longer moves when the listener moves within the playback area.
The paper Hahn, N.; Winter, F.; Spors, S. (2017): “Synthesis of a Spatially Band-Limited Plane Wave in the Time-Domain Using Wave Field Synthesis.” In: Proc. Eur. Signal Process. Conf. (EUSIPCO), Kos, Greece, was presented at the 25th European Signal Processing Conference …
Definition of Wave Field Synthesis (WFS): a horizontal acoustic holography method based on the Kirchhoff-Helmholtz integral equation to generate physically accurate sound fields.
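For reference, the standard form of the Kirchhoff-Helmholtz integral underlying this definition (textbook formulation, not taken from the text above) expresses the pressure at any point inside a source-free volume $V$ from the pressure and its normal derivative on the boundary $\partial V$:

```latex
P(\mathbf{x},\omega) = \oint_{\partial V} \left[
  G(\mathbf{x}|\mathbf{x}_0,\omega)\,
  \frac{\partial P(\mathbf{x}_0,\omega)}{\partial n}
  - P(\mathbf{x}_0,\omega)\,
  \frac{\partial G(\mathbf{x}|\mathbf{x}_0,\omega)}{\partial n}
\right] \mathrm{d}A(\mathbf{x}_0),
\qquad \mathbf{x} \in V,
```

where $G(\mathbf{x}|\mathbf{x}_0,\omega) = \frac{e^{-\mathrm{j}\frac{\omega}{c}|\mathbf{x}-\mathbf{x}_0|}}{4\pi|\mathbf{x}-\mathbf{x}_0|}$ is the free-field Green's function. In WFS the boundary is approximated by a distribution of loudspeakers driven so as to reproduce the boundary values of the desired field.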
Among the highlights of ICSA 2011 will be a systematic listening test to evaluate several multi-channel sound reproduction systems with respect to relevant sound properties such as localization and spaciousness. A parallel installation of multichannel stereo, wave field synthesis and higher-order Ambisonics will be provided. All participants of ICSA 2011 will have the opportunity to compare these systems. For methodical reasons, the evaluation will concentrate on reproduction in the horizontal plane.
Wave field synthesis (WFS) is a spatial audio rendering technique, characterized by the creation of virtual acoustic environments. It produces "artificial" wave fronts synthesized by a large number of individually driven loudspeakers. Such wave fronts seem to originate from a virtual starting point, the virtual source or notional source. Contrary to traditional spatialization techniques such as stereophony, the localization of virtual sources in WFS does not depend on or change with the listener's position.
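The "large number of individually driven speakers" boils down to per-speaker delays and gains. The following is a minimal sketch under my own assumptions (a linear array, simple delay-and-amplitude driving, point source behind the array); real WFS driving functions additionally include a pre-filter and geometry-dependent weighting, which are omitted here.

```python
import numpy as np

# Hypothetical sketch (geometry and function names are my own, not from the
# text): each speaker radiates the dry source signal delayed by the travel
# time from the virtual source and attenuated roughly by 1/sqrt(distance),
# so the superposed wavelets approximate the spherical wave front of the
# virtual source.

C = 343.0  # speed of sound in m/s

def wfs_point_source_driving(speaker_xy, source_xy, fs):
    """Per-speaker delays (samples) and gains for a virtual point source."""
    d = np.linalg.norm(speaker_xy - source_xy, axis=1)  # source-to-speaker distances
    delays = np.round(d / C * fs).astype(int)           # propagation delay per speaker
    gains = 1.0 / np.sqrt(np.maximum(d, 1e-6))          # amplitude decay ~ 1/sqrt(r)
    return delays, gains

# 16 speakers spaced 10 cm apart on the x-axis, virtual source 1 m behind them
speakers = np.stack([np.arange(16) * 0.1, np.zeros(16)], axis=1)
source = np.array([0.75, -1.0])
delays, gains = wfs_point_source_driving(speakers, source, fs=48000)
```

Because every speaker fires according to its own distance to the virtual source, the synthesized wave front is correct over the whole listening area, which is why localization does not depend on the listener's position.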
Works for Wave Field Synthesis consists of two parts: the creation of a modular software framework and the creation of musical works that make use of the software. The framework allows sound sources to be placed and moved dynamically with a set of Max objects and a Max4Live device. Those objects, developed by Robert Henke specifically for this project, dramatically simplify the use of the system and make it possible to achieve artistic results without too much technical overhead.
At the 24th European Signal Processing Conference (EUSIPCO) we presented the contribution Winter, F.; Spors, S. (2016): “On Fractional Delay Interpolation for Local Wave Field Synthesis.” In: Proc. of the 24th European Signal Processing Conference (EUSIPCO), 2016. Additional material can be …
This project operates on the edge between technical and artistic exploration and focuses on the distribution of sound in space, utilizing a ring of 192 computer-controlled loudspeakers forming a 'wave field synthesis array'. Wave field synthesis allows a huge number of virtual sound sources to be placed anywhere inside and outside that ring. Most spectacular is the effect of locating a sound source inside the head of a listener, an experience that cannot be achieved with other techniques. The large number of speakers is the equivalent of a large number of pixels in the visual world, enabling subtle and precise placements in space and an otherwise impossible depth of field. Wave field synthesis is a new technique that relies on powerful computers and on advanced algorithms for calculating the signals going to each speaker.
Wave field synthesis (WFS) and higher-order Ambisonics (HOA) are two high-resolution spatial sound reproduction techniques aiming at overcoming some of the limitations of stereophonic reproduction techniques. In the past, the theoretical foundations of WFS and HOA have been formulated in quite different fashions. Although some work has been published that aims at comparing both approaches, their similarities and differences are not well documented. This paper formulates the theory of both approaches in a common framework, and highlights the different assumptions made to derive the driving functions and the resulting physical properties of the reproduced wave field. Special attention is drawn to the consequences of spatial sampling, since the two approaches differ significantly here.
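To give the spatial-sampling point a concrete scale (my own numeric sketch, using the standard textbook estimate rather than anything from the paper): a discrete loudspeaker array with spacing dx reproduces sound fields free of spatial aliasing only up to roughly f_al = c / (2 * dx), analogous to the Nyquist limit for temporal sampling.

```python
# Rough illustration (my own sketch, standard rule-of-thumb estimate): the
# spatial-aliasing frequency of a loudspeaker array with spacing dx metres.
# Above this frequency the discretized secondary-source distribution no
# longer reproduces the intended wave field correctly.

C = 343.0  # speed of sound in m/s

def aliasing_frequency(dx_m):
    """Approximate upper frequency (Hz) for alias-free reproduction."""
    return C / (2.0 * dx_m)

for dx in (0.05, 0.10, 0.20):
    print(f"spacing {dx:.2f} m -> f_al about {aliasing_frequency(dx):.0f} Hz")
```

Even a dense 10 cm spacing limits alias-free reproduction to well below 2 kHz, which is why the perceptual consequences of spatial sampling, and how WFS and HOA distribute the resulting artifacts, deserve the special attention the paper gives them.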