IRCAM SPAT - The complete Room Acoustics Simulation and Localisation Solution
Built on more than a decade of research by the Acoustic and Cognitive Spaces team at IRCAM, a team at the forefront of scientific and technological innovation, SPAT is the most advanced and sophisticated tool for room acoustics simulation and localisation ever designed, managing both spatialisation (source localisation) and room acoustic simulation in a truly consistent and visually logical way.
SPAT introduces state-of-the-art techniques for room acoustics simulation based on advanced perceptual models, concealing the complexity of the underlying algorithms and allowing for intuitive, accommodating user interaction.
Designed for surround and multi-channel use, SPAT lets you set up the output arrangement from a variety of stereo and surround configurations, including subwoofer configurations. With eight input and output channels available in SPAT, configurations up to 7.1 and 8.0 are feasible.
IRCAM VERB - Room Acoustics and Reverberation
VERB is an algorithmic room acoustics and reverberation processor. Its modular construction, built around a recursive filtering reverb engine, reproduces and synthesizes the specific acoustical characteristics of any spatial sound environment.
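The "recursive filtering" approach can be illustrated with a feedback delay network (FDN), a common algorithmic-reverb structure. This is a minimal generic sketch, not IRCAM's actual engine; the delay lengths, feedback gain, and function name are all illustrative:

```python
# Minimal feedback delay network (FDN) reverb sketch in pure Python.
# A generic illustration of recursive-filter reverberation, NOT the
# IRCAM VERB algorithm; all parameter values are arbitrary.

def fdn_reverb(dry, delays=(149, 211, 263, 293), feedback=0.7):
    """Run a 4-line FDN with a Householder feedback matrix over `dry`."""
    n = len(delays)
    lines = [[0.0] * d for d in delays]   # circular delay lines
    idx = [0] * n                          # read/write positions
    out = []
    for x in dry:
        # read the oldest sample of each delay line
        v = [lines[i][idx[i]] for i in range(n)]
        s = sum(v)
        out.append(x + s / n)              # dry + diffuse tail
        # Householder matrix: y_i = v_i - (2/n) * sum(v); it is orthogonal,
        # so with feedback < 1 the recursion decays and stays stable
        for i in range(n):
            lines[i][idx[i]] = feedback * (v[i] - 2.0 * s / n) + x
            idx[i] = (idx[i] + 1) % delays[i]
    return out

# Impulse response: a single click produces a dense, decaying tail.
ir = fdn_reverb([1.0] + [0.0] * 9999)
```

Feeding an impulse through the network yields the room's impulse response: early echoes appear at the delay-line lengths, and the mutual recirculation through the feedback matrix builds the increasingly dense, exponentially decaying late reverberation.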
Following a decade of intense research by the Acoustic and Cognitive Spaces team at IRCAM, the IRCAM VERB introduces state-of-the-art techniques for room acoustics simulation based on advanced perceptual models, concealing the complexity of the underlying reverb algorithms and allowing for a flexible and intuitive user experience.
The IRCAM VERB provides eight input and output channels, allowing reverberation processing in multi-channel and surround formats. A built-in input/output (I/O) routing matrix provides instant flexibility when matching the I/O to the physical audio monitoring in the control room.
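An I/O routing matrix of this kind boils down to each output channel being a weighted sum of the input channels. The sketch below only illustrates that idea; the `route` function and the gain values are invented for the example and have nothing to do with VERB's actual interface:

```python
# Illustrative I/O routing matrix: each output channel is a weighted sum
# of the input channels. The API and gains here are hypothetical.

def route(frame, matrix):
    """Apply a routing matrix to one multichannel sample frame.

    frame  : list of input channel values
    matrix : matrix[out][in] gains (rows = outputs, columns = inputs)
    """
    return [sum(g * x for g, x in zip(row, frame)) for row in matrix]

# Example matrix: swap channels 0 and 1, pass channel 2 through unchanged.
swap = [[0, 1, 0],
        [1, 0, 0],
        [0, 0, 1]]
```

Extending the matrix to eight rows and columns gives the full 8-in/8-out case, where any input can be sent, at any gain, to any monitoring output.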
IRCAM VERB Session
VERB Session is based on the same fine technology as the acclaimed IRCAM VERB algorithmic room acoustics and reverberation processor: a modular construction employing a recursive filtering reverb engine that reproduces and synthesizes the specific acoustical characteristics of any spatial sound environment. It was developed from the results of over a decade of intense research by the Acoustic and Cognitive Spaces team at the IRCAM R&D centre in Paris.
Tailored for simplicity, with a fast-paced workflow well suited to situations where the perfect result has to be achieved within seconds, VERB Session presents the ultimate solution whether you are a seasoned session engineer or a demanding broadcast and post-production mixer.
HEar - Binaural Encoding Tool
HEar provides faithful reproduction of a stereo or surround mix with a pair of conventional stereo headphones. It relies on proven technology to model the various phenomena that occur when playing back audio material through a loudspeaker system.
This allows monitoring a full surround mix in situations where a surround-capable environment is not available or practical. Another typical use of HEar is precise checking of a mix, which is convenient with headphones, as these provide a surgical, very detailed, microscope-like rendering of the audio. HEar can also prove very useful in a project studio context, and whenever noise isolation is a concern, as it helps achieve a more realistic sound environment.
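The core idea behind placing a "virtual loudspeaker" on headphones can be sketched with the two simplest localisation cues: the interaural time difference (ITD) and interaural level difference (ILD). Real binaural encoders such as HEar use measured head-related transfer functions (HRTFs); the toy model below, with its hypothetical `pan_binaural` function and a Woodworth-style ITD approximation, is only an illustration of the principle:

```python
# Toy binaural panner: approximates a virtual speaker position with an
# interaural time difference (delay) and level difference (attenuation).
# Real binaural rendering uses measured HRTFs; this model is illustrative.

import math

def pan_binaural(mono, azimuth_deg, sample_rate=48000, head_radius=0.0875):
    """Render a mono signal at a given azimuth (+90 = hard right)."""
    az = math.radians(azimuth_deg)
    # Woodworth-style ITD approximation: r/c * (theta + sin(theta))
    itd = head_radius / 343.0 * (abs(az) + math.sin(abs(az)))  # seconds
    delay = int(round(itd * sample_rate))                       # in samples
    # Simple ILD: attenuate the far ear by up to 6 dB at +/-90 degrees
    near = 1.0
    far = 10 ** (-abs(azimuth_deg) / 90.0 * 6.0 / 20.0)
    delayed = [0.0] * delay + list(mono)      # far ear hears it later
    padded = list(mono) + [0.0] * delay       # near ear, length-matched
    if azimuth_deg >= 0:   # source on the right: left ear is the far ear
        left = [far * x for x in delayed]
        right = [near * x for x in padded]
    else:
        left = [near * x for x in padded]
        right = [far * x for x in delayed]
    return left, right

left, right = pan_binaural([1.0, 0.5, 0.25], azimuth_deg=90)
```

For a source hard right, the right ear receives the signal immediately at full level while the left ear receives it a fraction of a millisecond later and quieter; the brain fuses these cues into a direction. HRTF-based processing adds the frequency-dependent filtering of the head and pinnae on top of this.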
TRAX Transformer - Real-time Voice and Sonic Modelling Processor
Transformer is based on augmented phase vocoder technology and a cutting-edge transformation algorithm, allowing manipulation of characteristic properties of a voice, such as gender, age and breath, and of any other sound: expression, formant and pitch.
These voice transformation algorithms have already been used, with great success, in international and French cinema productions (Farinelli, Vatel, Tirésia, Les amours d'Astrée and Céladon).
Cross Synthesis uses a phase vocoder, working on the amplitude and the frequency/phase spectra, to morph the spectral characteristics of two sounds. The amplitude and frequency/phase spectra can be blended continuously, and since the features used here are strongly nonlinear, the sound morphing is nonlinear as well, offering a way to create a wide range of exciting new and unusual sound effects.
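The principle of blending one sound's amplitude spectrum with another's phase spectrum can be sketched on a single analysis frame. A naive DFT stands in for the phase vocoder's analysis stage here; a real implementation works on overlapping windowed STFT frames, and the `cross_synthesize` function and `mix` parameter are invented for the example:

```python
# Single-frame cross-synthesis sketch: blend the amplitude spectrum of
# frame A with the phase spectrum of frame B, then resynthesize.
# A naive O(n^2) DFT is used for clarity; this is NOT the TRAX algorithm.

import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(bins):
    n = len(bins)
    return [sum(bins[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def cross_synthesize(frame_a, frame_b, mix=1.0):
    """mix=0 keeps B's amplitudes; mix=1 imposes A's amplitudes on B's phases."""
    a, b = dft(frame_a), dft(frame_b)
    out = []
    for ka, kb in zip(a, b):
        mag = (1.0 - mix) * abs(kb) + mix * abs(ka)  # blended amplitude
        out.append(cmath.rect(mag, cmath.phase(kb))) # phase taken from B
    return idft(out)
```

Sweeping `mix` from 0 to 1 moves the result continuously from sound B toward a hybrid carrying A's spectral energy distribution on B's phase structure, which is where the characteristically nonlinear morphing behaviour comes from.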
Source Filter is based on a signal model that decomposes a signal into a time envelope describing the energy/loudness contour of the sound and a spectral envelope describing the spectral colour of its timbre. The energy contour and spectral colour of the source sound can be continuously blended with those of an arbitrary filtrating sound, allowing transformations that extend well beyond the more common source-filter morphing effect.
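The time-envelope half of this decomposition can be sketched as follows: measure a frame-by-frame energy (RMS) contour from the source and impose it on the other sound. The spectral-envelope half is omitted for brevity, and the `energy_contour`/`impose_contour` helpers are invented for the illustration, not part of any actual TRAX API:

```python
# Energy-contour transfer sketch: extract per-frame RMS from a source
# sound and rescale another sound to follow that loudness contour.
# Generic illustration only; frame size and helper names are arbitrary.

import math

def energy_contour(signal, frame=64):
    """Per-frame RMS values describing the loudness contour."""
    return [math.sqrt(sum(x * x for x in signal[i:i + frame]) / frame)
            for i in range(0, len(signal), frame)]

def impose_contour(target, contour, frame=64):
    """Rescale each frame of `target` to follow the given RMS contour."""
    out = []
    for f, rms in enumerate(contour):
        chunk = target[f * frame:(f + 1) * frame]
        cur = math.sqrt(sum(x * x for x in chunk) / frame) or 1.0  # avoid /0
        out.extend(x * rms / cur for x in chunk)
    return out
```

For example, imposing the decaying contour of a plucked note on a steady tone makes the tone "pluck" while keeping its own timbre; blending contours instead of replacing them gives the intermediate morphs the text describes.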