“Spatial Audio - Practical Master Guide” is a free online course on spatial audio content creation. The target audience is people with basic knowledge of audio production who are not necessarily dedicated experts in the underlying technologies and aesthetics. “Spatial Audio - Practical Master Guide” will be released on the Acoucou platform chapter by chapter throughout Spring 2025. Some course content is already available as a preview.
The course comprises a variety of audio examples and interactive content that allow learners to develop their skills in a playful manner. It covers the entire spectrum, from psychoacoustics through the underlying technologies to delivery formats. The course’s highlights are the 14 case studies and step-by-step guides that provide behind-the-scenes information. Many of the course components are self-contained, so they can be used in isolation or integrated into other educational contexts.
The workshop on “Spatial Audio - Practical Master Guide” will provide an overview of the course contents, and we will explain the educational concepts on which the course is based. We will demonstrate the look and feel of the course on the Acoucou platform by walking through a set of representative examples from the courseware, and we will give the audience the opportunity to experience it themselves. The workshop will wrap up with a discussion of the contexts in which the course contents may be useful besides self-study.
Course contents:
Chapter 1: Overview (introduction, history of spatial audio, evolution of aesthetics in spatial audio)
Chapter 2: Psychoacoustics (spatial hearing, perception of reverberation)
Chapter 3: Reproduction (loudspeaker arrays, headphones)
Chapter 4: Capture (microphone arrays)
Chapter 5: Ambisonics (capture, reproduction, editing of ambisonic content)
Chapter 6: Storing spatial audio content
Chapter 7: Delivery formats
Case studies: Dolby Atmos truck streaming, fulldome, icosahedral loudspeaker, spatial audio sound installation, spatial audio at Friedrichstadt Palast, spatial audio in the health industry, live music performance with spatial audio, spatial audio in automotive
Step-by-step guides: setting up your spatial audio workstation, channel-based production for music, Dolby Atmos mix for cinema, Ambisonics sound production for 360° film, build your own ambisonic microphone array, interactive spatial audio
UWB as an RF protocol is heavily used by handset manufacturers for device-location applications. As a transport option, UWB offers tremendous possibilities for professional audio use cases, which also demand low latency to meet real-time requirements. These applications include digital wireless microphones and in-ear monitors (IEMs). When used for live performances, UWB-enabled devices can deliver a total latency low enough to carry audio from the microphone to the front-of-house mixer and back to the performers’ IEMs without a noticeable delay.
UWB is progressing as an audio standard within the AES, and its first iteration targets live performance applications. Issues relating to body blocking at the operating frequencies (6.5/8 GHz), as well as clocking challenges that could result in dropped packets, have been addressed to ensure a stable, reliable link. This workshop will outline how UWB is capable of delivering a low-latency link while providing up to 10 Mbit/s of data throughput for high-resolution (24-bit/96 kHz) linear PCM audio.
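To put that figure in context (assuming the quoted throughput is on the order of 10 Mbit/s), the raw data rate of uncompressed 24-bit/96 kHz linear PCM is easy to work out. The snippet below is a back-of-the-envelope sketch, not a description of any particular UWB implementation:

```python
def pcm_bitrate_mbps(bit_depth=24, sample_rate=96_000, channels=1):
    """Raw (uncompressed) linear PCM data rate in Mbit/s."""
    return bit_depth * sample_rate * channels / 1e6

mic_feed = pcm_bitrate_mbps(channels=1)          # ~2.3 Mbit/s mono microphone
iem_feed = pcm_bitrate_mbps(channels=2)          # ~4.6 Mbit/s stereo IEM return
print(mic_feed, iem_feed, mic_feed + iem_feed)   # ~6.9 Mbit/s total, within a ~10 Mbit/s link
```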
The progression of UWB for audio is seeing the launch of high-end devices supported by several RF wireless vendors. This workshop will dive into the options open to device manufacturers who are considering UWB for their next-generation product roadmaps.
Sound synthesis is a key part of modern music and audio production. Whether you are a producer, composer, or just curious about how electronic sounds are made, this workshop will break it down in a simple and practical way.
We will explore essential synthesis techniques like subtractive, additive, FM, wavetable, and granular synthesis. You will learn how different synthesis methods create and shape sound, and see them in action through live demonstrations using both hardware and virtual synthesizers, including emulators of legendary studio equipment.
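To give a flavour of how little it takes to get one of these methods making sound, here is a minimal two-operator FM sketch in Python; the function name, ratio, and envelope are illustrative and not part of the workshop material:

```python
import numpy as np

def fm_tone(f_carrier=220.0, ratio=2.0, index=3.0, dur=1.0, fs=48000):
    """Two-operator FM: a modulator at f_carrier * ratio modulates the
    carrier's phase; 'index' controls how bright/metallic the result is."""
    t = np.arange(int(dur * fs)) / fs
    mod = np.sin(2 * np.pi * f_carrier * ratio * t)       # modulator oscillator
    y = np.sin(2 * np.pi * f_carrier * t + index * mod)   # phase-modulated carrier
    return y * np.exp(-3 * t)                              # simple decay envelope
```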
This session is designed for everyone — whether you are a total beginner or an experienced audio professional looking for fresh ideas. You will leave with a solid understanding of synthesis fundamentals and the confidence to start creating your own unique sounds. Join us for an interactive, hands-on introduction to the world of sound synthesis!
Digital filters are often used to model or equalize acoustic or electroacoustic transfer functions. Applications include headphone, loudspeaker, and room equalization, or modeling the radiation of musical instruments for sound synthesis. As the final judge of quality is the human ear, filter design should take into account the quasi-logarithmic frequency resolution of the auditory system. This tutorial presents various approaches for achieving this goal, including warped FIR and IIR, Kautz, and fixed-pole parallel filters, and discusses their differences and similarities. Examples will include loudspeaker and room equalization applications, and the equalization of a spherical loudspeaker array. The effect of quantization noise arising in real-world applications will also be considered.
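As one concrete illustration of the fixed-pole parallel approach, the sketch below fits a bank of second-order sections, whose pole frequencies are spaced logarithmically to mimic the ear's quasi-logarithmic resolution, to a target impulse response by linear least squares in the time domain. It is a minimal sketch with assumed parameter choices (pole spacing, pole radius), not the tutorial's reference implementation:

```python
import numpy as np
from scipy.signal import lfilter

def parallel_filter_fit(h, fs, n_sections=16, pole_radius=0.98):
    """Fit a fixed-pole parallel bank of second-order sections to a target
    impulse response h by linear least squares (illustrative sketch)."""
    N = len(h)
    # Logarithmically spaced pole frequencies (assumed range: 30 Hz to 0.45*fs)
    f_poles = np.geomspace(30.0, 0.45 * fs, n_sections)
    theta = 2 * np.pi * f_poles / fs

    delta = np.zeros(N)
    delta[0] = 1.0
    cols, denominators = [], []
    for th in theta:
        a = [1.0, -2 * pole_radius * np.cos(th), pole_radius ** 2]
        denominators.append(a)
        u = lfilter([1.0], a, delta)                   # impulse response of 1/A_k(z)
        cols.append(u)                                 # weighted by b_{k,0}
        cols.append(np.concatenate(([0.0], u[:-1])))   # weighted by b_{k,1}
    cols.append(delta)                                 # optional direct path d0
    M = np.column_stack(cols)

    # Least-squares solve for all numerator coefficients at once
    x, *_ = np.linalg.lstsq(M, h, rcond=None)
    return x, denominators, M @ x   # coefficients, fixed denominators, fitted IR
```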
In today's era, 3D audio enables us to craft sounds akin to how composers have created sonic landscapes with orchestras for centuries. We achieve significantly higher spatial precision than conventional stereo thanks to advanced loudspeaker setups like 7.1.4 and 9.1.6. This means that sounds become sharper, more vivid and tangible, and thus more plausible – like the transition from HD to 8K in the visual realm, yielding an image virtually indistinguishable from looking out of a window.
In the first part of his contribution, Lasse Nipkow introduces a specialized microphone technique that captures instruments in space as if the musicians were right in front of us. This forms the basis for capturing the unique timbres of the instruments while ensuring that the sounds remain as pure as possible for the mix.
In the second part of his contribution, Nipkow elucidates the parallels between classical orchestras and modern pop or singer-songwriter productions. He demonstrates how composers of yesteryear shaped their sounds for concert performances – much as we do in the studio today with double tracking. Using sound examples, he illustrates how sounds can establish an auditory connection between loudspeakers, creating a unified body of sound distinct from individual instruments that stand out on their own.
Since 2010, Lasse Nipkow has been a renowned keynote speaker in the field of 3D audio music production. His seminars and conference presentations, both online and offline, have gained significant popularity. As one of the leading experts in Europe, he provides comprehensive...
Thursday May 22, 2025, 10:45am - 11:45am CEST, Room C4, ATM Studio, Warsaw, Poland
Everybody knows that music with electronic elements exists. Most of us are aware of the synthesis standing behind it. But the moment I start asking about what's under the hood, the majority of the audience starts running for their lives. Which is rather sad for me, because learning synthesis could be among the greatest journeys you take in your life. And I want to back those words up in my workshop.
Let's talk about what exactly synthesis is, and what it is not. Let's talk about the building blocks of a basic subtractive setup. We will track all the knobs, buttons and sliders, down to every single cable under the front panel, simply to see which "valve" and "motor" is controlled by which knob, and how it sounds.
I also want to make you feel safe about modular setups, because once you understand the basic blocks, you understand modular synthesis. Just like building from bricks!
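For readers who like to peek at those "valves" and "motors" in code before touching hardware, here is a minimal subtractive voice in Python (oscillator into low-pass filter into amplitude envelope); all parameter values are illustrative only:

```python
import numpy as np
from scipy.signal import butter, lfilter, sawtooth

def subtractive_voice(f0=110.0, cutoff=1200.0, dur=1.0, fs=48000):
    """Minimal subtractive voice: a harmonically rich sawtooth oscillator,
    a low-pass 'filter module', and a decaying 'VCA' envelope."""
    t = np.arange(int(dur * fs)) / fs
    osc = sawtooth(2 * np.pi * f0 * t)        # VCO: bright raw waveform
    b, a = butter(2, cutoff / (fs / 2))       # VCF: remove upper harmonics
    filtered = lfilter(b, a, osc)
    env = np.exp(-4 * t)                      # EG -> VCA: shapes loudness over time
    return filtered * env
```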
One day Chet Atkins was playing guitar when a woman approached him. She said, "That guitar sounds beautiful". Chet immediately quit playing. Staring her in the eyes, he asked, "How does it sound now?" The quality of the sound in Chet’s case clearly rested with the player, not the instrument, and the quality of our product ultimately lies with us as engineers and producers, not with the gear we use. The dual significance of this question, “How does it sound now”, informs our discussion, since it addresses both the engineer as the driver and the changes we have seen and heard as our business and methodology have evolved through the decades. Let’s start by exploring the methodology employed by the most successful among us when confronted with new and evolving technology. How do we retain quality and continue to create a product that conforms to our own high standards? This may lead to other conversations about the musicians we work with, the consumers we serve, and the differences and similarities between their standards and our own. How high should your standards be? How should it sound now? How should it sound tomorrow?
Wireless audio, both mics and in-ear-monitors, has become essential in many live productions of music and theatre, but it is often fraught with uneasiness and uncertainty. The panel of presenters will draw on their varied experience and knowledge to show how practitioners can use best engineering practices to ensure reliability and performance of their wireless mic and in-ear-monitor systems.
I'm a fellow of the AES, an RF and electronics geek, and a live audio specialist, especially in both amateur and professional theater. My résumé includes Sennheiser, ARRL, and a 27-year-long tenure at QSC. Now I help live audio practitioners up their wireless mic and IEM game. I play...
Thursday May 22, 2025, 11:45am - 12:45pm CEST, Hall F, ATM Studio, Warsaw, Poland
This tutorial proposal presents a comprehensive exploration of spatial audio recording methodologies applied to the unique challenges of documenting Eastern Orthodox liturgical music in monumental acoustic environments. Centered on a recent project at the Church of the Assumption of the Blessed Virgin Mary and St. Joseph in Warsaw, Poland, the session dissects the technical and artistic decisions behind capturing the Męski Zespół Muzyki Cerkiewnej (Male Ensemble of Orthodox Music) “Katapetasma.” The repertoire—spanning 16th-century monodic irmologions, Baroque-era folk chant collections, and contemporary compositions—demanded innovative approaches to balance clarity, spatial immersion, and the venue’s 5-second reverberation time.

Attendees will gain insight into hybrid microphone techniques tailored for immersive formats (Dolby Atmos, Ambisonics) and stereo reproduction. The discussion focuses on the strategic deployment of a Decca Tree core augmented by an AMBEO array, height channels, a Faulkner Pair for mid-depth detail, ambient side arrays, and spaced AB ambient pairs to capture the room’s decay. Particular emphasis is placed on reconciling close-miking strategies (essential for textual clarity in melismatic chants) with distant arrays that preserve the sacred space’s acoustic identity. The tutorial demonstrates how microphone placement—addressing both the choir’s position and the building’s 19th-century vaulted architecture—became critical in managing comb filtering and low-frequency buildup.

Practical workflow considerations include:
- Real-time monitoring of spatial imaging through multiple microphone and loudspeaker configurations
- Phase coherence management between spot microphones and room arrays
- Post-production techniques for maintaining vocal intimacy within vast reverberant fields

Case studies compare results from the Decca/AMBEO hybrid approach against traditional spaced omni configurations, highlighting tradeoffs between localization precision and spatial envelopment. The session also addresses the psychoacoustic challenges of recording small choral ensembles in reverberant spaces, where transient articulation must coexist with diffuse sustain.
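As a small worked example of the phase-coherence point above: time-aligning a spot microphone to the main array largely comes down to converting the extra acoustic path length into a delay. The helper below is a generic sketch (function name and numbers are illustrative), not the production's actual workflow:

```python
def spot_mic_alignment(distance_m, fs=96000, c=343.0):
    """Delay a spot microphone so it lines up with the main array,
    given the extra acoustic path length (in metres) to the mains."""
    delay_s = distance_m / c
    return delay_s, round(delay_s * fs)      # seconds, and whole samples at fs

print(spot_mic_alignment(6.5))               # ~0.019 s, ~1819 samples at 96 kHz
```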
This presentation focuses on side and rear channels in Dolby Atmos recordings. At present, there is no standardised placement for side or rear speakers, which can result in poor localisation in a major portion of the listening area. Sometimes side speakers are at 90° off the centre axis, sometimes up to 110° off axis. Similarly, rear speakers can be anywhere from 120° to 135° off axis; in cinemas they can be located directly behind the listener(s). However, an Atmos speaker bed assumes a fixed placement of these side and rear speakers, resulting in inconsistent imaging. Additionally, placing side and rear speakers further off-axis results in a larger gap between them and the front speakers.
These inconsistencies can be minimised by placing these objects at specific virtual locations while avoiding the fixed speaker bed. This ensures a listening experience that better represents what the mix engineer intended. Additionally, reverb feeds can also be sent as objects to create an illusion of further depth. Finally, these additional objects can be fine-tuned for binaural rendering by use of Near/Mid/Far controls.
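As a purely illustrative sketch of pinning a "side" or "rear" image to a virtual location rather than to whatever speaker happens to sit nearby, the snippet below maps a target azimuth onto the walls of a normalized square room with the listener at the centre. The coordinate convention used here (x: -1 left to +1 right, y: +1 front to -1 rear) is an assumption for illustration, not a statement of any renderer's object format:

```python
import numpy as np

def azimuth_to_room_xy(azimuth_deg):
    """Map a target azimuth (0 deg = front, positive = towards the right)
    to a point on the walls of a normalized square room, listener at centre.
    NOTE: the x/y convention is assumed for illustration only; check your
    renderer's documentation before reusing it."""
    az = np.deg2rad(azimuth_deg)
    dx, dy = np.sin(az), np.cos(az)       # unit vector towards the virtual source
    scale = 1.0 / max(abs(dx), abs(dy))   # push it out to the nearest wall
    return dx * scale, dy * scale

# e.g. an object held at 100 degrees off-centre, independent of speaker layout:
print(azimuth_to_room_xy(100.0))          # -> (1.0, ~-0.18)
```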
Mr. Bowles will demonstrate these techniques in an immersive playback session.
David v.R Bowles formed Swineshead Productions, LLC as a classical recording production company in 1995. His recordings have been GRAMMY- and JUNO-nominated and critically acclaimed worldwide. His releases in 3D Dolby Atmos can be found on the Avie, OutHere Music (Delos) and Navona labels. Mr...
Thursday May 22, 2025, 2:45pm - 3:30pm CEST, Room C4, ATM Studio, Warsaw, Poland
Microphones are the very first link in the recording chain, so it’s important to understand them in order to use them effectively. This presentation will explain the differences between the various types of microphones; it will cover polar patterns and directivity, the proximity effect, relative recording distances, and a little about room acoustics. Many of these “golden nuggets” helped me greatly when I first understood them, and I hope they will help you too.
We will look at the different microphone types – dynamic (moving-coil), ribbon and capacitor microphones, as well as boundary and line-array microphones. We will look at polar patterns and how they are derived, at relative recording distances, and a little at understanding room acoustics. All of this will help you choose the best microphone for what you want to do and decide how best to use it.
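As a taste of how polar patterns are derived: every first-order pattern can be written as a weighted mix of a pressure (omnidirectional) component and a pressure-gradient (figure-of-eight) component. The snippet below is an illustrative sketch using the usual textbook weightings:

```python
import numpy as np

def first_order_pattern(theta_deg, a):
    """First-order polar pattern: r(theta) = a + (1 - a) * cos(theta).
    a = 1 -> omni (pure pressure), a = 0 -> figure-of-eight (pure gradient),
    a = 0.5 -> cardioid, a = 0.25 -> hypercardioid."""
    th = np.deg2rad(theta_deg)
    return a + (1.0 - a) * np.cos(th)

for name, a in [("omni", 1.0), ("cardioid", 0.5), ("figure-8", 0.0)]:
    print(name, round(first_order_pattern(180.0, a), 2))   # sensitivity at the rear
```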
John Willett is the Managing Director of Sound-Link ProAudio Ltd., the official UK distributor for Microtech Gefell microphones, ME-Geithain studio monitors, and HUM Audio Devices ribbon microphones (as well as the LAAL – Look Ahead Analogue Limiter, the N-Trophy mixing console...
Thursday May 22, 2025, 2:45pm - 3:45pm CEST, Room C3, ATM Studio, Warsaw, Poland
Tutorial: Capturing Your Prosumers
This session breaks down how top brands like Samsung, Apple, and Slack engage professional and semi-professional buyers. Attendees will gain concrete strategies and psychological insights they can use to boost customer retention and revenue.
Format: 1-Hour Session
Key Takeaways:
- Understand the psychology behind purchasing decisions of prosumers, drawing on our access to insights from over 300 million global buyers
- Explore proven strategies to increase engagement and revenue
- Gain actionable frameworks for immediate implementation
The field of audio production is always evolving. Now that immersive audio formats are becoming more and more prominent, we should take a closer look at the possibilities they bring, from a technical but most importantly from an artistic and musical standpoint. In our workshop, "Unlocking New Dimensions: Producing Music in Immersive Audio," we demonstrate how immersive audio formats can bring an artist's vision to life and how the storytelling in the music benefits from them. In order to truly change the way people listen to music and provide an immersive experience, we must transform how we write and produce music, using immersive formats not just as a technical advancement but as a medium to create new art. In this session, we will explore the entire production process, from recording to the final mix and master, with a focus on how one can create a dynamic and engaging listening experience with immersive formats like Dolby Atmos. We believe that immersive audio is more than just a technical upgrade—it's a new creative canvas. Our goal is to show how, by fully leveraging a format like Dolby Atmos, artists and producers can create soundscapes that envelop the listener and add new dimensions to the storytelling of music.
Philosophy
Artists often feel disconnected from the immersive production process. They can rarely give input on how their music is mixed in this format, leading to results that may not fully align with their artistic vision. At High Tide, we prioritize artist involvement, ensuring they are an integral part of the process. We believe that their input is crucial for creating an immersive experience that truly represents their vision. We will share insights and examples from our collaborations with artists like Amistat, an acoustic folk duo, and Tinush, an electronic music producer known for his attention to detail. These case studies will illustrate how our method fosters creativity and produces superior immersive audio experiences.
New workflows need new tools
A significant pain point in current immersive productions is the tendency to use only a few stems, which often limits the immersive potential. This usually happens because exporting individual tracks and preparing a mixing session can be time-consuming and labor-intensive. We will address these challenges in our presentation. We have developed innovative scripts and workflows that streamline this process, allowing us to work with all available tracks without the typical hassle. This approach not only enhances the quality of the final mix but also retains the intricate details and nuances of the original recordings.

Our workshop is designed to be interactive, with opportunities for attendees to ask questions throughout. We will provide real-world insights into our Pro Tools sessions, giving participants a detailed look at our Dolby Atmos mixing process. By walking through the entire workflow, from recording with Dolby Atmos in mind to the final mix, attendees will gain a comprehensive understanding of the steps involved and the benefits of this approach in creating an engaging and immersive listening experience.
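To make the kind of automation described above concrete, here is a small, purely illustrative Python sketch (not the presenters' actual tooling) that pre-flights a folder of exported track WAVs before an immersive mixing session, flagging files whose sample rate or bit depth does not match the project settings:

```python
import wave
from pathlib import Path

def audit_stems(folder, expected_rate=48000, expected_width=3):
    """Scan exported plain-PCM WAV stems and flag mismatched formats
    before building the immersive mix session (illustrative sketch)."""
    for path in sorted(Path(folder).glob("*.wav")):
        with wave.open(str(path), "rb") as w:
            rate, width, ch = w.getframerate(), w.getsampwidth(), w.getnchannels()
        ok = (rate == expected_rate and width == expected_width)
        print(f"{path.name:40s} {rate} Hz  {8 * width}-bit  {ch}ch  {'OK' if ok else 'CHECK'}")

# audit_stems("exports/atmos_session")   # hypothetical folder name
```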
High-pass filters (HPF) in music production: do's and don'ts
This presentation aims to provide a thorough insight into the use of high-pass filters in music production. Which type, slope, and frequency settings might be more desirable for a given source or application? Are the HPFs in microphones and preamps the same? Do they serve the same purpose? Is there any rule on when to use one, the other, or both? Furthermore, HPFs are used extensively in the mixing and processing of audio signals: they are commonly applied to the sidechain signal of dynamic processors (e.g. buss compressors) and, of course, in all multiband processing. What are the benefits of this practice? Live sound reinforcement brings its own approaches to the use of HPFs. Different genres call for different production techniques; understanding the basics of this simple albeit important signal filtering process helps with its conscious implementation.
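As a minimal sketch of the sidechain use mentioned above (assumed filter type and cutoff; real compressors differ in their detector design), the high-pass filter is applied only to the detector path, so low-frequency energy no longer dominates the gain-reduction decision while the audio that actually gets compressed stays full-range:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def sidechain_hpf_detector(x, fs, cutoff=120.0, order=2):
    """Apply an HPF only to the compressor's detector (sidechain) signal.
    The returned 'level' is a deliberately simplified envelope estimate."""
    sos = butter(order, cutoff, btype="highpass", fs=fs, output="sos")
    detector = sosfilt(sos, x)     # filtered copy feeds the level detector
    level = np.abs(detector)       # simplified rectifier; real designs smooth this
    return detector, level
```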