Thursday, May 22
 

9:00am CEST

Free Online Course “Spatial Audio - Practical Master Guide”
Thursday May 22, 2025 9:00am - 10:00am CEST
“Spatial Audio - Practical Master Guide” is a free online course on spatial audio content creation. The target audience is people with a basic knowledge of audio production who are not necessarily dedicated experts in the underlying technologies and aesthetics. “Spatial Audio - Practical Master Guide” will be released on the Acoucou platform chapter by chapter throughout spring 2025. Some course content is already available as a preview.

The course comprises a variety of audio examples and interactive content that allow learners to develop their skills in a playful manner. The entire spectrum is covered, from psychoacoustics through the underlying technologies to delivery formats. The course’s highlights are the 14 case studies and step-by-step guides that provide behind-the-scenes information. Many of the course components are self-contained, so they can be used in isolation or integrated into other educational contexts.

The workshop on “Spatial Audio - Practical Master Guide” will provide an overview of the course contents, and we will explain the educational concepts the course is based on. We will show the look and feel of the course on the Acoucou platform through a set of representative examples from the courseware and give the audience the opportunity to experience it themselves. The workshop will wrap up with a discussion of the contexts in which the course contents may be useful beyond self-study.

Course contents:
Chapter 1: Overview (introduction, history of spatial audio, evolution of aesthetics in spatial audio)
Chapter 2: Psychoacoustics (spatial hearing, perception of reverberation)
Chapter 3: Reproduction (loudspeaker arrays, headphones)
Chapter 4: Capture (microphone arrays)
Chapter 5: Ambisonics (capture, reproduction, editing of ambisonic content)
Chapter 6: Storing spatial audio content
Chapter 7: Delivery formats

Case studies: Dolby Atmos truck streaming, fulldome, icosahedral loudspeaker, spatial audio sound installation, spatial audio at Friedrichstadt-Palast, spatial audio in the health industry, live music performance with spatial audio, spatial audio in automotive

Step-by-step guides: setting up your spatial audio workstation, channel-based production for music, Dolby Atmos mix for cinema, Ambisonics sound production for 360° film, build your own Ambisonic microphone array, interactive spatial audio

Links:
https://spatial-audio.acoucou.org/
https://acoucou.org/
Thursday May 22, 2025 9:00am - 10:00am CEST
Hall F ATM Studio Warsaw, Poland

9:00am CEST

The Advance of UWB for High Quality and Low Latency Audio
Thursday May 22, 2025 9:00am - 10:00am CEST
UWB as an RF protocol is heavily used by handset manufacturers for device-location applications. As a transport option, UWB offers tremendous possibilities for professional audio use cases that also require low latency for real-time requirements. These applications include digital wireless microphones and in-ear monitors (IEMs). When used for live performances, UWB-enabled devices can deliver a total latency low enough to cover the path from mic to front-of-house mixer and back to the performer's IEMs without a noticeable delay.

UWB is progressing as an audio standard within the AES, and its first iteration targeted live performance applications. Issues relating to body blocking at the frequencies used (6.5/8 GHz), as well as clocking challenges that could result in dropped packets, have been addressed to ensure a stable, reliable link. This workshop will outline how UWB is capable of delivering a low-latency link providing up to 10 MHz of data throughput for hi-res (24-bit/96 kHz) linear PCM audio.

The progression of UWB for audio is seeing the launch of high-end devices supported by several RF wireless vendors. This workshop will dive into the options open to device manufacturers who are considering UWB for their next-generation product roadmaps.
Speakers

Jonathan McClintock

Audio Codecs Ltd
Thursday May 22, 2025 9:00am - 10:00am CEST
C3 ATM Studio Warsaw, Poland

9:00am CEST

Awarded albums of Jakub Józef Orliński (#letsbarock and Orfeo ed Euridice) deconstructed by sound supervisor and music producer - Mateusz Banasiuk
Thursday May 22, 2025 9:00am - 10:30am CEST
C4 ATM Studio Warsaw, Poland

9:30am CEST

Sound Synthesis 101: An Introduction To Sound Creation
Thursday May 22, 2025 9:30am - 11:00am CEST
Sound synthesis is a key part of modern music and audio production. Whether you are a producer, composer, or just curious about how electronic sounds are made, this workshop will break it down in a simple and practical way.

We will explore essential synthesis techniques like subtractive, additive, FM, wavetable, and granular synthesis. You will learn how different synthesis methods create and shape sound, and see them in action through live demonstrations using both hardware and virtual synthesizers, including emulators of the legendary studio equipment.
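As a taste of the subtractive approach mentioned above, here is a minimal sketch in Python (NumPy/SciPy; the pitch, cutoff, and envelope values are illustrative choices, not settings from the workshop) of the classic signal path: oscillator, filter, amplifier.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100                       # sample rate (Hz)
f0 = 110.0                       # oscillator pitch (A2)
t = np.arange(fs) / fs           # one second of time stamps

# Oscillator: a naive sawtooth, deliberately rich in harmonics
saw = 2.0 * (t * f0 - np.floor(0.5 + t * f0))

# Filter: a low-pass removes upper harmonics -- the "subtractive" step
b, a = butter(2, 800.0 / (fs / 2), btype="low")
filtered = lfilter(b, a, saw)

# Amplifier: a simple attack/decay envelope shapes the loudness contour
env = np.minimum(t / 0.01, 1.0) * np.exp(-3.0 * t)
voice = filtered * env
```

Swapping the oscillator waveform or modulating the cutoff over time already reproduces much of what hardware and software synths demonstrate live.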

This session is designed for everyone — whether you are a total beginner or an experienced audio professional looking for fresh ideas. You will leave with a solid understanding of synthesis fundamentals and the confidence to start creating your own unique sounds. Join us for an interactive, hands-on introduction to the world of sound synthesis!
Speakers
Thursday May 22, 2025 9:30am - 11:00am CEST
C1 ATM Studio Warsaw, Poland

10:15am CEST

Logarithmic frequency resolution filter design with applications to loudspeaker and room equalization
Thursday May 22, 2025 10:15am - 11:15am CEST
Digital filters are often used to model or equalize acoustic or electroacoustic transfer functions. Applications include headphone, loudspeaker, and room equalization, or modeling the radiation of musical instruments for sound synthesis. As the final judge of quality is the human ear, filter design should take into account the quasi-logarithmic frequency resolution of the auditory system. This tutorial presents various approaches for achieving this goal, including warped FIR and IIR, Kautz, and fixed-pole parallel filters, and discusses their differences and similarities. Examples will include loudspeaker and room equalization applications, and the equalization of a spherical loudspeaker array. The effect of quantization noise arising in real-world applications will also be considered.
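As background for the warped designs mentioned above: warping replaces every unit delay of an ordinary filter with a first-order allpass, so the design grid follows an approximately logarithmic (Bark-like) frequency scale. The coefficient formula below is the Smith–Abel approximation from the literature, quoted here as context rather than taken from the tutorial itself:

```latex
\tilde{z}^{-1} = \frac{z^{-1} - \lambda}{1 - \lambda z^{-1}}, \qquad -1 < \lambda < 1
% Bark-scale warping coefficient (Smith & Abel approximation):
\lambda(f_s) \approx 1.0674\,\sqrt{\tfrac{2}{\pi}\,\arctan\bigl(0.06583\, f_s/\mathrm{kHz}\bigr)} - 0.1916
```

This gives λ ≈ 0.76 at 44.1 kHz; setting λ = 0 recovers the ordinary uniform-resolution filter.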
Speakers
Thursday May 22, 2025 10:15am - 11:15am CEST
C3 ATM Studio Warsaw, Poland

10:45am CEST

Sound Aesthetics for Impressive 3D Audio Productions
Thursday May 22, 2025 10:45am - 11:45am CEST
In today's era, 3D audio enables us to craft sounds much as composers have created sonic landscapes with orchestras for centuries. Thanks to advanced loudspeaker setups like 7.1.4 and 9.1.6, we achieve significantly higher spatial precision than conventional stereo. This means that sounds become sharper, more tangible, and thus more plausible – like the transition from HD to 8K in the visual realm, yielding an image virtually indistinguishable from looking out of a window.

In the first part of his contribution, Lasse Nipkow introduces a specialized microphone technique that captures instruments in space as if the musicians were right in front of us. This forms the basis for capturing the unique timbres of the instruments while ensuring that the sounds remain as pure as possible for the mix.

In the second part of his contribution, Nipkow elucidates the parallels between classical orchestras and modern pop or singer-songwriter productions. He demonstrates how composers of yesteryear shaped their sounds for concert performances – like our studio practices today with double tracking. Using sound examples, he illustrates how sounds can establish an auditory connection between loudspeakers, thus creating a sound body distinct from individual instruments that stand out solitarily.
Speakers
avatar for Lasse Nipkow

Lasse Nipkow

CEO, Silent Work LLC
Since 2010, Lasse Nipkow has been a renowned keynote speaker in the field of 3D audio music production. His expertise spans from seminars to conferences, both online and offline, and has gained significant popularity. As one of the leading experts in Europe, he provides comprehensive... Read More →
Thursday May 22, 2025 10:45am - 11:45am CEST
C4 ATM Studio Warsaw, Poland

11:15am CEST

Don't run! It's just a synthesizer
Thursday May 22, 2025 11:15am - 12:45pm CEST
Everybody knows that music with electronic elements exists. Most of us are aware of the synthesis standing behind it. But the moment I start asking what's under the hood, the majority of the audience starts to run for their lives. Which is rather sad for me, because learning synthesis could be among the greatest journeys you could take in your life. And I want to back those words up in my workshop.

Let's talk and see what exactly synthesis is, and what it is not. Let's talk about the building blocks of a basic subtractive setup. We will track all the knobs, buttons, and sliders, down to every single cable under the front panel. Simply to see which "valve" and "motor" is controlled by which knob. And how it sounds.

I also want to make you feel safe about modular setups, because when you understand the basic blocks - you understand the modular synthesis. Just like building from bricks!
Thursday May 22, 2025 11:15am - 12:45pm CEST
C1 ATM Studio Warsaw, Poland

11:45am CEST

How Does It Sound Now? The Evolution of Audio
Thursday May 22, 2025 11:45am - 12:45pm CEST
One day Chet Atkins was playing guitar when a woman approached him. She said, "That guitar sounds beautiful". Chet immediately quit playing. Staring her in the eyes he asked, "How does it sound now?"
The quality of the sound in Chet’s case clearly rested with the player, not the instrument, and the quality of our product ultimately lies with us as engineers and producers, not with the gear we use. The dual significance of this question, “How does it sound now”, informs our discussion, since it addresses both the engineer as the driver and the changes we have seen and heard as our business and methodology have evolved through the decades.
Let’s start by exploring the methodology employed by the most successful among us when confronted with new and evolving technology. How do we retain quality and continue to create a product that conforms to our own high standards? This may lead to other conversations about the musicians we work with, the consumers we serve, and the differences and similarities between their standards and our own. How high should your standards be? How should it sound now? How should it sound tomorrow?
Speakers
Thursday May 22, 2025 11:45am - 12:45pm CEST
C3 ATM Studio Warsaw, Poland

11:45am CEST

Best practices for wireless audio in live production
Thursday May 22, 2025 11:45am - 12:45pm CEST
Wireless audio, both mics and in-ear-monitors, has become essential in many live productions of music and theatre, but it is often fraught with uneasiness and uncertainty. The panel of presenters will draw on their varied experience and knowledge to show how practitioners can use best engineering practices to ensure reliability and performance of their wireless mic and in-ear-monitor systems.
Speakers
avatar for Bob Lee

Bob Lee

Applications Engineer / Trainer, RF Venue, Inc.
I'm a Fellow of the AES, an RF and electronics geek, and a live audio specialist, especially in both amateur and professional theater. My résumé includes Sennheiser, ARRL, and a 27-year tenure at QSC. Now I help live audio practitioners up their wireless mic and IEM game. I play... Read More →
Thursday May 22, 2025 11:45am - 12:45pm CEST
Hall F ATM Studio Warsaw, Poland

12:00pm CEST

Advanced Spatial Recording Techniques for Chamber Orthodox-Choir Music in Monumental Acoustics
Thursday May 22, 2025 12:00pm - 12:25pm CEST
This tutorial presents a comprehensive exploration of spatial audio recording methodologies applied to the unique challenges of documenting Eastern Orthodox liturgical music in monumental acoustic environments. Centered on a recent project at the Church of the Assumption of the Blessed Virgin Mary and St. Joseph in Warsaw, Poland, the session dissects the technical and artistic decisions behind capturing the Męski Zespół Muzyki Cerkiewnej (Male Ensemble of Orthodox Music) “Katapetasma.” The repertoire—spanning 16th-century monodic irmologions, Baroque-era folk chant collections, and contemporary compositions—demanded innovative approaches to balance clarity, spatial immersion, and the venue’s 5-second reverberation time.
Attendees will gain insight into hybrid microphone techniques tailored for immersive formats (Dolby Atmos, Ambisonics) and stereo reproduction. The discussion focuses on the strategic deployment of a Decca Tree core augmented by an AMBEO array, height channels, a Faulkner Pair for mid-depth detail, ambient side arrays, and spaced AB ambient pairs to capture the room’s decay. Particular emphasis is placed on reconciling close-miking strategies (essential for textual clarity in melismatic chants) with distant arrays that preserve the sacred space’s acoustic identity. The tutorial demonstrates how microphone placement—addressing both the choir’s position and the building’s 19th-century vaulted architecture—became critical in managing comb filtering and low-frequency buildup.
Practical workflow considerations include:
Real-time monitoring of spatial imaging through multiple microphone and loudspeaker configurations
Phase coherence management between spot microphones and room arrays
Post-production techniques for maintaining vocal intimacy within vast reverberant fields
Case studies compare results from the Decca/AMBEO hybrid approach against traditional spaced omni configurations, highlighting tradeoffs between localization precision and spatial envelopment. The session also addresses the psychoacoustic challenges of recording small choral ensembles in reverberant spaces, where transient articulation must coexist with diffuse sustain.
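One of the workflow points above, coherence between spot microphones and the room arrays, often starts from a simple time-of-flight calculation. A minimal sketch in Python (the distances and the helper name are illustrative, not figures from the session):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def alignment_delay_ms(spot_to_source_m: float, main_to_source_m: float) -> float:
    """Delay (in ms) to add to a spot microphone so its signal arrives
    together with the main array's pickup of the same source."""
    extra_path_m = main_to_source_m - spot_to_source_m
    return 1000.0 * extra_path_m / SPEED_OF_SOUND

# Example: choir 8 m from the Decca Tree, spot mic 1 m from the singers
delay = alignment_delay_ms(1.0, 8.0)   # about 20.4 ms
```

In practice the computed value is only a starting point; engineers fine-tune by ear to trade comb filtering against apparent distance, especially in a 5-second reverberant field.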
Speakers
avatar for Pawel Malecki

Pawel Malecki

Profesor, AGH University of Krakow
Thursday May 22, 2025 12:00pm - 12:25pm CEST
C4 ATM Studio Warsaw, Poland

2:45pm CEST

Objects and Layers: Creating a Sense of Depth in Atmos recordings
Thursday May 22, 2025 2:45pm - 3:30pm CEST
This presentation focuses on side and rear channels in Dolby Atmos recordings. At present, there is no standardised placement for side or rear speakers, which can result in poor localisation in a major portion of the listening area. Sometimes side speakers are at 90° off the centre axis, sometimes up to 110° off axis. Similarly, rear speakers can be anywhere from 120° to 135° off axis; in cinemas they can be located directly behind the listener(s). However, an Atmos speaker bed assumes a fixed placement of these side and rear speakers, resulting in inconsistent imaging. Additionally, placing side and rear speakers further off axis leaves a larger gap between them and the front speakers.

These inconsistencies can be minimised by placing these objects at specific virtual locations while avoiding the fixed speaker bed. This ensures a listening experience that better represents what the mix engineer intended. Additionally, reverb feeds can be sent as objects to create an illusion of further depth. Finally, these additional objects can be fine-tuned for binaural rendering using Near/Mid/Far controls.

Mr. Bowles will demonstrate these techniques in an immersive playback session.
Speakers
avatar for David Bowles

David Bowles

Owner, Swineshead Productions, LLC
David v.R Bowles formed Swineshead Productions, LLC as a classical recording production company in 1995. His recordings have been GRAMMY- and JUNO-nominated and critically acclaimed worldwide. His releases in 3D Dolby Atmos can be found on Avie, OutHere Music (Delos) and Navona labels. Mr... Read More →
Thursday May 22, 2025 2:45pm - 3:30pm CEST
C4 ATM Studio Warsaw, Poland

2:45pm CEST

The Ins and Outs of Microphones
Thursday May 22, 2025 2:45pm - 3:45pm CEST
Microphones are the very first link in the recording chain, so it’s important to understand them to use them effectively. This presentation will explain the differences between the various types of microphones; it will cover polar patterns and directivity, the proximity effect, relative recording distances, and a little about room acoustics. Many of these “golden nuggets” helped me greatly when I first understood them, and I hope they will help you too.

We will look at the different microphone types – moving-coil dynamic, ribbon, and capacitor microphones, as well as boundary and line-array microphones. We will look at polar patterns and how they are derived. We will also look at relative recording distances and a little about understanding room acoustics – all to help you choose the best microphone for what you want to do, and how best to use it.
Speakers
avatar for John Willett

John Willett

Director, Sound-Link ProAudio Ltd.
John Willett is the Managing Director of Sound-Link ProAudio Ltd. who are the official UK distributors for Microtech Gefell microphones, ME-Geithain studio monitors, HUM Audio Devices ribbon microphones (as well as the LAAL – Look Ahead Analogue Limiter, the N-Trophy mixing console... Read More →
Thursday May 22, 2025 2:45pm - 3:45pm CEST
C3 ATM Studio Warsaw, Poland

2:45pm CEST

Tutorial: Capturing Your Prosumers
Thursday May 22, 2025 2:45pm - 3:45pm CEST
This session breaks down how top brands like Samsung, Apple, and Slack engage professional and semi-professional buyers. Attendees will gain concrete strategies and psychological insights they can use to boost customer retention and revenue.

Format: 1-Hour Session
Key Takeaways:
- Understand the psychology behind purchasing decisions of prosumers, drawing on our access to insights from over 300 million global buyers
- Explore proven strategies to increase engagement and revenue
- Gain actionable frameworks for immediate implementation
Speakers
Thursday May 22, 2025 2:45pm - 3:45pm CEST
C2 ATM Studio Warsaw, Poland

3:45pm CEST

The Records of Gaitan: Restoring the long silenced voice of an important political figure in Colombian History.
Thursday May 22, 2025 3:45pm - 4:45pm CEST
Speakers
Thursday May 22, 2025 3:45pm - 4:45pm CEST
C4 ATM Studio Warsaw, Poland

5:00pm CEST

Getting the most out of your immersive production
Thursday May 22, 2025 5:00pm - 6:00pm CEST
The field of audio production is always evolving. Now with immersive audio formats becoming more and more prominent, we should have a closer look at what possibilities come with it from a technical but most importantly from an artistic and musical standpoint.
In our Workshop, "Unlocking New Dimensions: Producing Music in Immersive Audio," we demonstrate how immersive audio formats can bring an artist's vision to life and how the storytelling in the music benefits from them.
In order to truly change the way people listen to music and provide an immersive experience, we must transform how we write and produce music, using immersive formats not just as a technical advancement but as a medium to create new art.
In this session, we will explore the entire production process, from recording to the final mix and master, with a focus on how one can create a dynamic and engaging listening experience with immersive formats like Dolby Atmos. We believe that immersive audio is more than just a technical upgrade—it's a new creative canvas. Our goal is to show how, by fully leveraging a format like Dolby Atmos, artists and producers can create soundscapes that envelop the listener and add new dimensions to the storytelling of music.

Philosophy

Artists often feel disconnected from the immersive production process. They rarely can give input on how their music is mixed in this format, leading to results that may not fully align with their artistic vision. At High Tide, we prioritize artist involvement, ensuring they are an integral part of the process. We believe that their input is crucial for creating an immersive experience that truly represents their vision. We will share insights and examples from our collaborations with artists like Amistat, an acoustic folk duo, and Tinush, an electronic music producer known for his attention to detail. These case studies will illustrate how our method fosters creativity and produces superior immersive audio experiences.

New workflows need new tools

A significant pain point in current immersive productions is the tendency to use only a few stems, which often limits the immersive potential. This often happens because the process of exporting individual tracks and preparing a mixing session can be time-consuming and labor-intensive. We will address these challenges in our presentation. We have developed innovative scripts and workflows that streamline this process, allowing us to work with all available tracks without the typical hassle. This approach not only enhances the quality of the final mix but also retains the intricate details and nuances of the original recordings.
Our workshop is designed to be interactive, with opportunities for attendees to ask questions throughout. We will provide real-world insights into our ProTools sessions, giving participants a detailed look at our Dolby Atmos mixing process. By walking through the entire workflow, from recording with Dolby Atmos in mind to the final mix, attendees will gain a comprehensive understanding of the steps involved and the benefits of this approach to create an engaging and immersive listening experience.
Speakers
avatar for Lennart Damann

Lennart Damann

Founder / Engineer, High Tide - Immersive Audio
avatar for Benedikt Ernst

Benedikt Ernst

High Tide - Immersive Audio
Thursday May 22, 2025 5:00pm - 6:00pm CEST
C4 ATM Studio Warsaw, Poland

5:15pm CEST

High Pass everything! or not?
Thursday May 22, 2025 5:15pm - 6:00pm CEST
High-pass filters (HPF) in music production: do's and don'ts.
This presentation aims to bring a thorough insight into the use of high-pass filters in music production. Which type, slope, and frequency settings could be more desirable for a given source or application?
Are the HPFs in microphones and preamps the same? Do they serve the same purpose? Is there any rule on when to use one, the other, or both? Furthermore, HPFs are used extensively in the mixing and processing of audio signals: an HPF is commonly applied to the sidechain signal of dynamic processors (e.g., buss compressors) and of course in all multiband processing. What are the benefits of this practice?
We will also look at different approaches to the use of HPFs in live sound reinforcement.
Different genres call for different production techniques; understanding the basics of this simple albeit important filtering process helps in its conscious implementation.
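To make the sidechain case concrete, here is a minimal feed-forward compressor sketch in Python (NumPy/SciPy; the filter order, time constant, threshold, and ratio are illustrative assumptions, not recommendations from the talk). The high-pass sits only in the detector path, so low frequencies still pass through the audio path but no longer drive the gain reduction:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def compress_with_sidechain_hpf(x, fs, hpf_hz=120.0,
                                threshold_db=-20.0, ratio=4.0):
    """Feed-forward compressor whose detector ignores low frequencies.

    The HPF filters only the sidechain, so the bass still reaches the
    output; it just no longer pumps the gain reduction.
    """
    sos = butter(2, hpf_hz / (fs / 2), btype="high", output="sos")
    detector = sosfilt(sos, x)

    # Crude envelope follower: one-pole smoothing of the rectified detector
    env = np.zeros_like(x)
    alpha = np.exp(-1.0 / (0.010 * fs))   # ~10 ms time constant
    level = 0.0
    for i, d in enumerate(np.abs(detector)):
        level = alpha * level + (1.0 - alpha) * d
        env[i] = level

    level_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)
```

This is the "HPF in the sidechain of a buss compressor" practice: a bass-heavy mix stops pumping because the detector no longer sees the low end.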
Speakers
avatar for Cesar Lamschtein
Thursday May 22, 2025 5:15pm - 6:00pm CEST
C3 ATM Studio Warsaw, Poland
 
Friday, May 23
 

9:00am CEST

How to create and use audio for accessible video games?
Friday May 23, 2025 9:00am - 10:00am CEST
Sound is one of the most powerful tools for accessibility in video games, enabling players with visual impairments or cognitive disabilities to navigate, interact, and fully engage with the game world. This panel will explore how sound engineers can leverage audio design to enhance accessibility, making games more inclusive without compromising artistic intent. Experts from different areas of game development will discuss practical approaches, tools, and case studies that showcase how audio can bridge gaps in accessibility.

Discussion Topics:

• Why is sound crucial for accessibility in video games? Audio cues, spatial sound, and adaptive music can replace or complement visual elements, guiding players with disabilities through complex environments and interactions.
• Designing effective spatial audio for navigation and interaction. Using 3D audio and binaural rendering to provide players with intuitive sound-based navigation, enhancing orientation and gameplay flow for blind or visually impaired users.
• Audio feedback and sonification as key accessibility tools. Implementing detailed auditory feedback for in-game actions, menu navigation, and contextual cues to improve usability and player experience.
• Case studies of games with exemplary accessible audio design. Examining how games like The Last of Us Part II, BROK: The InvestiGator, and other titles have successfully integrated sound-based accessibility features.
• Tools and middleware solutions for accessible sound design (example: InclusivityForge). Showcasing how game engines and plugins such as InclusivityForge can streamline the implementation of accessibility-focused audio solutions.
• Challenges in designing accessible game audio and overcoming them. Addressing common technical and creative challenges when designing inclusive audio experiences, including balancing accessibility with immersive design.
• Future trends in accessibility-driven audio design. Exploring how AI, procedural sound, and new hardware technologies can push the boundaries of accessibility in interactive audio environments.

Panel Guests:

• Dr Joanna Pigulak - accessibility expert in games, researcher specializing in game audio accessibility, assistant professor at the Institute of Film, Media, and Audiovisual Arts at UAM.
• Tomasz Tworek - accessibility consultant, blind gamer, and audio design collaborator specializing in improving audio cues and sonification in video games.
• Dr Tomasz Żernicki - sound engineer, creator of accessibility-focused audio technologies for games, and founder of InclusivityForge.

Target Audience:

• Sound engineers and game audio designers looking to implement accessibility features in their projects.
• Game developers interested in leveraging audio as a tool for accessibility.
• UX designers and researchers focusing on sound-based interaction in gaming.
• Middleware and tool developers aiming to create better solutions for accessible audio design.
• Industry professionals seeking to align with accessibility regulations and best practices.

This panel discussion will explore how sound engineers can enhance game accessibility through innovative audio solutions, providing insights into the latest tools, design techniques, and industry best practices.
Speakers
avatar for Tomasz Żernicki

Tomasz Żernicki

co-founder, my3DAudio
Tomasz Zernicki is co-founder and former CEO of Zylia (www.zylia.co), an innovative company that provides tools for 3D audio recording and music production. Additionally, he is a founder of my3DAudio Ventures, whose goal is to scale audio companies that reach the MVP phase and want... Read More →
Friday May 23, 2025 9:00am - 10:00am CEST
Hall F ATM Studio Warsaw, Poland

9:00am CEST

Binaural Audio Reproduction Using Loudspeaker Array Beamforming
Friday May 23, 2025 9:00am - 10:15am CEST
Binaural audio is fundamental to delivering immersive spatial sound, but traditional playback has been limited to headphones. Crosstalk Cancellation (CTC) technology overcomes this limitation by enabling accurate binaural reproduction over loudspeakers, allowing for a more natural listening experience. Using a compact loudspeaker array positioned in front of the listener, CTC systems apply beamforming techniques to direct sound precisely to each ear. Combined with listener tracking, this ensures consistent and accurate binaural playback, even as the listener moves. This workshop will provide an in-depth look at the principles behind CTC technology, the role of loudspeaker array beamforming, and a live demonstration of a listener-tracked CTC soundbar.
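At the core of any CTC system is an inversion of the acoustic paths between the loudspeakers and the listener's ears. A minimal frequency-domain sketch in Python (NumPy; the toy plant model and regularization value are illustrative assumptions, not Audioscenic's implementation):

```python
import numpy as np

def ctc_filters(H, beta=1e-3):
    """Regularized inversion of the 2x2 speaker-to-ear plant matrix,
    computed independently in every frequency bin.

    H: complex array of shape (nbins, 2, 2), H[k, ear, speaker].
    Returns C (nbins, 2, 2) such that H @ C is close to identity:
    each binaural channel reaches only its intended ear, and the
    crosstalk path is cancelled. beta trades cancellation depth
    against filter effort (robustness to listener movement).
    """
    Hh = np.conj(np.swapaxes(H, -1, -2))
    return Hh @ np.linalg.inv(H @ Hh + beta * np.eye(2))

# Toy plant: direct paths of gain 1, crosstalk of gain 0.5 with a
# small extra phase lag (values are illustrative only)
freqs = np.linspace(100, 16000, 64)
phase = np.exp(-1j * 2 * np.pi * freqs * 2e-4)   # ~0.2 ms longer path
H = np.empty((64, 2, 2), dtype=complex)
H[:, 0, 0] = H[:, 1, 1] = 1.0
H[:, 0, 1] = H[:, 1, 0] = 0.5 * phase
C = ctc_filters(H)
residual = H @ C - np.eye(2)      # how much crosstalk remains
```

The regularization term keeps the inverse well behaved at frequencies where the two paths become similar; in a listener-tracked system such as the one demonstrated, the plant H is also updated continuously as the listener moves.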
Speakers
avatar for Jacob Hollebon

Jacob Hollebon

Principal Research Engineer, Audioscenic
I am a researcher specialising in 3D spatial audio reproduction and beamforming using loudspeaker arrays. In my current role at Audioscenic I am helping commercialize innovative listener-adaptive loudspeaker arrays for 3D audio and multizone reproduction. Previously I developed a new... Read More →
avatar for Marcos Simón

Marcos Simón

CTO, Audioscenic
Friday May 23, 2025 9:00am - 10:15am CEST
C3 ATM Studio Warsaw, Poland

9:00am CEST

Theoretical, Aesthetic, and Musical Review of Microphone Techniques for Immersive Sound Recording
Friday May 23, 2025 9:00am - 10:30am CEST
Immersive audio has become a significant trend in music recording, reproduction, and the audio and entertainment industries. This workshop will explore microphone techniques for immersive sound recording from theoretical, aesthetic, and musical perspectives.

Capturing a music performance and its acoustic features in a specific reverberant field, such as a concert hall, requires specialized microphone techniques for immersive sound. Various microphone techniques have already been proposed for immersive music recording. A natural timbre, appropriate musical balance, wide frequency range, low distortion, and high signal-to-noise ratio are essential in any music recording, including immersive sound recording. The acoustic features of a musical performance can be reproduced naturally by appropriately capturing the direct and indirect sounds in the sound field.

The first topic of this workshop will cluster and review microphone techniques based on their fundamental roles. The panelists will also introduce their immersive sound music recording concept, demonstrate their microphone techniques, and provide sound demos.

Immersive audio can expand the adequate listening area if the microphone technique is designed with this goal. This is crucial for popularizing immersive sound reproduction among music lovers. Therefore, the second topic of this workshop will discuss microphone techniques from the perspective of the listening area during reproduction. The panelist will explain his hypothesis that lower correlation values in the vertical direction contribute to the expansion of the listening area.

In immersive sound recording, various microphone techniques have been proposed to reproduce the top layer of the multichannel discrete loudspeaker layout. It is recommended to use directional microphones and position the top and middle layer microphones simultaneously to avoid phase differences that can degrade timbre. However, some reports suggest that separating the top and middle layers can enhance the perception of vertical spaciousness. Experiments conducted by the panelists also suggest that separating these layers and lowering the correlation between them can widen the listening area without altering the central listening position's impression. Comparing microphone types and installation positions in the upper layer is challenging in actual recording situations. Therefore, the panelists will compare listening impressions under various conditions and allow participants to experience these differences using virtual recording techniques (V2MA), which will be discussed as the third topic of this workshop.

Several papers have reviewed microphone techniques, but most have relied on subjective evaluation. The third topic of this workshop will attempt to evaluate microphone techniques from a physical viewpoint. The panel will introduce the Virtual Microphone Array technique (V2MA) to determine how each microphone captures a room's reflection sounds and identify the acoustical features of several microphone arrays used for immersive sound recording. V2MA generates Spatial Room Impulse Responses (SRIR) using a virtual microphone placed in a virtual room with spatial properties of dominant reflections previously sampled in an actual room.

Lectures and demos help us understand the acoustical features and intentions behind microphone techniques, but they are insufficient to grasp their spatial characteristics, especially for immersive sound recording. The panelists will provide 7.0.4ch demos to showcase the spatial features of microphone techniques using V2MA. V2MA generates the acoustic response of a microphone placed virtually in a room, calculated from spatial information of virtual sound sources, such as dominant reflections detected from sound intensities measured in the target room. This workshop will illustrate the spatial characteristics of microphone arrays, allowing us to discuss the types of reflections captured by microphones and discover the differences in spatial features between microphone techniques.

Following each panelist's presentation, a panel discussion will delve into microphone techniques from theoretical, aesthetic, and musical viewpoints. This workshop aims to review issues with microphone techniques for immersive sound and discuss potential solutions to achieve natural spatial reproduction of musical performances for home entertainment.
Speakers
Toru Kamekawa

Professor, Tokyo University of the Arts
Toru Kamekawa: After graduating from the Kyushu Institute of Design in 1983, he joined the Japan Broadcasting Corporation (NHK) as a sound engineer. During that period he gained experience as a recording engineer, mostly in surround sound programs for HDTV. In 2002, he joined...
Masataka Nakahara

Acoustician, SONA Corp. / ONFUTURE Ltd.
Masataka Nakahara is an acoustician specializing in studio acoustic design and R&D work on room acoustics, as well as an educator. After studying acoustics at the Kyushu Institute of Design, he joined SONA Corporation and began his career as an acoustic designer. In 2005, he received...
Friday May 23, 2025 9:00am - 10:30am CEST
C4 ATM Studio Warsaw, Poland

10:15am CEST

Fast facts on room acoustics
Friday May 23, 2025 10:15am - 11:45am CEST
If you are considering establishing a room for sound, e.g., recording, mixing, editing, listening, or even a room for live music, this is the crash course to attend!
Initially, we’ll walk through the essential considerations for the design of almost any acoustic space, no matter the purpose: appropriate reverberation time, appropriate sound distribution, low background noise, no echoes/flutter echoes, appropriate control of early reflections (and, for stereo/surround/immersive, a degree of room symmetry).
To prevent misunderstandings, we must define the difference between room acoustics and building acoustics: this is a tutorial on room acoustics! The right reverberation time for a project depends on the room's purpose. We’ll look into some relevant standards to find an appropriate target value and pay attention to the room's frequency balance, especially at low frequencies! We will use Sabine’s equation as the starting point for calculation and discuss the conditions required to make it work.
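As a purely illustrative sketch (not part of the tutorial materials), Sabine's equation estimates the reverberation time from the room volume and the total absorption area; the room dimensions and absorption coefficients below are hypothetical:

```python
# Sabine's equation: RT60 = 0.161 * V / A, with V the room volume in m^3
# and A the total absorption area in metric sabins, A = sum(S_i * alpha_i).

def sabine_rt60(volume_m3, surfaces):
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 6 m x 5 m x 3 m room (V = 90 m^3):
surfaces = [
    (30.0, 0.10),  # floor: wood
    (30.0, 0.60),  # ceiling: porous absorber
    (66.0, 0.15),  # walls: plasterboard (membrane absorption at low frequencies)
]
print(round(sabine_rt60(90.0, surfaces), 2))  # -> 0.47 (seconds, mid-band)
```

Note that Sabine's equation assumes a diffuse field with fairly evenly distributed absorption, presumably among the conditions the tutorial discusses.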
The room's shape, the shape’s effect on room modes, and the distribution of the modes are covered (together with the term Schroeder frequency). The acoustical properties of some conventional building materials, and the consequences of choosing one in favor of another for the basic design, are discussed. Membrane absorbers (plasterboard, plywood, gypsum board) and their importance in proper room design are presented, along with the definition of absorption coefficients (and how to obtain them).
From the “raw” room and its properties, we move on to defining the acoustic treatment needed to reach the target value. Often the treatment can be built from cheaper building materials, although plenty of expensive specialized products are also available. We’ll try to find a way through the jungle while keeping an eye on the spending. The typical tools are porous absorbers for smaller rooms; for larger rooms, resonance absorbers are sometimes used as well. We don’t want to over-absorb the high frequencies!
The placement of the sound sources in the room influences the perceived sound, so a few basic rules are given. Elements to control the sound field are discussed: absorption vs. diffusion. Some simple principles for DIY diffusers are shown.
During the presentation, various practical solutions are shown. At the end of the tutorial, there will be some time for a short Q&A.
Speakers
Eddy B. Brixen

consultant, EBB-consult
Eddy B. Brixen received his education in electronic engineering from the Danish Broadcasting Corporation, the Copenhagen Engineering College, and the Technical University of Denmark. Major activities include room acoustics, electro-acoustic design, and audio forensics. He is a consultant...
Hall F ATM Studio Warsaw, Poland

10:30am CEST

Use of Headphones in Audio Monitoring
Friday May 23, 2025 10:30am - 11:30am CEST
Extensive studies have been made into achieving a generally enjoyable sound colour in headphone listening, but few publications focus on the demanding requirements of the individual audio professional and on what that person actually hears.

However, headphones provide fundamentally different listening conditions compared to our professional in-room monitoring standards. With headphones, there is not even a direct connection between the measured frequency response and what a given user hears.

Media professionals from a variety of fields need awareness of such differences, and to take them into account in content production and quality control.

The paper details a recently published method and systematic steps to get to know yourself as a headphone listener. It also summarises new studies of basic listening requirements in headphone monitoring; and it explains why, even if the consumer is listening on headphones, in-room monitoring is generally the better and more relevant common denominator to base production on. The following topics and dimensions are compared across in-room and headphone monitoring: Audio format, listening level, frequency response, auditory envelopment, localisation, speech intelligibility and low frequency sensation.

New, universal headphone monitoring standards are required before such devices can be used with reliability and confidence comparable to in-room monitoring adhering to, for example, ITU-R BS.1116, BS.775 and BS.2051.
C3 ATM Studio Warsaw, Poland

10:45am CEST

Immersive Music Production - Stereo plus effects is not enough!
Friday May 23, 2025 10:45am - 11:45am CEST
Although we've moved from stereo to surround and 3D/immersive productions, many immersive music mixes still sound very much like enlarged stereo versions. Part of the reason is record companies' demands and the argument that people don't have properly set-up systems at home or only listen on headphones. But that's not the way to experience the real adventure, which is to create new, stunning sound and musical experiences. The workshop will not criticize mixes, but will try to open the door to this new dimension of music and discuss the pros and cons that producers have to deal with today.
Speakers
Tom Ammermann

New Audio Technology
Grammy-nominated music producer Tom Ammermann began his journey as a musician and music producer in the 1980s. At the turn of the 21st century, Tom produced unique surround audio productions for music and film projects as well as pioneering the very first surround mixes for headphones...
C4 ATM Studio Warsaw, Poland

12:00pm CEST

The Future of Immersive Audio: Expanding Beyond Music and Film
Friday May 23, 2025 12:00pm - 1:00pm CEST
The evolution of 3D audio has significantly influenced the music and film industries, yet its full potential remains untapped. This panel will explore how immersive audio technologies, including Ambisonics, Dolby Atmos, and volumetric sound, shape new frontiers beyond traditional applications. We will focus on three key areas: accessibility in video games, the integration of 3D audio in gaming experiences, and its growing role in the automotive industry. Our panelists will discuss the state of the market, technological limitations, and emerging opportunities where spatial audio enhances user experience, safety, and engagement. This discussion aims to inspire innovation and collaboration among researchers, developers, and industry professionals.
Speakers
Tomasz Żernicki

co-founder, my3DAudio
Tomasz Zernicki is co-founder and former CEO of Zylia (www.zylia.co), an innovative company that provides tools for 3D audio recording and music production. Additionally, he is the founder of my3DAudio Ventures, whose goal is to scale audio companies that reach the MVP phase and want...
C4 ATM Studio Warsaw, Poland

12:15pm CEST

The Future Of Spatial Audio For Consumers
Friday May 23, 2025 12:15pm - 1:15pm CEST
As spatial audio shifts from a premium feature to a mainstream expectation, significant challenges remain in delivering a uniform experience across devices, formats, and playback systems. This panel brings together industry and academic experts to explore the key technologies driving the future of immersive audio for consumers. We’ll discuss the core technological advancements, software, hardware, and ecosystem innovations necessary to enable more seamless and consistent spatial audio experiences. Additionally, we will examine the challenges of delivering perceptually accurate spatial audio across diverse playback environments and identify the most critical areas of focus for industry and academia to accelerate broader consumer adoption of spatial audio.
Speakers
Jacob Hollebon

Principal Research Engineer, Audioscenic
I am a researcher specialising in 3D spatial audio reproduction and beamforming using loudspeaker arrays. In my current role at Audioscenic I am helping commercialize innovative listener-adaptive loudspeaker arrays for 3D audio and multizone reproduction. Previously I developed a new...
Marcos Simón

CTO, Audioscenic
Jan Skoglund

Google
Jan Skoglund leads a team at Google in San Francisco, CA, developing speech and audio signal processing components for capture, real-time communication, storage, and rendering. These components have been deployed in Google software products such as Meet and hardware products such...
Hyunkook Lee

Professor, Applied Psychoacoustics Lab, University of Huddersfield
C2 ATM Studio Warsaw, Poland

2:15pm CEST

Storytelling in Audio Augmented Reality
Friday May 23, 2025 2:15pm - 3:45pm CEST
How can Audio Augmented Reality (AAR) serve as a storytelling medium? Sound designer Matias Harju shares insights from The Reign Union, an experimental interactive AAR story currently exhibited at WHS Union Theatre in Helsinki, Finland.

This workshop addresses the challenges and breakthroughs of creating an immersive, headphone-based 6DoF AAR experience. In The Reign Union, two simultaneous participants experience the same bio-fictional story from different points of audition. Narrative design considerations and approaches are discussed and demonstrated through video clips featuring binaural sound recorded from the experience. References to other AAR experiences around the world are included to provide a broader context. A central theme is how reality anchors the narrative, while virtual sounds reveal new perspectives and interpretations.

The session also briefly examines the development of an in-house 6DoF AAR prototype platform, used for The Reign Union story as well as other narrative research conducted by the author and his team. This has been a journey through various pose tracking, virtual acoustic, and authoring solutions, resulting in a scalable system potentially suited for complex indoor spaces.

Matias, author of the forthcoming book Audio Augmented Reality: Concepts, Technologies, and Narratives (Routledge, June 2025), invites attendees to discuss and discover the possibilities of AAR as a tool for storytelling and artistic expression.
C3 ATM Studio Warsaw, Poland

3:15pm CEST

ECHO Project - Immersive Microphone Array Techniques for Orchestral Recording
Friday May 23, 2025 3:15pm - 4:45pm CEST
The ECHO Project (Exploring the Cinematic Hemisphere for Orchestra) is a collaborative research initiative that explores 3D microphone array techniques for orchestral recording, involving eight experts in immersive sound recording: Kellogg Boynton, Anthony Caruso, Hyunkook Lee, Morten Lindberg, Simon Ratcliffe, Katarzyna Sochaczewska, Mark Willsher, and Nick Wollage. Building on the 3D-MARCo initiative, this project aims to provide a platform for sound engineers, composers, researchers, and students to experiment with various immersive recording techniques. To this end, an open-access database of high-quality orchestral recordings was created from a recording session at AIR Studios, London, featuring Oscar-winning composer Volker Bertelmann and the London Contemporary Orchestra.

The ECHO database includes recordings of four pieces, captured using up to 143 microphone capsules per piece. This setup includes seven different microphone arrays designed by the experts, spot microphones, a dummy head, and a higher-order spherical microphone system. The database allows users to not only compare different techniques but also to experiment with mixing different microphones, helping them develop their own techniques. It also serves as a useful resource for research, teaching and learning in immersive audio.

This workshop will present the rationale behind each microphone array used in the project, detail the recording process, discuss the immersive approach to composition and recording methods, and present some of the recordings in 7.1.4.
Speakers
Hyunkook Lee

Professor, Applied Psychoacoustics Lab, University of Huddersfield
Katarzyna Sochaczewska

Researcher, AGH UST
Immersive Audio Producer, researching perception in spatial audio. I am driven by a passion for making sound experiences unforgettable. My work lies at the intersection of technology and creativity, where I explore how immersive sound and music can captivate...
Morten Lindberg

Producer and Engineer, 2L (Lindberg Lyd, Norway)
Recording Producer and Balance Engineer with 46 GRAMMY nominations, 38 of them in the craft categories Best Engineered Album, Best Surround Sound Album, Best Immersive Audio Album and Producer of the Year. Founder and CEO of the record label 2L. Grammy Award winner 2020.
C4 ATM Studio Warsaw, Poland

4:00pm CEST

Beyond Stereo: Using Binaural Audio to Bridge Legacy and Modern Sound Systems
Friday May 23, 2025 4:00pm - 5:30pm CEST
As immersive audio content becomes more prevalent across streaming and broadcast platforms, creators and engineers face the challenge of making spatial audio accessible to listeners using legacy codecs and traditional playback systems, particularly headphones. With multiple binaural encoding methods available, choosing the right approach for a given project can be complex.

This workshop is designed as an exploration for audio professionals to better understand the strengths and applications of various binaural encoding systems. By comparing different techniques and their effectiveness in real-world scenarios, attendees will gain insights into how binaural processing can serve as a bridge between legacy and modern formats, preserving spatial cues while maintaining compatibility with existing distribution channels.

As the first in a series of workshops, this session will help define key areas for real-world testing between this convention and the next. Attendee insights and discussions will directly influence which encoding methods are explored further, ensuring that the most effective solutions are identified for different content types and delivery platforms.

Participants will gain an understanding of processing methods, and implementation strategies for various distribution platforms. By integrating these approaches, content creators can enhance accessibility and ensure that immersive audio reaches a wider audience, possibly encouraging consumers to explore how to enjoy immersive content using a variety of playback systems.
Speakers
Alex Kosiorek

Manager / Executive Producer / Sr. Engineer, Central Sound at Arizona PBS
Multi-Emmy Award Winning Senior Audio Engineer, Executive Producer, Media Executive, Surround, Immersive, and Acoustic Music Specialist. 30+ years of experience creating audio-media productions for broadcast and online distribution. Known for many “firsts” such as 1st audio fellow...
C3 ATM Studio Warsaw, Poland

4:30pm CEST

Ask Us Anything About Starting Your Career
Friday May 23, 2025 4:30pm - 6:00pm CEST
Join a panel of professionals from a variety of fields in the industry as we discuss how to enter the audio industry, how each panelist got started and the paths their careers took, and advice geared towards students and recent graduates. Bring your questions for the panelists – most of this workshop will be focused on the information YOU want to hear!
Speakers
Ian Corbett

Coordinator & Professor, Audio Engineering & Music Technology, Kansas City Kansas Community College
Dr. Ian Corbett is the Coordinator and Professor of Audio Engineering and Music Technology at Kansas City Kansas Community College. He also owns and operates off-beat-open-hats LLC, providing live sound, recording, and audio production services to clients in the Kansas City area...
Hall F ATM Studio Warsaw, Poland
  Audio in education

5:00pm CEST

Exploring Temporal Properties of Closely Delayed Signals in Immersive Music Production: Psychoacoustic and Spatial Perception Considerations
Friday May 23, 2025 5:00pm - Sunday May 25, 2025 6:00pm CEST
*Introduction
With the growing market of immersive audio, both new and exciting production possibilities are emerging, alongside the resurfacing of existing surround sound production techniques. As audio production continues to evolve, understanding the impact of temporal properties on spatial perception becomes increasingly critical. One of the most effective ways to create a sense of space and depth, as well as to enhance listener envelopment, is through precise manipulation of temporal characteristics of sound.

*Temporal Adjustments in Audio Production
In stereophonic recording techniques, spatialization is often achieved by carefully controlling both each microphone’s distance from the sound source and the distance between microphones, in conjunction with leveraging variations in microphone sensitivity through polar patterns and directional rejection.
These distance-based variations introduce time delays, which are fundamental to spatial localization and depth perception. Similarly, in post-production workflows, delaying signals and applying differentiated effects to them serve as powerful tools for enhancing immersion and spatiality. The controlled use of delay, reflections, and micro-temporal variations plays a significant role in shaping perceived auditory space. These techniques are widely used both as mixing approaches in music and in sound design, where artificially introduced delays help simulate the propagation of sound in physical spaces, creating a more authentic and immersive auditory experience.

*Psychoacoustic Phenomena and Spatial Perception
Closely delayed or slightly altered signals give rise to psychoacoustic effects that influence spatial perception rather than purely temporal perception.
For instance, the number, spectral characteristics, and temporal distribution of reflections can lead a listener to perceive an auditory environment akin to a concert hall, even in the absence of an actual reverberant space.
The well-known Haas effect (precedence effect) provides insights into how human perception prioritizes the first-arriving sound over subsequent delayed versions, influencing localization and clarity. Additionally, the concept of the temporal integration window (auditory signal fusion) describes how multiple signals originating from the same source are perceptually fused into a single event, affecting spatial coherence and envelopment.

*Workshop and Study Overview
This workshop presents and exemplifies findings from an ongoing semester-long study, which is currently being prepared as a submission to the Journal of the Audio Engineering Society. The study investigates whether sensation, timbral perception, and temporal integration windows are influenced when the delayed signal's spatial position is altered. By showcasing how spatial modifications of delayed signals affect auditory perception, the workshop aims to contribute insights to the field of immersive audio production.

*Conclusion
This research underscores the importance of temporal manipulation in immersive audio, bridging psychoacoustics with production techniques. By examining spatial perception through the lens of delay-based processing, the study offers new perspectives on designing more effective immersive sound experiences. The workshop will provide participants with theoretical insights and practical examples, encouraging further exploration of the intersection between temporal properties and spatial audio design.
Speakers
Can Murtezaoglu

Research Assistant, Istanbul Technical University
Immersive audio recording and mixing techniques, audio design for visual media
C4 ATM Studio Warsaw, Poland
 
Saturday, May 24
 

9:00am CEST

Creating and distributing immersive audio: from IRCAM Spat to Acoustic Objects
Saturday May 24, 2025 9:00am - 10:00am CEST
In this session, we propose a path for the evolution of immersive audio technology towards accelerating commercial deployment and enabling rich user-end personalization in any linear or interactive entertainment or business application. We review an example of a perceptually based immersive audio creation platform, IRCAM Spat, which enables plausible, aesthetically motivated immersive music creation and performance, with optional dependency on physical modeling of an acoustic environment. We advocate alleviating ecosystem fragmentation by showing: (a) how a universal, device-agnostic immersive audio rendering model can support the creation and distribution of both physics-driven interactive audio experiences and artistically motivated immersive audio content; and (b) how object-based immersive linear audio content formats can be extended, via the notion of Acoustic Objects, to support end-user interaction, reverberant object substitution, or 6-DoF navigation.
Speakers
Jean-Marc Jot

Founder and Principal, Virtuel Works LLC
Spatial audio and music technology expert and innovator. Virtuel Works provides audio technology strategy, IP creation and licensing services to help accelerate the development of audio and music spatial computing technology and interoperability solutions.
Thibaut Carpentier

STMS Lab - IRCAM, SU, CNRS, Ministère de la Culture
Thibaut Carpentier studied acoustics at the École centrale and signal processing at Télécom ParisTech, before joining the CNRS as a research engineer. Since 2009, he has been a member of the Acoustic and Cognitive Spaces team in the STMS Lab (Sciences and Technologies of Music...
C4 ATM Studio Warsaw, Poland

9:00am CEST

Tutorial Workshop: The Gentle Art of Dithering
Saturday May 24, 2025 9:00am - 10:45am CEST
This tutorial is for everyone working on the design or production of digital audio and should benefit beginners and experts. We aim to bring this topic to life with several interesting audio demonstrations, and up to date with new insights and some surprising results that may reshape pre-conceptions of high resolution.
In a recent paper, we stressed that transparency (high-resolution audio fidelity) depends on the preservation of micro-sounds – those small details that are easily lost to quantization errors, but which can be perfectly preserved by using the right dither.
It is often asked: ‘Why should I add noise to my recording?’ or, ‘How can adding noise make things clearer?’ This tutorial gives a tour through these questions and presents a call to action: dither should not be looked on as an added noise, but as an essential lubricant that preserves naturalness.

Tutorial topics include: fundamentals of dithering; analysis using histograms and synchronous averaging; what happens if undithered quantizers are cascaded?; ‘washboard distortion’; noise-shaping; additive and subtractive dither; time-domain effects; inside A/D and D/A converters; the perilous world of modern signal chains (including studio workflow and DSP in fixed and floating-point processors) and, finally, audibility analysis.
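As a minimal sketch of the core idea (our own illustration, not the tutorial's demonstration material): a signal smaller than one quantization step vanishes entirely in an undithered quantizer, while TPDF dither spanning ±1 LSB preserves it, on average, as a noisy but signal-correlated output. The sample rate, test frequency, and amplitudes below are arbitrary choices:

```python
import math
import random

random.seed(0)

def quantize(x, step):
    # Mid-tread uniform quantizer with step size `step`.
    return step * round(x / step)

def tpdf_dither(step):
    # Triangular-PDF noise spanning +/- 1 LSB: sum of two uniform variates.
    return (random.random() - random.random()) * step

step = 2 / 2**16  # 16-bit quantizer step over a +/-1.0 full-scale range

# One second of a 997 Hz sine at a quarter of an LSB in amplitude:
signal = [0.25 * step * math.sin(2 * math.pi * 997 * n / 48000)
          for n in range(48000)]

undithered = [quantize(s, step) for s in signal]
dithered = [quantize(s + tpdf_dither(step), step) for s in signal]

# Below 1/2 LSB, the undithered quantizer outputs exactly zero everywhere:
print(any(undithered))  # -> False

# The dithered output still correlates positively with the input signal,
# i.e. the micro-sound survives as a component of the noisy output:
corr = sum(d * s for d, s in zip(dithered, signal))
print(corr > 0)  # -> True
```

The correlation check is a crude stand-in for the histogram and synchronous-averaging analyses listed among the tutorial topics.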
Hall F ATM Studio Warsaw, Poland

10:45am CEST

Audio Post in the AI Future
Saturday May 24, 2025 10:45am - 12:15pm CEST
This panel discussion gathers professionals with a broad range of experience across audio post production for film, television and visual media. During the session, the panel will consider questions around how AI technology could be leveraged to solve common problems and pain-points across audio post, and offer opportunities to encourage human creativity, not supplant it.
Speakers
Bradford Swanson

Head of Product, Pro Sound Effects
Bradford is the Head of Product at Pro Sound Effects, an industry leader in licensing audio for media and machine learning. Previously, he worked in product development at iZotope, Nomono, and Sense Labs, and toured for more than 12 years as a musician, production manager, and FOH...
C3 ATM Studio Warsaw, Poland

11:00am CEST

Loudness of movies for Broadcasting
Saturday May 24, 2025 11:00am - 12:00pm CEST
Broadcasting movies on linear TV or via streaming presents a considerable challenge, especially for highly dynamic content like action films. Normalising such content to the paradigm of "Programme Loudness" may result in dialogue levels much lower than the loudness reference level (-23 LUFS in Europe). On the other hand, normalising to the dialogue level may lead to overly loud sound effects. The EBU loudness group PLOUD has addressed this issue with the publication of R 128 s4, the fourth supplement to the core recommendation R 128. To better understand the challenge, an extensive analysis of 44 dubbed movies (mainly Hollywood mainstream films) was conducted; these films had already been dynamically treated for broadcast delivery by experienced sound engineers. The background of this latest PLOUD document will be presented and its main parameter, LDR (Loudness-to-Dialogue Ratio), will be introduced, together with a systematic approach to when and how to apply dynamic treatment.
Speakers
Florian Camerer

Senior Sound Engineer, ORF
Hall F ATM Studio Warsaw, Poland

11:45am CEST

The Next Generation of Immersive Capture and Reproduction: Sessions from McGill University’s Virtual Acoustic Laboratory
Saturday May 24, 2025 11:45am - 12:45pm CEST
In this workshop, we present the next generation of immersive audio capture and reproduction through virtual acoustics. The aural room, whether real or generated, brings together the listener and the sound source in a way that both fulfills the listener’s perceptual needs—like increasing the impression of orientation, presence, and envelopment—and creates aesthetic experiences by elaborating on the timbre and phrasing of the music.
Members of the Immersive Audio Lab (IMLAB) at McGill University will discuss recent forays in creating and capturing aural spaces, using technology ranging from virtual acoustics to Higher Order Ambisonics (HOA) microphones. Descriptions of capture methods, including microphone techniques and experiments will be accompanied by 7.1.4 audio playback demos.
From our studio sessions, we will showcase updates to our Virtual Acoustics Technology (VAT) system, which uses active acoustics in conjunction with 15 omnidirectional and 32 bidirectional speakers to transport musicians into simulated environments. Workshop elements will include a new methodology for creating dynamically changing interactive environments for musicians and listeners, ways to create focus and “mix” sound sources within the virtual room, experimental capture techniques for active acoustic environments, and real-time electronics spatialization in the tracking room via the VAT system.
On location, lab members have been experimenting with hybridized HOA capture systems for large-scale musical scenes. We will showcase multi-point HOA recording techniques to best capture direct sound and room reverberance, and excerpts that compare HOA to traditional channel-based capture systems.
Speakers
Kathleen Zhang

McGill University
Aybar Aydin

PhD Candidate, McGill University
Michail Oikonomidis

Doctoral student, McGill University
Michael Ikonomidis (Michail Oikonomidis) is an accomplished audio engineer and PhD student in Sound Recording at McGill University, specializing in immersive audio, high-channel-count orchestral recordings and scoring sessions. With a diverse background in music production, live sound...
Richard King

Professor, McGill University
Richard King is an Educator, Researcher, and a Grammy Award winning recording engineer. Richard has garnered Grammy Awards in various fields including Best Engineered Album in both the Classical and Non-Classical categories. Richard is an Associate Professor at the Schulich School...
C4 ATM Studio Warsaw, Poland

12:15pm CEST

Workshop: How to Build a World-Class Brand in 24 Hours
Saturday May 24, 2025 12:15pm - 1:15pm CEST
In this dynamic, hackathon-style session, participants will rapidly develop a world-class brand strategy for their company using cutting-edge AI tools and collaborative exercises. Attendees will leave with an actionable blueprint they can implement immediately in their businesses or projects.

Format: 90-minute session
Key Takeaways:
Master the essentials of brand strategy and its impact on content creation and sales
Engage in hands-on exercises to develop a brand strategy in real time
Learn how AI tools can accelerate brand positioning
C1 ATM Studio Warsaw, Poland

12:15pm CEST

Simulated Free-field Measurements
Saturday May 24, 2025 12:15pm - 1:45pm CEST
Time-selective techniques that enable measurements of the free-field response of a loudspeaker to be performed without an anechoic chamber are presented. The room-size limitations on low-frequency resolution of both time-selective measurements and anechoic chambers are discussed. Techniques combining signal processing and appropriate test methods are presented, enabling measurement of the complex free-field response of a loudspeaker throughout the entire audio frequency range without an anechoic chamber. Measurement techniques for both near-field and time-selective far-field measurements are detailed. Results in both the time and frequency domains are available, and ancillary functions derived from these results are easily calculated automatically. A review of the current state of the art is also presented.
C2 ATM Studio Warsaw, Poland

12:30pm CEST

What was it about the Dolby Noise Reduction System that made it successful?
Saturday May 24, 2025 12:30pm - 1:30pm CEST

Love it or hate it, the Dolby noise reduction system had a significant impact on sound recording practice. Even nowadays, in our digital audio workstation world, Dolby noise reduction units are used as effects processors.
However, when the system first came out in the 1960s, there were other noise reduction systems, but the Dolby “Model A” noise reduction system, and its successors, still became dominant. What was it about the Dolby system that made it so successful?
This tutorial will look in some detail into the inner workings of the Dolby A Noise reduction system to see how this came about.
Dolby made some key technical decisions in his design that worked with the technology of the day to provide noise reduction that did minimal harm to the audio signal and minimised any audible effects of the processing. We will examine these key decisions and show how they fitted with the technology and electronic components of the time.
The tutorial will start with a basic introduction to complementary noise reduction systems and their pros and cons. We will then go on to examine the Dolby system in more detail, including looking at some of the circuitry.
In particular, we will discuss:
1. The principle of least treatment.
2. Side chain processing.
3. Psychoacoustic elements.
4. What Dolby could have done better.
Although the talk will concentrate on the Model 301 processor, if time permits we will look at the differences between it and the later Cat 22 version.
The tutorial will be accessible to everyone; you will not need to be an electronic engineer to understand the principles behind this seminal piece of audio engineering history.
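The complementary principle the tutorial introduces can be sketched in code. This is a drastically simplified, static, single-band stand-in, not the actual Dolby A design, which splits the signal into four bands and uses dynamic side-chain processing; the threshold and gain values are arbitrary illustrative choices.

```python
import numpy as np

def encode(x, t=0.1, g=2.0):
    """Before the noisy medium: boost low-level content (gain g below
    threshold t, unity incremental gain above), with a continuous,
    invertible transfer curve."""
    a = np.abs(x)
    return np.sign(x) * np.where(a < t, g * a, a + (g - 1) * t)

def decode(y, t=0.1, g=2.0):
    """After the medium: apply the exact complementary curve. The
    signal is restored, while noise added by the medium in quiet
    passages is attenuated by the factor g."""
    a = np.abs(y)
    return np.sign(y) * np.where(a < g * t, a / g, a - (g - 1) * t)
```

Because encode and decode are exact inverses, the audio passes through unchanged, yet tape hiss entering between the two stages in low-level passages is pushed down on decode; this complementarity is the core idea behind all companding noise reduction systems.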
Speakers
avatar for Jamie Angus-Whiteoak

Jamie Angus-Whiteoak

Emeritus Professor/Consultant, University of Salford/JASA Consultancy
Jamie Angus-Whiteoak is Emeritus Professor of Audio Technology at Salford University. Her interest in audio was crystallized at age 11 when she visited the WOR studios in NYC on a school trip in 1967. After this she was hooked, and spent much of her free time studying audio, radio... Read More →
Saturday May 24, 2025 12:30pm - 1:30pm CEST
C3 ATM Studio Warsaw, Poland

1:45pm CEST

Be A Leader!
Saturday May 24, 2025 1:45pm - 2:45pm CEST
Have you ever wondered how AES works? Let's meet up and talk about the benefits of volunteering and the path to leadership in AES! You could be our next Chair, Vice President, or even AES President!
Speakers
avatar for Leslie Gaston-Bird

Leslie Gaston-Bird

President, Audio Engineering Society
Dr. Leslie Gaston-Bird (AMPS, MPSE) is President of the Audio Engineering Society and author of the books "Women in Audio", part of the AES Presents series and published by Focal Press (Routledge); and Math for Audio Majors (A-R Editions). She is a voting member of the Recording Academy... Read More →
Saturday May 24, 2025 1:45pm - 2:45pm CEST
Hall F ATM Studio Warsaw, Poland

1:45pm CEST

A century of dynamic loudspeakers
Saturday May 24, 2025 1:45pm - 2:45pm CEST
This tutorial is based on a review paper being submitted to the Journal of the Audio Engineering Society.

2025 marks the centennial of the commercial introduction of the modern dynamic direct radiating loudspeaker, Radiola 104, and the publication of Kellogg and Rice’s paper describing its design. The tutorial outlines the developments leading to the first dynamic loudspeakers and their subsequent evolution. The presentation focuses on direct radiating loudspeakers, although the parallel development of horn technology is acknowledged.

The roots of the dynamic loudspeaker trace back to the moving coil linear actuator patented by Werner Siemens in 1877. The first audio-related application was Sir Oliver Lodge’s 1896 mechanical telephone signal amplifier, or “repeater.” The first moving coil loudspeaker was the Magnavox by Peter Jensen in 1915, but its diaphragm assembly resembled earlier electromagnetic loudspeakers. The Blatthaller loudspeakers by Schottky and Gerlach in the 1920s are another example of an early use of the dynamic concept.

It is interesting to consider the success factors of dynamic loudspeakers, which created a market for quality sound reproduction and had practically replaced the earlier electromagnetic designs by the end of the 1920s. The first dynamic loudspeakers were heavy, expensive, and inefficient, but their sound quality could not be matched by any other technology then available. The direct radiating dynamic loudspeaker is also one of the most scalable technologies in engineering, in terms of both size and production volume. It is also quite undemanding in terms of operating voltage and current, and, importantly, its sound can be adjusted through enclosure design.

The breadth of the applications of dynamic loudspeakers would not have been possible without the developments in magnet materials. Early dynamic loudspeakers used electromagnets for air gap flux, requiring constant high power (e.g., the Radiola 104’s field coil consumed 8 W, while peak audio power was about 1 W). Some manufacturers attempted steel permanent magnets, but these were bulky. A major breakthrough came with AlNiCo (Aluminum-Nickel-Cobalt) magnets, first developed in Japan in the 1930s and commercialized in the U.S. during World War II. AlNiCo enabled smaller, lighter, and more efficient designs. However, a cobalt supply crisis in 1970 led to the widespread adoption of ferrite (ceramic) magnets, which were heavier but cost-effective. The next advancement, especially for small drivers, was rare earth magnets, introduced in the early 1980s. However, a neodymium supply crisis in the 2000s led to a partial return to ferrite magnets.

Cone and surround materials have been one of the focus points of the industry’s attention. The first units already employed a relatively lossy cardboard-type material. Although plastic and foam materials were tried in loudspeakers from the 1950s onwards, plastic cones for larger loudspeakers were successfully launched only in the late 1970s. Metal cones, honeycomb diaphragms, and stiffening coatings have all brought more variety to the loudspeaker market, enabled by significant improvements in numerical loudspeaker modelling and measurement methods, which also entered practical use during the 1970s.

A detail that differed in the first loudspeakers compared with modern designs was the centering mechanism. The Radiola centering mechanism was complex, and simpler flat supports (giving the name “spider”) were soon developed. The modern concentrically corrugated centering system was developed in the early 1930s by Walter Vollman at the German Gravor loudspeaker company, and this design has remained the standard solution with little variation.

The limitations of the high-frequency reproduction of the early drivers led to improvements in driver design. The high-frequency performance of cone drivers was improved by introducing lossy or compliant areas that restricted the radiation of high frequencies to the apex of the cone, and by adding a double cone. The introduction of FM radio and improved records created the need for loudspeakers with more extended treble reproduction. The first separate tweeter units were horn loudspeakers, and the first direct radiating tweeters were scaled-down cone drivers, but the late 1950s saw the introduction of modern tweeters in which the voice coil sits outside the radiating diaphragm.

The latest paradigm shift in dynamic loudspeakers is the microspeaker, ubiquitous in portable devices. By manufacturing numbers, microspeakers are the largest class of dynamic loudspeakers, presenting unique structural, engineering, and manufacturing challenges. Their rapid evolution from the 1980s onwards includes the introduction of rare earth magnets, diaphragm forming improvements, and a departure from the cylindrical form factor of traditional loudspeakers. The next phase in loudspeaker miniaturization is emerging, with the first MEMS-based dynamic microspeakers now entering the market.
Speakers
JB

Juha Backman

AAC Technologies
Saturday May 24, 2025 1:45pm - 2:45pm CEST
C3 ATM Studio Warsaw, Poland
 

