Venue: C3
Thursday, May 22
 

9:00am CEST

The Advance of UWB for High Quality and Low Latency Audio
Thursday May 22, 2025 9:00am - 10:00am CEST
UWB, as an RF protocol, is heavily used by handset manufacturers for device-location applications. As a transport, UWB also offers tremendous possibilities for professional audio use cases that demand low latency for real-time operation, including digital wireless microphones and in-ear monitors (IEMs). Used in live performance, these UWB-enabled devices can deliver a total latency from microphone to front-of-house mixer and back to the performers' IEMs without a noticeable delay.

UWB is progressing as an audio standard within the AES, and its first iteration targets live performance applications. Issues relating to body blocking at the frequencies used (6.5/8 GHz), as well as clocking challenges that could result in dropped packets, have been addressed to ensure a stable, reliable link. This workshop will outline how UWB can deliver a low-latency link while providing up to 10 MHz of data throughput for hi-res (24-bit/96 kHz) linear PCM audio.
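As a rough, illustrative aid (not taken from the talk), the sketch below computes the raw data rate of 24-bit/96 kHz linear PCM and a simple mic-to-FoH-and-back latency budget; the per-hop and mixer latency figures are assumed example values, not numbers quoted by the presenter.

```python
# Hypothetical back-of-envelope check: raw data rate of 24/96 linear PCM and a
# simple round-trip latency budget for a mic -> FoH mixer -> IEM wireless chain.

def pcm_bitrate(bits_per_sample: int, sample_rate_hz: int, channels: int = 1) -> float:
    """Raw (uncompressed) PCM data rate in bits per second."""
    return bits_per_sample * sample_rate_hz * channels

def round_trip_latency_ms(link_latency_ms: float, mixer_latency_ms: float) -> float:
    """Mic-to-IEM latency: two wireless hops plus mixer processing."""
    return 2 * link_latency_ms + mixer_latency_ms

if __name__ == "__main__":
    rate = pcm_bitrate(24, 96_000, channels=1)          # one mono mic channel
    print(f"24/96 mono LPCM: {rate / 1e6:.3f} Mbit/s")  # ~2.304 Mbit/s

    # Assumed example numbers: 2 ms per UWB hop, 1.5 ms console processing.
    total = round_trip_latency_ms(link_latency_ms=2.0, mixer_latency_ms=1.5)
    print(f"Example mic -> FoH -> IEM budget: {total:.1f} ms")
```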

The progression of UWB for audio is seeing the launch of high-end devices supported by several RF wireless vendors. This workshop will dive into the options open to device manufacturers who are considering UWB for their next-generation product roadmaps.
Speakers

Jonathan McClintock

Audio Codecs Ltd
Thursday May 22, 2025 9:00am - 10:00am CEST
C3 ATM Studio Warsaw, Poland

10:15am CEST

Logarithmic frequency resolution filter design with applications to loudspeaker and room equalization
Thursday May 22, 2025 10:15am - 11:15am CEST
Digital filters are often used to model or equalize acoustic or electroacoustic transfer functions. Applications include headphone, loudspeaker, and room equalization, or modeling the radiation of musical instruments for sound synthesis. As the final judge of quality is the human ear, filter design should take into account the quasi-logarithmic frequency resolution of the auditory system. This tutorial presents various approaches for achieving this goal, including warped FIR and IIR, Kautz, and fixed-pole parallel filters, and discusses their differences and similarities. Examples will include loudspeaker and room equalization applications, and the equalization of a spherical loudspeaker array. The effect of quantization noise arising in real-world applications will also be considered.
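As a minimal sketch of the idea behind warped filter design (an illustration added here, not the tutorial's material), the snippet below computes the frequency mapping produced by a first-order all-pass, using the Smith and Abel (1999) approximation for the warping coefficient that best matches the Bark scale.

```python
# Frequency warping via a first-order all-pass, the building block of warped FIR/IIR design.
import numpy as np

def bark_warp_coefficient(fs: float) -> float:
    """All-pass coefficient approximating a Bark-scale frequency mapping (Smith & Abel fit)."""
    return 1.0674 * np.sqrt((2.0 / np.pi) * np.arctan(0.06583 * fs / 1000.0)) - 0.1916

def warped_frequency(omega: np.ndarray, lam: float) -> np.ndarray:
    """Warped frequency axis produced by an all-pass with coefficient lam (0 <= omega <= pi)."""
    return omega + 2.0 * np.arctan(lam * np.sin(omega) / (1.0 - lam * np.cos(omega)))

if __name__ == "__main__":
    fs = 48_000.0
    lam = bark_warp_coefficient(fs)                 # ~0.77 at 48 kHz
    omega = np.linspace(0.0, np.pi, 5)
    print("lambda =", round(lam, 3))
    print("warped axis (normalized):", np.round(warped_frequency(omega, lam) / np.pi, 3))
```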
Speakers
Thursday May 22, 2025 10:15am - 11:15am CEST
C3 ATM Studio Warsaw, Poland

11:45am CEST

How Does It Sound Now? The Evolution of Audio
Thursday May 22, 2025 11:45am - 12:45pm CEST
One day Chet Atkins was playing guitar when a woman approached him. She said, "That guitar sounds beautiful". Chet immediately quit playing. Staring her in the eyes he asked, "How does it sound now?"
The quality of the sound in Chet’s case clearly rested with the player, not the instrument, and the quality of our product ultimately lies with us as engineers and producers, not with the gear we use. The dual significance of this question, “How does it sound now”, informs our discussion, since it addresses both the engineer as the driver and the changes we have seen and heard as our business and methodology have evolved through the decades.
Let’s start by exploring the methodology employed by the most successful among us when confronted with new and evolving technology. How do we retain quality and continue to create a product that conforms to our own high standards? This may lead to other conversations about the musicians we work with, the consumers we serve, and the differences and similarities between their standards and our own. How high should your standards be? How should it sound now? How should it sound tomorrow?
Speakers
Thursday May 22, 2025 11:45am - 12:45pm CEST
C3 ATM Studio Warsaw, Poland

2:45pm CEST

The Ins and Outs of Microphones
Thursday May 22, 2025 2:45pm - 3:45pm CEST
Microphones are the very first link in the recording chain, so it is important to understand them in order to use them effectively. This presentation will explain the differences between the various types of microphones; cover polar patterns and directivity, the proximity effect, and relative recording distances; and touch on room acoustics. Many of these "golden nuggets" helped me greatly when I first understood them, and I hope they will help you too.

We will look at the different microphone types – dynamic moving-coil, ribbon, and capacitor microphones, as well as boundary and line-array microphones. We will look at polar patterns and how they are derived, at relative recording distances, and a little at understanding room acoustics – all to help you choose the best microphone for what you want to do and how best to use it.
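As a small illustrative sketch (added here, not the presenter's material), first-order polar patterns can be derived as a weighted mix of a pressure (omni) response and a pressure-gradient (figure-of-eight) response; the alpha weights below are standard textbook values.

```python
# First-order polar patterns: r(theta) = alpha + (1 - alpha) * cos(theta)
import numpy as np

PATTERNS = {
    "omnidirectional": 1.0,   # pure pressure response
    "cardioid":        0.5,   # equal pressure and gradient parts
    "supercardioid":   0.37,  # common textbook value
    "figure-of-eight": 0.0,   # pure pressure-gradient response
}

def polar_response(theta_deg: float, alpha: float) -> float:
    """Normalized sensitivity at angle theta (0 degrees = on axis)."""
    theta = np.radians(theta_deg)
    return alpha + (1.0 - alpha) * np.cos(theta)

if __name__ == "__main__":
    for name, alpha in PATTERNS.items():
        side = polar_response(90.0, alpha)    # side pickup
        rear = polar_response(180.0, alpha)   # rear rejection
        print(f"{name:16s}  90 deg: {side:+.2f}   180 deg: {rear:+.2f}")
```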
Speakers

John Willett

Director, Sound-Link ProAudio Ltd.
John Willett is the Managing Director of Sound-Link ProAudio Ltd., the official UK distributor for Microtech Gefell microphones, ME-Geithain studio monitors, and HUM Audio Devices ribbon microphones (as well as the LAAL – Look Ahead Analogue Limiter, the N-Trophy mixing console...
Thursday May 22, 2025 2:45pm - 3:45pm CEST
C3 ATM Studio Warsaw, Poland

4:00pm CEST

Student Recording Competition 1
Thursday May 22, 2025 4:00pm - 5:00pm CEST
C3 ATM Studio Warsaw, Poland

5:15pm CEST

High Pass everything! or not?
Thursday May 22, 2025 5:15pm - 6:00pm CEST
High-pass filters (HPFs) in music production: dos and don'ts.
This presentation aims to bring a thorough insight into the use of high-pass filters in music production. Which type, slope, and frequency settings are most desirable for a given source or application?
Are the HPFs in microphones and preamps the same? Do they serve the same purpose? Is there any rule on when to use one, the other, or both? Furthermore, HPFs are also used extensively in the mixing and processing of audio signals: they are commonly applied to the sidechain signal of dynamics processors (e.g. buss compressors) and, of course, in all multiband processing. What are the benefits of this practice?
We will also look at different approaches to the use of HPFs in live sound reinforcement.
Different genres call for different production techniques; understanding the basics of this simple albeit important filtering process helps in its conscious implementation.
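As a hedged, minimal example (not part of the presentation), the snippet below designs a Butterworth high-pass filter with a chosen corner frequency and order (slope) and applies it to a test signal; the 80 Hz corner and fourth-order slope are arbitrary illustrative choices.

```python
# Butterworth high-pass filter: order n gives roughly n * 6 dB/octave below the corner.
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(signal: np.ndarray, fs: float, corner_hz: float, order: int = 2) -> np.ndarray:
    """Apply an order-N Butterworth HPF (order 2 ~ 12 dB/oct, order 4 ~ 24 dB/oct)."""
    sos = butter(order, corner_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, signal)

if __name__ == "__main__":
    fs = 48_000
    t = np.arange(fs) / fs
    # 40 Hz rumble plus a 1 kHz tone; the HPF should keep the tone and cut the rumble.
    x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
    y = highpass(x, fs, corner_hz=80.0, order=4)
    print("input RMS:", round(float(np.sqrt(np.mean(x**2))), 3),
          "filtered RMS:", round(float(np.sqrt(np.mean(y**2))), 3))
```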
Speakers
Cesar Lamschtein
Thursday May 22, 2025 5:15pm - 6:00pm CEST
C3 ATM Studio Warsaw, Poland
 
Friday, May 23
 

9:00am CEST

Binaural Audio Reproduction Using Loudspeaker Array Beamforming
Friday May 23, 2025 9:00am - 10:15am CEST
Binaural audio is fundamental to delivering immersive spatial sound, but traditional playback has been limited to headphones. Crosstalk Cancellation (CTC) technology overcomes this limitation by enabling accurate binaural reproduction over loudspeakers, allowing for a more natural listening experience. Using a compact loudspeaker array positioned in front of the listener, CTC systems apply beamforming techniques to direct sound precisely to each ear. Combined with listener tracking, this ensures consistent and accurate binaural playback, even as the listener moves. This workshop will provide an in-depth look at the principles behind CTC technology, the role of loudspeaker array beamforming, and a live demonstration of a listener-tracked CTC soundbar.
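As a simplified sketch of the underlying principle (an assumption for illustration, not Audioscenic's implementation), crosstalk-cancellation filters can be obtained by regularised inversion of the loudspeaker-to-ear transfer matrix at each frequency:

```python
# Frequency-domain CTC filter design by Tikhonov-regularized inversion of the 2-ear plant.
import numpy as np

def ctc_filters(H: np.ndarray, beta: float = 1e-2) -> np.ndarray:
    """Regularized inverse C = H^H (H H^H + beta*I)^-1 per frequency bin.

    H has shape (n_bins, 2 ears, n_speakers); returns (n_bins, n_speakers, 2).
    """
    n_bins, n_ears, n_spk = H.shape
    C = np.empty((n_bins, n_spk, n_ears), dtype=complex)
    for k in range(n_bins):
        Hk = H[k]
        C[k] = Hk.conj().T @ np.linalg.inv(Hk @ Hk.conj().T + beta * np.eye(n_ears))
    return C

if __name__ == "__main__":
    # Toy 2-speaker plant at one frequency: strong direct paths, weaker crosstalk.
    H = np.array([[[1.0 + 0j, 0.4 + 0.1j],
                   [0.4 - 0.1j, 1.0 + 0j]]])      # shape (1, 2, 2)
    C = ctc_filters(H, beta=1e-3)
    print("H @ C should be close to identity:\n", np.round(H[0] @ C[0], 3))
```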
Speakers

Jacob Hollebon

Principal Research Engineer, Audioscenic
I am a researcher specialising in 3D spatial audio reproduction and beamforming using loudspeaker arrays. In my current role at Audioscenic I am helping commercialize innovative listener-adaptive loudspeaker arrays for 3D audio and multizone reproduction. Previously I developed a new...

Marcos Simón

CTO, Audioscenic
Friday May 23, 2025 9:00am - 10:15am CEST
C3 ATM Studio Warsaw, Poland

10:30am CEST

Use of Headphones in Audio Monitoring
Friday May 23, 2025 10:30am - 11:30am CEST
Extensive studies have been made into achieving generally enjoyable sound colour in headphone listening, but few publications have been written focusing on the demanding requirements of a single audio professional, and what they actually hear.

However, headphones provide fundamentally different listening conditions compared to our professional in-room monitoring standards. With headphones, there is not even a direct connection between the measured frequency response and what a given user hears.

Media professionals from a variety of fields need awareness of such differences, and to take them into account in content production and quality control.

The paper details a recently published method and systematic steps for getting to know yourself as a headphone listener. It also summarises new studies of basic listening requirements in headphone monitoring and explains why, even if the consumer is listening on headphones, in-room monitoring is generally the better and more relevant common denominator on which to base production. The following topics and dimensions are compared across in-room and headphone monitoring: audio format, listening level, frequency response, auditory envelopment, localisation, speech intelligibility, and low-frequency sensation.

New, universal headphone monitoring standards are required before such devices can be used with reliability and confidence comparable to in-room monitoring adhering to, for example, ITU-R BS.1116, BS.775 and BS.2051.
Speakers
Friday May 23, 2025 10:30am - 11:30am CEST
C3 ATM Studio Warsaw, Poland

12:00pm CEST

Student Recording Competition 2
Friday May 23, 2025 12:00pm - 1:00pm CEST
C3 ATM Studio Warsaw, Poland

1:00pm CEST

Student Recording Competition 3
Friday May 23, 2025 1:00pm - 2:00pm CEST
C3 ATM Studio Warsaw, Poland

2:15pm CEST

Storytelling in Audio Augmented Reality
Friday May 23, 2025 2:15pm - 3:45pm CEST
How can Audio Augmented Reality (AAR) serve as a storytelling medium? Sound designer Matias Harju shares insights from The Reign Union, an experimental interactive AAR story currently exhibited at WHS Union Theatre in Helsinki, Finland.

This workshop addresses the challenges and breakthroughs of creating an immersive, headphone-based 6DoF AAR experience. In The Reign Union, two simultaneous participants experience the same bio-fictional story from different points of audition. Narrative design considerations and approaches are discussed and demonstrated through video clips featuring binaural sound recorded from the experience. References to other AAR experiences around the world are included to provide a broader context. A central theme is how reality anchors the narrative, while virtual sounds reveal new perspectives and interpretations.

The session also briefly examines the development of an in-house 6DoF AAR prototype platform, used for The Reign Union story as well as other narrative research conducted by the author and his team. This has been a journey through various pose tracking, virtual acoustic, and authoring solutions, resulting in a scalable system potentially suited for complex indoor spaces.

Matias, author of the forthcoming book Audio Augmented Reality: Concepts, Technologies, and Narratives (Routledge, June 2025), invites attendees to discuss and discover the possibilities of AAR as a tool for storytelling and artistic expression.
Speakers
Friday May 23, 2025 2:15pm - 3:45pm CEST
C3 ATM Studio Warsaw, Poland

4:00pm CEST

Beyond Stereo: Using Binaural Audio to Bridge Legacy and Modern Sound Systems
Friday May 23, 2025 4:00pm - 5:30pm CEST
As immersive audio content becomes more prevalent across streaming and broadcast platforms, creators and engineers face the challenge of making spatial audio accessible to listeners using legacy codecs and traditional playback systems, particularly headphones. With multiple binaural encoding methods available, choosing the right approach for a given project can be complex.

This workshop is designed as an exploration for audio professionals to better understand the strengths and applications of various binaural encoding systems. By comparing different techniques and their effectiveness in real-world scenarios, attendees will gain insights into how binaural processing can serve as a bridge between legacy and modern formats, preserving spatial cues while maintaining compatibility with existing distribution channels.

As the first in a series of workshops, this session will help define key areas for real-world testing between this convention and the next. Attendee insights and discussions will directly influence which encoding methods are explored further, ensuring that the most effective solutions are identified for different content types and delivery platforms.

Participants will gain an understanding of processing methods and implementation strategies for various distribution platforms. By integrating these approaches, content creators can enhance accessibility and ensure that immersive audio reaches a wider audience, possibly encouraging consumers to explore how to enjoy immersive content on a variety of playback systems.
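For orientation only (a generic sketch, not one of the encoding systems compared in the workshop), the snippet below renders a multichannel mix to binaural stereo by convolving each channel with a head-related impulse response (HRIR) pair and summing per ear; the HRIRs here are random placeholders, whereas a real renderer would use measured or modelled sets.

```python
# Static binaural downmix: one HRIR pair per loudspeaker channel, summed per ear.
import numpy as np
from scipy.signal import fftconvolve

def binaural_downmix(channels: np.ndarray, hrirs: np.ndarray) -> np.ndarray:
    """channels: (n_ch, n_samples); hrirs: (n_ch, 2, hrir_len). Returns (2, n_samples+hrir_len-1)."""
    n_ch, n_samples = channels.shape
    out_len = n_samples + hrirs.shape[-1] - 1
    out = np.zeros((2, out_len))
    for ch in range(n_ch):
        for ear in range(2):
            out[ear] += fftconvolve(channels[ch], hrirs[ch, ear])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mix = rng.standard_normal((5, 48_000))            # e.g. a 5.0 bed, 1 s of noise
    hrirs = rng.standard_normal((5, 2, 256)) * 0.01   # placeholder HRIRs only
    lr = binaural_downmix(mix, hrirs)
    print("binaural output shape:", lr.shape)
```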
Speakers

Alex Kosiorek

Manager / Executive Producer / Sr. Engineer, Central Sound at Arizona PBS
Multi-Emmy Award Winning Senior Audio Engineer, Executive Producer, Media Executive, Surround, Immersive, and Acoustic Music Specialist. 30+ years of experience creating audio-media productions for broadcast and online distribution. Known for many “firsts” such as 1st audio fellow...
Friday May 23, 2025 4:00pm - 5:30pm CEST
C3 ATM Studio Warsaw, Poland
 
Saturday, May 24
 

9:00am CEST

Key Technology Briefing 4
Saturday May 24, 2025 9:00am - 10:30am CEST
C3 ATM Studio Warsaw, Poland

10:45am CEST

Audio Post in the AI Future
Saturday May 24, 2025 10:45am - 12:15pm CEST
This panel discussion gathers professionals with a broad range of experience across audio post production for film, television and visual media. During the session, the panel will consider questions around how AI technology could be leveraged to solve common problems and pain-points across audio post, and offer opportunities to encourage human creativity, not supplant it.
Speakers

Bradford Swanson

Head of Product, Pro Sound Effects
Bradford is the Head of Product at Pro Sound Effects, an industry leader in licensing audio for media and machine learning. Previously, he worked in product development at iZotope, Nomono, and Sense Labs, and toured for more than 12 years as a musician, production manager, and FOH...
Saturday May 24, 2025 10:45am - 12:15pm CEST
C3 ATM Studio Warsaw, Poland

12:30pm CEST

What was it about the Dolby Noise Reduction System that made it successful?
Saturday May 24, 2025 12:30pm - 1:30pm CEST

Love it or hate it, the Dolby noise reduction system had a significant impact on sound recording practice. Even nowadays, in our digital audio workstation world, Dolby noise reduction units are used as effects processors.
When the system first came out in the 1960s there were other noise reduction systems, yet the Dolby "Model A" system and its successors became dominant. What was it about the Dolby system that made it so successful?
This tutorial will look in some detail into the inner workings of the Dolby A noise reduction system to see how this came about.
Dolby made some key technical decisions in his design, working with the technology of the day, to provide noise reduction that did minimal harm to the audio signal and minimised any audible effects of the processing. We will examine these key decisions and show how they fitted with the technology and electronic components of the time.
The tutorial will start with a basic introduction to complementary noise reduction systems and their pros and cons. We will then go on to examine the Dolby system in more detail, including looking at some of the circuitry.
In particular, we will discuss:
1. The principle of least treatment.
2. Side chain processing.
3. Psychoacoustic elements.
4. What Dolby could have done better.
Although the talk will concentrate on the Model 301 processor, if time permits we will look at the differences between it and the later Cat 22 version.
The tutorial will be accessible to everyone; you will not have to be an electronic engineer to understand the principles behind this seminal piece of audio engineering history.
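As a much-simplified, single-band sketch of the dual-path idea (an illustration added here, not Dolby's circuitry, which used four bands and more sophisticated side-chain dynamics), the encoder below adds a level-limited side-chain signal to an untouched main path, and the decoder applies the complementary operation to restore the original while pushing tape hiss down by the amount of low-level gain:

```python
# Single-band complementary compander illustrating the "least treatment" dual-path idea.
import numpy as np

GAIN = 3.162          # ~10 dB of low-level boost
THRESHOLD = 0.05      # level below which the side chain contributes fully

def side_chain(x: np.ndarray) -> np.ndarray:
    """Limited low-level path: full gain for quiet samples, limited above the threshold."""
    return (GAIN - 1.0) * np.clip(x, -THRESHOLD, THRESHOLD)

def encode(x: np.ndarray) -> np.ndarray:
    """Main path passes untouched; the side chain adds only low-level content."""
    return x + side_chain(x)

def decode(y: np.ndarray) -> np.ndarray:
    """Exact inverse of encode (hardware achieves this by placing the side chain in a feedback loop)."""
    knee = GAIN * THRESHOLD
    return np.where(np.abs(y) <= knee, y / GAIN, y - (GAIN - 1.0) * THRESHOLD * np.sign(y))

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 1000)
    quiet = 0.02 * np.sin(2 * np.pi * 5 * t)      # low-level signal, boosted before the tape
    restored = decode(encode(quiet))
    print("max reconstruction error:", float(np.max(np.abs(restored - quiet))))
```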
Speakers

Jamie Angus-Whiteoak

Emeritus Professor/Consultant, University of Salford/JASA Consultancy
Jamie Angus-Whiteoak is Emeritus Professor of Audio Technology at Salford University. Her interest in audio was crystallized at age 11 when she visited the WOR studios in NYC on a school trip in 1967. After this she was hooked, and spent much of her free time studying audio, radio...
Saturday May 24, 2025 12:30pm - 1:30pm CEST
C3 ATM Studio Warsaw, Poland

1:45pm CEST

A century of dynamic loudspeakers
Saturday May 24, 2025 1:45pm - 2:45pm CEST
This tutorial is based on a review paper being submitted to the Journal of the Audio Engineering Society.

2025 marks the centennial of the commercial introduction of the modern dynamic direct radiating loudspeaker, the Radiola 104, and the publication of Kellogg and Rice’s paper describing its design. The tutorial outlines the developments leading to the first dynamic loudspeakers and their subsequent evolution. The presentation focuses on direct radiating loudspeakers, although the parallel development of horn technology is acknowledged.

The roots of the dynamic loudspeaker trace back to the moving-coil linear actuator patented by Werner Siemens in 1877. The first audio-related application was Sir Oliver Lodge’s 1896 mechanical telephone signal amplifier, or “repeater.” The first moving-coil loudspeaker was the Magnavox by Peter Jensen in 1915, but its diaphragm assembly resembled earlier electromagnetic loudspeakers. The Blatthaller loudspeakers by Schottky and Gerlach in the 1920s are another example of an early, different use of the dynamic concept.

It is interesting to look at the success factors of the dynamic loudspeaker, which created a market for quality sound reproduction and practically replaced the earlier electromagnetic designs by the end of the 1920s. The first dynamic loudspeakers were heavy, expensive, and inefficient, but their sound quality could not be matched by any other technology available at the time. The direct radiating dynamic loudspeaker is also one of the most scalable technologies in engineering, both in size and in production volume. It is also quite friendly in terms of operating voltage and current, and, importantly, the sound can be adjusted through enclosure design.

The breadth of applications of dynamic loudspeakers would not have been possible without developments in magnet materials. Early dynamic loudspeakers used electromagnets for the air-gap flux, requiring constant high power (e.g., the Radiola 104’s field coil consumed 8 W, while peak audio power was about 1 W). Some manufacturers attempted steel permanent magnets, but these were bulky. A major breakthrough came with AlNiCo (aluminium-nickel-cobalt) magnets, first developed in Japan in the 1930s and commercialized in the U.S. during World War II. AlNiCo enabled smaller, lighter, and more efficient designs. However, a cobalt supply crisis in 1970 led to the widespread adoption of ferrite (ceramic) magnets, which were heavier but cost-effective. The next advancement, especially for small drivers, was rare-earth magnets, introduced in the early 1980s. However, a neodymium supply crisis in the 2000s led to a partial return to ferrite magnets.

One focus of the industry’s attention has been the cone and surround materials. The first units already employed a relatively lossy cardboard-type material. Although plastic and foam materials were attempted in loudspeakers from the 1950s onwards, plastic cones for larger loudspeakers were successfully launched only in the late 1970s. Metal cones, honeycomb diaphragms, and the use of coatings to improve stiffness have all brought more variety to the loudspeaker market, enabled by significant improvements in numerical loudspeaker modelling and measurement methods, which also entered practical use during the 1970s.

A detail that differed in the first loudspeakers compared to modern designs was the centering mechanism. The Radiola centering mechanism was complex, and simpler flat supports (giving the name “spider”) were soon developed. The modern concentrically corrugated centering system was developed in the early 1930s by Walter Vollman at the German Gravor loudspeaker company, and this design has remained the standard solution with little variation.

The limitations of the high-frequency reproduction of the early drivers led to improvements in driver design. The high-frequency performance of cone drivers was improved by introducing lossy or compliant areas that restricted the radiation of high frequencies to the apex part of the cone, and by adding a double cone. The introduction of FM radio and improved records created the need for loudspeakers with more extended treble reproduction. The first separate tweeter units were horn loudspeakers, and the first direct radiating tweeters were scaled-down cone drivers, but the late 1950s saw the introduction of modern tweeters in which the voice coil sits outside the radiating diaphragm.

The latest paradigm shift in dynamic loudspeakers is the microspeaker, ubiquitous in portable devices. By manufacturing numbers, microspeakers are the largest class of dynamic loudspeakers, presenting unique structural, engineering, and manufacturing challenges. Their rapid evolution from the 1980s onwards includes the introduction of rare earth magnets, diaphragm forming improvements, and a departure from the cylindrical form factor of traditional loudspeakers. The next phase in loudspeaker miniaturization is emerging, with the first MEMS-based dynamic microspeakers now entering the market.
Speakers

Juha Backman

AAC Technologies
Saturday May 24, 2025 1:45pm - 2:45pm CEST
C3 ATM Studio Warsaw, Poland
 

