Type: Game Audio
Friday, May 23
 

9:00am CEST

How to create and use audio for accessible video games?
Friday May 23, 2025 9:00am - 10:00am CEST
Sound is one of the most powerful tools for accessibility in video games, enabling players with visual impairments or cognitive disabilities to navigate, interact, and fully engage with the game world. This panel will explore how sound engineers can leverage audio design to enhance accessibility, making games more inclusive without compromising artistic intent. Experts from different areas of game development will discuss practical approaches, tools, and case studies that showcase how audio can bridge gaps in accessibility.

Discussion Topics:

• Why is sound crucial for accessibility in video games? Audio cues, spatial sound, and adaptive music can replace or complement visual elements, guiding players with disabilities through complex environments and interactions.
• Designing effective spatial audio for navigation and interaction. Using 3D audio and binaural rendering to provide players with intuitive sound-based navigation, enhancing orientation and gameplay flow for blind or visually impaired users (a minimal code sketch of such a navigation cue follows this list).
• Audio feedback and sonification as key accessibility tools. Implementing detailed auditory feedback for in-game actions, menu navigation, and contextual cues to improve usability and player experience.
• Case studies of games with exemplary accessible audio design. Examining how games like The Last of Us Part II, BROK: The InvestiGator, and other titles have successfully integrated sound-based accessibility features.
• Tools and middleware solutions for accessible sound design (example: InclusivityForge). Showcasing how game engines and plugins such as InclusivityForge can streamline the implementation of accessibility-focused audio solutions.
• Challenges in designing accessible game audio and overcoming them. Addressing common technical and creative challenges when designing inclusive audio experiences, including balancing accessibility with immersive design.
• Future trends in accessibility-driven audio design. Exploring how AI, procedural sound, and new hardware technologies can push the boundaries of accessibility in interactive audio environments.
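
As a rough, hypothetical illustration of the spatial-navigation and sonification topics above (not taken from any panelist's toolchain; the beacon_cue helper and its parameter values are invented for this sketch), the following Python snippet computes a stereo navigation cue for a target object: a constant-power pan derived from the target's bearing relative to the listener, distance-based attenuation, and a beacon repetition rate that rises as the player approaches.

```python
import math
from dataclasses import dataclass

@dataclass
class BeaconCue:
    left_gain: float   # gain for the left channel (0..1)
    right_gain: float  # gain for the right channel (0..1)
    repeat_hz: float   # how often the beacon sound should repeat

def beacon_cue(listener_pos, listener_yaw, target_pos,
               max_distance=30.0, min_rate=0.5, max_rate=4.0):
    """Compute a simple stereo navigation cue for a target object.

    listener_pos, target_pos: (x, y) world coordinates
    listener_yaw: facing direction in radians (0 = +x axis)
    """
    dx = target_pos[0] - listener_pos[0]
    dy = target_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dy)

    # Bearing of the target relative to where the listener is facing,
    # wrapped to [-pi, pi]: negative = to the left, positive = to the right.
    rel = math.atan2(dy, dx) - listener_yaw
    rel = math.atan2(math.sin(rel), math.cos(rel))

    # Constant-power pan: map the bearing onto a left/right gain pair.
    pan = max(-1.0, min(1.0, rel / (math.pi / 2)))   # -1 left .. +1 right
    theta = (pan + 1.0) * math.pi / 4.0
    left, right = math.cos(theta), math.sin(theta)

    # Attenuate with distance and raise the repeat rate as the player approaches.
    proximity = max(0.0, 1.0 - distance / max_distance)
    rate = min_rate + (max_rate - min_rate) * proximity

    return BeaconCue(left * proximity, right * proximity, rate)

# Example: target slightly to the right and roughly 10 m away.
print(beacon_cue((0.0, 0.0), 0.0, (9.0, 3.0)))
```

In a real game this logic would drive a voice in the audio engine or middleware rather than print values; the linear proximity mapping and the 0.5-4 Hz repetition range are placeholders chosen only to show the structure of such a cue.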

Panel Guests:

• Dr Joanna Pigulak - game accessibility expert and researcher specializing in game audio accessibility, assistant professor at the Institute of Film, Media, and Audiovisual Arts at UAM.
• Tomasz Tworek - accessibility consultant, blind gamer, and audio design collaborator specializing in improving audio cues and sonification in video games.
• Dr Tomasz Żernicki - sound engineer, creator of accessibility-focused audio technologies for games, and founder of InclusivityForge.

Target Audience:

• Sound engineers and game audio designers looking to implement accessibility features in their projects.
• Game developers interested in leveraging audio as a tool for accessibility.
• UX designers and researchers focusing on sound-based interaction in gaming.
• Middleware and tool developers aiming to create better solutions for accessible audio design.
• Industry professionals seeking to align with accessibility regulations and best practices.

This panel discussion will explore how sound engineers can enhance game accessibility through innovative audio solutions, providing insights into the latest tools, design techniques, and industry best practices.
Speakers

Tomasz Żernicki

co-founder, my3DAudio
Tomasz Zernicki is co-founder and former CEO of Zylia (www.zylia.co), an innovative company that provides tools for 3D audio recording and music production. Additionally, he is a founder of my3DAudio Ventures, whose goal is to scale audio companies that reach the MVP phase and want...
Friday May 23, 2025 9:00am - 10:00am CEST
Hall F ATM Studio Warsaw, Poland

10:30am CEST

Use of Headphones in Audio Monitoring
Friday May 23, 2025 10:30am - 11:30am CEST
Extensive studies have been made into achieving generally enjoyable sound colour in headphone listening, but few publications have focused on the demanding requirements of the individual audio professional and on what they actually hear.

However, headphones provide fundamentally different listening conditions compared to our professional in-room monitoring standards. With headphones, there is not even a direct connection between measured frequency response and what a given user hears.

Media professionals from a variety of fields need to be aware of such differences and to take them into account in content production and quality control.

The paper details a recently published method, with systematic steps, for getting to know yourself as a headphone listener. It also summarises new studies of basic listening requirements in headphone monitoring, and it explains why, even if the consumer is listening on headphones, in-room monitoring is generally the better and more relevant common denominator on which to base production. The following topics and dimensions are compared across in-room and headphone monitoring: audio format, listening level, frequency response, auditory envelopment, localisation, speech intelligibility, and low-frequency sensation.

New, universal headphone monitoring standards are required before such devices can be used with a reliability and confidence comparable to in-room monitoring adhering to, for example, ITU-R BS.1116, BS.775 and BS.2051.
Speakers
Friday May 23, 2025 10:30am - 11:30am CEST
C3 ATM Studio Warsaw, Poland

12:00pm CEST

The Future of Immersive Audio: Expanding Beyond Music and Film
Friday May 23, 2025 12:00pm - 1:00pm CEST
The evolution of 3D audio has significantly influenced the music and film industries, yet its full potential remains untapped. This panel will explore how immersive audio technologies, including Ambisonics, Dolby Atmos, and volumetric sound, shape new frontiers beyond traditional applications. We will focus on three key areas: accessibility in video games, the integration of 3D audio in gaming experiences, and its growing role in the automotive industry. Our panelists will discuss the state of the market, technological limitations, and emerging opportunities where spatial audio enhances user experience, safety, and engagement. This discussion aims to inspire innovation and collaboration among researchers, developers, and industry professionals.
Speakers

Tomasz Żernicki

co-founder, my3DAudio
Tomasz Zernicki is co-founder and former CEO of Zylia (www.zylia.co), an innovative company that provides tools for 3D audio recording and music production. Additionally, he is a founder of my3DAudio Ventures, whose goal is to scale audio companies that reach the MVP phase and want...
Friday May 23, 2025 12:00pm - 1:00pm CEST
C4 ATM Studio Warsaw, Poland

12:15pm CEST

The Future Of Spatial Audio For Consumers
Friday May 23, 2025 12:15pm - 1:15pm CEST
As spatial audio shifts from a premium feature to a mainstream expectation, significant challenges remain in delivering a uniform experience across devices, formats, and playback systems. This panel brings together industry and academic experts to explore the key technologies driving the future of immersive audio for consumers. We’ll discuss the core technological advancements, software, hardware, and ecosystem innovations necessary to enable more seamless and consistent spatial audio experiences. Additionally, we will examine the challenges of delivering perceptually accurate spatial audio across diverse playback environments and identify the most critical areas of focus for industry and academia to accelerate broader consumer adoption of spatial audio.
Speakers

Jacob Hollebon

Principal Research Engineer, Audioscenic
I am a researcher specialising in 3D spatial audio reproduction and beamforming using loudspeaker arrays. In my current role at Audioscenic I am helping commercialize innovative listener-adaptive loudspeaker arrays for 3D audio and multizone reproduction. Previously I developed a new...

Marcos Simón

CTO, Audioscenic

Jan Skoglund

Google
Jan Skoglund leads a team at Google in San Francisco, CA, developing speech and audio signal processing components for capture, real-time communication, storage, and rendering. These components have been deployed in Google software products such as Meet and hardware products such...

Hyunkook Lee

Professor, Applied Psychoacoustics Lab, University of Huddersfield
Friday May 23, 2025 12:15pm - 1:15pm CEST
C2 ATM Studio Warsaw, Poland
 
Saturday, May 24
 

9:00am CEST

Creating and distributing immersive audio: from IRCAM Spat to Acoustic Objects
Saturday May 24, 2025 9:00am - 10:00am CEST
In this session, we propose a path for the evolution of immersive audio technology towards accelerating commercial deployment and enabling rich user-end personalization in any linear or interactive entertainment or business application. We review an example of a perceptually based immersive audio creation platform, IRCAM Spat, which enables plausible, aesthetically motivated immersive music creation and performance, with optional dependency on physical modeling of an acoustic environment. We advocate alleviating ecosystem fragmentation by showing: (a) how a universal, device-agnostic immersive audio rendering model can support the creation and distribution of both physics-driven interactive audio experiences and artistically motivated immersive audio content; and (b) how object-based immersive linear audio content formats can be extended, via the notion of Acoustic Objects, to support end-user interaction, reverberant object substitution, or 6-DoF navigation.
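
To make the object-based idea concrete, here is a minimal, hypothetical Python sketch of an audio object whose direct-path gain follows the inverse-distance law and which carries a send level for a substitutable reverberator. It is neither IRCAM Spat's API nor the Acoustic Objects format proposed in the session, only an assumed illustration of why per-object metadata makes 6-DoF navigation and reverberant substitution tractable at the renderer.

```python
import math
from dataclasses import dataclass

@dataclass
class AcousticObject:
    """Illustrative object-based audio element (hypothetical fields,
    not the Acoustic Objects format described in the session)."""
    name: str
    position: tuple        # (x, y, z) in metres
    reference_gain: float  # gain at 1 m from the source
    reverb_send: float     # portion routed to a shared reverberant bus

def direct_gain(obj, listener_pos, min_distance=1.0):
    """Inverse-distance attenuation of the direct path, clamped so the
    gain does not blow up when the listener walks into the object."""
    d = math.dist(obj.position, listener_pos)
    return obj.reference_gain / max(d, min_distance)

# A listener navigating the scene (6-DoF position changes) simply
# re-evaluates the direct gain per object; the reverb_send feeds a
# renderer-side reverberator that can be substituted at playback time.
scene = [
    AcousticObject("narrator", (0.0, 0.0, 1.7), 1.0, 0.2),
    AcousticObject("fountain", (4.0, -2.0, 0.5), 0.8, 0.5),
]
listener = (1.0, 0.0, 1.7)
for obj in scene:
    print(obj.name, round(direct_gain(obj, listener), 3))
```

The design point this sketch tries to capture is that the creative intent lives in the object metadata, while the acoustic realization (panning law, reverberator, playback device) remains free to vary per device or per user.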
Speakers

Jean-Marc Jot

Founder and Principal, Virtuel Works LLC
Spatial audio and music technology expert and innovator. Virtuel Works provides audio technology strategy, IP creation and licensing services to help accelerate the development of audio and music spatial computing technology and interoperability solutions.

Thibaut Carpentier

STMS Lab - IRCAM, SU, CNRS, Ministère de la Culture
Thibaut Carpentier studied acoustics at the École centrale and signal processing at Télécom ParisTech, before joining the CNRS as a research engineer. Since 2009, he has been a member of the Acoustic and Cognitive Spaces team in the STMS Lab (Sciences and Technologies of Music...
Saturday May 24, 2025 9:00am - 10:00am CEST
C4 ATM Studio Warsaw, Poland

10:45am CEST

Audio Post in the AI Future
Saturday May 24, 2025 10:45am - 12:15pm CEST
This panel discussion gathers professionals with a broad range of experience across audio post production for film, television, and visual media. During the session, the panel will consider how AI technology could be leveraged to solve common problems and pain points across audio post, and how it can offer opportunities to encourage human creativity rather than supplant it.
Speakers

Bradford Swanson

Head of Product, Pro Sound Effects
Bradford is the Head of Product at Pro Sound Effects, an industry leader in licensing audio for media and machine learning. Previously, he worked in product development at iZotope, Nomono, and Sense Labs, and toured for more than 12 years as a musician, production manager, and FOH...
Saturday May 24, 2025 10:45am - 12:15pm CEST
C3 ATM Studio Warsaw, Poland
 

