The evolution of musical instruments has been deeply influenced by advances in audio technology, enabling instruments that bridge the gap between tradition and modern innovation. By examining historical progress, design principles, and recent innovations, this paper highlights the integration of technologies such as digital signal processing (DSP), artificial intelligence (AI), and advanced materials into musical instruments to enhance functionality, sound quality, and the experience of musicians at all levels.
Major areas of focus include the role of electronic components such as pickups, sensors, and wireless interfaces in improving the functionality of modern musical instruments, as well as the impact of high-performance materials on durability and sustainability. Contrasting case studies of the digital piano and the talking drum provide practical insight into how these innovations are being implemented. The paper further addresses challenges such as maintaining the cultural authenticity of traditional instruments while integrating modern technology, latency, accessibility for diverse users worldwide, and sustainability concerns in manufacturing.
This paper presents a case study on the auralization of the lost wooden synagogue in Wołpa, digitally reconstructed using a Heritage Building Information Modelling (HBIM) framework for virtual reality (VR) presentation. The study explores how acoustic simulation can aid in the preservation of intangible heritage, focusing on the synagogue’s unique acoustics. Using historical documentation, the synagogue was reconstructed with accurate geometric and material properties, and its acoustics were analyzed through high-fidelity ray-tracing simulations. A key objective of this project is to recreate the Shema Israel ritual, incorporating a historical recording of the rabbi’s prayers. To enable interactive exploration, real-time auralization techniques were optimized to balance computational efficiency and perceptual authenticity, aiming to overcome the trade-offs between simplified VR audio models and physically accurate simulations. This research underscores the transformative potential of immersive technologies in reviving lost heritage, offering a scalable, multi-sensory approach to preserving sacred soundscapes and ritual experiences.
The article explores the innovative concept of interactive music, where both creators and listeners can actively shape the structure and sound of a musical piece in real-time. Traditionally, music is passively consumed, but interactivity introduces a new dimension, allowing for creative participation and raising questions about authorship and the listener's role. The project "Sound Permutation: A Real-Time Interactive Musical Experiment" aims to create a unique audio-visual experience by enabling listeners to choose performers for a chamber music piece in semi-real-time. Two well-known compositions, Edward Elgar's "Salut d’Amour" and Camille Saint-Saëns' "Le Cygne," were recorded by three cellists and three pianists in all possible combinations. This setup allows listeners to seamlessly switch between performers' parts, offering a novel musical experience that highlights the impact of individual musicians on the perception of the piece.
The project focuses on chamber music, particularly the piano-cello duet, and utilizes advanced recording technology to ensure high-quality audio and video. The interactive system, developed using JavaScript, allows for smooth video streaming and performer switching. The user interface is designed to be intuitive, featuring options for selecting performers and camera views. The system's optimization ensures minimal disruption during transitions, providing a cohesive musical experience. This project represents a significant step towards making interactive music more accessible, showcasing the potential of technology in shaping new forms of artistic engagement and participation.
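The core idea of the switching system can be sketched in JavaScript as follows. This is a minimal illustrative sketch only, not the project's actual code: the class and method names are invented, and the real system additionally handles synchronized video streaming and UI concerns. The essential requirement it models is that every recorded combination shares one playback clock, so that switching performers resumes at the identical musical moment.

```javascript
// Hypothetical sketch of semi-real-time performer switching.
// All identifiers here are illustrative assumptions, not the project's API.
class PerformerSwitcher {
  // streams: identifiers of the parallel recordings, one per
  // cellist/pianist combination, all captured in sync.
  constructor(streams) {
    this.streams = streams;
    this.activeIndex = 0;
    this.position = 0; // shared playback position in seconds
  }

  // Advance the shared clock; every recording stays aligned to it,
  // which is what makes seamless switching possible.
  tick(dt) {
    this.position += dt;
  }

  // Switch to another performer's recording, resuming at the same
  // playback position so the music continues without interruption.
  switchTo(index) {
    if (index < 0 || index >= this.streams.length) {
      throw new RangeError("unknown performer index");
    }
    this.activeIndex = index;
    return { stream: this.streams[index], resumeAt: this.position };
  }
}
```

In a browser implementation, `resumeAt` would typically be applied to the `currentTime` of the newly activated media element while muting or hiding the previous one, keeping the transition inaudible.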
In the field of digital audio signal processing (DSP) systems, the choice between standard and proprietary digital audio networks (DANs) can significantly impact both functionality and performance. This abstract explores the benefits, tradeoffs, and economic implications of these two approaches, providing a comprehensive comparison to aid decision-making for audio professionals and system designers. It emphasizes the key benefits of A2B, AoIP, and the older proprietary networks currently in use.
Conclusion
The choice between standard and proprietary digital audio networks in audio DSP systems involves a careful consideration of benefits, tradeoffs, and economic implications. Standards-based systems provide interoperability and cost-effectiveness, while proprietary solutions offer optimized performance and innovative features. Understanding these factors can guide audio professionals and system designers in making informed decisions that align with their specific needs and long-term goals.
Electrical and Mechanical Engineering bachelor's degree from Universidad Panamericana in Mexico City. Master of Science in Music Engineering from the University of Miami. EMBA from Boston University. Worked at Analog Devices for 17 years developing DSP software and algorithms (SigmaStudio).
Thursday May 22, 2025, 10:00am - 12:00pm CEST, Hall F, ATM Studio, Warsaw, Poland
This paper presents an ongoing project that aims to document the urban soundscapes of the Polish city of Białystok. It describes the progress made so far, including the selection of sonic landmarks, the process of acquiring the audio recordings, and the design of the unique graphic user interface featuring original drawings. Furthermore, it elaborates on the ongoing efforts to extend the project beyond the scope of a typical urban soundscape repository. In the present phase of the project, in addition to monophonic recordings, audio excerpts are acquired in binaural and Ambisonic sound formats, providing listeners with an immersive experience. Moreover, state-of-the-art machine-learning algorithms are applied to analyze gathered audio recordings in terms of their content and spatial characteristics, ultimately providing prospective users of the sound map with some form of automatic audio tagging functionality.