Room acoustics optimisation in live sound environments using signal processing techniques has occupied audio engineers and researchers for over half a century. From the analogue filters of the 1950s to modern research efforts such as room impulse response equalisation and adaptive sound field control, the field has developed rapidly. Controlling the sound field even in a static acoustic space is complex due to the large number of system variables, such as reflections, loudspeaker crosstalk, equipment-induced coloration, room modes, reverberation, diffraction and listener positioning. These challenges are further amplified by dynamic variables such as audience presence, environmental conditions and changes in room occupancy, which continuously and unpredictably reshape the sound field. A primary objective of live sound reinforcement is to deliver uniform sound quality across the audience area. This is most critical at audience ear level, where tonal balance, clarity and spatial imaging are most affected by variations in the sound field. While microphones placed at ear level throughout the audience could enable real-time monitoring, large-scale deployment is impractical due to interference from the audience. This research will explore the feasibility of an adaptive, virtual-microphone-based approach to room acoustics optimisation. By strategically placing microphone arrays and leveraging virtual microphone techniques, the proposed system would estimate the sound field dynamically at audience ear level without requiring physical microphones at the listening positions. By continuously repositioning virtual focal points across the listening zones, a small number of arrays could monitor large audience areas. If sufficiently accurate estimates can be achieved, real-time sound field control becomes far more tractable.
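
To make the virtual-microphone idea concrete, the sketch below shows one simple way such an estimate could be formed: delay-and-sum focusing of an array recording onto a chosen focal point. This is purely illustrative and is not the proposed system; delay-and-sum is only one of several candidate estimation techniques, and the array geometry, sample rate, speed of sound and function names here are assumptions introduced for the example.

```python
# Illustrative sketch only: delay-and-sum focusing of a microphone array onto a
# virtual focal point (e.g. an ear-level position). All geometry and parameters
# below are assumed values for demonstration, not part of the proposed design.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 degrees C


def virtual_mic_estimate(signals, mic_positions, focal_point, fs, c=SPEED_OF_SOUND):
    """Estimate the pressure signal at `focal_point` from an array recording.

    signals       : (n_mics, n_samples) time-aligned microphone signals
    mic_positions : (n_mics, 3) microphone coordinates in metres
    focal_point   : (3,) coordinates of the virtual microphone position
    fs            : sample rate in Hz
    """
    signals = np.asarray(signals, dtype=float)
    mic_positions = np.asarray(mic_positions, dtype=float)
    focal_point = np.asarray(focal_point, dtype=float)

    # Propagation delay from the focal point to each microphone.
    distances = np.linalg.norm(mic_positions - focal_point, axis=1)
    delays = distances / c

    # Advance each channel by its delay relative to the earliest arrival so
    # that sound originating at the focal point adds coherently, then average.
    rel_delays = delays - delays.min()
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Fractional-sample time advance applied as a linear phase shift.
    spectra *= np.exp(2j * np.pi * freqs[None, :] * rel_delays[:, None])
    aligned = np.fft.irfft(spectra, n=n, axis=1)
    return aligned.mean(axis=0)


if __name__ == "__main__":
    # Toy example: a 4-element line array and a source placed at the focal point.
    fs = 48_000
    t = np.arange(fs) / fs
    source = np.sin(2 * np.pi * 440.0 * t)

    mics = np.array([[x, 0.0, 2.0] for x in (0.0, 0.1, 0.2, 0.3)])
    focal = np.array([1.5, 4.0, 1.2])  # notional ear-level point

    # Simulate what each microphone would record (pure delay plus 1/r loss).
    dists = np.linalg.norm(mics - focal, axis=1)
    n = len(t)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(source)
    recordings = np.stack([
        np.fft.irfft(spec * np.exp(-2j * np.pi * freqs * d / SPEED_OF_SOUND), n=n) / d
        for d in dists
    ])

    estimate = virtual_mic_estimate(recordings, mics, focal, fs)
    print("estimated RMS at focal point:", np.sqrt(np.mean(estimate ** 2)))
```

In this toy setting, "continuously repositioning focal points" would simply mean calling `virtual_mic_estimate` with a sequence of different `focal_point` coordinates on the same array recording, which is what allows a single array to cover many listening positions.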