Image: Meta / MIXED
Meta is improving the spatial audio of Meta Quest. A new HRTF model enables a more realistic and natural 3D audio experience.
Quest headsets use software that incorporates anatomical factors of the head (ear shape and the distance between the ears) to simulate more accurate and realistic spatial audio. This software implements an important component of spatial audio known as the Head-Related Transfer Function (HRTF). It enables more precise localization of sound sources and also affects the sound itself.
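The basic idea can be illustrated in a few lines: spatializing a sound means filtering a mono signal through a separate impulse response for each ear (the time-domain form of the HRTF). The sketch below is purely illustrative and not Meta's implementation; the toy impulse responses are made-up values that mimic a source to the listener's left.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear head-related impulse
    responses (HRIRs) to produce a two-channel binaural signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Pad both channels to the same length before stacking.
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right], axis=-1)

# Toy example: a single click. The right-ear response is delayed and
# attenuated, which the brain interprets as a source on the left.
mono = np.zeros(256)
mono[0] = 1.0                                # unit impulse ("click")
hrir_left = np.array([1.0, 0.5])             # near ear: direct, louder
hrir_right = np.array([0.0, 0.0, 0.0, 0.6])  # far ear: later, quieter

stereo = render_binaural(mono, hrir_left, hrir_right)
```

A real HRTF set contains hundreds of such impulse-response pairs, one per direction around the head, which is exactly what Reality Labs measured on its test subjects.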
Meta's Audio SDK previously used an HRTF model based on publicly available data sets. Meanwhile, Reality Labs researchers have recorded nearly 150 custom high-quality HRTFs from individuals (see photo below) and used them to create a model that is intended to better represent the general population.
Meta calls it Universal HRTF. The new model is “the culmination of years of research and development” and replaces the old model in the latest Audio SDK.
Tests show: New HRTF model is significantly better
According to Meta, the Universal HRTF delivers an improved spatial audio experience in two main areas: localization and frequency accuracy.
“Improved localization lets people more precisely detect where a sound is coming from within a given space, particularly when judging the elevation of sounds coming from above or below them. Improved frequency accuracy means that sounds will be much more natural, with less coloration and filtering,” Meta writes on the Oculus Developer Blog.
Meta tested the new HRTF model with 100 test participants, and it outperformed the old model in both subjective tests and objective performance measurements. The test subjects were able to locate sound sources more accurately, with elevation accuracy improving by an average of 81 percent.
Universal HRTF is part of the latest Audio SDK. For the changes to take effect, developers must update the Audio SDK to version 55 and recompile their VR app.
Apple goes one step further
Meta’s new HRTF model is a step in the right direction, but it’s far from a perfect solution. Heads and ears differ from person to person, and a universal model can only be an approximation of an individual’s anatomic features.
The ideal solution would therefore be to scan the user's head and ear shape and create an individual HRTF model from this data.
Apple Vision Pro is supposed to offer exactly such a feature. In the first press demos, the journalists' heads and ears were scanned with the iPhone's TrueDepth camera, and a personalized HRTF model was generated for the headset.