According to Treble, the SDK will allow users to utilise the company’s unique wave-based acoustic simulation technology in their own solutions, integrating the offering into existing workflows at any scale.
Thanks to its ability to integrate with a variety of platforms and tools, Treble said, the SDK has many use cases, most notably in the audio equipment industry, where it can be used to design and test devices such as headphones, speakers, and microphones. These use cases also extend to the architectural and engineering industries, where the SDK can be used to analyse and shape the acoustics of automotive and building designs.
Beyond architectural or hardware design, Treble’s SDK will also enable developers to generate synthetic audio data for training advanced artificial intelligence (AI) applications such as voice recognition, spatial audio, and VR-based audio technologies.
The SDK works by letting developers configure sound sources, receivers, environments, and materials to synthesise hyper-realistic datasets encompassing thousands of customised audio scenes. As a result, users will be able to train and test their AI algorithms and products in thousands of different meeting rooms, open-plan offices, restaurants, and living rooms, each with varying furnishings and details.
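To illustrate what such a scene configuration might look like in practice, the hypothetical Python sketch below assembles one simulated scene from a source, a receiver, a room, and its surface materials. The class names, fields, and values are illustrative assumptions only and do not represent Treble’s actual SDK interface.

```python
# Hypothetical sketch of configuring one simulated acoustic scene.
# All class names and parameters here are illustrative assumptions,
# not Treble's actual API.
from dataclasses import dataclass, field


@dataclass
class Source:
    position: tuple          # (x, y, z) in metres
    signal: str              # e.g. path to a dry speech recording


@dataclass
class Receiver:
    position: tuple          # (x, y, z) in metres
    kind: str = "omni"       # or a device-specific microphone array


@dataclass
class Scene:
    room: str                                      # e.g. "meeting_room_8x5m"
    materials: dict = field(default_factory=dict)  # surface -> material
    sources: list = field(default_factory=list)
    receivers: list = field(default_factory=list)


# One of the "thousands of customised audio scenes" described above:
scene = Scene(
    room="open_plan_office",
    materials={"floor": "carpet",
               "ceiling": "acoustic_tile",
               "walls": "painted_concrete"},
    sources=[Source(position=(1.2, 3.4, 1.5), signal="speech_female_01.wav")],
    receivers=[Receiver(position=(4.0, 2.0, 1.2), kind="mic_array_6ch")],
)

# A wave-based solver would then simulate the scene's acoustics and return
# impulse responses or rendered audio for each source-receiver pair.
```

Generating thousands of such scenes then amounts to sweeping these parameters across rooms, furnishings, and device placements.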
The high-fidelity acoustic data generated within these virtual environments will allow for the training, testing, and validation of audio machine learning models. Such models are becoming increasingly popular as developers seek to integrate AI into the development of their audio devices.
Through its wide range of virtual environments and its ability to replicate real-world acoustic behaviour, Treble’s platform can generate significant volumes of acoustic data, which proves invaluable when training AI to recognise potential problems with designs. Specifically, the synthetic audio data can be used to train algorithms for tasks such as speech recognition, speech enhancement, source localisation, echo cancellation, beamforming, noise suppression, de-reverberation, and blind room estimation.
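As a concrete example of how such synthetic data typically feeds these tasks, the sketch below follows a common augmentation pattern: a dry speech recording is convolved with a simulated room impulse response and mixed with background noise at a chosen signal-to-noise ratio to produce one reverberant, noisy training example. The file names and SNR value are assumptions for illustration; only the convolve-and-mix pattern itself is standard practice, and nothing here is drawn from Treble’s documentation.

```python
# Sketch: turning a simulated room impulse response (RIR) into a
# training example for a speech-enhancement or recognition model.
# File names and the target SNR are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve
import soundfile as sf

clean, sr = sf.read("speech_clean.wav")             # dry mono speech recording
rir, _ = sf.read("simulated_meeting_room_rir.wav")  # from the acoustic simulation
noise, _ = sf.read("office_noise.wav")              # assumed at least as long as the speech

# Reverberant speech: convolve the dry signal with the simulated RIR.
reverberant = fftconvolve(clean, rir)[: len(clean)]

# Mix in background noise at a target signal-to-noise ratio (here 10 dB).
target_snr_db = 10.0
noise = noise[: len(reverberant)]
speech_power = np.mean(reverberant ** 2)
noise_power = np.mean(noise ** 2)
scale = np.sqrt(speech_power / (noise_power * 10 ** (target_snr_db / 10)))
noisy = reverberant + scale * noise

# (noisy, clean) now forms one input/target pair for model training.
sf.write("training_example_noisy.wav", noisy, sr)
```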
Commenting, Finnur Pind, CEO and co-founder of Treble, said: “AI was always destined to revolutionise the audio industry. By allowing our wave-based technology to be combined with all manner of platforms and tools, we aim to revolutionise what can be done with AI and how audio equipment manufacturers and other industries conceptualize, design, and optimise their products. Not only will our SDK grant them the freedom to test and iterate with virtual soundscapes, but it will also provide invaluable insights and data for AI to optimise acoustic properties, fine-tune audio equipment, and deliver a truly immersive listening experience for customers.”