Object-Based Audio: The Future of Audio Production?

I'm one of those people who is always excited by the new!


I love a technological development or a nice new shiny piece of audio kit. That's why, when recording the latest episode of "Voiceworks: Sound Business", I got a little bit giddy.


My guest for this episode, "Getting Social with Audio: Facebook, Clubhouse and where we are heading", was Ann Charles, one of the brains behind Radio TechCon and a woman who has her finger on the pulse when it comes to developments in audio tech.


The conversation centred around "Social Audio" and what the future looks like for the likes of Clubhouse, Spotify Greenroom and Facebook, who are all playing in that space. But then we got onto something really interesting. Something I'd never heard of before...


Object-Based Audio! Which, Ann claims, is the future of audio production.


She may well be right:


"There are currently several different definitions of object-based audio, as there always are at the beginnings of a new technology, but it is effectively the idea that every single piece of content that you make can be broken down and used in different ways."


Ann described the process as 'like making a cake', but one where you keep the ingredients separate so you can make multiple different versions of said cake depending on the taste of the person eating it. Feels a bit like an episode of 'The Great British Bake Off', doesn't it?


However, the "person eating the cake", or rather consuming the audio, doesn't NEED to be a human listener. It could be an app, a piece of software or a producer re-purposing the audio for a multitude of different uses.


"What you have is a mix of telemetry, metadata and transcription so you know exactly what is going on in each piece of audio. This means you can then reuse that audio in as many ways as you want and probably in ways that we've not even thought of yet."


In other words: everything is catalogued, segmented and indexed in a way that makes it easy to pull out clips, amend edits and navigate your way through an audio project.
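To make that a little more concrete, here is a minimal sketch of what that catalogue might look like in practice. The structure and field names below are purely my own illustration, not a real object-based audio standard:

```python
# A hypothetical metadata bundle for one "object-based" recording.
# Every field name here is illustrative, not part of any real spec.
episode = {
    "title": "Getting Social with Audio",
    "objects": [
        {"id": "host-mic", "kind": "speech", "speaker": "Host"},
        {"id": "guest-mic", "kind": "speech", "speaker": "Ann Charles"},
        {"id": "bed-1", "kind": "music", "track": "Intro Theme"},
    ],
    "segments": [
        {"object": "guest-mic", "start": 312.0, "end": 395.5,
         "topics": ["object-based audio"]},
        {"object": "host-mic", "start": 395.5, "end": 420.0,
         "topics": ["social audio"]},
    ],
}

def clips_about(meta, topic):
    """Return (object id, start, end) for every segment tagged with a topic."""
    return [(seg["object"], seg["start"], seg["end"])
            for seg in meta["segments"] if topic in seg["topics"]]

clips = clips_about(episode, "object-based audio")
```

Because every segment carries its own timestamps and topic tags, "pull out the clip where we discuss X" becomes a simple lookup rather than an editing job.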


It already sounds like a lot of admin, right? Wrong!


What I really like about the idea of "Object-Based Audio" is that it captures the information that, as a producer, you are already putting into your show/podcast/project. Things like show notes, transcripts, guest names and music bed titles all get recorded, indexed and made searchable.


This isn't a technology that is set to appear way into the future. It is already in use. The BBC hospital drama Casualty recently adopted the technology to let viewers select their ideal audio mix of an episode, helping individuals with hearing issues. Accessibility is a key part of Object-Based Audio's potential applications:


"If you were watching television with Grandma, who has hearing difficulties, she can have a different mix of the programme to the one you hear, without creating multiple audio mixes of that show. Because we put the right information into our files, Grandma can press one button, have her headphones in and get a mix that's comfortable for her."
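The "Grandma mix" idea boils down to keeping each object (dialogue, music, effects) as its own stream and letting the playback device apply its own per-object volume settings. A toy sketch, assuming each stem is just a list of samples:

```python
# Toy per-listener mixing: each audio object stays a separate stem,
# and the listener's device applies its own gains at playback time.
# Stems are plain lists of float samples purely for illustration.
def render_mix(stems, gains):
    """Sum the named stems sample-by-sample, each scaled by the listener's gain."""
    length = len(next(iter(stems.values())))
    mix = [0.0] * length
    for name, samples in stems.items():
        gain = gains.get(name, 1.0)  # objects not mentioned keep unit gain
        for i, sample in enumerate(samples):
            mix[i] += gain * sample
    return mix

stems = {"dialogue": [0.2, 0.4], "music": [0.5, 0.5], "effects": [0.1, 0.0]}

default_mix = render_mix(stems, {})              # the mix everyone else hears
grandma_mix = render_mix(stems, {"dialogue": 1.5,  # boost the speech
                                 "music": 0.3,     # duck the music bed
                                 "effects": 0.5})
```

The broadcaster ships one set of stems plus the metadata; each listener's device renders its own mix, so no extra versions of the show ever need to be produced.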


For me, though, the really exciting part of Object-Based Audio comes with being able to find exact pieces of audio, in their component parts, quickly and easily. Not only does this have huge implications for re-purposing audio products to suit different broadcast channels, but it also solves some of the issues and questions around how 'Audio Search' will work in the future.


Last year, Apple Podcasts announced that they would be introducing 'Audio Search' for the Top 100 podcasts on their platform. In other words, rather than just relying on keywords, podcast descriptions and transcripts, the clever Apple Search Bots would be able to trawl the actual audio files for relevant sections and topics.


With Object-Based Audio it's easy to see how that could be expanded to cover any audio that is recorded and presented in the right way. This wouldn't just apply to Apple's platforms either. By presenting the correct metadata for a piece of 'Object-Based Audio', that audio becomes indexable by an internet search engine such as Google:


"One way to do it would be having a transcript timed to a specific piece of audio to make it searchable - so your metadata and audio are linked together. The other thing is if you release the metadata of each track. Even if during the whole recording a guest never says their name, you would still be able to find who they are and when they spoke."
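Ann's two ideas, a transcript timed against the audio and per-track speaker metadata, work neatly together. Here's a small sketch of both (the data layout and function names are my own invention for illustration):

```python
# Hypothetical timed transcript: every line knows its track and timestamps,
# and the track metadata names the speaker. So a guest is findable even if
# their name is never actually said aloud in the recording.
tracks = {"t1": {"speaker": "Host"}, "t2": {"speaker": "Ann Charles"}}

transcript = [
    {"track": "t1", "start": 0.0, "end": 4.2, "text": "Welcome back to the show."},
    {"track": "t2", "start": 4.2, "end": 9.8, "text": "Thanks for having me."},
    {"track": "t2", "start": 60.0, "end": 75.0, "text": "Object-based audio is the future."},
]

def when_did_speak(speaker):
    """Timestamps where a named speaker talks, found via track metadata alone."""
    track_ids = {tid for tid, meta in tracks.items() if meta["speaker"] == speaker}
    return [(line["start"], line["end"]) for line in transcript
            if line["track"] in track_ids]

def search_audio(phrase):
    """Audio timestamps where a phrase occurs, found via the timed transcript."""
    return [(line["start"], line["end"]) for line in transcript
            if phrase.lower() in line["text"].lower()]
```

A search engine (or a producer) querying "object-based audio" would land not just on the episode, but on the exact seconds of audio where the phrase is spoken, and on exactly who is speaking them.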


As a producer, I can already see the potential power of this new Audio Production tool and, as Ann says, there will no doubt be many applications for it that haven't been conceived yet. Not only does it feel like a great way to automate certain elements of the production and repurposing process but it has huge creative potential too.


You can listen to more from Ann talking about Object-Based Audio, Social Audio and how to repurpose audio for social media platforms on the latest episode of Voiceworks: Sound Business here: https://pod.fo/e/102b7f