A surprising group is largely responsible for the amazing video experiences you have on your new phone, your giant TV, and even your computer monitor. No, I’m not thinking of filmmakers or photographers. I’m thinking about… gamers.
Yes, gamers are largely responsible for the crisp, deep, and rich visual experiences we have come to expect when watching any sort of video on the myriad devices we own.
So it stands to reason that podcasters, rather than the music labels or the musicians, could lead the charge in making devices capable of high-quality audio reproduction in or on our ears just as common.
Here are three trends I’m seeing that could indicate a coming shift in how all of us experience sound in our everyday lives. Not just in podcasting!
The Rise Of The Smart Listening Device
I don't mean smart speakers. Or rather, I should say that I don't only mean smart speakers. Until quite recently, the sound-transmission devices placed over or in our ears were dumb. I’m not oversimplifying too much to say that these devices weren’t much more than tiny speakers shooting sound into our ear canals. Many are well-crafted, and it’s amazing what some engineers have done building resonance chambers within such physical limitations.
But they haven’t been smart, taking all of their cues from the device that “holds” the audio, whether that’s a $1,000 mobile phone, a $1,000 tube amp plugged into a $1,000 turntable, or a vintage Sony Walkman.
Today’s headphones are getting smarter. Apple’s AirPods Pro, for example, have chipsets installed in each earbud. Those two “brains” are necessary to make all of the AirPods Pro magic happen, from detecting when a bud is no longer in the ear to the excellent noise cancellation.
In essence, this is a chipset dedicated to processing audio. Almost thirty years ago, pioneers like NVIDIA developed chipsets dedicated to processing graphics, mostly so gamers could play games. Not kidding. So thank a gamer the next time you rewatch The Lord of the Rings on your 8K wall-sized TV.
Apple isn’t the only company making earbuds and headphones with chipsets. But imagine how much more room Apple’s engineers have to work with in their new over-the-ear model rumored to come out in a few weeks. Even if that new device doesn’t have a dedicated “APU” (audio processing unit), it’s not a stretch to assume something like an APU is in development. Once earbuds and headphones with APUs become commonplace, you can bet savvy content creators and developers will find ways to make content that shows off the upper limits of that tech.
Both smart speakers and in-car audio systems, already stuffed to the gills with silicon, are getting smarter. There’s a shift in engineering with both to focus less on reproducing the sound exactly as it was distributed and more on crafting the sounds they emit, finely tuned for the listening environment.
Yes, that means smart speakers can tailor the sound of a podcast to fit the room where the file is played. It means that in-car systems can adjust to changes in road noise, ensuring passengers can always hear the dialog. Your next set of earbuds or headphones will be able to do that too. Maybe even better.
Superior Streaming Audio
More and more people are streaming their music rather than listening to locally downloaded, high-quality albums. Especially as the trend toward smarter listening devices continues, the quality of the experience will become a differentiating factor.
There’s a tradeoff when it comes to quality audio streaming. Conventional wisdom says that the smaller you can make a file, the faster it will stream. But smaller audio files mean less fidelity.
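The math behind that tradeoff is simple enough to sketch. Here’s a back-of-the-envelope Python calculation for a constant-bitrate stream (illustrative numbers only, not any platform’s actual encoder settings):

```python
# File size of a constant-bitrate audio stream depends only on bitrate
# and duration: bytes = bitrate_kbps * 1000 / 8 * seconds.

def stream_size_mb(bitrate_kbps: float, minutes: float) -> float:
    """Approximate file size in megabytes for a constant-bitrate stream."""
    bytes_total = bitrate_kbps * 1000 / 8 * minutes * 60
    return bytes_total / 1_000_000

# A 30-minute episode at common podcast bitrates:
for kbps in (64, 128, 320):
    print(f"{kbps} kbps -> {stream_size_mb(kbps, 30):.1f} MB")
```

A 30-minute episode at 320 kbps weighs five times what it does at 64 kbps, which is exactly why the platforms lean so hard on smarter compression instead of just raising bitrates.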
The big streaming audio platforms - Amazon, Spotify, Pandora, Apple, & others - aren’t sitting on their hands. They’re busy developing new compression algorithms that work in conjunction with the chipsets in smart listening devices to push out higher-quality files without impacting bandwidth. I haven’t done a full audio test of this, but some songs played on Spotify’s new “shows with music” seemed to sound better than the same tracks streamed from the album on Spotify. (Or I’m just projecting what I want to be true.)
Binaural and Spatial Audio
The gear necessary to capture binaural and spatial audio has been around for a while now. The challenge is that listeners often don’t have the listening device to properly reproduce the deep and immersive experiences captured this way.
But that’s changing, and many wireless earbuds are already giving listeners a chance to hear what innovative creators are making. And here’s the best news: shows presented in binaural or spatial audio tend to be backward compatible. So even if a listener still uses the wired earbuds their phone came with back in 2007, they can still hear the content, just not in its full glory. But they can hear it!
As these trends increase, listeners will demand more and better sound. Content creators will create content that meets that demand, which in turn will spur engineers and developers to come up with new ways to not only capture better sound, but to better reproduce it for the listeners.
And not just for extreme audiophiles. Like any tech, the price will come down to the point where it’s almost harder to exclude a chipset than to include it, making smart listening devices that enable amazing streaming audio - and podcast - listening experiences a commodity.
Preparing Your Podcast For The Coming Hi-Fi World
There are three things you can do (or start doing) right now to ensure that when this hi-fi world comes to pass in podcasting your content is a near-perfect fit:
1. Save your build/project files.
If you have the uncompressed audio files saved in your DAW’s project structure, it’s a low-effort job to re-export all of your previous episodes’ .mp3 files in a higher-quality format. No, it won’t be fun, but it will be rather straightforward to ensure your show provides a sound-rich environment. Yes, even if it’s just your voice.
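If your exported masters are sitting in a folder, that re-export can even be scripted. A minimal Python sketch, assuming ffmpeg is installed and using hypothetical file and folder names (`masters/`, `hifi/`), that builds the encode command for one episode’s master:

```python
# Build (not run) an ffmpeg command that re-encodes one uncompressed
# master as a high-bitrate MP3. Paths here are hypothetical examples.
from pathlib import Path

def reencode_command(wav_path: Path, out_dir: Path) -> list[str]:
    """ffmpeg command re-encoding one WAV master to 320 kbps MP3."""
    out_file = out_dir / (wav_path.stem + ".mp3")
    return [
        "ffmpeg", "-i", str(wav_path),
        "-codec:a", "libmp3lame",  # ffmpeg's MP3 encoder
        "-b:a", "320k",            # far above the typical 64-128 kbps
        str(out_file),
    ]

cmd = reencode_command(Path("masters/episode-042.wav"), Path("hifi"))
print(" ".join(cmd))
```

Loop that over a directory of masters and hand the result to `subprocess.run`, and the whole back catalog upgrades while you sleep.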
2. Upgrade and reject mediocre sound.
If you know your sound quality could be better, fix it. Bouncing down to .mp3 hides a lot of sins, but those sins come raging back when you start upping the quality. If you’re using cheap equipment because you can only afford cheap equipment, I understand. You’re just going to have to spend more time doing more work to make it not sound like it was recorded with cheap equipment.
3. Stop exporting to mono.
Mono files aren’t any smaller than stereo files unless you start cutting bit rate, and cutting bit rate is the opposite direction you want to go if you want your show to sound good in the future. If spatial sound is best, with binaural second, then stereo is third. Mono is an incredibly distant fourth place, and you should not accept fourth place as good enough for your podcast’s sound.
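A quick Python sketch of why that’s true: with uncompressed PCM, every extra channel adds bytes, but a constant-bitrate encode’s size depends only on bitrate and duration, so mono buys you nothing (illustrative arithmetic, not exact encoder output):

```python
# Uncompressed PCM size scales with channel count; a constant-bitrate
# encode does not - the bitrate budget covers the whole stream,
# mono or stereo alike.

def pcm_size_mb(sample_rate, bit_depth, channels, minutes):
    """Raw PCM: every extra channel adds bytes."""
    return sample_rate * bit_depth / 8 * channels * minutes * 60 / 1_000_000

def cbr_size_mb(bitrate_kbps, minutes):
    """Constant-bitrate encode: channel count doesn't appear at all."""
    return bitrate_kbps * 1000 / 8 * minutes * 60 / 1_000_000

print(pcm_size_mb(44100, 16, 1, 30))   # mono master: ~158.8 MB
print(pcm_size_mb(44100, 16, 2, 30))   # stereo master: ~317.5 MB
print(cbr_size_mb(128, 30))            # 128 kbps MP3: 28.8 MB either way
```

The only lever that shrinks the encoded file is the bitrate itself, which is exactly the lever you shouldn’t be pulling.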
Hi-fi is coming to podcasting, but I doubt we’ll call it hi-fi. We’ll probably just call it “audio that sounds great everywhere people listen”. Or something catchier.
Next week, as of this writing, the calendar turns to November. November marks the beginning of Evo’s Long Winters Nap, where I take a break from producing new episodes of Podcast Pontifications until the new year. If you're a working podcaster and you would like to do your own pontificating for an episode of this show while I’m on break, get in touch with me: firstname.lastname@example.org.
If you love the content I'm producing and you really want me to come back (I'm coming back) after the break, please consider going to BuyMeACoffee.com/EvoTerra and showing me a little love.
Finally, tell another working podcaster about Podcast Pontifications. The only way this show grows is when podcasters like you tell other podcasters like your friends about Podcast Pontifications.
I’ll be back tomorrow for yet another Podcast Pontifications.