Overall, I'm pretty optimistic about the future. I've internalized the wisdom of Theodore Parker, quoted by greats like Martin Luther King Jr.: the arc of the moral universe is long, but it bends toward justice. At the risk of belittling the concept, I believe in that same progressive, ever-advancing view of podcasting.
Because podcasting is getting better. Not just on its own but because we, the podcasters of today, are helping drive podcasting forward. And because of continuing advancements in technology, like AI, machine learning, neural nets, and other advanced computing breakthroughs that we (I) barely understand.
Podcasting benefits from those advances in digital processing power, from hosting services to listening apps to even the software and hardware that you and I use to bring our creations to our listeners' ears.
There's no doubt that the process of making a podcast today is easier and the experience for us better than it ever has been in the past. Barring any rogue space rocks that have us in their crosshairs or the Omega variant (?) wiping us all out completely, we're going to keep riding that train. The process of making will keep getting easier and the experience better for us podcasters.
However, predicting how that ease and betterment will manifest itself is far from an exact science.
Over the past few years, I've been making the bold statement that podcasters like you and me who've been at this for a while will be the last generation of podcasters to use DAWs at the center of our creation process. We're reliant on Hindenburg Pro, Reaper, Audition... even Audacity to do the heavy lifting.
But the next generation will take a completely different approach, putting text-based editing tools like Descript at the center of their creation process. And the listening public won't know the difference.
I still stand by that assertion. What I'm worried about is the middle ground between these two approaches to making audio. And by worried, I mean I don't think there is a middle ground.
An Old Podcasting Dog Learns A New Trick
During my long winter's nap of 2021, I decided to take the plunge and go all-in with the new paradigm of using a text-based approach to produce episodes for one of my clients' podcasts.
Descript was already integral to our collaborative approach to editorial editing. My client was already comfortable with using the machine transcription generated by Descript to remove blocks of content that were part of the interview but didn't need to be part of the final episode. With that already part of our process, moving my audio engineering efforts to that same platform was a clear path.
If it worked well, I planned on acquiring and mastering the necessary skills to shift more production work over to that paradigm. Perhaps all of it, including Podcast Pontifications. I'm a big fan of the future, and I like to put my money (or my time, in this case) where my mouth is.
So I set aside my DAW and produced three episodes of the show using only Descript. It took some time to learn the subtle (and not so subtle) nuances of that platform and paradigm, of course. But I was markedly faster by the third episode. And best of all, every single episode sounded not just great but indistinguishable from other episodes of the show made with Hindenburg Pro.
Mission accomplished! My experience proved my assertion that the next generation of podcasters will make content in this text-based way, not the waveform editing way. The text-based way was much more intuitive and will likely be easier for someone brand new to audio work to pick up.
But I stopped after three episodes and switched back to Hindenburg Pro for that show.
And that brings me back to my assertion that there isn't much of a middle ground between these two paradigms.
I had naively thought the experience was going to give me two vastly different approaches to making great-sounding audio in my toolkit. And having two skillsets would allow me to choose which path would be the most optimal for any job at hand.
But it didn't work out that way. Not for me, at least. I blame it on my brain.
My brain has been hard-wired to work with waveforms over the last 17 years. But that hard-wiring itself was a paradigm shift for me. Prior to that, I was editing audio with nothing more than studio monitors and a razor blade. 4-track reel-to-reel machines from the '80s didn't have a display.
Couple that with biology and the unrelenting advance of time... and my brain is arguably not nearly as flexible now as it was 17 years ago. And certainly not 35 years ago. So while I was able to do it, I didn't get the benefits I had hoped. And I'm not sure that creating another three, six, or sixty episodes with that process would have gotten me there.
Your Brain Mileage May Vary
But don't let my problem be your problem. If you're relatively new to podcasting or audio work and have a relatively young brain, try out the new, future way of making audio without a DAW. Or if you have people on your team who are new and also have relatively young brains, get them some training on this new way forward.
I'm still bullish on tools like Descript. And no, this is not a paid endorsement. Just an honest one. I still stand by my assertion that text-based editors will become the dominant way that podcasts are made in the future. How far away that future is remains rather questionable.
The good news is that old dogs like us with mad skills to make great-sounding audio the DAW-based way won't be displaced by young whippersnappers with their fancy text-based processes anytime real soon. Not as long as the output sounds the same to the ears of our listeners.
So breathe easy. For now.
I shall be back on Monday with yet another Podcast Pontifications.