In just a moment I’m going to stop espousing theory and give you the exact audio engineering steps you should follow to make amazing sounding and accessible episodes of your podcast.
But first, I need to thank some people. Because I’m not just making this up. I enlisted the help of four really smart audio engineers to vet my thinking: Marcus DePaula, Tom Kelly, Josh Wade, and Chris Curran. These steps are the same steps they follow, so just doing what I tell you will get you closer to their level, and these fellas are at the top of their game.
Having said that, keep in mind that every DAW (digital audio workstation) is different, so the plugins I use may not be the plugins you use. Some of these settings might be called by different names in your DAW. So I'm gonna speak in general about the goal of each step and then let you do a little exploring on your own to figure out how to achieve it specifically with your toolset.
Step 1: Clean up the noise.
Audio tracks, especially those recorded in less-than-perfect conditions, often have some noise on them. Background noise, lip smacks, iron-lung breathing… Clean as much of that out as you can before you do anything else. Yes, that probably means a noise removal filter, so hopefully you have a high-end noise removal tool such as the one that comes with Hindenburg Journalist Pro. I also use a magical tool: iZotope’s Voice De-noise. Get rid of as much noise from the vocal tracks as you possibly can, because there's no reason to process noise, right?
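If you want a feel for what this step is doing outside your DAW, here's a minimal sketch that runs ffmpeg's afftdn FFT denoiser from Python. This is not Voice De-noise or Hindenburg's tool, just a rough stand-in, and the filenames and settings are placeholders you'd tune by ear.

```python
# A rough sketch of broadband noise reduction, assuming ffmpeg is installed.
# "voice_raw.wav" and the afftdn settings are hypothetical placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "voice_raw.wav",
    "-af", "afftdn=nr=12:nf=-40",   # about 12 dB of reduction, assumed -40 dB noise floor
    "voice_denoised.wav",
], check=True)
```

A dedicated denoiser with a learned noise profile will do a much better job; the point is simply that noise removal happens first, before any other processing.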
Step 2: Control the dynamic range.
Dynamic range refers to the difference in volume between the really, really loud parts of a vocal track and the really, really soft parts. You don’t want to eliminate dynamic range, of course. But you do want to exercise some control over that range, and you can do that a couple of different ways. One is quite manual: you go in and raise or lower the volume of each word or phrase to make it more even with the rest of the track.
(Aside: Both “quietness” and “loudness” can be communicated in ways other than shouts and whispers. Better stated, you can communicate a shout without blowing out eardrums and a whisper without making it inaudible.)
I recently started using a plugin called Vocal Rider from Waves. It's amazing. It's as if there was a super-talented engineer in the control room riding the fader (volume) up and down as the voice actor/narrator was talking into the mic, making sure to keep it nice and even, while still allowing for some dynamic range.
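To make the idea concrete, here's a minimal sketch of automatic gain riding in Python. It is emphatically not the Vocal Rider algorithm, just the underlying concept: measure the short-term level of a mono track and gently nudge each window toward a target. The filename, window size, and target level are all assumptions.

```python
# A toy gain-riding sketch, assuming a mono WAV file (hypothetical name "voice.wav").
import numpy as np
import soundfile as sf

audio, rate = sf.read("voice.wav")           # mono float samples in [-1, 1]
window = int(rate * 0.4)                     # 400 ms analysis windows
target_rms = 0.1                             # roughly -20 dBFS, an assumed target

gains = []
for start in range(0, len(audio), window):
    chunk = audio[start:start + window]
    rms = np.sqrt(np.mean(chunk ** 2)) + 1e-9
    gains.append(np.clip(target_rms / rms, 0.5, 2.0))   # ride no more than about +/- 6 dB

# Smooth the gain curve so level changes are gradual, then apply it.
gains = np.repeat(gains, window)[: len(audio)]
smoothed = np.convolve(gains, np.ones(window) / window, mode="same")
sf.write("voice_leveled.wav", audio * smoothed, rate)
```

Limiting how far the gain can move is what preserves some dynamic range while still evening things out, which is exactly what you want from the manual version of this step, too.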
Step 3: Tweak the EQ.
Now you have a clean vocal track without a huge, swinging dynamic range, and it’s time to sweeten the sound of that voice with a little equalization. With EQ, you can fine-tune the voice you’re working on. If someone's really bass-heavy, you can take out some of the low end with an EQ. If a voice is way too sibilant, taking out some high end will knock that down. Go into each frequency band and adjust to taste, making each voice as clear as you can. Mastering EQ takes a deft touch, and you can spend hours trying to get a sound just right. That prospect either excites or terrifies you.
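As one concrete example of the "bass-heavy voice" fix, here's a sketch of a gentle high-pass filter around 80 Hz built with SciPy. In practice you'd reach for the EQ plugin in your DAW; the filename and cutoff here are assumptions for illustration.

```python
# A minimal sketch, not a substitute for your DAW's EQ: roll off low-end rumble
# below ~80 Hz on a vocal track. "voice.wav" is a hypothetical filename.
import soundfile as sf
from scipy import signal

audio, rate = sf.read("voice.wav")
sos = signal.butter(2, 80, btype="highpass", fs=rate, output="sos")  # gentle 80 Hz cut
sf.write("voice_eq.wav", signal.sosfilt(sos, audio, axis=0), rate)
```

Taming sibilance works the same way in principle (cutting instead of boosting), but it usually calls for a narrower band, or a proper de-esser, rather than a broad filter like this one.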
Step 4: Compress to impress.
Compression gets a bad rap, but I find it an invaluable tool for making vocal tracks sound amazing and accessible. People also have a misconception about what compression does. I thought up a good analogy this morning as I was making coffee with my AeroPress, actually compressing the coffee to make an amazing cuppa. That’s what compression is, only for audio instead of coffee. There's a myth out there that compression makes the quiet parts of your audio louder. It doesn't, at least not directly: compression squeezes down the loud parts of the track, and any makeup gain you apply afterward is what brings the overall level back up. No, that’s not the same as controlling the dynamic range, which we did in step #2. It's a very different process, and it’s a good step to add. Can you over-compress? Probably, and that’s why compression has its bad rap. But there are way too many under-compressed podcast episodes out there. For accessibility -- and just listenability in general -- compression is good. You probably won’t overdo it.
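If it helps to see the math, here's a toy sketch of downward compression: any level above a threshold gets turned down by a ratio, then makeup gain lifts the whole track. Real compressors add attack and release smoothing so the gain doesn't jump sample to sample; the filename and settings below are assumptions.

```python
# A toy static compressor (no attack/release), assuming a hypothetical "voice.wav".
import numpy as np
import soundfile as sf

audio, rate = sf.read("voice.wav")
threshold_db = -18.0                 # assumed settings; you'd tune these by ear
ratio = 3.0
makeup_db = 6.0

level_db = 20 * np.log10(np.abs(audio) + 1e-9)        # instantaneous level of each sample
over_db = np.maximum(level_db - threshold_db, 0.0)    # how far each sample sits above threshold
gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db  # turn down the overs, add makeup gain
sf.write("voice_compressed.wav", audio * 10 ** (gain_db / 20), rate)
```

Notice that nothing below the threshold gets touched except by the makeup gain at the end, which is the "it doesn't directly boost quiet parts" point from above.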
(Bonus: Some add-on compressors -- software or hardware -- add their own special “tone” during compression. But that’s very advanced and easy to overdo, so you’ll want to get some training before you go down that expensive and expansive rabbit hole.)
Step 5: Mix well and export at -16 LUFS.
The final step, once you have a cleaned, controlled, EQ'd, and compressed vocal track, is to… do that same thing to the rest of your vocal tracks! Yes, individually. Because every voice is different.
Once you have ALL of your vocal tracks ready to roll, you’re going to add your music and your effects as separate tracks. Why? Because you may want to adjust the sound of those non-voice tracks, and you’ll want to adjust them individually.
Now, with filtering, plugins, and adjustments applied to all tracks, you’re finally ready to mix them so everything sounds good together. But please: make sure you're listening to the output of all the combined tracks. What worked individually and in isolation may need some tweaking once that vocal track (or music/SFX track) is layered with the other content. So please make sure your hard work sounds good in situ.
Once you know your final mix sounds good, export that puppy at -16 LUFS! And if the tool you are using does not allow you to export your final audio at -16 LUFS… get another tool. Yes, there are some after-the-fact tools and services that will boost the perceived volume of your final product to the -16 LUFS standard. But seriously. Get another tool that incorporates this step. Modern DAWs designed for making podcast episodes have this built-in. Yours should too.
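If you ever need to check or correct a finished mix yourself, here's a sketch using the open-source pyloudnorm library. That's my choice for illustration, not a tool named anywhere above, and the filenames are placeholders; your DAW's export dialog is still the right place to hit -16 LUFS.

```python
# A minimal sketch of loudness-normalizing a finished mix to -16 LUFS.
# "mix.wav" is a hypothetical filename for the exported stereo or mono mix.
import soundfile as sf
import pyloudnorm as pyln

audio, rate = sf.read("mix.wav")
meter = pyln.Meter(rate)                         # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(audio)      # measured integrated loudness, in LUFS
normalized = pyln.normalize.loudness(audio, loudness, -16.0)
sf.write("mix_minus16lufs.wav", normalized, rate)
```

Measuring first and then applying a single gain offset is all loudness normalization is; the hard work of making that gain change sound good already happened in steps 1 through 4.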
Those are the five straightforward steps that will allow you to make audio files for your podcast that are accessible. People like me -- and the many other listeners of your show with hearing loss, some mild, some moderate, some severe but still correctable -- will be able to hear all of your content. All of your dialogue. All of those helpful asides that are so key to the story you're trying to tell. We’ll be able to hear them not only in quiet environments, but also in the car, on the bus, or in any other noisy environment that you take for granted.
And best of all, not one of these steps compromises your artistic ability. Not a one. You will still convey the same message the same way. You're just assembling your audio in a way that ensures everybody -- even those of us with hearing loss -- can enjoy your podcast’s episodes.
Once again, special thanks to Marcus dePaula, Tom Kelly, Josh Wade, and Chris Curran for giving me their thoughts on this straightforward process.
Two things before I go:
- BuyMeACoffee/evoterra is where you should go if you enjoyed the tips and guidance in this miniseries. More miniseries are coming, and you can show your appreciation by buying me a virtual coffee.
- The Flick group app for Podcast Pontifications is growing! People are talking about next week's planned episodes as well as this week's miniseries. We’d love for you to join. It’s free!
Enjoy your weekend (because I don't do episodes on Fridays!) and I shall be back on Monday with a brand new miniseries on Podcast Pontifications.
Cheers!
Podcast Pontifications is written and narrated by Evo Terra. He’s on a mission to make podcasting better. Allie Press proofed the copy, corrected the transcript, and edited the video. Podcast Pontifications is a production of Simpler Media.