Archive for Audio Tips

Mastering Audiobooks

The term “mastering” in the audio world conjures up images of a mad scientist in his laboratory, surrounded by oscilloscopes, Jacob’s Ladders, mysterious black boxes with large rotary dials built in the 1940s, and maybe even an animal about to be sacrificed on an altar between two speakers that cost $100K each.

That image might still be more or less accurate for a music mastering lab, but fortunately mastering an audiobook is fairly simple and doesn’t require expensive equipment or the harming of any animals.

For an audiobook you’re basically only worried about making all the chapters the right loudness, where “right” is defined by the audiobook distributor’s specifications. Most distributors use Audible’s specifications, so I’ll assume those throughout this article, but you can adjust accordingly if your distributor uses different specifications.

You’ll need two plugins: a maximizer and a level meter. I recommend the Waves L2 Maximizer, but almost any maximizer can produce acceptable results. For the level meter, you’ll need something like the Waves WLM plugin that can measure “loudness units relative to full scale” (LUFS). This might be a new concept, but for our purposes here it just means “how loud the audio seems to be to the human ear.” If you want to learn more about LUFS, I recommend this video.

Insert the maximizer and level meter plugins on the master channel (in that order). The goal is to have the long-term LUFS be around -21, according to “EBU method / ITU 1770 weighting” (which is the default in the WLM plugin). If you’re using the L2 Maximizer, use the “CD high res” setting with the output ceiling pushed down to -2. Whatever maximizer you choose, there’s probably a setting for producing a CD, and that should work. You don’t want massive amounts of compression like a bad Nirvana remastering; you just want to even out the obviously loud and soft sections so the audiobook listener is never tempted to adjust the volume on their playback device.

Once your plugins are in place, play some of the first chapter. About a minute should be enough. See what the LUFS is. Adjust the maximizer level accordingly and play the same audio. You want the LUFS to be -21. If all the chapters were recorded in the same studio, on the same equipment, with the same narrator, they shouldn’t vary by more than 1 or 2 dB, so you probably won’t need to adjust the maximizer very much from chapter to chapter. If the LUFS of a chapter is much below -24, you’ll probably need to adjust the level of the audio feeding into the master channel before adjusting the maximizer, otherwise you’ll start noticing the compression effect of the maximizer (i.e. you’ll end up with a bad Nirvana remastering).

Once you’ve found the right maximizer settings to achieve an LUFS of -21, select the whole chapter and apply the maximizer to it, rendering the audio back to the chapter. (Oh… and make a backup of the chapter before you do this!) Now if you bypass the maximizer on the master channel and you play back some of the chapter, the level meter should show an LUFS of -21.

Then simply repeat that procedure for each chapter in the audiobook.
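If you’d like a quick sanity check outside your DAW, here’s a minimal sketch (not part of the workflow above, just an assumption on my part) that measures the integrated loudness of each exported chapter against the -21 LUFS target, using the third-party Python packages soundfile and pyloudnorm; the folder and file names are hypothetical.

    import glob

    import soundfile as sf       # pip install soundfile
    import pyloudnorm as pyln    # pip install pyloudnorm

    TARGET_LUFS = -21.0
    TOLERANCE = 1.0              # a dB or so either way is fine, as noted below

    for path in sorted(glob.glob("chapters/*.wav")):
        data, rate = sf.read(path)                 # samples as floats, sample rate in Hz
        meter = pyln.Meter(rate)                   # ITU-R BS.1770 style loudness meter
        loudness = meter.integrated_loudness(data)
        flag = "" if abs(loudness - TARGET_LUFS) <= TOLERANCE else "  <-- re-check this chapter"
        print(f"{path}: {loudness:.1f} LUFS{flag}")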

Here’s a dirty little secret: If the LUFS is a little off, say -20 or -22, you’ll be fine. The big picture is that you don’t want a noticeable loudness difference between two chapters, or between two audiobooks. Compare the mastered versions of chapters recorded on different days to make sure they sound the same loudness. Also compare your mastered chapters to a reference audiobook, preferably something that has won an award for audio quality. There’s no point in making sure your audiobook sounds as good as someone else’s bad-sounding audiobook!

Spoken Word Production

I do a lot of spoken word production. This includes stuff like audio books, instructional videos, and voice-overs. In this type of work it’s very common for the narrator (sometimes called “the reader”) to have to read long passages of text. For example, let’s say you’re recording a self-guided audio tour for a historical site. There might be 20 stops on the tour, each one with a couple minutes of audio. For most narrators it’s going to be impossible to read all those words without making a mistake somewhere. Obviously you want all the words to be pronounced correctly, with no accidentally skipped words, and you want the pacing and tone of the narration to be right.

There are multiple approaches to correcting mistakes. In the first approach, whenever the narrator realizes that they’ve made a mistake, they simply pause, then back up to before the mistake occurred and restart. In this approach, the editor will later go back and delete the mistakes, closing up the holes. For example, if the narrator is trying to say “Jack and Jill went up the hill to fetch a pail of water,” but they accidentally say “Jack and Jill went up the hill to catch a pail of water,” it might end up sounding like this: “Jack and Jill went up the hill to catch a… oops… went up the hill to fetch a pail of water.” Then the editor would delete “went up the hill to catch a… oops…” and close up the hole, resulting in the correct “Jack and Jill went up the hill to fetch a pail of water.” The advantage to this approach is that the narrator stays “in the groove,” which can be important if the text is difficult. The disadvantage is that it takes a surprising amount of editing skill to close up the holes invisibly; otherwise, when the mistake is deleted, it ends up sounding like, well, like a mistake was deleted.

The second approach is to correct the mistakes while recording. So when the narrator says “to catch a… oops,” the engineer stops the recording, sets a punch-in point with pre-roll, and restarts the recording. The advantage to this approach obviously is that there is no editing to be done afterward. The disadvantage is that it requires good communication between the narrator and the engineer; otherwise there can be a lot of time wasted on a conversation like “Where do you want to take it from?” “How about ‘went up’?” “No, there’s not enough space there. Let’s take it from ‘Jack.’” Then the engineer needs to select the right amount of pre-roll so that the narrator can settle into a matching tone, loudness, and pacing. This is not as hard as it might sound; I’ve found that after working with the same narrator on multiple projects, we know each other well enough that we can do the whole process without even talking to each other.

In actual practice, it might not be possible to correct all mistakes while recording. Even if you think you’re doing that, you might discover upon listening to the recording that there are a few mistakes that need to be corrected. In that case you’ll need to mark the regions that need to be replaced, then punch each of them. This can be challenging if multiple days have gone by; you’ll need to recreate the exact same mic setup, and the narrator will have to match their voice from a previous day. Still though, it can be done. It does require skill, but after all, that’s why a narrator gets paid to narrate, and a recording engineer gets paid to record. 🙂

Here’s a tip that I’d like to leave you with. When punching in, I find that it’s best to start the pre-roll at the beginning of a sentence. For example, if you’re punching in at the word “fetch,” it’s better to pre-roll from “Jack” (the start of the sentence) rather than “went.” That’s because a pre-roll starting mid-sentence can throw the narrator off balance momentarily, and they may not be fully “recovered” at the punch-in point. This is a subtle effect, but I’ve noticed that punches sound more natural when the pre-roll starts at a sentence boundary rather than somewhere in the middle of one. By the way, in case you don’t know it, in the Pro Tools Edit window, Option-click to set the pre-roll point. After you’ve done this kind of work for a while, the keyboard and mouse gestures become automatic; you just do them without thinking, like you speak without thinking of the words.

Here’s some spoken word that was recorded earlier this year in my studio. It’s an audio clip from the book Echoes of Tattered Tongues by John Guzlowski. The reader is Jon Brandi. This was a difficult, but rewarding, project. Recording poetry has a lot of similarities to song production, because you’re not just trying to pronounce all the words correctly; you’re also working with the narrator to create exactly the right pacing and tone for each line to convey the author’s intentions, which in this case (as is often the case) was complicated further because the author was on the other side of the country while we were recording. Even though each poem typically took up only 1 or 2 printed pages, there were often dozens of stops and restarts, multiple passes, and edits. Hopefully it ends up sounding transparent, as though the reader is reading the whole poem in one take, getting every nuance of language perfect on the first try.

Why Remove Frequencies That Aren’t There?

I’ve read a number of articles that recommend removing “all subsonic frequencies, even if there aren’t any.” What does this statement actually mean? I assume they’re not literally recommending removing something that isn’t there; that would be like saying “you should remove the leopard in your back yard.” I’m guessing what they really mean is “do something that would remove any subsonic frequencies, in case there are any, and it won’t do any harm if there aren’t.”

A subsonic frequency, by definition, is one that’s too low for the human ear to hear, so of course you can’t detect such frequencies simply by listening. The nominal range for human hearing is 20 Hz to 20 kHz, but that range shrinks with age (especially the upper end of it). If, like most rock musicians, you’ve been exposed to loud sounds repeatedly without adequate hearing protection, I hate to deliver the bad news, but you’re probably already deaf to frequencies above 15 kHz.

But today we’re not talking about high frequencies; we’re talking about low frequencies, subsonic frequencies, the ones below 20 Hz. We’ll consider four questions:

  1. How might subsonic frequencies end up in your mix?
  2. How can you know if they’re there?
  3. How should you remove them?
  4. What harm could they do if you don’t remove them?

How might subsonic frequencies end up in your mix?

First of all, if you’re writing some kind of experimental music where you deliberately synthesized subsonic frequencies, then this article isn’t for you. This article is for people who accidentally have subsonic frequencies in their mix.

If you’re recording any sort of “normal” music, it’s unlikely that you’ve captured any subsonic frequencies through a microphone. (If you have, I want that microphone!) Microphones simply don’t record frequencies that low.

But synthesizers can easily make frequencies that low. Say for example you have a bass patch that’s doubled down an octave or two. If you play notes that are too low, you’re now hearing only the fundamental, and the doubled part has passed into the subsonic range.
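To put a number on that: each octave down halves the frequency, so even a fairly ordinary low bass note doubled two octaves below lands well under 20 Hz. Here’s the arithmetic as a tiny Python snippet (the note choice is just an example):

    # Each octave down halves the frequency. E1 on a bass is about 41.2 Hz,
    # so a part doubled two octaves below it sits around 10.3 Hz -- subsonic.
    low_e = 41.2                       # E1, in Hz
    doubled_two_octaves_down = low_e / 4
    print(doubled_two_octaves_down)    # ~10.3 Hz, below the 20 Hz hearing threshold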

It’s also possible (though not likely) that subsonic frequencies were created by some sort of faulty audio processing, such as a buggy plugin.

Another possibility is amplitude or frequency modulation: certain kinds of audio processing, such as ring modulators, produce sum and difference frequencies, and the difference between two closely spaced frequencies can easily fall into the subsonic range.

Ultimately it doesn’t really matter how the frequencies got there; the more important thing is to be able to know whether there are in fact any subsonic frequencies in your mix.

How can you know if they’re there?

As already explained, you can’t hear a subsonic frequency, so you’ll need something other than a human ear. That thing you need is called a frequency analyzer (or sometimes a “spectrum analyzer”). Here’s a free one that works really well.

A frequency analyzer shows you what frequencies and how much of each are in your music. It has many uses aside from the one we’re talking about today. For example, it can help you pinpoint an annoying hum that you’re trying to remove from a track.

First try putting the frequency analyzer on your master output. Play the entire song and watch whether any subsonic frequencies show up over the course of it. Remember that the problem might only occur in certain parts of the song, for example where the synth bass plays.

If you see subsonic frequencies in your mix, try muting and soloing different track combinations to figure out which tracks contain the subsonic frequencies.
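If you’d rather check a bounced file outside your DAW, here’s a rough sketch of what an analyzer is doing under the hood. It assumes a WAV export of the mix and the Python packages numpy, soundfile, and scipy (the file name is hypothetical); it just reports how much energy sits below 20 Hz, where a real analyzer plugin would show you a graph.

    import numpy as np
    import soundfile as sf
    from scipy.signal import welch

    data, rate = sf.read("mix.wav")        # hypothetical bounce of the full mix
    if data.ndim > 1:
        data = data.mean(axis=1)           # fold stereo to mono for analysis

    # A long window gives enough resolution to see anything below 20 Hz.
    freqs, psd = welch(data, fs=rate, nperseg=65536)
    sub = psd[freqs < 20.0].sum()
    print(f"Energy below 20 Hz: {100.0 * sub / psd.sum():.3f}% of total")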

How should you remove them?

The obvious tool to reach for is some kind of high-pass filter (HPF), probably part of an EQ plugin. But you need to remember that an HPF has a slope; it can’t block everything below 20 Hz while passing everything at 20 Hz and above untouched. A real HPF looks more like this:

 

[Figure: frequency response of a typical high-pass filter, showing a gradual slope around the cutoff rather than a vertical cliff]
Therefore some people recommend removing all frequencies below a higher number, such as 80 or even 100 Hz. The idea isn’t that frequencies between 20 and 100 Hz are harmful; it’s that setting the cutoff higher gives the filter’s slope room to reach a useful amount of attenuation by the time it gets down to 20 Hz.
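To make the slope idea concrete, here’s a small sketch (assuming Python with scipy; this is generic filter math, not any particular EQ plugin) showing how much attenuation a simple Butterworth high-pass actually delivers at 20 Hz for a few cutoff frequencies and steepnesses:

    import numpy as np
    from scipy.signal import butter, sosfreqz

    rate = 44100
    for cutoff in (20, 40, 80, 100):           # cutoff frequency in Hz
        for order in (2, 4):                   # higher order = steeper slope
            sos = butter(order, cutoff, btype="highpass", fs=rate, output="sos")
            _, h = sosfreqz(sos, worN=[20.0], fs=rate)   # response at exactly 20 Hz
            db = 20 * np.log10(abs(h[0]))
            print(f"cutoff {cutoff:3d} Hz, order {order}: {db:6.1f} dB at 20 Hz")

A 20 Hz cutoff only reaches about 3 dB of attenuation at 20 Hz itself (that’s the point of the graph above), while an 80 or 100 Hz cutoff knocks 20 Hz down by 20 dB or more.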

If you are able to isolate the problem to specific tracks, it’s best to insert the EQ on those tracks rather than inserting it on the master output. Even the best EQ isn’t 100% transparent. An HPF introduces phase shift around the cutoff frequency, and steeper filters can even produce a small bump there. That effect might be imperceptible on your bass track, but you wouldn’t want your entire mix to be phase shifted around 100 Hz.

If your mix is being sent to a mastering lab, then they may be responsible for removing subsonic frequencies, and you might think that you don’t need to worry about it. But I try not to make any assumptions about where my music goes after it leaves my studio, so I want it to be as close as possible to a finished product. In the same way, you wouldn’t want to produce a mix that has a harsh 2-4 kHz frequency bump and rely on listeners to adjust the EQ on their playback system.

What harm could they do if you don’t remove them?

First let’s review how speakers turn electricity into sound. The electricity that goes into your speakers fluctuates in voltage very rapidly. Whatever the voltage is at any given instant corresponds to the position of the speaker cone. If you were to feed the lowest possible subsonic frequency, 0 Hz, into your speaker and change the voltage manually, you would see the speaker cone moving in and out corresponding to the movement of your hand on the manual voltage dial. Electricity of 0 Hz frequency has a special name, DC, and in the world of amplifiers and speakers, DC is a VBT (a “very bad thing”). Why? Well, turn up that voltage dial too far, and your speaker cone will eventually tear away from the rest of the speaker, and aside from the loud pop and impressive puff of smoke, this is not cool, because you now have a nonfunctioning speaker. For this reason, almost every audio amplifier has built-in circuitry to block DC current.
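That DC-blocking circuitry has a well-known digital cousin, the one-pole “DC blocker.” Here’s a plain numpy sketch of the textbook version (this is generic DSP, not the circuit from any particular amplifier), mostly to show how simple the idea is:

    import numpy as np

    def dc_block(x, r=0.995):
        # Textbook one-pole DC blocker: y[n] = x[n] - x[n-1] + r * y[n-1]
        y = np.zeros_like(x)
        prev_x, prev_y = 0.0, 0.0
        for n, xn in enumerate(x):
            y[n] = xn - prev_x + r * prev_y
            prev_x, prev_y = xn, y[n]
        return y

    # A 440 Hz tone riding on a 0.25 DC offset comes out centered around zero again.
    t = np.arange(44100) / 44100
    sig = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.25
    print(np.mean(sig), np.mean(dc_block(sig)[22050:]))   # ~0.25 vs ~0.0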

But what about very low frequency electricity, not all the way down at 0 Hz, but something between 0 and 20? Well, the situation for the speaker is not much better. If the speaker cone is moving that slowly, it’s not producing any sound that your ear can hear, but it is moving. But you have no way of hearing how “loud” this sound is, so if it gets very “loud” (i.e. the voltage gets too high)… pop goes the speaker.

Of course the same thing is true in the sonic frequency range. Obviously, if the sound is too loud, it might damage your speakers. But your ears will also hear the sound, and it won’t come as a surprise when the speakers get damaged. In fact, your ear drums might be damaged before the speakers.

It’s highly unlikely that you accidentally have a subsonic frequency loud enough to damage your speakers. What’s far more likely is that you have a subsonic frequency mixed in with the frequencies that you can hear. The problem with this situation is that your music is added on top of the subsonic frequency, which means the music has to fit into a smaller range of speaker movement. The easiest way to illustrate this is to go back to the most extreme kind of subsonic frequency, 0 Hz, i.e. DC voltage. Imagine that you have a DC voltage mixed into your music. That DC voltage positions the speaker cone somewhere other than its normal rest position. Say your speaker cone can move an inch. You’d like to use that whole inch to reproduce your music. But if the DC voltage is pushing the speaker cone out to 1/4″, then the available range of motion to play back your music is only between 1/4″ and 1″. A subsonic frequency does the same thing, except slowly and repeatedly: as it pushes the cone back and forth between 0 and 1/4″, the range of motion left over for your music keeps shifting, sometimes the full inch and sometimes only three quarters of it. All of this can be summed up by saying “removing subsonic frequencies frees up headroom.”
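Here’s the same point as a toy numpy example (nothing but numpy is assumed; the numbers are made up for illustration): the identical “music” signal, with and without a DC offset riding under it, and how much closer the offset version sits to clipping.

    import numpy as np

    rate = 44100
    t = np.arange(rate) / rate
    music = 0.7 * np.sin(2 * np.pi * 440 * t)    # stand-in for "your music"
    dc_offset = 0.25                             # the cone pushed out to 1/4 of its travel

    print(f"peak without offset: {np.abs(music).max():.2f}")              # ~0.70
    print(f"peak with offset:    {np.abs(music + dc_offset).max():.2f}")  # ~0.95, much closer to clipping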

Ironically, perhaps the most compelling argument for removing low frequencies (including subsonic frequencies, but even frequencies up to 100 Hz) is that doing so will actually give your music a more powerful low end. If you have a lot of low frequency content on multiple tracks, the mix will just end up sounding like a big muddy mess. The secret to creating a powerful, but not muddy, low end is to remove frequencies below 100 Hz on all but a few tracks (maybe even just one track) and let those few tracks own everything below 100 Hz. In most popular music styles, the low end is going to come from the bass and kick drum. In many cases, it’s just the kick drum. This means that when you solo the other tracks (guitar, keyboards, etc.) they may sound like they don’t have enough bass, but if the kick drum and bass parts are written well, performed well, and recorded well, then those parts are all you need to supply enough low end. You can safely remove frequencies below 100 Hz on every other track.

In summary, there’s a variety of ways that subsonic frequencies might end up in your mix, the most likely being bass synthesizer tracks. You can’t hear subsonic frequencies, but you can detect them with a frequency (spectrum) analyzer. If you detect them, you can remove them by inserting a high-pass filter on the offending tracks and rolling off frequencies below 100 Hz. Doing so will free up headroom in your mix and will also give your music a clearer, more powerful low end. It will also reduce the risk of damaging speakers, although that risk is already pretty low.

That’s all for now. Until next time, may all your mixes have no subsonic frequencies (unless you want them), and your yard have no leopards (unless you want them)!

Rendering Virtual Instrument Tracks to Save Processing Power and Decrease Load Times

“Virtual instruments” are a powerful tool in today’s computer-based composer’s tool bag. Back in the dark ages (i.e. when I was a kid, when I didn’t yet own a computer) if you wanted to record, for example, a violin, you had to have a violin and a violinist. But in 2016 you can buy a library of stringed instrument samples including a wide variety of playing techniques (arco, pizzicato, spiccato, sul ponticello, etc.) so that a pretty convincing violin recording can be made, even if there’s no violin or violinist anywhere in the room. Hence, the term “virtual.” In all fairness to violinists, the real deal still far surpasses the virtual one, but the latter can suffice for a composer to hear their work and get quick answers to questions like “how would this part sound an octave higher?” or “what if the cello doubled this part?” not to mention “what if we add sleigh bells and an oud?”

It is a very powerful tool indeed. But “powerful” also means literally computing power. My studio computer has more processing power than the whole world did at the time of the Apollo moon landing, but it’s not an infinite amount of power. Once you have 30 tracks of virtual instruments, convolution reverb putting those tracks on the stage of the Konzerthausorchester Berlin, some multiband parametric EQs, and autotune on the lead vocals, my 2012 era Mac Pro begins to sweat like the forehead of a 1969 rocket scientist operating his slide rule as the LEM approaches the surface of the moon. Dropouts and other nasty artifacts start appearing in the audio, and eventually the dreaded “you are running out of processing power” alert appears.

The other problem with virtual instruments is that they can take up a lot of disk space. My favorite grand piano library, for example, came on 32 double-sided DVDs and took a whole weekend to install. When you open a Pro Tools session that uses this library, even though only a small fraction of the whole library has to be loaded for that session, you might be staring at a progress indicator for five minutes.

So if you use a lot of virtual instruments, you’ll be doing a lot of waiting for samples to load, and you’ll want to keep your system resources monitor open at all times to make sure you don’t run into the “out of processing power” error. The ugly truth is that even before you actually run out of resources, software can start to behave erratically, and it might even crash before you have a chance to know that you’re running low.

The solution is called “rendering,” which is the process of turning virtual instrument tracks into audio tracks. Compared to virtual instrument tracks, audio tracks take far less processor power, and they take almost no time to load, due to the magic of nonlinear editing. That’s because when playing back audio tracks your digital audio workstation doesn’t actually load the audio; instead it loads a list of audio file names and pointers into those files specifying where to start and stop playing.

Here’s how I convert a virtual instrument track into an audio track in Pro Tools; maybe there’s an easier way to do it, like maybe your software has a “render virtual instruments” function… that would be nice. (Avid, are you listening?)

  1. Route the virtual instrument track’s output to a bus that you’re not currently using for anything else.
  2. Create an audio track whose input is that bus. Set its output to your main outputs. What I like to do is rename the virtual track to have a “v” at the start of the name and name the corresponding audio track without the v. For example “vViolin” and “Violin.”
  3. Move (not copy) any plugins such as EQ, reverb, compression, etc. from the virtual instrument track to the audio track.
  4. Set the virtual instrument track’s playback level to +0 dB.
  5. Turn off any automation (level, panning, etc.) on the virtual instrument track.
  6. Set the audio track to record. Set the virtual instrument track to play.
  7. Play through the whole song, or only the sections where the virtual instrument is playing, while the audio track is recording.
  8. Confirm that there’s a waveform on the audio track corresponding to everywhere there’s MIDI data on the virtual instrument track.
  9. Copy any automation from the virtual track to the audio track.
  10. Play back the audio track to make sure it sounds the same as the virtual instrument track.
  11. Disable and hide the virtual instrument track.

You can do this for any virtual instrument track that no longer needs to be edited. If however you discover that you need to make a change that can’t be made in the audio track (for example, you need to change a note), then you can reactivate the virtual track (you’ll have to wait for the virtual instruments to load), make the edit, and then repeat the above process.

Rendering a virtual instrument track isn’t something you’ll typically do until late in the song production, perhaps right before you officially go into “mixing mode.” At that point all the parts are probably locked down, and there’s no need to change the actual notes that were played.

Of course, who knows, maybe in the future our computers will be so powerful that we won’t need to worry about this. But given the history of computing, I wouldn’t be surprised if future composers look back at us the way we look back at those NASA engineers in 1969. And I can’t even imagine the software that will be pushing their computers to the limit.

Better Mixing by Panning Instrument Reverbs

Typically when mixing an ensemble of instruments, regardless of whether it’s a rock band, a big band, a salsa band, a barbershop quartet, or even just three guys banging on metal cans, you’ll want to place the ensemble into some sort of acoustic space. The space might be a scoring stage, a club, a church, a cave, or even a parking garage. The most common way of doing this is to create an aux channel, insert a reverb plugin on that channel, and then send various amounts of the other channels to the aux channel’s input bus.

Also typically in this kind of mix, each of the instruments will be panned to a position in the left-right stereo image to arrange their order on an imaginary stage. For example, with a four-piece rock band (lead singer, guitar, bass player, and drummer) you might pan them to place the lead singer and drummer in the center, the guitarist more toward the left side of the stage, and the bass player more toward the right side of the stage. If -50 is “hard left” and +50 is “hard right,” then this could be accomplished by setting the lead singer’s and drummer’s pan to 0 (center), the guitarist’s pan to -30, and the bass player’s to +30.

If you stop there, however, the stereo imaging is incomplete. You also need to set the panning of each channel’s send to the reverb bus. It is a common mistake to make the reverb panning match the channel’s panning. It is actually more accurate to pan the reverb the opposite of the channel. For example, with our hypothetical rock band example, you would pan the guitarist’s reverb to +30 and the bass player’s reverb to -30.

Why is this more accurate? Think about what causes reverb. Reverb is caused by the reflection of sound from the surfaces in the room. Instruments on the left side of the room have reverb from the surfaces on the right of the room, and vice versa.
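For anyone who wants to hear the effect outside a DAW session, here’s a crude numpy sketch of the idea (this is not a real reverb, just a decaying noise impulse response standing in for one, and the dry signal is a synthetic stand-in for a guitar): the instrument is panned at -30 and its reverb send is panned at the mirror-image +30.

    import numpy as np

    rate = 44100

    def pan_gains(p):
        # Equal-power pan: p = -1 is hard left, 0 is center, +1 is hard right.
        angle = (p + 1) * np.pi / 4
        return np.cos(angle), np.sin(angle)          # (left gain, right gain)

    t = np.arange(rate) / rate
    guitar = 0.5 * np.sin(2 * np.pi * 196 * t) * np.exp(-3 * t)    # stand-in dry signal

    rng = np.random.default_rng(0)
    decay = np.exp(-6 * np.arange(rate // 2) / rate)
    ir = rng.standard_normal(rate // 2) * decay                    # fake reverb tail
    wet = np.convolve(guitar, ir)[: len(guitar)] * 0.05

    pan = -30 / 50.0                        # the -50..+50 scale mapped to -1..+1
    dry_l, dry_r = pan_gains(pan)           # guitar at -30
    wet_l, wet_r = pan_gains(-pan)          # its reverb mirrored at +30

    mix = np.stack([guitar * dry_l + wet * wet_l,
                    guitar * dry_r + wet * wet_r], axis=1)         # stereo result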

How much of a difference does this make? It’s hard to say. I suggest comparing two mixes, one with the reverbs panned the same as the instruments, and the other with the reverbs panned the opposite. Listen to both and see which sounds better. Play the mixes for some other people and see which one they prefer. It’s been my experience that good mixing is often the sum of many small decisions. Any one of those decisions on its own probably won’t make or break the mix, but over the course of mixing a song, those small decisions accumulate into a good mix. Reverb panning is one of them!

Techniques for Synchronizing Lead and Background Vocals

Here are some techniques that I use for synchronizing background and lead vocals.

These techniques only work in sections of a song where background vocals are accompanying a lead vocal in unison or harmony. In other words, one of the vocal tracks has to be clearly the lead vocal, and the others have to be clearly background vocals. These approaches won’t work for a duet, where both of the vocals are sharing the lead. They also won’t work for a choir, where none of the vocals is the lead. And, lastly, the vocals have to be some sort of parallel harmony; these techniques won’t work for contrapuntal parts.

Imagine that the lyric is “happy birthday to you,” and imagine that you have a single background vocal track singing a harmony with the lead vocal track. Solo just the vocal tracks and listen carefully. Chances are, the “hard” consonants in the two vocal tracks don’t line up perfectly. In this example, the consonants that potentially cause problems are p, b, d, and t: “haPPy BirthDay To you.” Linguists call these “plosives.” If you listen carefully you may hear something like this: “haPPPy BBirthDDay TTo you.” The background vocal may be a little behind or a little ahead of the lead vocal, but in either case you’ll hear the plosives doubled because the lead and background vocals are out of sync.

Here’s my first technique. Just remove the plosives from the background vocal! In whatever digital audio editor you’re using, you probably have a tool for drawing volume curves. Find the offending plosives in the background vocal and draw a steep notch in the volume curve around each of those plosives. Now if you were to listen to the background vocal by itself you’d hear something like “ha-y -irth-ay -o you.” Of course that would be unacceptable for the lead vocal, but the background vocal, since it’s a harmony part, will probably be lower in the mix, and if there are enough other instruments playing at the same time (maybe this is a symphonic metal arrangement of “Happy Birthday”) no one will notice the missing plosives. That’s the funny thing about the ear: It’s really good at hearing that something is in the wrong place, but it’s not very good at hearing that something’s missing.
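If you ever need to do this outside a DAW, here’s a minimal Python sketch of the same volume-notch idea (numpy only; the plosive times are hypothetical values you’d find by ear, and a DAW’s volume automation lane does this graphically instead):

    import numpy as np

    def notch_plosives(audio, rate, plosive_times, width=0.03, fade=0.01):
        # Duck the gain to zero in a short window around each plosive time (in seconds),
        # with short fades so the notch itself doesn't click.
        gain = np.ones(len(audio))
        ramp = int(fade * rate)
        for t in plosive_times:
            start = max(int((t - width / 2) * rate), ramp)
            stop = min(int((t + width / 2) * rate), len(audio) - ramp)
            gain[start:stop] = 0.0
            gain[start - ramp:start] = np.linspace(1.0, 0.0, ramp)   # fade out into the notch
            gain[stop:stop + ramp] = np.linspace(0.0, 1.0, ramp)     # fade back in after it
        return audio * gain

    # e.g. bgv_fixed = notch_plosives(bgv, 44100, plosive_times=[1.20, 1.85, 2.40])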

That’s not the only way to solve this problem. The second way is to use a time-stretching feature (such as Pro Tools’ “Elastic Audio”) to get the plosives to line up, by squeezing and stretching the background vocal (without pitch shifting) between the plosives.

Now, you might be asking “what if the vowels also don’t line up?”  You can fade out a background vocal vowel at the end of a word to make it the same length as the corresponding vowel in the lead vocal track.  You can time stretch (without pitch shifting) a vowel to make it longer.  But these two techniques only work if the vowel is at the end of a word.
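As a rough illustration of the stretching approach outside a DAW, here’s a hedged sketch using the librosa library (an assumption on my part; in a session you’d reach for elastic audio instead). The file name, segment boundaries, and target length are all hypothetical, found by ear:

    import numpy as np
    import librosa
    import soundfile as sf

    bgv, rate = librosa.load("bgv.wav", sr=None, mono=True)    # hypothetical background vocal

    start, stop = int(2.10 * rate), int(2.60 * rate)           # the out-of-sync vowel
    vowel = bgv[start:stop]
    target_len = int(0.65 * rate)                              # how long the lead holds that vowel

    # rate > 1 shortens, rate < 1 lengthens; pitch is unchanged.
    stretched = librosa.effects.time_stretch(vowel, rate=len(vowel) / target_len)

    fixed = np.concatenate([bgv[:start], stretched, bgv[stop:]])
    sf.write("bgv_fixed.wav", fixed, rate)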

What if a vowel in the middle of a word is out of sync, or what if non-plosive consonants aren’t lined up?  Well, then there’s the third technique to solve the problem: Re-record the background vocals.

That’s the funny thing about all these cool software tools we have.  Sometimes we reach for exotic plugins or complicated sequences of editing commands, when really we should just be pressing the record button.

Say Goodbye to Hiss

Hiss… unless you’re a reptile, it’s probably not a welcome sound, especially when it’s invading your music production. “Hiss” of course is just a vague term for any sort of undesired non-pitched noise. It might originate from the mic preamp, effects, plugins, or even the instrument itself (remember the original DX7?). As usual, an ounce of prevention is worth a pound of cure, so for example if you can adjust mic positions and cut back on mic preamp levels, you should do that before reaching for your de-noiser plugin.

The universe unfortunately is a little less than ideal (some would say a lot less), so you can’t prevent all hiss. Even if you’ve managed to eliminate all but the slightest trace of hiss on each track, if you’ve got 40 tracks playing back, all that hiss can add up. Of course, if you’ve got 40 tracks, chances are the music will overwhelm the hiss. After all, it’s not hiss per se that’s problematic; it’s the hiss/music ratio. So if you’ve got a tiny amount of hiss on each of your 40 tracks, it probably doesn’t matter if those 40 tracks are for an epic symphonic metal opus. But on the other hand, if you’ve got 40 very quiet tracks of sound effects for a movie soundtrack, then the hiss might be unacceptable.
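A quick bit of arithmetic shows how it adds up. If the hiss on each track is uncorrelated with the hiss on the others (a reasonable assumption), the noise powers add, so 40 tracks of hiss gain about 10 × log10(40), roughly 16 dB, over a single track:

    import math

    per_track_db = -80.0   # hypothetical hiss level on each track, in dBFS
    tracks = 40
    total_db = per_track_db + 10 * math.log10(tracks)
    print(f"{tracks} tracks of {per_track_db} dBFS hiss is roughly {total_db:.1f} dBFS combined")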

I did mention de-noiser plugins. De-noiser plugins typically work by being shown a bit of audio that contains noise without music. The plugin “learns” what the noise looks like and can then hunt it down and remove it from the rest of the track even where there is music and noise playing at the same time. (The trick is to set the amount of de-noising so that the music doesn’t get damaged.) I swear by (and sometimes at) the iZotope Denoiser plugin.

Knowing how the de-noiser plugin works allows you to play a little trick. When recording, record some extra sound before the instrument starts playing. This will give you a chunk of noise with no music in it, captured deliberately for the purpose of training your de-noiser plugin.
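Here’s a hedged sketch of the same trick outside the DAW, assuming a mono recording and the third-party Python packages soundfile and noisereduce (this is a stand-in for a de-noiser plugin, not the same algorithm, and the file names are hypothetical): the first second of the take is the deliberately recorded noise-only section, and it becomes the noise profile for cleaning the rest.

    import soundfile as sf
    import noisereduce as nr     # pip install noisereduce

    audio, rate = sf.read("guitar_take.wav")     # hypothetical mono take
    noise_profile = audio[:rate]                 # the extra second recorded before the playing starts

    cleaned = nr.reduce_noise(y=audio, sr=rate, y_noise=noise_profile, prop_decrease=0.8)
    sf.write("guitar_take_denoised.wav", cleaned, rate)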

In some cases, there is such a wide separation in the frequency domain between the music and the hiss, that a simple low-pass EQ can remove the hiss without damaging the music. This might be true for example on a bass track.

Lastly, if all else fails, here’s a little trick that I’ve learned: Set each track to mute wherever there is no music on it. This is especially effective at the start and end of a song. For example, if you have an acoustic guitar solo to start off the song, then all the other tracks should be muted during it. Likewise, at the end of the song, if a track extends beyond the end of the music on it, mute the track at the end of the music, not the end of the song. Note that muting a track is different from simply removing unwanted audio clips. Sometimes plugins themselves add noise, so even if a track contains no audio clips, a plugin inserted on that track can still generate noise. In that case, you have to mute the track to mute the noise.

Similarly, if there’s a section of a song that gets very quiet, with most instruments dropping out, mute the instruments that have dropped out during that section. Whenever muting a track, be careful not to mute before reverb tails have finished.
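As an offline illustration of the same idea, here’s a minimal numpy sketch (nothing but numpy assumed, mono audio, made-up thresholds) that zeroes out long quiet stretches of a track while leaving some padding after the music stops so reverb tails can ring out. In a session you’d do this with mute automation or a strip-silence function instead.

    import numpy as np

    def mute_silence(audio, rate, threshold_db=-60.0, hold=0.5, tail=2.0):
        # Zero out stretches quieter than threshold_db that last longer than
        # `hold` seconds, keeping `tail` seconds after the music stops.
        window = int(0.05 * rate)                            # 50 ms RMS windows
        n = len(audio) // window
        rms = np.sqrt(np.mean(audio[: n * window].reshape(n, window) ** 2, axis=1))
        quiet = 20 * np.log10(rms + 1e-12) < threshold_db    # True where a window is "silent"

        out = audio.copy()
        run_start = None
        for i, q in enumerate(np.append(quiet, False)):      # sentinel flushes the last run
            if q and run_start is None:
                run_start = i
            elif not q and run_start is not None:
                if (i - run_start) * window / rate > hold + tail:
                    start = run_start * window + int(tail * rate)   # leave room for the tail
                    out[start : i * window] = 0.0
                run_start = None
        return out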

I hope you find these techniques useful. Until next time, may all your recordings be hiss-free.