
Archive for Pro Tools Tips

Rendering Virtual Instrument Tracks to Save Processing Power and Decrease Load Times

“Virtual instruments” are a powerful tool in today’s computer-based composer’s tool bag. Back in the dark ages (i.e. when I was a kid, when I didn’t yet own a computer) if you wanted to record, for example, a violin, you had to have a violin and a violinist. But in 2016 you can buy a library of stringed instrument samples including a wide variety of playing techniques (arco, pizzicato, spiccato, sul ponticello, etc.) so that a pretty convincing violin recording can be made, even if there’s no violin or violinist anywhere in the room. Hence, the term “virtual.” In all fairness to violinists, the real deal still far surpasses the virtual one, but the latter can suffice for a composer to hear their work and get quick answers to questions like “how would this part sound an octave higher?” or “what if the cello doubled this part?” not to mention “what if we add sleigh bells and an oud?”

It is a very powerful tool indeed. But “powerful” here also literally means computing power. My studio computer has more processing power than the whole world did at the time of the Apollo moon landing, but it’s not an infinite amount of power. Once you have 30 tracks of virtual instruments, convolution reverb putting those tracks on the stage of the Konzerthausorchester Berlin, some multiband parametric EQs, and autotune on the lead vocals, my 2012-era Mac Pro begins to sweat like the forehead of a 1969 rocket scientist operating his slide rule as the LEM approaches the surface of the moon. Dropouts and other nasty artifacts start appearing in the audio, and eventually the dreaded “you are running out of processing power” alert appears.

The other problem with virtual instruments is that they can take up a lot of disk space. My favorite grand piano library, for example, came on 32 double-sided DVDs and took a whole weekend to install. When you open a Pro Tools session that uses this library, even though only a small fraction of the whole library has to be loaded for that session, you might be staring at a progress indicator for five minutes.

So if you use a lot of virtual instruments, you’ll be doing a lot of waiting for samples to load, and you’ll want to keep your system resources monitor open at all times to make sure you don’t run into the “out of processing power” error. The ugly truth is that even before you actually run out of resources, software can start to behave erratically, and it might even crash before you have a chance to know that you’re running low.

The solution is called “rendering,” which is the process of turning virtual instrument tracks into audio tracks. Compared to virtual instrument tracks, audio tracks take far less processor power, and they take almost no time to load, due to the magic of nonlinear editing. That’s because when playing back audio tracks your digital audio workstation doesn’t actually load the audio; instead it loads a list of audio file names and pointers into those files specifying where to start and stop playing.
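To make the pointer idea concrete, here’s a rough sketch (in Python, purely for illustration; nothing like this is exposed by Pro Tools itself) of what such a clip list might look like: each clip is just a file reference plus a few offsets, so “loading” the track means reading a handful of numbers per clip instead of gigabytes of samples.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """One region on an audio track: a reference into a file, not the audio itself."""
    file_path: str        # which audio file on disk
    file_offset_s: float  # where in that file to start reading
    track_start_s: float  # where on the session timeline the clip begins
    length_s: float       # how much of the file to play

# "Loading" the audio track is just loading this list; no samples are read
# from disk until playback actually reaches each clip.
violin_track = [
    Clip("Audio Files/Violin_01.wav", file_offset_s=0.0, track_start_s=12.0, length_s=30.5),
    Clip("Audio Files/Violin_02.wav", file_offset_s=4.2, track_start_s=95.0, length_s=18.0),
]
```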

Here’s how I convert a virtual instrument track into an audio track in Pro Tools; maybe there’s an easier way to do it, like maybe your software has a “render virtual instruments” function… that would be nice. (Avid, are you listening?)

  1. Route the virtual instrument track’s output to a bus that you’re not currently using for anything else.
  2. Create an audio track whose input is that bus. Set its output to your main outputs. What I like to do is rename the virtual track to have a “v” at the start of the name and name the corresponding audio track without the v. For example “vViolin” and “Violin.”
  3. Move (not copy) any plugins such as EQ, reverb, compression, etc. from the virtual instrument track to the audio track.
  4. Set the virtual instrument track’s playback level (its volume fader) to 0 dB, so the audio gets recorded at unity gain.
  5. Turn off any automation (level, panning, etc.) on the virtual instrument track.
  6. Set the audio track to record. Set the virtual instrument track to play.
  7. Play through the whole song, or only the sections where the virtual instrument is playing, while the audio track is recording.
  8. Confirm that there’s a waveform on the audio track everywhere there’s MIDI data on the virtual instrument track.
  9. Copy any automation from the virtual track to the audio track.
  10. Play back the audio track to make sure it sounds the same as the virtual instrument track.
  11. Disable and hide the virtual instrument track.

You can do this for any virtual instrument track that no longer needs to be edited. If however you discover that you need to make a change that can’t be made in the audio track (for example, you need to change a note), then you can reactivate the virtual track (you’ll have to wait for the virtual instruments to load), make the edit, and then repeat the above process.

Rendering a virtual instrument track isn’t something you’ll typically do until late in the song production, perhaps right before you officially go into “mixing mode.” At that point all the parts are probably locked down, and there’s no need to change the actual notes that were played.

Of course, who knows, maybe in the future our computers will be so powerful that we won’t need to worry about this. But given the history of computing, I wouldn’t be surprised if future composers look back at us like we do to those NASA engineers in 1969. And I can’t even imagine the software that will be pushing their computers to the limit.

Techniques for Synchronizing Lead and Background Vocals

Here are some techniques that I use for synchronizing background and lead vocals.

These techniques only work in sections of a song where background vocals are accompanying a lead vocal in unison or harmony. In other words, one of the vocal tracks has to be clearly the lead vocal, and the others have to be clearly background vocals. These approaches won’t work for a duet, where both of the vocals share the lead. They also won’t work for a choir, where none of the vocals is the lead. And, lastly, the vocals have to be some sort of parallel harmony; these techniques won’t work for contrapuntal parts.

Imagine that the lyric is “happy birthday to you,” and imagine that you have a single background vocal track singing a harmony with the lead vocal track. Solo just the vocal tracks and listen carefully. Chances are, the “hard” consonants in the two vocal tracks don’t line up perfectly. In this example, the consonants that potentially cause problems are p, b, d, and t: “haPPy BirthDay To you.” Linguists call these “plosives.” If you listen carefully you may hear something like this: “haPPPy BBirthDDay TTo you.” The background vocal may be a little behind or a little ahead of the lead vocal, but in either case you’ll hear the plosives doubled because the lead and background vocals are out of sync.

Here’s my first technique. Just remove the plosives from the background vocal! In whatever digital audio editor you’re using, you probably have a tool for drawing volume curves. Find the offending plosives in the background vocal and draw a steep notch in the volume curve around each of those plosives. Now if you were to listen to the background vocal by itself you’d hear something like “ha-y -irth-ay -o you.” Of course that would be unacceptable for the lead vocal, but the background vocal, since it’s a harmony part, will probably be lower in the mix, and if there are enough other instruments playing at the same time (maybe this is a symphonic metal arrangement of “Happy Birthday”) no one will notice the missing plosives. That’s the funny thing about the ear: It’s really good at hearing that something is in the wrong place, but it’s not very good at hearing that something’s missing.
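If you’re curious what that volume notch looks like outside a DAW, here’s a minimal sketch in Python with NumPy (the sample rate and plosive times below are hypothetical; in a real session you’d find the plosives by ear or by eyeballing the waveform). It builds a gain curve that dips to zero for a few tens of milliseconds around each plosive, with short ramps so the dips don’t click:

```python
import numpy as np

def notch_gain_envelope(num_samples, sample_rate, plosive_times_s,
                        notch_width_s=0.06, fade_s=0.01):
    """Return a per-sample gain curve that dips to 0 around each plosive time."""
    gain = np.ones(num_samples)
    fade = int(fade_s * sample_rate)
    half = int(notch_width_s * sample_rate / 2)
    for t in plosive_times_s:
        center = int(t * sample_rate)
        lo, hi = max(center - half, 0), min(center + half, num_samples)
        gain[lo:hi] = 0.0
        # short linear ramps into and out of the notch so it doesn't click
        gain[max(lo - fade, 0):lo] = np.linspace(1.0, 0.0, min(fade, lo))
        gain[hi:min(hi + fade, num_samples)] = np.linspace(0.0, 1.0, min(fade, num_samples - hi))
    return gain

# background_vocal is a mono NumPy array; the times are where the plosives land (hypothetical)
# muted = background_vocal * notch_gain_envelope(len(background_vocal), 44100, [0.31, 0.78, 1.42])
```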

That’s not the only way to solve this problem. The second way is to use a time-stretching feature (such as Pro Tools’ Elastic Audio) to get the plosives to line up by squeezing and stretching the background vocal (without pitch shifting) between the plosives.
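The equivalent move outside Pro Tools would be a phase-vocoder time stretch. Here’s a rough sketch using the librosa library (the file names and the 5% figure are made-up placeholders): it shortens a background-vocal phrase without changing its pitch, so its plosives can land where the lead’s do.

```python
import librosa
import soundfile as sf

# Load just the phrase between two plosives (file name is hypothetical)
phrase, sr = librosa.load("bg_vocal_phrase.wav", sr=None)  # sr=None keeps the original sample rate

# Suppose the background phrase runs about 5% longer than the lead's.
# rate > 1 shortens the audio; the pitch is left unchanged.
aligned = librosa.effects.time_stretch(phrase, rate=1.05)

sf.write("bg_vocal_phrase_aligned.wav", aligned, sr)
```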

Now, you might be asking “what if the vowels also don’t line up?”  You can fade out a background vocal vowel at the end of a word to make it the same length as the corresponding vowel in the lead vocal track.  You can time stretch (without pitch shifting) a vowel to make it longer.  But these two techniques only work if the vowel is at the end of a word.

What if a vowel in the middle of a word is out of sync, or what if non-plosive consonants aren’t lined up?  Well, then there’s the third technique to solve the problem: Re-record the background vocals.

That’s the funny thing about all these cool software tools we have.  Sometimes we reach for exotic plugins or complicated sequences of editing commands, when really we should just be pressing the record button.

Selecting Discontiguous Regions in Pro Tools

One thing that’s bothered me about Pro Tools for years now is that there’s no (obvious) way to select discontiguous regions. I put the word “obvious” in parentheses because I’ve always suspected there was a way, just not one that didn’t involve animal sacrifices and incantations in ancient Babylonian.

I just figured it out, and fortunately it didn’t require the death of any animals.  Select the “grabber tool.”  If you continue to hold down the mouse button you’ll see that the grabber tool is actually a collection of 3 tools:

[Screenshot: the grabber tool’s pop-up menu, showing the “object grabber” option]
Select the third one, the “object grabber” tool.  (Note that the object grabber tool can’t be used in shuffle or spot mode; if you’re in one of those modes, Pro Tools will warn you and then switch into slip mode.)  Now that you’re in object grabber mode, the mouse cursor will change into an “object selection” shape whenever it passes over a clip (or “region” if you’re still working with an older version of Pro Tools).  You can click on a clip to select it.  But here’s where the magic happens:  You can hold down the shift key and click on a clip to select it in addition to previously selected clips.  The selected clips can be discontiguous, and they don’t even need to be all on the same track.  Having selected any number of clips, you can apply all sorts of editing operations to them.  I haven’t experimented to see if there are some editing operations that can’t be applied to discontiguous selections.

This has many useful applications.  For example, let’s say you have a pop song with a standard intro-verse-chorus-verse-chorus-bridge-chorus-chorus structure.  You can select all the verses and do something to them, perhaps a volume trim, or mute/unmute some of the instruments.

The only limitation seems to be that the object grabber tool only operates on clips.  You can’t just select arbitrary ranges of time on arbitrary tracks.  But this doesn’t seem like too big a limitation.  If you’re trying to apply an edit to multiple discontiguous regions, you just have to remember to find those regions first and turn them into clips.  This of course has to be done region by region, because there’s no way to select all those discontiguous regions without them first being made into clips.

Pro Tools Tip: Creating a Stereo Track from Two Mono Tracks

Sometimes when you’re using a big complicated program like Pro Tools, you run into a task that seems so simple and common that there should be a command to just make it happen. Especially when there seem to be commands for practically everything else you can think of, including things that aren’t that common.

Here’s an example. You want to split a stereo track into two mono tracks? No problem. Just select the stereo track that you want to split, then select “Split into Mono” on the Track menu. But what about the opposite? Let’s say you have two mono tracks, and you want to merge them into one stereo track. There’s no “Merge into Stereo” command.

So, here’s what you can do:

  • Create a new empty stereo audio track
  • Select the 2 mono audio tracks that you want to merge
  • Copy
  • Select the stereo track
  • Paste

Voila. You now have a stereo audio track.
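And if the two mono tracks have already been bounced to files, the same merge can be done outside Pro Tools with a few lines of Python using the soundfile library (the file names here are just examples):

```python
import numpy as np
import soundfile as sf

# Read the two mono files (assumed to have the same length and sample rate)
left, sr_left = sf.read("guitar_left.wav")
right, sr_right = sf.read("guitar_right.wav")
assert sr_left == sr_right, "sample rates must match"

# Interleave them as (frames, 2) and write a single stereo file
stereo = np.column_stack((left, right))
sf.write("guitar_stereo.wav", stereo, sr_left)
```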