Tom Bajoras Blog
email: tom.bajoras@gmail.com

Nope, you’re not a genius.

12 Things I Learned About Starting a Business (by Starting a Business)

The first thing I learned by starting a business is that it’s best to just do something. If you have an idea for a business, and you love the idea, and you believe it to be a good idea, then just run with it. If it doesn’t turn out great, or even if it fails, learn from your mistakes, and do it better next time. If you sit around just thinking about how to do something perfectly, you’ll never do anything. So rather than sitting around thinking about how to write the perfect article about this, I’m just going to jump in and start writing! Here then is a list, starting with the second thing that I learned by starting a business.

2. If you don’t love what you’re doing, don’t do it! Even if it’s something that can make a lot of money. Failure isn’t the worst thing; the worst thing is succeeding at the wrong thing.

3. Be sure that what you’re doing is a business and not just a hobby. If your thing is disruptive performance art, then by all means go ahead and play bagpipes while riding a camel through a subway station… but don’t think it’s a business.

4. No matter how much you love what you’re doing, it’s not going to succeed as a business unless it’s something that people want to buy.

5. Starting a business is SUPER difficult and requires a FULL commitment. You can’t do it in your “spare time.” You will have no spare time! If you’re already married, it’s going to be even harder. If you have kids, you can pretty much forget about it.

6. If you’ve got what it takes, starting a business can be the most rewarding thing you’ve ever done in your life. Yes, it’s difficult, but it’s also wonderful.

7. You can’t do it alone. So, for example, if you need to do marketing, hire a marketing person. If you need to do PR, hire a PR person. If you need to create a web site, hire a web designer.

8. If you can’t afford to hire people, then you’ll have to share equity. If your dream is worth believing in, then there will be people who will be willing to share your dream.

9. You’ll be amazed by how good other people are at what they do. You need to surround yourself with smart, passionate people who know and love the thing that they do as much as you know and love the thing that you do. If your thing is programming, or baking, or fixing cars, then do that. Don’t waste your time trying to do marketing. You’re not a marketing person. Someone else is a marketing person, not a programmer/baker/mechanic.

10. Ask yourself what you’re good at. No, not just good. Ask yourself what you’re freaking amazing at! Those are the things you should spend your time doing.

11. Never hire friends. Never work for friends.

12. Business plans are worth less than the paper they’re printed on. But if you want to write one to help organize your thoughts, go ahead and do so. Just remember to throw it away when you’re done writing it.

Each one of these points was learned by painful trial and error. I could probably expand on each of these points. Maybe I should even write a book. Except I’m not a writer. See #9 above.


I’ve been working with sound or music in one form or another for over 50 years. For half of those years, starting in 1991, I took a detour and founded Art & Logic, a custom software development company. In 2008 I was able to step back from the daily operations of Art & Logic, and to enjoy my semi-retirement I did what any other insane person in my situation would have done: I started another company. So I’m now running a small audio production studio, and I’m happy to report that I’m learning all over again the lessons I learned from starting my previous company.

Mastering Audiobooks

The term “mastering” in the audio world conjures up images of a mad scientist in his laboratory, surrounded by oscilloscopes, Jacob’s Ladders, mysterious black boxes with large rotary dials built in the 1940s, and maybe even an animal about to be sacrificed on an altar between two speakers that cost $100K each.

That image might still be more or less accurate for a music mastering lab, but fortunately mastering an audiobook is fairly simple and doesn’t require expensive equipment or the harming of any animals.

For an audiobook you’re basically only worried about making all the chapters the right loudness, where “right” is defined by the audiobook distributor’s specifications. Most distributors use Audible’s specifications, so I’ll assume those throughout this article, but you can adjust accordingly if your distributor uses different specifications.

You’ll need two plugins: a maximizer and a level meter. I recommend the Waves L2 Maximizer, but almost any maximizer can produce acceptable results. For the level meter, you’ll need something like the Waves WLM plugin that can measure “loudness units relative to full scale” (LUFS). This might be a new concept, but for our purposes here it just means “how loud the audio seems to be to the human ear.” If you want to learn more about LUFS, I recommend this video.

Insert the maximizer and level meter plugins on the master channel (in that order). The goal is to have the long-term LUFS be around -21, according to “EBU method / ITU 1770 weighting” (which is the default in the WLM plugin). If you’re using the L2 Maximizer, use the “CD high res” setting with the output ceiling pushed down to -2. Whatever maximizer you choose, there’s probably a setting for producing a CD, and that should work. You don’t want massive amounts of compression like a bad Nirvana remastering; you just want to even out the obviously loud and soft sections so the audiobook listener is never tempted to adjust the volume on their playback device.

Once your plugins are in place, play some of the first chapter. About a minute should be enough. See what the LUFS is. Adjust the maximizer level accordingly and play the same audio. You want the LUFS to be -21. If all the chapters were recorded in the same studio, on the same equipment, with the same narrator, they shouldn’t vary by more than 1 or 2 dB, so you probably won’t need to adjust the maximizer very much from chapter to chapter. If the LUFS of a chapter is much below -24, you’ll probably need to adjust the level of the audio feeding into the master channel before adjusting the maximizer, otherwise you’ll start noticing the compression effect of the maximizer (i.e. you’ll end up with a bad Nirvana remastering).
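The arithmetic behind that adjustment is simple enough to sketch in a few lines of Python. This is just the math, not a plugin, and the function names are my own invention: the gap between the measured and target LUFS is the gain to apply, and a dB gain converts to a linear amplitude factor of 10^(dB/20).

```python
def gain_to_target(measured_lufs, target_lufs=-21.0):
    """Return the gain in dB needed to move a chapter from its
    measured loudness to the target loudness."""
    return target_lufs - measured_lufs

def db_to_linear(db):
    """Convert a dB gain to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def apply_gain(samples, db):
    """Scale raw audio samples by a dB gain."""
    factor = db_to_linear(db)
    return [s * factor for s in samples]

# A chapter measuring -24 LUFS needs +3 dB to reach -21:
print(gain_to_target(-24.0))  # 3.0
```

So a chapter measuring -24 LUFS needs +3 dB of gain, which scales the samples by a factor of about 1.41.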

Once you’ve found the right maximizer settings to achieve an LUFS of -21, select the whole chapter and apply the maximizer to it, rendering the audio back to the chapter. (Oh… and make a backup of the chapter before you do this!) Now if you bypass the maximizer on the master channel and you play back some of the chapter, the level meter should show an LUFS of -21.

Then simply repeat that procedure for each chapter in the audiobook.

Here’s a dirty little secret: If the LUFS is a little off, say -20 or -22, you’ll be fine. The big picture is that you don’t want a noticeable loudness difference between two chapters, or between two audiobooks. Compare the mastered versions of chapters recorded on different days to make sure they sound the same loudness. Also compare your mastered chapters to a reference audiobook, preferably something that has won an award for audio quality. There’s no point in making sure your audiobook sounds as good as someone else’s bad-sounding audiobook!

Spoken Word Production

I do a lot of spoken word production. This includes stuff like audio books, instructional videos, and voice-overs. In this type of work it’s very common for the narrator (sometimes called “the reader”) to have to read long passages of text. For example, let’s say you’re recording a self-guided audio tour for a historical site. There might be 20 stops on the tour, each one with a couple minutes of audio. For most narrators it’s going to be impossible to read all those words without making a mistake somewhere. Obviously you want all the words to be pronounced correctly, with no accidentally skipped words, and you want the pacing and tone of the narration to be right.

There are multiple approaches to correcting mistakes. In the first approach, whenever the narrator realizes that they’ve made a mistake, they simply pause, then back up to before the mistake occurred and restart. In this approach, the editor will later go back and delete the mistakes, closing up the holes. For example, if the narrator is trying to say “Jack and Jill went up the hill to fetch a pail of water,” but they accidentally say “Jack and Jill went up the hill to catch a pail of water,” it might end up sounding like this: “Jack and Jill went up the hill to catch a… oops… went up the hill to fetch a pail of water.” Then the editor would delete “went up the hill to catch a… oops…” and close up the hole, resulting in the correct “Jack and Jill went up the hill to fetch a pail of water.” The advantage to this approach is that the narrator stays “in the groove,” which can be important if the text is difficult. However, the disadvantage is that it takes a surprising amount of skill to do this. Otherwise, when the mistake is deleted, it ends up sounding like, well, like a mistake was deleted.

The second approach is to correct the mistakes while recording. So when the narrator says “to catch a… oops,” the engineer stops the recording, sets a punch-in point with pre-roll, and restarts the recording. The advantage to this approach obviously is that there is no editing to be done afterward. The disadvantage is that it requires good communication between the narrator and the engineer; otherwise there can be a lot of wasted time for a conversation like “Where do you want to take it from?” “How about ‘went up’?” “No, there’s not enough space there. Let’s take it from ‘Jack.’” Then the engineer needs to select the right amount of pre-roll so that the narrator can settle into a matching tone, loudness, and pacing. This is not as hard as it might sound; I’ve found that after working with the same narrator on multiple projects, we know each other well enough that we can do the whole process without even talking to each other.

In actual practice, it might not be possible to correct all mistakes while recording. Even if you think you’re doing that, you might discover upon listening to the recording that there are a few mistakes that need to be corrected. In that case you’ll need to mark the regions that need to be replaced, then punch each of them. This can be challenging if multiple days have gone by; you’ll need to recreate the exact same mic setup, and the narrator will have to match their voice from a previous day. Still though, it can be done. It does require skill, but after all, that’s why a narrator gets paid to narrate, and a recording engineer gets paid to record. 🙂

Here’s a tip that I’d like to leave you with. When punching in, I find that it’s best to start the pre-roll at the beginning of a sentence. For example, if you’re punching in at the word “fetch,” it’s better to pre-roll from “Jack” (the start of the sentence) rather than “went.” That’s because a pre-roll starting mid-sentence can throw the narrator off balance momentarily, and they may not be fully “recovered” at the punch-in point. This is a subtle effect, but I’ve noticed that punches sound more natural when the pre-roll starts at a sentence boundary. By the way, in case you don’t know, in the Pro Tools edit window you Option-click to set the pre-roll point. After you’ve done this kind of work for a while, the keyboard and mouse gestures become automatic; you just do them without thinking, like you speak without thinking of the words.

Here’s some spoken word that was recorded earlier this year in my studio. It’s an audio clip from the book, Echoes of Tattered Tongues, by John Guzlowski. The reader is Jon Brandi. This was a difficult, but rewarding, project. Recording poetry has a lot of similarities to song production, because you’re not just trying to pronounce all the words correctly; you’re also working with the narrator to create exactly the right pacing and tone of each line to convey the author’s intentions, which in this case (as might often be the case) was even more complicated because the author was on the other side of the country while we were recording. Even though each poem typically took up only 1 or 2 printed pages, there were often dozens of stops and restarts, multiple passes, and edits. Hopefully it ends up sounding transparent, as though the reader is reading the whole poem in one take, getting every nuance of language perfect on the first try.

Why Remove Frequencies That Aren’t There?

I’ve read a number of articles that recommend removing “all subsonic frequencies, even if there aren’t any.” What does this statement actually mean? I assume they’re not literally recommending removing something that isn’t there; that would be like saying “you should remove the leopard in your back yard.” I’m guessing what they really mean is “do something that would remove any subsonic frequencies, in case there are any, and it won’t do any harm if there aren’t.”

A subsonic frequency, by definition, is one that’s too low for the human ear to hear, so of course you can’t detect such frequencies simply by listening. The nominal range for human hearing is 20 Hz to 20 kHz, but that range shrinks with age (especially the upper end of it). If, like most rock musicians, you’ve been exposed to loud sounds repeatedly without adequate hearing protection, I hate to deliver the bad news, but you’re probably already deaf to frequencies above 15 kHz.

But today we’re not talking about high frequencies; we’re talking about low frequencies, subsonic frequencies, the ones below 20 Hz. We’ll consider four questions:

  1. How might subsonic frequencies end up in your mix?
  2. How can you know if they’re there?
  3. How should you remove them?
  4. What harm could they do if you don’t remove them?

How might subsonic frequencies end up in your mix?

First of all, if you’re writing some kind of experimental music where you deliberately synthesized subsonic frequencies, then this article isn’t for you. This article is for people who accidentally have subsonic frequencies in their mix.

If you’re recording any sort of “normal” music, it’s unlikely that you’ve captured any subsonic frequencies through a microphone. (If you have, I want that microphone!) Most microphones simply don’t pick up frequencies that low.

But synthesizers can easily make frequencies that low. Say for example you have a bass patch that’s doubled down an octave or two. If you play notes that are too low, you’re now hearing only the fundamental, and the doubled part has passed into the subsonic range.
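To see how easily this happens, here’s a quick back-of-the-envelope calculation in Python. The note numbers are standard MIDI, and the formula is the usual equal-temperament one, not anything specific to a particular synth:

```python
def note_freq(midi_note):
    """Equal-tempered frequency for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

low_e = note_freq(28)        # E1, the lowest string on a bass guitar
print(round(low_e, 1))       # 41.2 Hz -- audible
print(round(low_e / 4, 1))   # 10.3 Hz -- two octaves down: subsonic!
```

A patch doubled two octaves down divides the frequency by four, so anything played below roughly E2 pushes the doubled voice under 20 Hz.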

It’s also possible (but not likely) that subsonic frequencies could have been created by some sort of faulty audio processing. For example, it could be a bad plugin.

Another possibility is amplitude or frequency modulation: certain kinds of audio processing, such as ring modulators, produce sum and difference frequencies, and the difference between two closely spaced frequencies can fall below 20 Hz.

Ultimately it doesn’t really matter how the frequencies got there; the more important thing is to be able to know whether there are in fact any subsonic frequencies in your mix.

How can you know if they’re there?

As already explained, you can’t hear a subsonic frequency, so you’ll need something other than a human ear. That thing you need is called a frequency analyzer (or sometimes a “spectrum analyzer”). Here’s a free one that works really well.

A frequency analyzer shows you what frequencies and how much of each are in your music. It has many uses aside from the one we’re talking about today. For example, it can help you pinpoint an annoying hum that you’re trying to remove from a track.

First try putting the frequency analyzer on your master output. Play the entire song and see if any subsonic frequencies show up over the course of the song. Remember that the problem might occur only in certain parts of the song, for example where the synth bass plays.

If you see subsonic frequencies in your mix, try muting and soloing different track combinations to figure out which tracks contain the subsonic frequencies.
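If you’re curious what a frequency analyzer is doing under the hood, here’s a toy sketch in Python. It’s a naive single-bin DFT, nowhere near a real analyzer plugin, but it shows how subsonic energy can be detected numerically even though you can’t hear it:

```python
import math

def magnitude_at(samples, freq, rate):
    """Naive single-bin DFT: correlate the signal with a cosine and a
    sine at `freq` and return the normalized magnitude."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

rate = 1000                          # a low sample rate keeps the toy example fast
t = [i / rate for i in range(rate)]  # one second of "audio"
# A 440 Hz tone with a sneaky 10 Hz subsonic component mixed in at 0.3:
sig = [math.sin(2 * math.pi * 440 * x) + 0.3 * math.sin(2 * math.pi * 10 * x) for x in t]

print(round(magnitude_at(sig, 10, rate), 3))  # 0.3 -- the subsonic energy is there
```

A real analyzer computes something like this for every frequency bin at once (via an FFT) and draws the result as a curve.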

How should you remove them?

The obvious tool to reach for is some kind of high pass filter (HPF), probably part of an EQ plugin. But you need to remember that a HPF has a slope; it can’t just block all frequencies below 20 Hz while letting in 20 Hz and higher. A real HPF looks more like this:

[Figure: “hpf”, the frequency response of a real high-pass filter, showing a gradual slope below the cutoff rather than a vertical cliff at 20 Hz.]
Therefore some people recommend removing all frequencies below a higher number, such as 80 or even 100 Hz. The idea isn’t that frequencies below 100 Hz are harmful, but that you need to remove all frequencies below 100 Hz in order to ensure that no frequencies below 20 Hz are left.
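For the curious, here’s a minimal sketch in Python of the simplest possible HPF, a first-order RC filter. It’s nowhere near plugin quality (real EQs use much steeper designs), but it demonstrates both the idea and the gentle slope:

```python
import math

def one_pole_hpf(samples, cutoff_hz, rate):
    """First-order RC high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / rate
    a = rc / (rc + dt)
    out = []
    y_prev = 0.0
    x_prev = 0.0
    for x in samples:
        y = a * (y_prev + x - x_prev)
        out.append(y)
        y_prev, x_prev = y, x
    return out

# DC (0 Hz) is blocked: a constant input decays toward zero.
dc = [1.0] * 1000
filtered = one_pole_hpf(dc, 100, 44100)
print(abs(filtered[-1]) < 0.05)  # True -- the DC offset is nearly gone
```

Even with the cutoff set at 100 Hz the rolloff is gradual, which is exactly why the “filter higher than 20 Hz” advice exists.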

If you are able to isolate the problem to specific tracks, it’s best to insert the EQ on those tracks rather than on the master output. Even the best EQ isn’t 100% transparent. A HPF can introduce a phase shift around the cutoff frequency, sometimes perceived as a bump. That effect might be imperceptible on your bass track, but you wouldn’t want your entire mix to be phase shifted around 100 Hz.

If your mix is being sent to a mastering lab, then they may be responsible for removing subsonic frequencies, and you might think that you don’t need to worry about it. But I try not to make any assumptions about where my music goes after it leaves my studio, so I want it to be as close as possible to a finished product. In the same way, you wouldn’t want to produce a mix that has a harsh 2–4 kHz frequency bump and rely on listeners to adjust the EQ on their playback system.

What harm could they do if you don’t remove them?

First let’s review how speakers turn electricity into sound. The electricity that goes into your speakers fluctuates in voltage very rapidly. The voltage at any given instant determines the position of the speaker cone. If you were to feed the lowest possible subsonic frequency, 0 Hz, into your speaker and change the voltage manually, you would see the speaker cone moving in and out corresponding to the movement of your hand on the manual voltage dial. Electricity of 0 Hz frequency has a special name, DC, and in the world of amplifiers and speakers, DC is a VBT (a “very bad thing”). Why? Well, turn up that voltage dial too far, and your speaker cone will eventually tear away from the rest of the speaker, and aside from the loud pop and impressive puff of smoke, this is not cool, because you now have a nonfunctioning speaker. For this reason, almost every audio amplifier has built-in circuitry to block DC.

But what about very low frequency electricity, not all the way down at 0 Hz, but something between 0 and 20? Well, the situation for the speaker is not much better. If the speaker cone is moving that slowly, it’s not producing any sound that your ear can hear, but it is moving. But you have no way of hearing how “loud” this sound is, so if it gets very “loud” (i.e. the voltage gets too high)… pop goes the speaker.

Of course the same thing is true in the sonic frequency range. Obviously, if the sound is too loud, it might damage your speakers. But your ears will also hear the sound, and it won’t come as a surprise when the speakers get damaged. In fact, your ear drums might be damaged before the speakers.

It’s highly unlikely that you accidentally have a subsonic frequency loud enough to damage your speakers. What’s far more likely is that you have a subsonic frequency mixed in with the frequencies that you can hear. The problem is that your music rides on top of the subsonic frequency, which means the music has to fit into a smaller range of speaker movement. The easiest way to illustrate this is to go back to the most extreme kind of subsonic frequency, 0 Hz, i.e. DC voltage. Imagine that you have a DC voltage mixed into your music. That DC voltage positions the speaker cone somewhere other than its normal rest position. Say your speaker cone can move an inch. You’d like to use that whole inch to reproduce your music. But if the DC voltage is pushing the speaker cone out to 1/4″, then only the range between 1/4″ and 1″ is left for the music. A subsonic frequency does the same thing, just slowly: as the cone drifts back and forth between 0 and 1/4″, the range left over for the music keeps shrinking and growing. All of this can be summed up by saying “removing subsonic frequencies frees up headroom.”
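The headroom argument is easy to verify with a toy calculation. Assume the speaker can swing between -1.0 and +1.0 (the numbers here are arbitrary illustration, not any real speaker spec):

```python
def peak(samples):
    """Largest absolute excursion in the signal."""
    return max(abs(s) for s in samples)

# Music that just fits a speaker whose cone can swing +/-1.0:
music = [0.9, -0.9, 0.5, -0.5]
print(peak(music))               # 0.9 -- fits

# The same music riding on top of a 0.25 DC offset:
offset_music = [s + 0.25 for s in music]
print(peak(offset_music) > 1.0)  # True -- now it exceeds the speaker's range
```

The music itself hasn’t gotten any louder; the offset simply stole a quarter of the available cone travel.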

Ironically, perhaps the most compelling argument for removing low frequencies (including subsonic frequencies, but even frequencies up to 100 Hz) is that doing so will actually give your music a more powerful low end. If you have a lot of low frequency content in multiple tracks, the mix will just end up sounding like a big muddy mess. The secret to creating a powerful, but not muddy, low end is to remove frequencies below 100 Hz on all but a few tracks (maybe even just one track) and let those tracks own everything below 100 Hz. In most popular music styles, the low end is going to come from the bass and kick drum. In many cases, it’s just the kick drum. This means that when you solo the other tracks (guitar, keyboards, etc.) they may sound like they don’t have enough bass, but if the kick drum and bass parts are written well, performed well, and recorded well, then those parts are all you need to supply enough low end. You can safely remove frequencies below 100 Hz on every other track.

In summary, there’s a variety of ways that subsonic frequencies might end up in your mix. The most likely way is bass synthesizer tracks. You can’t hear subsonic frequencies, but you can detect them with a frequency (spectrum) analyzer. If you detect them, you can remove them by inserting a high-pass filter on individual tracks and rolling off frequencies below 100 Hz. Doing so will free up headroom in your mix and will also give your music a clearer, more powerful low end. It will also reduce the risk of damaging speakers, although that risk is already pretty low.

That’s all for now. Until next time, may all your mixes have no subsonic frequencies (unless you want them), and your yard have no leopards (unless you want them)!


Ten Things that J. S. Bach and I Have in Common

It can be very encouraging to discover that you and someone you greatly admire have something in common. For example, let’s say you really like Zachary Quinto’s depiction of Mr. Spock in the new Star Trek movies. But then you learn that, like you, Zachary was born in Pittsburgh, PA. Now you feel a closeness to him that transcends his acting roles. If you’re ever having lunch with Zachary, you can talk about the Three Rivers Arts Festival, Primanti Brothers, and the Steelers, in addition to Spock’s relationship with Uhura.

I was doing some research on J. S. Bach, and I found out that he and I have not just one, but ten things in common. So if heaven has cafes, Johann and I should have enough to talk about over lunch.

1. To start with the most obvious: We are both composers. Of course, this is like saying that Kobe Bryant and I can both dribble a basketball. If Johann asked what I’ve written, I would point him to https://www.youtube.com/watch?v=NEyWU2Crfbc, which has almost 10,000 views. He in turn would tell me that I might have heard of a few of his compositions—like Toccata and Fugue in D Minor, The Well Tempered Clavier, The Brandenburg Concertos, and Mass in B Minor. He would also point me to a few of his pieces on YouTube, including https://www.youtube.com/watch?v=ho9rZjlsyYY, which has over 8 million views. From then on I’d be more careful when answering questions like “what have you written?”

2. Our last names both start with “ba.” In fact, the third letters in our names also aren’t too far apart alphabetically, so if you file your CDs by composers’ last names, my 20 CDs (http://www.amazon.com/s/ref=nb_sb_noss/185-3007671-1736048?url=search-alias%3Daps&field-keywords=Tom+Bajoras) might end up next to Bach’s 7382 CDs (http://www.amazon.com/s?ie=UTF8&page=1&rh=n%3A5174%2Cp_32%3AJohann%20Sebastian%20Bach).

3. Our music isn’t/wasn’t famous during our lifetimes. Bach’s music, like mine, was often considered old-fashioned and irrelevant. About 80 years after Bach’s death, Mendelssohn rediscovered and popularized Bach’s music. Maybe there is some kid reading this who, years from now, will promote my portfolio to the rest of the world.

4. Neither of us has any known living descendants. In Bach’s case it wasn’t for lack of trying. He had 20 children. I was never into the whole procreation thing. I guess I’m just too busy writing music. Although 1,128 of Bach’s compositions have survived, many more are believed to have been lost, so apparently he didn’t have any trouble finding time for writing music in addition to having children.

5. Neither of us ever met Handel. For me that’s probably excusable, since Handel died 201 years before I was born. But for Bach it’s more surprising, since he was born in the same year as Handel. In fact, on a number of occasions Bach tried to arrange for a meeting, but it never worked out. Ironically, the same eye surgeon, John Taylor (not Duran Duran’s bass player), operated unsuccessfully on both Handel and Bach and was most likely responsible for both their deaths.

6. We both worked as church organists and sometimes got into trouble for our musical choices. In 1706 Bach returned to his organist post in Arnstadt after traveling a bit and soaking up some cool new musical ideas that he was looking forward to putting into practice. But his congregation found Bach’s new ideas confusing, which caused the church council to reprimand him. Similarly, when I was in high school—although I still can’t grasp why this was objectionable—my supervisor and I disagreed about whether a Pink Floyd song was a suitable prelude to the Catholic mass.

7. We both were avid coffee drinkers. Bach even wrote a comic cantata, “Schweigt stille, plaudert nicht” (“Be still, stop chattering”), about coffee; it’s now known as the Coffee Cantata, and it was first performed in a coffeehouse. I’ve had a number of pieces premiered in coffeehouses too, although I haven’t yet written specifically on the subject of coffee.

8. We both enjoyed taking long walks. Bach walked some 250 miles from Arnstadt to Lübeck to go hear organist Dietrich Buxtehude play. I’ve hiked an 8-hour stretch of the Na Pali coast in Kauai, and I’ve driven 29 miles in LA traffic, which Bach never had to contend with, to go hear prog rock bands with names similar to “Buxtehude.”

9. Neither of us had a degree in music. Bach didn’t have a college degree, but he received private music instruction from his parents (who both died when Bach was only 10) and from his older brother. He was trained in organ, singing, violin, and composition. Likewise, I do not have a degree in music, and all my training has been through private lessons. I wish I could say that I studied as much as Bach did when I was a kid, but I was too busy watching Star Trek reruns. By the way, I think Leonard Nimoy’s version of Mr. Spock is better than Zachary Quinto’s.

10. Neither of us has ever had a Snapchat account. I don’t even know what Snapchat is. Neither did Bach. Surely this is not a coincidence!

Nine Ways To Disaster-Proof Your Studio

We’ve all been there. It’s 11pm on Friday, and the deadline to deliver the finished mix is noon on Saturday. You settle into your studio cockpit, and your computer greets you with “cannot find the audio files for this session.” Or maybe you’re in the middle of recording the most awesome vocal performance that this planet has ever heard… and there’s a power failure. Or, heaven forbid, you walk into your studio only to find that the computer, your 1959 Gibson Les Paul Sunburst guitar, a dozen bottles from the wine cellar, and the emergency cash are all missing.

You can never be 100% safe from all possible disasters. For example, if the zombie apocalypse starts tonight, I wouldn’t sweat the string arrangement for tomorrow’s session. But as a professional you owe it to your clients to be prepared for the most common kinds of disaster.

1. Back up your data.

I bet you saw this one coming. You do back up your data, right? No? Then go do it right now. Seriously. Even if you don’t come back and read the rest of this article, I’ve done my job.

Back up often. Back up everything. At the very least, connect an external hard drive and copy all your data to it once a week. These days, external hard drives cost only about $40 per TB. Each time you back up, create a new folder named with today’s date. When you eventually run out of space on the external drive, delete the oldest backup folder to make room for the newest one. With this simple manual routine you’ll never lose more than a week’s worth of data. If you’re paranoid, back up more often. You can also do daily partial backups (just the projects you’ve worked on since the previous backup) and a full backup once a week.
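That weekly routine is simple enough to automate. Here’s a minimal Python sketch of the dated-folder scheme described above; the paths, the free-space threshold, and the function name are my own assumptions, so adapt them to your own drives:

```python
import shutil
from datetime import date
from pathlib import Path

def weekly_backup(source: Path, backup_drive: Path,
                  min_free: int = 50 * 1024**3) -> Path:
    """Copy `source` into a folder named for today's date on `backup_drive`,
    deleting the oldest dated backup folders first if the drive is low on space."""
    while shutil.disk_usage(backup_drive).free < min_free:
        dated = sorted(p for p in backup_drive.iterdir() if p.is_dir())
        if not dated:
            break  # nothing left to delete; the copy may still fail
        shutil.rmtree(dated[0])  # ISO date names sort oldest-first
    dest = backup_drive / date.today().isoformat()
    shutil.copytree(source, dest)
    return dest
```

Run something like this from a weekly scheduled task, and check occasionally that the backups actually restore.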

There are also online backup services. Probably the best known is Carbonite, but there are others. Modern operating systems also come with their own backup utilities: if you use a Mac, you already have Time Machine; if you use Windows, you already have the built-in Windows backup utility. So you probably don’t need to buy additional software or subscribe to an online service unless you want more control over the backup process.

2. Save often.

Software sometimes crashes. It’s a fact of life. And here’s a theory based on personal experience: When software crashes, it’s usually right when you’re doing something important and haven’t saved your work in a while.

So get in the habit of saving your work often. When (not if) your software crashes, you’ll only lose the work that you’ve done since the last time you saved it. People notice that the “S” is wearing off my computer keyboard; that’s because I use the key command to save my work so often. I don’t even do it consciously anymore; it’s not like a timer goes off inside my head every 5 minutes that says “time to save your work.” I just do it without even thinking about it. Save immediately after you’ve done an important edit, or right after you’ve recorded something, or right after you’ve typed a sentence. (Yes, I just saved my work after typing that.)

Depending on what software you’re using, it might even have an “auto-save” feature, which saves your work at regular intervals. It might be wise to enable that feature, but be aware that it could interfere with audio functions that access the hard drive. If you save your work manually (but frequently) you can make sure that you only save when doing so doesn’t interfere with something else.
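The trade-off between auto-saving and fighting the audio engine for the drive can be sketched in a few lines of Python. Everything here is hypothetical (real DAWs implement this internally), but it shows the idea of skipping a save cycle while the disk is busy:

```python
import threading
import time

class AutoSaver:
    """Minimal auto-save sketch: saves at a fixed interval, but skips a
    cycle whenever the (hypothetical) disk-busy check says the audio
    engine needs the drive, e.g. during recording."""

    def __init__(self, save_fn, is_disk_busy, interval=300.0):
        self.save_fn = save_fn            # stand-in for "save the session"
        self.is_disk_busy = is_disk_busy  # stand-in for "recording right now?"
        self.interval = interval          # seconds between save attempts
        self._stop = threading.Event()

    def _run(self):
        # Event.wait() returns False on timeout, True once stop() is called.
        while not self._stop.wait(self.interval):
            if self.is_disk_busy():
                continue  # don't fight the audio engine for the drive
            self.save_fn()

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):
        self._stop.set()
```

Manual saving gives you the same control by hand: you only hit the key command when nothing else needs the disk.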

3. Use a UPS.

That’s “uninterruptible power supply,” not “United Parcel Service.” A UPS is basically a big battery that goes between your equipment and the AC outlet. If the electricity shuts off (for example, a tornado knocks down the telephone pole in front of your house, or you forgot to pay your utility bill), the battery keeps your equipment running for a few minutes. How long depends on the size of the battery and how much you have plugged into it, but it means you’ll have time to shut everything down gracefully before the UPS battery runs out.

This is important because an abrupt power loss can damage equipment or corrupt unsaved data.

4. Have extras of everything.

There are things that you can’t have too many of: batteries, pens and pencils, blank recordable CDs, cables (audio, power, data), bottled water, business cards, etc. Keep a good supply on hand.

In situations where you need to provide a certain number of something, bring one or two more than you think you’ll need. For example, last year I was in charge of the PA system for a friend’s outdoor wedding. When I asked him how many mics they would need, he said “just one.” To his surprise, I said “OK, I’ll bring 3 mics, 3 mic stands, and 3 mic cables.” When we got to the wedding site and started to set up, it became apparent that the bride and groom couldn’t share the mic with the singer, because the singer was standing quite a distance from the couple, and it would have looked terrible to have the mic move back and forth between the two locations. And then, sure enough, the pianist arrived, came up to me, and said “I’ll be singing harmony; do you have another mic?” So we used all three of the mics, stands, and cables that I brought.

That’s just one example of why you might need more mics than you thought you did. There’s also the possibility that one of the mics won’t work, and you’ll be glad that you brought an extra.

Other things for which you’ll want to have extras in case something stops working or gets lost right when you need it: your computer mouse, headphones, and whatever is needed for your particular instrument(s) such as strings, picks, rosin, and reeds.

5. Heed the early warning signs.

Sophisticated electronic gadgets are sometimes polite enough to announce their impending demise. A few scattered bright green dots on your monitor might mean a dying video card. A high-pitched sound coming from the hard disk might mean all your files are about to go to file heaven.

If you are fortunate enough to be given a warning, assume the worst. Save your work. Back up your data. Replace that video card or hard disk.

Too often we continue to drive a car, day after day, even though the “check engine” light is on. Then one day, probably at the most inconvenient time and place, the car dies. It’s the same way in the recording studio. I don’t know why, but people will say “That whining sound? Oh, that’s just my hard disk. It’s been doing that for the last month. What did you say? Back up my data? No… why?”

6. Keep your gear in shape.

You know what they say about an “ounce of prevention,” right? Create a studio maintenance checklist. Once a month, run through the checklist and make sure everything is working. Keep your software up to date. Test the odd piece of gear that you only use once a year. Make sure the refrigerator is stocked with bottled water and craft beers.

7. Install a security system.

Part of successfully operating a recording studio is advertising its existence and location, and bragging about the cool, expensive instruments and equipment that live there. Unfortunately, this also means you’re broadcasting that information to potential burglars.

So it just makes sense that you should have a good lock on the door, and you should keep it locked. But I also recommend installing a security system. Companies like ADT (www.adt.com) provide video surveillance and 24/7 monitoring. Smartphones and apps provide numerous possibilities for protecting your studio against burglaries, fires, floods, and so on.

And that brings us to a related precaution…

8. Have insurance.

There are two kinds of insurance that you should have: property insurance and liability insurance. Property insurance covers the replacement cost of your equipment if your studio burns down, gets blown over, gets flooded, or is flattened by a stampeding herd of cattle. Liability insurance covers medical and other expenses if someone other than you gets injured while in your studio.

It’s reasonable, and usually not very expensive, to have both kinds of insurance. In the first case you’re probably laughing and saying “Yeah, right, what are the chances that my studio will burn down?” and in the other case you’re probably laughing about improbable scenarios like a singer trying to hit a really high note and suffering a brain aneurysm. To be sure, both kinds of event are very unlikely, which is why the insurance won’t cost much. But if you are extremely unlucky, and one of these things happens, without insurance you might be faced with enormous repair costs or legal fees.

If your studio is in your home or on your property (such as a detached garage), don’t just assume that it is covered under your homeowner’s policy. Consult with your insurance company. It could be, for example, that your homeowner’s policy doesn’t cover your studio if it is being run as a business.

9. Enforce a “no liquid” policy.

One of the lesser known consequences of General Relativity is that the gravitational attraction between a cup of coffee and a piece of equipment is proportional to the replacement cost of that equipment.

I’ve been there on more than one occasion, so let me summarize briefly: cappuccino + laptop = not pretty. So next time you’re tempted to set your cup of coffee, water bottle, glass of wine, or anything else in liquid form next to your mixing console, Steinway grand, or MacBook Air, stop. Instead, put it somewhere that’s at least ten feet away from anything valuable. Consider also posting a sign on the studio door saying “no liquids, please.” Even if no clients ever walk through that door, the sign will serve as a reminder to yourself.

Did I leave anything out? Of course. It’s the unknown unknowns that eventually get us. The zombie apocalypse might even have already started. But if the zombies cut the power lines, at least my UPS will let me save my work before they break down my door.

Rendering Virtual Instrument Tracks to Save Processing Power and Decrease Load Times

“Virtual instruments” are a powerful tool in today’s computer-based composer’s tool bag. Back in the dark ages (i.e., when I was a kid and didn’t yet own a computer), if you wanted to record, for example, a violin, you had to have a violin and a violinist. But in 2016 you can buy a library of stringed instrument samples covering a wide variety of playing techniques (arco, pizzicato, spiccato, sul ponticello, etc.), so that a pretty convincing violin recording can be made even if there’s no violin or violinist anywhere in the room. Hence the term “virtual.” In all fairness to violinists, the real deal still far surpasses the virtual one, but the latter can suffice for a composer to hear their work and get quick answers to questions like “how would this part sound an octave higher?” or “what if the cello doubled this part?” not to mention “what if we add sleigh bells and an oud?”

It is a very powerful tool indeed. But “powerful” here also means literal computing power. My studio computer has more processing power than the whole world did at the time of the Apollo moon landing, but it’s not an infinite amount. Once you have 30 tracks of virtual instruments, convolution reverb putting those tracks on the stage of the Konzerthausorchester Berlin, some multiband parametric EQs, and autotune on the lead vocals, my 2012-era Mac Pro begins to sweat like the forehead of a 1969 rocket scientist operating his slide rule as the LEM approaches the surface of the moon. Dropouts and other nasty artifacts start appearing in the audio, and eventually the dreaded “you are running out of processing power” alert appears.

The other problem with virtual instruments is that they can take up a lot of disk space. My favorite grand piano library, for example, came on 32 double-sided DVDs and took a whole weekend to install. When you open a Pro Tools session that uses this library, even though only a small fraction of the whole library has to be loaded for that session, you might be staring at a progress indicator for five minutes.

So if you use a lot of virtual instruments, you’ll be doing a lot of waiting for samples to load, and you’ll want to keep your system resources monitor open at all times to make sure you don’t run into the “out of processing power” error. The ugly truth is that even before you actually run out of resources, software can start to behave erratically, and it might even crash before you have a chance to know that you’re running low.

The solution is called “rendering,” which is the process of turning virtual instrument tracks into audio tracks. Compared to virtual instrument tracks, audio tracks take far less processor power, and they take almost no time to load, due to the magic of nonlinear editing. That’s because when playing back audio tracks your digital audio workstation doesn’t actually load the audio; instead it loads a list of audio file names and pointers into those files specifying where to start and stop playing.
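The “list of pointers” idea can be sketched as a tiny data structure. This is an illustration of the concept only, not Pro Tools’ actual session format; all the names are made up:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """One playlist entry: a pointer into an audio file, not the audio itself."""
    filename: str        # e.g. "Violin_render.wav" (hypothetical)
    file_offset: int     # first sample to read from the file
    length: int          # number of samples to play
    timeline_start: int  # where the clip sits on the session timeline

def samples_needed(track: list[Clip]) -> int:
    """Timeline length implied by a track's clip list, computed without
    ever opening an audio file -- which is why audio tracks load almost
    instantly, while a sampled instrument must stream gigabytes into RAM."""
    return max((c.timeline_start + c.length for c in track), default=0)
```

At playback time the DAW simply streams each file from `file_offset` for `length` samples; nothing is loaded up front.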

Here’s how I convert a virtual instrument track into an audio track in Pro Tools; maybe there’s an easier way to do it, like maybe your software has a “render virtual instruments” function… that would be nice. (Avid, are you listening?)

  1. Route the virtual instrument track’s output to a bus that you’re not currently using for anything else.
  2. Create an audio track whose input is that bus. Set its output to your main outputs. What I like to do is rename the virtual track to have a “v” at the start of the name and name the corresponding audio track without the v. For example “vViolin” and “Violin.”
  3. Move (not copy) any plugins such as EQ, reverb, compression, etc. from the virtual instrument track to the audio track.
  4. Set the virtual instrument track’s playback level to +0 dB.
  5. Turn off any automation (level, panning, etc.) on the virtual instrument track.
  6. Set the audio track to record. Set the virtual instrument track to play.
  7. Play through the whole song, or only the sections where the virtual instrument is playing, while the audio track is recording.
  8. Confirm that there’s a waveform on the audio track everywhere there’s MIDI data on the virtual instrument track.
  9. Copy any automation from the virtual track to the audio track.
  10. Play back the audio track to make sure it sounds the same as the virtual instrument track.
  11. Disable and hide the virtual instrument track.

You can do this for any virtual instrument track that no longer needs to be edited. If however you discover that you need to make a change that can’t be made in the audio track (for example, you need to change a note), then you can reactivate the virtual track (you’ll have to wait for the virtual instruments to load), make the edit, and then repeat the above process.

Rendering a virtual instrument track isn’t something you’ll typically do until late in the song production, perhaps right before you officially go into “mixing mode.” At that point all the parts are probably locked down, and there’s no need to change the actual notes that were played.

Of course, who knows, maybe in the future our computers will be so powerful that we won’t need to worry about this. But given the history of computing, I wouldn’t be surprised if future composers look back at us like we do to those NASA engineers in 1969. And I can’t even imagine the software that will be pushing their computers to the limit.

Better Mixing by Panning Instrument Reverbs

Typically when mixing an ensemble of instruments, regardless of whether it’s a rock band, a big band, a salsa band, a barbershop quartet, or even just three guys banging on metal cans, you’ll want to place the ensemble into some sort of acoustic space. The space might be a scoring stage, a club, a church, a cave, or even a parking garage. The most common way of doing this is to create an aux channel, insert a reverb plugin on that channel, and then send various amounts of the other channels to the aux channel’s input bus.

Also typically in this kind of mix, each of the instruments will be panned to a position in the left-right stereo image to arrange their order on an imaginary stage. For example, with a four-piece rock band (lead singer, guitar, bass player, and drummer) you might pan them to place the lead singer and drummer in the center, the guitarist more toward the left side of the stage, and the bass player more toward the right side of the stage. If -50 is “hard left” and +50 is “hard right,” then this could be accomplished by setting the lead singer’s and drummer’s pan to 0 (center), the guitarist’s pan to -30, and the bass player’s to +30.

If you stop there, however, the stereo imaging is incomplete. You also need to set the panning of each channel’s send to the reverb bus. It is a common mistake to make the reverb panning match the channel’s panning. It is actually more accurate to pan the reverb the opposite of the channel. For example, with our hypothetical rock band example, you would pan the guitarist’s reverb to +30 and the bass player’s reverb to -30.

Why is this more accurate? Think about what causes reverb: the reflection of sound off the surfaces of the room. An instrument on the left side of the room throws most of its reflected energy off the surfaces on the right side, and vice versa, so the reverb tends to arrive from the opposite direction of the direct sound.
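As a sketch, the mirroring rule is a one-liner, and it’s easy to check that a mirrored send still obeys a sensible pan law. The -50..+50 scale comes from the rock-band example above; the equal-power pan law is a common convention, though your DAW may use a different one:

```python
import math

def reverb_pan(channel_pan: int) -> int:
    """Mirror a channel's pan (-50..+50 scale) for its reverb send."""
    return -channel_pan

def pan_gains(pan: float) -> tuple[float, float]:
    """Equal-power left/right gains for a pan position on the -50..+50
    scale (a common pan law; not necessarily your DAW's)."""
    angle = (pan / 50 + 1) * math.pi / 4  # -50 -> 0 rad, +50 -> pi/2
    return math.cos(angle), math.sin(angle)
```

With these, the guitarist panned to -30 gets a reverb send at +30 automatically, and both positions carry the same total acoustic power.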

How much of a difference does this make? It’s hard to say. I suggest comparing two mixes, one with the reverbs panned the same as the instruments, and the other with the reverbs panned the opposite. Listen to both and see which sounds better. Play the mixes for some other people and see which one they prefer. It’s been my experience that good mixing is often the sum of many small decisions. Any one of those decisions on its own probably won’t make or break the mix, but over the course of mixing a song, you can accumulate things that add up to a good mix. Reverb panning is one of those things!