Audio mixing strategy, part 2

In my previous post, I discussed creating split-track audio – also known as stems – for the dialogue, sound effects and music components of the composite stereo mix. One useful aspect of the QuickTime format is that it can be a multi-track file, holding numerous discrete audio tracks. Likewise, Apple Final Cut Pro can create and use multi-track audio in a discrete fashion. The trick is in how you set up your sequence settings and in how you use the mixer panel.

Set the sequence audio configuration to Channel Grouped or Discrete Channels. This lets you control the output destination of your audio channels and whether they work as stereo pairs or as individual mono tracks.

In the Audio Outputs tab, establish how many target outputs the sequence will have. If you want three separate stereo pairs for dialogue, sound effects and music, then this tab should be set to six outputs, configured as stereo pairs or dual mono tracks. If you are using stereo rather than multi-channel audio hardware (an Avid Mbox2 Mini in my case), you’ll receive a warning message alerting you that not all tracks can be monitored. Just ignore it.

The last step is to make sure that your new sequence is actually set to output to the assigned tracks. Right-click each track in the track panel and verify that the audio outputs are properly assigned: A1 and A2 to outputs 1 & 2, A3 and A4 to 3 & 4, and A5 and A6 to 5 & 6.
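The routing described above can be sketched in a few lines. This is purely illustrative – FCP doesn’t expose its routing this way – but it makes the track-to-output grouping explicit:

```python
# Illustrative only; track and output numbers mirror the FCP assignments above.
ROUTING = {
    "A1": 1, "A2": 2,   # dialogue pair -> outputs 1 & 2
    "A3": 3, "A4": 4,   # sound effects pair -> outputs 3 & 4
    "A5": 5, "A6": 6,   # music pair -> outputs 5 & 6
}

def stereo_pair(output):
    """Outputs 1 & 2 form pair 0, 3 & 4 form pair 1, 5 & 6 form pair 2."""
    return (output - 1) // 2

# Sanity check: each adjacent track pair feeds the same stereo output pair.
for left, right in [("A1", "A2"), ("A3", "A4"), ("A5", "A6")]:
    assert stereo_pair(ROUTING[left]) == stereo_pair(ROUTING[right])
```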

Edit the stereo stem files to their appropriate tracks.

Notice the separate meter bar for each output track in the master section of the Audio Mixer. At this point you will only hear the output of audio tracks A1 and A2, due to your stereo audio hardware.

To monitor the composite mix, enable stereo downmix in the master section. Now all tracks are monitored. Muting and soloing specific tracks lets you isolate parts of the mix to home in on a section. Working with stems can be very useful when the client calls to say they like the mix, but asks whether you can bring the music down a bit. Instead of having your outside audio studio remix the track, simply make the level adjustment using these stems.
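The mute/solo behavior can be summed up in a short sketch, assuming the usual mixer convention that soloing any track silences all non-soloed tracks. The track names are illustrative:

```python
def audible(tracks):
    """Names of tracks that reach the stereo downmix: if anything is
    soloed, only soloed tracks pass; otherwise every unmuted track does."""
    soloed = [name for name, t in tracks.items() if t["solo"]]
    if soloed:
        return soloed
    return [name for name, t in tracks.items() if not t["mute"]]

mix = {
    "dialogue": {"mute": False, "solo": False},
    "effects":  {"mute": True,  "solo": False},
    "music":    {"mute": False, "solo": False},
}

print(audible(mix))           # effects is muted, so dialogue and music pass
mix["music"]["solo"] = True
print(audible(mix))           # soloing music silences everything else
```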

To archive your master file with discrete, split-audio tracks, export a self-contained file using Current Settings.

You can check this file in QuickTime Player 7 (Show Movie Properties) and verify the separate sound tracks embedded within the file.

In addition, you can import this file back into FCP, edit it across to a new sequence and confirm that the tracks are indeed discrete.

If you did make level changes to create a new mix from the stems, then it is also possible to export a self-contained version of the file with this new composite stereo track. Duplicate the sequence and change the settings back to a two-channel output. Make sure all track assignments are reset to 1 & 2. A self-contained export from this sequence will contain a single mixed stereo track.

You might also want to revisit “Sitting in the Mix” for more on mixing strategies.

©2011 Oliver Peters

Audio mixing strategy, part 1

Modern nonlinear editors have good tools for mixing audio within the application, but often it makes more sense to send the mix to a DAW (digital audio workstation) application, like Pro Tools, Logic or Soundtrack Pro. Whether you stay within the NLE or mix elsewhere, you generally want to end up with a mixed track, as well as a set of “split track stems”. I’ll confine the discussion to stereo tracks, but understand that if you are working on a 5.1 surround project, the track complexity increases accordingly.

The concept of “stems” means that you will do a submix for each component of your composite mix. Typically you would produce stems for dialogue, sound effects and music. This means a “pre-mixed” stereo AIFF or WAVE file for each of these components. When you place these three stereo pairs onto a timeline, the six tracks at a zero level setting should correctly sum to equal a finished stereo composite mix. By muting any of these pairs, you can derive other versions, such as an M&E (music+effects minus dialogue) or a D&E (dialogue+effects minus music) mix. Maintaining a “split-track, superless” master (without text/graphics and with audio stems) will give you maximum flexibility for future revisions, without starting from scratch.
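The summing idea can be shown with a toy example. Each stem here is a short list of made-up sample values (one channel shown for brevity); real stems would be full stereo AIFF/WAVE files. At unity (zero dB) gain, the stems add up to the composite, and omitting a stem yields the M&E or D&E version:

```python
# Made-up sample values for three short stems.
dialogue = [0.20, 0.10, 0.00]
effects  = [0.05, 0.05, 0.05]
music    = [0.10, 0.20, 0.30]

def sum_stems(*stems):
    """Sum stems sample-by-sample at unity (zero dB) gain."""
    return [sum(samples) for samples in zip(*stems)]

composite = sum_stems(dialogue, effects, music)   # the full mix
m_and_e = sum_stems(effects, music)               # mute dialogue -> M&E
d_and_e = sum_stems(dialogue, effects)            # mute music -> D&E

# In this toy example every composite sample sums to 0.35.
assert all(abs(c - 0.35) < 1e-9 for c in composite)
```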

A recent project that I edited for the Yarra Valley winemakers was cut in Avid Media Composer 5, but mixed in Apple Soundtrack Pro. I could have mixed this in Media Composer, but I felt that a DAW would give me better control. Since I don’t have Pro Tools, Soundtrack Pro became the logical tool to use.

I’ve had no luck directly importing Avid AAF or OMF files into Soundtrack Pro, so I would recommend two options:

a) Export an AAF and then use Automatic Duck Pro Import FCP to bring those tracks into Final Cut Pro. Then “send to” Soundtrack Pro for the mix.

b) Export individual tracks as AIFF audio files. Import those directly into Soundtrack Pro or into FCP and then “send to” Soundtrack Pro.

For this spot, I used option B. First, I checker-boarded my dialogue and sound effects tracks in Media Composer and extended each clip ten frames to add handles. This way I had some extra media for better audio edits and cross fades as needed in Soundtrack Pro. Next, I exported individual tracks as AIFF files. These were then imported into Final Cut Pro, where I re-assembled my audio-only timeline. In FCP, I trimmed out the excess (blank portion) of each track to create individual clips again on these checker-boarded tracks. Finally, I sent this to Soundtrack Pro to create a new STP multi-track project.

Soundtrack Pro applies effects and filters onto a track rather than individual clips. Each track is analogous to a physical track on a multi-track audio recorder and a connected audio mixer; therefore, any processing must be applied to the entire track, rather than only a portion within that track. My spot was made up entirely of on-camera dialogue from winemakers in various locations and circumstances. For example, some of these were recorded on moving vehicles and needed some clean-up to be heard distinctly. So, the next thing to do was to create individual tracks for each speaking person.

In STP, I would add more tracks and move the specific clips up or down in the track layout, so that every time the same person spoke, that clip would appear on the same track. In doing so, I would re-establish the audio edits made in Media Composer, as well as clean up excess audio from my handles. DAWs offer the benefit of various cross fade slopes, so you can tailor the sound of your audio edits by the type of cross fade slope you pick for the incoming and outgoing media.

The process of moving dialogue clips around to individual tracks is often referred to as “splitting out the dialogue”. It’s the first step that a feature film dialogue editor does when preparing the dialogue tracks for the mix. Now you can concentrate on each individual speaking part and adjust the track volume and add any processing that you feel is appropriate for that speaker. Typically I will use EQ and some noise reduction filters. I’ve become quite fond of the Focusrite Scarlett Suite and used these filters quite a bit on the Yarra Valley spot.

Soundtrack Pro’s mixer and track sheet panes are divided into tracks, busses, submixes and a master. I added three stereo submixes (for dialogue, sound effects/ambiances and music) and a master. Each individual track was assigned to one of these submixes. The output of the submixes passed through the master for the final mix output. Since I adjusted each individual track to sound good on its own, the submix tracks were used to balance the levels of these three components against each other. I also added a compressor on the submixes for general sound quality, as well as a hard limiter, set to -10dB, on the master to regulate spikes.

By assigning individual dialogue, effects and music tracks to these three submixes, stems are created by default. Once the mix is done to your satisfaction, export a composite mix. Then mute two of the three submixes and export one of the stems. Repeat the process for the other two. Any effects that you’ve added to the master should be disabled whenever you export the stems, so that any overall limiting or processing is not applied to the stems. Once you’ve done this, you will have four stereo AIFF files – mix plus dialogue, sound effects and music stems.

I ended the Yarra Valley spot with a nine-way tag of winemakers and the logo. Seven of these winemakers each deliver a line, but it’s intended as a cacophony of sound rather than being distinguishable. I decided to build that in a separate project, so I could simply import it as a stereo element into the master project. All of the previous dialogue lines are centered as mono within a stereo mix, but I wanted to add some separation to all the voices in the tag.

To achieve this I took the seven voices and panned them to different positions within the stereo field. One voice is full left, one is full right, one is centered. The others are partially panned left or right at increments to fill up the stereo spectrum. I exported this tag as a stereo element, placed it at the right timecode location in my main mix and completed the export steps. Once done, the AIFF tracks for mix and stems were imported into Media Composer and aligned with the picture to complete the roundtrip.
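The panning scheme for the tag can be sketched numerically. This assumes the common constant-power pan law; the exact law Soundtrack Pro applies may differ:

```python
import math

def pan_gains(position):
    """position in [-1.0, 1.0] (full left to full right); returns
    (left_gain, right_gain) under a constant-power pan law."""
    angle = (position + 1.0) * math.pi / 4.0   # maps [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Seven voices spread at even increments across the stereo field,
# as in the tag described above.
positions = [i / 3.0 - 1.0 for i in range(7)]   # -1.0, -0.67, ... +0.67, +1.0
for p in positions:
    left, right = pan_gains(p)
    print(f"pan {p:+.2f}: L={left:.3f} R={right:.3f}")
```

With constant power, the squared gains always sum to one, so a voice keeps the same apparent loudness wherever it sits in the field.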

Audio is a significant part of the editing experience. It’s something every editor should devote more time to, so they may learn the tools they already own. Doing so will give you a much better final product.

©2011 Oliver Peters

Euphonix Artist Series

As a video editor who started in the days of linear suites, I hold tactile control surfaces near and dear to my heart. They’re one of the things I miss in the modern nonlinear edit suite. Control devices, such as transport controls and mixing panels, make you more efficient and elevate the performance capabilities of the room, not to mention lessen operator fatigue. Euphonix entered the market years ago as a manufacturer of large, digitally-controlled analog mixing consoles. Today they are a leader in digital consoles for recording studios, live broadcast and video/film post production.

From this heritage, Euphonix has developed the Artist Series – a line of smaller audio/video controllers, based on their EuCon communications protocol. These products include MC Control, MC Mix, MC Transport and MC Color. The first three units can be used with various audio applications, like Nuendo, Digital Performer and Pro Tools. When Apple introduced Final Cut Pro 7 late last year, EuCon support was added, so Final Cut Pro, Soundtrack Pro, Color and Logic Pro can now communicate with these Euphonix surfaces in their native protocol. You aren’t limited to emulation using Mackie Control or HUI protocol.

The four Artist Series controllers are designed to be mixed and matched based on your needs. MC Transport is a control unit to drive your timeline, similar to Contour Design’s Shuttle Pro, the Lightworks edit controller or the discontinued Avid MUI. It has a large jog/shuttle knob and a number of programmable soft keys. MC Mix features eight motorized faders with additional soft keys and adjustment knobs. It is intended purely for mixing without any dedicated transport control section. MC Control combines transport, application commands and mixing into a single unit.

The real news came when Euphonix introduced MC Color, a control surface designed specifically for Apple Color. Tangent Devices and JL Cooper already made panels for Color, but at $1499, the Euphonix product finally brought the price into a range that made it attractive for the average Final Cut Studio owner.

Getting started

Euphonix loaned me an MC Control and MC Color for a few weeks. They were tested at different times and not connected together, but there’s no issue in running multiple panels at once. After a simple installation process, the EuControl software is placed into your Applications folder and runs resident on your Mac. The panels themselves connect to either your Ethernet port or an Ethernet router. Multiple panels require a router or switch.

A couple of key points: if your Mac Pro has two Ethernet ports, then only Port 1 works correctly. In my case, I also had to turn off the AirPort (wireless) card for the panel to be recognized. Once each was set up and working, the panels performed without issue on both a 17” MacBook Pro laptop and a Mac Pro tower. The last step in the process is to select the Euphonix controller in each application’s Control Surface dialog.

MC Control

MC Control works with the EuCon, Mackie Control or HUI protocols, so it can be used with FCP6 as well as FCP7. EuCon control adds functions not available under the others. The first feature to jump out at you is the colorful central touch screen, surrounded by a series of soft keys and soft knobs. These are application-specific, so if you have both Final Cut Pro and Soundtrack Pro open at the same time, the display and button functions will change as you toggle between the two applications.

The right third of the panel is home for navigation and transport controls. The left portion houses four motorized faders. These have a very smooth tactile feel and are the biggest selling point for the unit. The faders function the same way as the virtual faders do within the application, so you can set clip levels or use them to write automation mix passes. Like most mix controllers, MC Control has Nudge and Bank functions, so the four physical faders can be used with more than four timeline tracks. Nudge shifts the group over one track at a time. If you press Nudge once, then faders 1-4 shift to tracks 2-5. If you press Bank, it shifts in groups of four tracks at a time, so faders 1-4 control tracks 5-8.
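The Nudge and Bank behavior described above amounts to simple offset arithmetic, sketched here (numbering is 1-based, as on the panel):

```python
NUM_FADERS = 4

def fader_tracks(nudges=0, banks=0):
    """Return the timeline track numbers currently under faders 1-4.
    Nudge shifts the group by one track, Bank by a group of four."""
    offset = nudges + banks * NUM_FADERS
    return [fader + offset for fader in range(1, NUM_FADERS + 1)]

print(fader_tracks())           # [1, 2, 3, 4] - the default mapping
print(fader_tracks(nudges=1))   # [2, 3, 4, 5] - one Nudge press
print(fader_tracks(banks=1))    # [5, 6, 7, 8] - one Bank press
```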

Final Cut Pro’s mix tool is based on mono tracks. A stereo track in FCP is simply two linked mono tracks that are panned left and right. Soundtrack Pro, however, combines a stereo pair into a single stereo track. An eight-track FCP timeline made up of four stereo pairs shows up as four stereo tracks when sent to Soundtrack Pro. In other words, a stereo clip ties up two faders in Final Cut, but only one in Soundtrack Pro.

Fortunately MC Control is smart enough to follow this. I set up a test mix of the same material in both Final Cut Pro and Soundtrack Pro and then quickly bounced back and forth. MC Control had no difficulty in going between the two – each time resetting the fader positions, redrawing the touch screen and swapping between stereo and mono tracks. MC Control gives you a wide range of access to each application’s common commands; however, you are not able to control some items, like filter parameters. That communication isn’t sent out from FCP over EuCon to the device.

MC Color

MC Color is the first Euphonix panel to extend beyond an audio-centric world. It is optimized for Apple Color and features three trackballs with z-rings, touch-sensitive soft knobs, programmable soft keys, transport controls and dedicated keys to copy and paste four color grades. Euphonix did a good job of packing Color’s various tabs, buttons, rooms and controls into this panel. That’s no easy feat, as Apple Color is the most complex and foreign GUI that a typical FCP editor will encounter. It does take a while to get used to MC Color’s layout. The controls all do multiple duties and are contextual – changing as you move through Color’s various tabs, known as “rooms”. Once you use MC Color for a while, you’ll learn which common tasks are mapped to a knob, soft key, trackball or z-ring.

The main reason you’d use a control surface for color grading is the trackballs and that’s where I’ll focus. The three trackball/z-ring controls are designed to adjust the color wheels. These are the main tools for shadow, midrange and highlight color balance and levels. That’s a common function of nearly every color grading application. The trackballs move smoothly, but the default range of movement is very fine. It takes a lot of spins to move from one side to the other of the on-screen color wheel. You can adjust the sensitivity for faster movement, as well as assign a multiplier button to accelerate the amount of travel.

I cranked up the sensitivity to 50 (about midway in its range), which made the cursor travel faster, though actual cursor movement on-screen seemed a bit coarse. The tactile response of the trackball itself was still smooth, however. Since the trackballs work with optical sensors, you can’t just give them a hard spin and have inertia move the cursor faster. You get better results with a slower, steadier approach. Euphonix suggests a sensitivity setting of around 33, using the 10x multiplier soft key when you want to accelerate mouse/trackball movement.

Another favored colorist’s tool is Curves. This requires a mouse or a pen to place points along the curve graph. MC Color lets you turn the center trackball into a conventional trackball mouse. You can use this to navigate around the curves and insert and adjust points along a curve. Even though MC Color controls Apple Color well, I’m not sure I would use it exclusively without a mouse or keyboard. At times, I found it simply faster to click or move something with the mouse than to use a soft key or trackball. Bear in mind that I approach it with a video editor’s mentality, and the design of MC Color reflects input from a number of professional colorists.


Euphonix’s Artist panels are top-notch controllers. They are well designed and well-constructed. Light, but not light-weight. One good reason to buy a surface is to ease the wear and tear on your wrist from repetitive strain injury caused by long-term mouse use. An even bigger reason is to be faster and more productive. You mix better when you can grab more than one fader at a time. You fly through color grading when you can use both hands to adjust multiple parameters simultaneously. This is something mixers and colorists have known for years.

Each of these panels is designed with different tasks and working styles in mind. I’m a big keyboard user, so I prefer using it for transport control – a throwback to the linear days, I suppose. I hate to mix automation passes with the mouse; therefore, MC Mix holds more attraction than MC Transport or MC Control. If I were doing daily color grading sessions, MC Color would definitely be a “must have” accessory. Thanks to the small form factor of the Artist Series panels, I could easily fit both of these panels side-by-side on my desk. They would neatly fit between my keyboard and the two computer displays of my system. Obviously another editor might choose to mix and match panels in a different configuration. The good news is that Euphonix is offering a lot of power at a very attractive price. Even adding all four panels costs less than many of the other items purchased for a professionally-equipped Final Cut Studio suite.

NOTE: This review was written prior to the announcement at NAB and completion of Avid’s acquisition of Euphonix. The Artist panels currently work with Pro Tools under Mackie emulation, but one can only assume that down the road, Avid’s audio and video products will integrate the EuCon protocol. At this time it is unknown whether a panel like MC Color will eventually work with Media Composer, Symphony or DS. According to comments from Avid personnel, it is their intention to see the Artist Series panels continue to work with as many systems – including competitors – as possible.

Written for Videography magazine (NewBay Media, LLC).

©2010 Oliver Peters


Production music is a subjective decision. You can never have enough resources to satisfy clients. I routinely use a variety of options, including SmartSound, Adobe’s tracks for Soundbooth, Apple’s tracks for Soundtrack Pro and the whole range of music from Killer Tracks, FirstCom and others.

Now editors have a new option: MyMusicSource, which comes complete with a new plug-in for Apple Final Cut Pro. The plug-in was developed in partnership with and marketed through BorisFX. Right up front, let me disclose that I know the principals, have a little stock in the company and have been involved in some consulting and beta testing. MyMusicSource is the brainchild of Michael Redman – a veteran composer, producer, recording engineer, facility owner and software entrepreneur. In addition to Final Cut Pro, MyMusicSource is also actively developing import plug-ins for various other NLEs and DAWs.

Getting started

The release of the FCP import plug-in is of interest to Final Cut editors, of course, but anyone can use MyMusicSource with or without this plug-in. It’s a web-based, online resource for production music, so you can access, search, license, purchase and download music tracks using any regular web browser. The beauty of the FCP plug-in is that you can start and end the process from inside the FCP interface, but it isn’t essential. The plug-in itself is a free download, as is establishing an account with MyMusicSource. The company makes its money licensing music for productions.

Here’s a quick overview of how the plug-in and process works. Once you install the MyMusicSource plug-in (download from BorisFX), an option for MyMusicSource is added to FCP’s import menu, alongside XML, Sony XDCAM, EDL, etc. Select this and it launches your default web browser to the start page for FCP users. Log in using your established account and you are off and running. At this point the process is similar to other online music services. You can select and preview music by various search criteria in different genres. As you browse clips, add them to a project cart for later review.

One key difference from other companies is that MyMusicSource is upfront about licensing costs. Their whole approach is to “pre-clear” the music before you can download. At the beginning of your search, you should establish the intended production use for the music, before you add a track to your project cart. As a producer, you may purchase tracks with a Preview License for $.99 per track. This allows you to purchase and download a full-length, full-quality track and temporarily use it within your production (in-house preview use only).

Once a final set of tracks has been decided upon and the correct use established, you may purchase an upgrade to the license for legal use of that music. If you know in advance what the target use will be for the production, such as non-commercial web, you have the option to select that license rate instead. Each cut of music will display a price based on the selected licensing, so you instantly know what it will cost as you browse through the inventory. Non-commercial rates for personal use start at $5.

Project carts may be shared with your clients. If you’ve selected a handful of possible tracks for a client’s review, then share the cart and the client can access and preview these tracks. As with any shopping cart system, finalize your choices and proceed through checkout. Once you’ve paid, move on to the download center, where you find three options: Send to Final Cut Pro, Zip and Download NOW or Zip and Email. The last two options are the same as if you accessed the site without the FCP plug-in. Option one is enabled if you have the FCP plug-in installed.

You may also select between MP3 and 48K AIFF audio file formats. MP3 files are a faster download, but require a render in your FCP timeline. AIFF files take a bit longer to download and are larger, but work fine inside FCP. One option is to download MP3 files (using method 2 or 3) and then drag them into FCP via Digital Heaven’s Loader application, which converts the MP3 files into 48K AIFF. Another option is to convert MP3s using QuickTime Player Pro. These last two approaches work fine, but they mean a tad more work and obviously detour away from the roundtrip magic. I normally opt for the AIFF files. One issue I’ve found is that the Send to Final Cut Pro feature has some access issues with Firefox, so use Safari 4 if you encounter problems with this method.

The last step of the roundtrip is back into FCP. A MyMusicSource media folder (containing the downloaded tracks) is placed into the same folder as your active FCP project file. A bin with the tracks is imported into the FCP project and shows up in your FCP browser. If you have more than one project open, you’ll receive a prompt to let the plug-in script know which project to use. Another handy feature of MyMusicSource is that when tracks are downloaded, you will also receive a PDF of the actual licensing information. This is great for the end of the project when you have to turn in music cue sheets and clearance information. It’s all right there from the very start!

OK, so the process is simple and straightforward, but what about the music itself? As I said at the start, music is subjective. The choices are good, but a big difference with the MyMusicSource inventory is an attempt to have a very contemporary sound. The selections are more artist-centric than I tend to see in the competition. There are also more vocal selections. A popular production trend is to use songs instead of just scores. That can get very expensive if you try to license songs that you’ve heard on the radio or on iTunes. In my opinion, MyMusicSource offers a wider selection of good vocal tunes than other libraries, so if your production needs the catchy sound of some indie, alt-rock band, then you’ve got plenty of options to choose from!

©2010 Oliver Peters

Sitting in the Mix


Like most video editors, I wouldn’t call audio mixing my forte, but there are plenty of projects where I end up “playing a mixer on TV”. I’ll be the first to recommend that – budget permitting – you should have an experienced audio editor/mixer handle the sound portion of your project. I work with several and they aren’t all equal. Some work best with commercials that grab your attention and others are better suited for the nuance of long-form projects. But they all have one thing in common: the ears to turn out a great mix.

Unfortunately there are plenty of situations where you are going to have to do it yourself “in the box”. Generally, these are going to be projects involving basic voice-overs, sound effects and music, which is typical of most commercials and corporate videos. The good news is that you have all the tools you need at your disposal. I’d like to offer some ideas to use for the next time that the task falls to you.

Most NLEs today have a decent toolset for audio. Sony Vegas Pro is by far the best, because the application started life as a multitrack DAW and still has those tools at its core. Avid Media Composer is much weaker, probably in large part because Avid has put all the audio emphasis on Pro Tools. Most other NLEs fall somewhere in between. If you purchased Apple’s Final Cut Studio or one of the Adobe bundles, then you have excellent audio editing and mixing software in the form of Soundtrack Pro or Soundbooth.

Mixing a commercial track that cuts through the clutter employs all the same elements as creating a winning song. It’s more than simply setting the level of announcer against the music. Getting the voice to sound right is part of what’s called getting it to “sit right in the mix”. It’s the same concept as getting a singer’s voice or solo lead instrument to cut through the background music within the overall mix.


1. Selection

The most important choice is the proper selection of the vocal talent and the music to be used. Most often you are going to use needledrop music from one of the many CD or online libraries. As you audition music, be mindful of what works with the voice qualities of the announcer. Think of it like the frequency ranges of an instrument. The music selected should have a frequency “hole” in the range of the announcer’s voice. The voice functions as an instrument, so a male announcer with a deep bass voice is going to sound better against a track that lets his voice shine. A female voice is going to be higher pitched and often softer, so it may not work with a heavy metal track. Think of the two in tandem and don’t force a square peg into a round hole.


Soundtrack Pro, Soundbooth, GarageBand and SmartSound Sonicfire Pro are all options you may use to create your own custom score. One of the useful features in the SmartSound and Soundbooth scores is that you can adjust the intensity of arrangements to better fit under vocals. These two apps each use a different approach, but they both permit the kind of tailoring that isn’t possible with standard needledrop music.


2. Comping the VO track

It’s rare that a single read of a voice-over is going to nail the correct inflection for each and every phrase or word. The standard practice is to record multiple takes of the complete spot and also multiple takes of each sentence or phrase. As the editor, don’t settle for one overall “best” read, but edit together a composite track, so each phrase comes through with meaning. At times this will involve making edits within a word – using the front half from one take and the back half from another. Using a pro audio app instead of an NLE will help make such edits smooth and seamless.


3. Pen tools and levels

I personally like to mix with an external fader controller, but there are times when you just have to get in with the pen tool and add specific keyframes to properly adjust levels. For instance, on a recent track, our gravelly-voiced announcer read the word “dreamers”. The inflection was great, but the “ers” portion simply trailed off and was getting buried by the music. This is clearly a case where surgical level correction is needed. Adding specific keyframes to bump up the level of “ers” versus “dream” solved the issue.


4. EQ

Equalizers are a good tool to affect the timbre of your talent’s voice. Basic EQs are used to accentuate or reduce the low, middle or high frequencies of the sound. Adding mids and highs can “brighten” a muddy-sounding voice. Adding lows can add some gravity to a standard male announcer. Don’t get carried away. Look through your effects toolset for an EQ that does more than the basics, by splitting the frequency ranges into more than just three bands.


5. Dynamics

The two tools used most often to control dynamics are compressors and limiters. These are often combined into a single tool. Most vocals sound better in a commercial mix with some compression, but don’t get carried away. All audio filters are “controlled distortion devices”, as a past chief engineer was fond of saying! Limiters simply stop peaks from exceeding a given level. This is referred to as “brick wall” limiting. A compressor is more appropriate for the spoken voice, but is also the trickiest to handle for the first-time user.

Compressors are adjusted using three main controls: threshold, ratio and gain. Threshold is the level at which gain reduction kicks in. Ratio is the amount of reduction to be applied. A 2:1 ratio means that for every 2dB of level above the threshold setting, the compressor will give you 1dB of output above that threshold. Higher ratios mean more aggressive level reduction. As you get more aggressive, the audible output is lower, so then the gain control is used to bring up the average volume of the compressed signal. Other controls, like attack and release times and knee, determine how quickly the compressor works and how “rounded” or how “harsh” the application of the compression is. Extreme settings of all of these controls can result in the “pumping” effect that is characteristic of over-compression. That’s when the noise floor is quickly made louder in the silent spaces between the announcer’s audio.
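The threshold/ratio/gain relationship above can be expressed as a simple static gain computation. This is only a sketch – levels in dB, hard knee, no attack/release modeling – not how any particular plug-in implements it:

```python
def compress(level_db, threshold_db=-20.0, ratio=2.0, makeup_db=0.0):
    """Return the output level (dB) for a given input level (dB)."""
    if level_db <= threshold_db:
        return level_db + makeup_db          # below threshold: untouched
    over = level_db - threshold_db           # dB above the threshold
    return threshold_db + over / ratio + makeup_db

# 2:1 ratio: a signal 10 dB over the threshold comes out only 5 dB over.
print(compress(-10.0))   # -15.0
print(compress(-30.0))   # -30.0 (below threshold, untouched)
```

Raising the ratio pulls the output closer to the threshold, which is why heavily compressed audio then needs makeup gain to restore average loudness.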


6. Effects

The selective use of effects filters is the “secret sauce” that makes a VO sparkle. I’ll judiciously use reverb units, de-essers and exciters. Let me again emphasize subtlety. Reverb adds just a touch of “liveness” to a very dry vocal. You want to pick a reverb sound that is appropriate to the voice and the situation. The better reverb filters base their presets on room geometry, so a “church” preset will sound different than a “small hall” preset. One will have more echo than the other, based on the simulated times that it would take for audio to bounce off of a wall in a room of that size.

Reverbs are pretty straightforward, but the other two may not be. De-essers are designed to reduce the sibilance in a voice. Essentially a de-esser acts as a multi-band EQ/compressor that deals with the frequency ranges of sibilant sounds, like the letter “s”. An exciter works by increasing the harmonic overtones present in all audio. Sometimes these two may be complementary and at other times they will conflict. An exciter will help to brighten the sound and add a feeling of openness, while the de-esser will reduce natural and added sibilance.

The exact mixture of EQ, compression and effects becomes the combination that will help you make a better vocal track, as well as give a signature sound to your mixes.


7. Sound design

Let’s not forget sound effects. Among the many gigabytes of data installed with Final Cut Studio are tons of sound effects. Soundbooth includes an online link to Adobe’s Resource Central. Here you can audition and download a wealth of SFX right inside the Soundbooth interface. Targeted use of sound effects for ambience or punctuation can add an interesting element to your project.

In a recent spot that I cut, all the visuals were based on the scenario of a surfer at the beach. This was filmed MOS, so the spot’s audio consisted of voice-over and music. To spruce up the mix, it was a simple matter of using the Soundtrack Pro media browser to search for beach, wave and seagull SFX – all content that’s part of the stock Final Cut Studio installation. Soundtrack Pro makes it easy to search, import and mix, all within the same interface.

Being a better editor means paying attention to sound as well as picture. The beauty of all of these software suites is that you have many more audio tools at your disposal than a decade ago. Don’t be afraid to use them!

© 2009 Oliver Peters

Scoring with Sonicfire Pro


Music choices are very subjective and can often be the most difficult part of finishing a production. There is no replacement for a true custom score that’s right on the money, but rarely do clients have a budget to support that, especially in the world of corporate video. I’ve frequently built videos with music changes every :30 or so, essentially scoring the video without the help of a composer. That means a lot of time spent auditioning cues online through a needledrop library like Killer Tracks, and often clients don’t have the budget to pay for 20 or 30 cues on a longer production. This is where royalty-free music sources can really shine. There are various options, including the music cues that come with Apple Soundtrack Pro or Adobe Soundbooth, but neither of these options is as comprehensive as SmartSound.


SmartSound is really two entities – the Sonicfire Pro music customization software and the supporting SmartSound music libraries. In order to get the best out of Sonicfire Pro, you really need to use SmartSound music. To build up my own library, I treat myself to a few new discs each Christmas and whenever a new project can support it!


The Sonicfire Pro software lets you non-destructively change the variation (arrangement), “mood” (orchestration), tempo and length of these music selections. It can do this because each song file is made up of blocks. When you change the duration or pick a different variation of a song, Sonicfire Pro intelligently rearranges the blocks to avoid something that sounds overly repetitive. The newer offerings in the library include “mood mapping”. This means that the discs are multilayered with instruments mixed into stems – rhythm, lead, percussion, etc. Inside Sonicfire Pro, you can change the “mood” by selecting from a preset or by changing the relative mix of these stems.


Compositions are based on a fixed tempo measured in BPM (beats per minute). If setting the duration doesn’t quite nail the length you need, you can also alter the tempo. This changes the metadata for the cue and speeds up or slows down the song by altering the BPM. This is a non-destructive process, so tempo files can be deleted and the original native BPM restored. Remember that these song arrangements are made up of real instruments and real compositions, not simply a series of loops in the way that a music cue might be created using Apple’s Garageband.
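The tempo arithmetic is straightforward: a cue contains a fixed number of beats, so its playing time scales inversely with BPM. A quick sketch of the calculation (the cue length and tempo values here are hypothetical examples, not tied to any SmartSound title):

```python
def tempo_for_target(native_bpm, native_secs, target_secs):
    """Tempo (in BPM) needed to fit a cue to a target duration.

    A composition holds a fixed number of beats, so duration
    scales inversely with tempo:
        new_bpm = native_bpm * native_secs / target_secs
    """
    return native_bpm * native_secs / target_secs

# A 60-second cue composed at 120 BPM must play at roughly
# 126.3 BPM to land at 57 seconds.
print(round(tempo_for_target(120, 60, 57), 1))  # 126.3
```

Small changes like this one (about 5%) are usually musically transparent; larger tempo shifts start to change the feel of the piece.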


The Sonicfire Pro interface works in two parts: the SmartSound Express Track window and the project (timeline or tracksheet) window. Express Track is a smart media browser that lets you search, audition and modify your music selections. It also lets you browse SmartSound’s online library for additional music. Express Track also displays composer and license information that may be used for music licensing cue sheets. Many editors mistakenly believe that they have unlimited rights to use royalty-free compositions for any purpose. SmartSound music covers most typical productions, but not everything. For example, you are covered for regional TV spots, web videos, film festivals or cable networks like Discovery; but, you are not covered if your show runs on HBO or if Warner Bros. buys your film and distributes prints to thousands of theaters. Check with SmartSound if you have licensing inquiries.


Most of the actual music adjustment is done in the project window. Music is inserted onto a track and the song’s length can be adjusted by dragging the end of the selection. Each song is composed of an arrangement of separate thematic subsections that are each made up of blocks. You can opt to protect certain of these arrangements as you lengthen or shorten a track. This prevents Sonicfire Pro from completely changing the arrangement based on its internal algorithms. The subsections change at keyframed points and the keyframes can be slid earlier or later, thus changing when one subsection transitions to another within the same song.


You build up your score by assembling various music cues onto separate tracks, complete with crossfades, and you can even add “hits”, like a cymbal crash, bell effects and more. A multilayered or “mood-mapped” cue can be split into sections on the same track to change the mood within that music cue. For example, if your video starts and ends with a high-energy visual montage that bookends a talking-head spokesperson in the middle section, simply add a mood change at the start and end of this dialogue sequence. Choose a more subdued variation in the mood setting or manually reduce the level of some of the more intrusive instruments by using the property sliders or the appropriate volume envelopes in the tracksheet.


What about Final Cut Pro?


So far this has been a quick overview of how Sonicfire Pro 5 Scoring Edition works in general. SmartSound has worked with numerous NLE manufacturers over the years and in fact, has already developed timeline integration with Avid Media Composer, using MetaSync tracks. New since NAB 2009 is Sonicfire Pro 5.1 with the Final Cut Pro plug-in.


To use Sonicfire Pro with Final Cut, simply edit your FCP timeline as you normally would, adding markers to indicate music changes or the start/end points for a music cue. Once you’ve edited the sequence, save it and leave FCP open. Now launch Sonicfire Pro. There are two ways to work with it. The first is simply to select individual songs and customize them to length. To do this, choose “Import from Final Cut Pro” in the Express Track window’s right-hand slide-out drawer. Your FCP markers will be displayed with timing information. By highlighting a marker or a range of markers, Sonicfire Pro will customize the length of the song to fit that duration.


When you are done, choose “Export Soundtrack/Video”, check Final Cut Pro in the dialog box and save the file (now a rendered/flattened AIFF). It will appear in your FCP browser, so drop it on the timeline and continue. If you need to roundtrip back to Sonicfire Pro, make sure you have set up your FCP preferences so that audio files use Sonicfire Pro as the external editor. Then right-click the clip on the timeline and select “Open in Editor”, which sends you to Sonicfire Pro. The app knows to pull up that song and you are ready to make adjustments. Make the changes, export again (replacing the previously exported file) and FCP will reconnect to the newer media.


The second and more fun way to work with the two apps is to use Sonicfire Pro to score your complete FCP sequence. Choose “Import from Final Cut Pro” twice: once in the Express Track window and again into a new project (under the File menu). If you exported a QuickTime reference file of your FCP sequence, you can also open it in Sonicfire Pro to sync the timeline and video. When you do so, the FCP markers will also show up in your Sonicfire Pro project timeline. You can also import the soundtrack from your video (presumably dialogue) and do a final mix right inside of Sonicfire Pro. In any case, this is a great way to make sure your music is perfectly customized to match your FCP sequence. When you are done, use “Export Soundtrack/Video” (choosing FCP again) and your mix appears inside your FCP browser.


Over the years, I’ve been a happy SmartSound user. I like the quality of their compositions and I really appreciate that they continue to add to the available library. Although the newest selections are multilayered, that doesn’t render the older, non-layered music obsolete. In fact, I still use QuickTracks, which originally came bundled with Premiere 6.5! As I said at the beginning, music is very subjective, so there’s no guarantee that a large SmartSound library is really going to cover every client’s need – but it sure helps. The new FCP plug-in simply makes it easier to use Sonicfire Pro and FCP together.


SmartSound has provided a number of video tutorials on their website that offer a better look at how the whole process works.


© 2009 Oliver Peters

PluralEyes – Help for that Syncing Feeling


If you’ve ever edited multi-cam shows where the production crew’s attitude seemed to have been, “Sync! We don’t need any stinkin’ sync!” – then this software is for you. Without a doubt, every editor friend I ran into at NAB that happened to pass by Singular Software’s booth raved about this product. “You have GOT to see it,” was the comment I often heard about PluralEyes during that week.


What’s the need?


When you work on multi-cam shows, proper sync is essential for lining up the camera iso recordings in post. Timecode is obviously ideal, but it only works properly when all cameras were fed from a genlocked master timecode generator. In the case of digital run-and-gun projects (like a low-budget rock concert or a reality TV production), the cameras are running wild and not necessarily synced to each other.


Under the best of circumstances you might be able to get the crew to internally sync the cameras to time-of-day timecode at the beginning of the day or get them to occasionally shoot a large LED timecode display somewhere within a concert venue. In a film-style shoot, they might have started each take with clapsticks. More often than not, this doesn’t go according to plan once you’re in the thick of the production – or the timecode starts out close and then drifts. The latter often happens when a camera gets powered down and back up in the course of the production.


Most modern NLEs have multi-cam editing tools. Typically these let you sync clips by matching timecode or by marking an in-point at some common event and aligning the source clips accordingly. In the film example, the point where the sticks clap shut provides a good in-point. In the absence of either clapsticks or valid timecode, you often find yourself hunting for a common reference point – for instance, the same frame in each camera angle, just where the singer touches his nose in a unique way! Obviously this can become incredibly time-consuming and is often not very frame-accurate.



Enter PluralEyes


PluralEyes is designed for Final Cut Pro editors. It synchronizes clips based on common audio. It’s not a plug-in, but a standalone application that works in tandem with FCP. The software analyzes the waveform of a clip’s audio track and processes sync based on the commonality of the tracks among clips from different camera angles. In order to work, you must have in-camera sound, even if it’s only a scratch track. Picture-only (MOS) recordings still have to be lined up manually or by timecode.


On the other hand, audio-only clips can be synchronized. If you are shooting a concert, the wild audio from the cameras can be synchronized to the clean audio recording fed from the mixer to an audio recorder. PluralEyes won’t adjust for any sync drift, so such audio tracks still need to be properly recorded. In the examples I’ve seen and tested, the camera tracks can be pretty distorted, yet PluralEyes can still perform the analysis with these less-than-pristine audio tracks, as long as it can sufficiently interpret the waveform to establish sync.


The bottom line is that PluralEyes gives you a way to quickly and accurately sync cameras without the use of timecode or manual reference marks. This makes it possible to use smaller, prosumer camcorders in multi-cam projects without creating a synchronization nightmare in post. It also lets you use full-blown pro camcorders in situations where establishing common timecode sync is impractical.



Working with PluralEyes


The way PluralEyes works is so simple for the editor that it takes longer to explain “why” than “how”. You start out by importing or ingesting all the clips into a Final Cut project. PluralEyes can synchronize clips in a sequence or in a bin, but the key is that the target to be analyzed – either the bin or the sequence, whichever you want synced – has to be named “pluraleyes”, and the project must have been saved for PluralEyes to work.


The most common approach would be to sync a sequence. To do this, place all your camera clips at the start of the timeline. Make sure there are no in or out marks. Stack the different camera angles (with audio) onto ascending video tracks. Camera 1 goes to V1/A1-2, Camera 2 to V2/A3-4, Camera 3 to V3/A5-6 and continuing up with more cameras. All cameras should be lined up at the head of the sequence and on separate tracks.


If you stopped and restarted the camera recordings during the production, then you can place all clips from a single camera onto the same video/audio tracks. I haven’t seen any Singular documentation that addresses this, but I was successful when I did this on a test project. In other words, Cam 1 clips can stay back-to-back on V1, Cam 2 clips on V2 and so on. I also haven’t seen any mention of a limitation as to the number of cameras. My tests included 2-4 cameras, but I’ve seen other internet posts where six cameras were used.


Once you’ve created the sequence to be synced and have saved the project, launch the PluralEyes application and select “sync”. The software takes a few minutes to analyze and process the tracks and to create a new synced sequence, as well as multi-clip groups. Singular’s short, downloadable sample project (3 cameras, one-minute clips) took only a few seconds to sync. Another project that I tested, a 3-camera, half-hour interview show, took a couple of minutes. These tests were both on a MacBook Pro.


Once PluralEyes is done, return to FCP and you will have a new synced sequence and source multi-clip groups. In my interview show test, the studio crew recorded it in segments, so each section was broken into a separate multi-clip group by PluralEyes. In the tests I’ve done so far, syncing has been fast and successful in each case. I have had one editor tell me it didn’t work when he tested it, but I have no idea if he was doing everything correctly. In any case, Singular lets you download a trial version to see for yourself.


I will offer one caveat about sync. Since the clips are aligned based on the audio, there is no guarantee that the audio recorded by the camera itself will be in perfect sync with its own video. For instance, if you are recording audio of a concert and the cameras are only picking up the ambient audio from the PA system, it’s quite likely that each camera will be visually out of sync by a frame or two (or more) when synced against a master audio recording from the board feed. This is due to the natural delay inherent in such live venues. Fortunately Final Cut offers some quick functions to adjust clip sync, either for the master clip itself or when trimming clips later during the edit.
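The size of that acoustic offset is easy to estimate: sound travels at roughly 343 m/s in room-temperature air, so the lag in frames is the camera’s distance from the PA divided by the speed of sound, times the frame rate. A quick sketch (the 35-meter distance and 29.97 fps frame rate are example values):

```python
def sync_offset_frames(distance_m, fps=29.97, speed_of_sound_mps=343.0):
    """Frames by which a camera mic lags a direct board feed.

    The camera hears the PA acoustically, so its audio arrives
    late by (distance / speed of sound) seconds.
    """
    return distance_m / speed_of_sound_mps * fps

# A camera 35 m from the PA stacks lags the board feed by about
# 0.1 seconds -- roughly 3 frames at 29.97 fps.
print(round(sync_offset_frames(35.0)))  # 3
```

In practice this means a wide camera at the back of the house may need its video slipped a few frames relative to the board audio, while a camera near the stage needs little or no adjustment.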


Using PluralEyes is a no-brainer for any editor who works with multi-cam projects in Final Cut. There’s also an interesting bundling deal right now with the folks at CoreMelt, and they’ve even done a quick tutorial showing how the two products might be used in conjunction with each other. Check it out.


On another note, Singular is also working with post solutions for the Canon 5D Mark II, which can be found here and here.


© 2009 Oliver Peters