Euphonix Artist Series

As a video editor who started in the days of linear suites, I hold tactile control surfaces near and dear to my heart. They’re one of the things I miss in the modern nonlinear edit suite. Control devices, such as transport controls and mixing panels, make you more efficient and elevate the performance capabilities of the room, not to mention lessen operator fatigue. Euphonix entered the market years ago as a manufacturer of large, digitally-controlled, analog mixing consoles. They are a leader today in digital consoles for recording studios, live broadcast and video/film post production.

From this heritage, Euphonix has developed the Artist Series – a line of smaller audio/video controllers, based on their EuCon communications protocol. These products include MC Control, MC Mix, MC Transport and MC Color. The first three units can be used with various audio applications, like Nuendo, Digital Performer and Pro Tools. When Apple introduced Final Cut Pro 7 late last year, EuCon support was added, so Final Cut Pro, Soundtrack Pro, Color and Logic Pro can now communicate with these Euphonix surfaces in their native protocol. You aren’t limited to emulation using Mackie Control or HUI protocol.

The four Artist Series controllers are designed to be mixed and matched based on your needs. MC Transport is a control unit to drive your timeline, similar to Contour Design’s Shuttle Pro, the Lightworks edit controller or the discontinued Avid MUI. It has a large jog/shuttle knob and a number of programmable soft keys. MC Mix features eight motorized faders with additional soft keys and adjustment knobs. It is intended purely for mixing without any dedicated transport control section. MC Control combines transport, application commands and mixing into a single unit.

The real news came when Euphonix introduced MC Color, a control surface designed specifically for Apple Color. Tangent Devices and JL Cooper already made panels for Color, but at $1499, the Euphonix product finally brought the price into a range that made it attractive for the average Final Cut Studio owner.

Getting started

Euphonix loaned me an MC Control and MC Color for a few weeks. They were tested at different times and not connected together, but there’s no issue in running multiple panels at once. After a simple installation process, the EuControl software is placed into your Applications folder and runs resident on your Mac. The panels themselves connect to either your Ethernet port or an Ethernet router.  Multiple panels require a router or switch.

A couple of key points: if your Mac Pro has two Ethernet ports, then only Port 1 works correctly. In my case, I also had to turn off the AirPort (wireless) card to have the panel be recognized. Once each was set up and working, the panels performed without issue on both a 17” MacBook Pro laptop and a Mac Pro tower. The last step in the process is to select the Euphonix controller in each application’s Control Surface dialogue.

MC Control

MC Control works with the EuCon, Mackie Control or HUI protocols, so it can be used with FCP6 as well as FCP7. EuCon adds functions not available under the others. The first feature to jump out at you is the colorful central touch screen, surrounded by a series of soft keys and soft knobs. These are application-specific, so if you have both Final Cut Pro and Soundtrack Pro open at the same time, the display and button functions will change as you toggle between the two applications.

The right third of the panel is home for navigation and transport controls. The left portion houses four motorized faders. These have a very smooth tactile feel and are the biggest selling point for the unit. The faders function the same way as the virtual faders do within the application, so you can set clip levels or use them to write automation mix passes. Like most mix controllers, MC Control has Nudge and Bank functions, so the four physical faders can be used with more than four timeline tracks. Nudge shifts the group over one track at a time. If you press Nudge once, then faders 1-4 shift to tracks 2-5. If you press Bank, it shifts in groups of four tracks at a time, so faders 1-4 control tracks 5-8.
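To make the Nudge and Bank behavior concrete, here’s a minimal sketch of that fader-to-track mapping (the function name and 1-based track numbering are my own, purely for illustration):

```python
# Map four physical faders onto a longer run of timeline tracks.
# Nudge shifts the window by one track; Bank jumps a full group of four.

FADER_COUNT = 4

def fader_window(offset):
    """Return the 1-based track numbers currently under faders 1-4."""
    return [offset + f for f in range(1, FADER_COUNT + 1)]

offset = 0                       # faders 1-4 control tracks 1-4
offset += 1                      # one Nudge: faders 1-4 -> tracks 2-5
print(fader_window(offset))      # [2, 3, 4, 5]
offset = FADER_COUNT             # one Bank from the top: faders 1-4 -> tracks 5-8
print(fader_window(offset))      # [5, 6, 7, 8]
```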

Final Cut Pro’s mix tool is based on mono tracks. A stereo track in FCP is simply two linked mono tracks that are panned left and right. Soundtrack Pro, however, combines a stereo pair into a single stereo track. An eight-track FCP timeline made up of four stereo pairs shows up as four stereo tracks when sent to Soundtrack Pro. In other words, a stereo clip ties up two faders in Final Cut, but only one in Soundtrack Pro.

Fortunately MC Control is smart enough to follow this. I set up a test mix of the same material in both Final Cut Pro and Soundtrack Pro and then quickly bounced back and forth. MC Control had no difficulty in going between the two – each time resetting the fader positions, redrawing the touch screen and swapping between stereo and mono tracks. MC Control gives you a wide range of access to each application’s common commands; however, you are not able to control some items, like filter parameters. That communication isn’t sent out from FCP over EuCon to the device.

MC Color

MC Color is the first Euphonix panel to extend beyond an audio-centric world. It is optimized for Apple Color and features three trackballs with z-rings, touch-sensitive soft knobs, programmable soft keys, transport controls and dedicated keys to copy and paste four color grades. Euphonix did a good job of packing Color’s various tabs, buttons, rooms and controls into this panel. That’s no easy feat, as Apple Color is the most complex and foreign GUI that a typical FCP editor will encounter. It does take a while to get used to MC Color’s layout. The controls all do multiple duties and are contextual – changing as you move through Color’s various tabs, known as “rooms”. Once you use MC Color for a while, you’ll learn which common tasks are mapped to a knob, soft key, trackball or z-ring.

The main reason you’d use a control surface for color grading is the trackballs, and that’s where I’ll focus. The three trackball/z-ring controls are designed to adjust the color wheels. These are the main tools for shadow, midrange and highlight color balance and levels – a common function of nearly every color grading application. The trackballs move smoothly, but the default range of movement is very fine. It takes a lot of spins to move from one side of the on-screen color wheel to the other. You can adjust the sensitivity for faster movement, as well as assign a multiplier button to accelerate the amount of travel.

I cranked up the sensitivity to 50 (about midway in its range), which made the cursor travel faster, though actual cursor movement on-screen seemed a bit coarse. The tactile response of the trackball itself was still smooth, however. Since the trackballs work with optical sensors, you can’t just give them a hard spin and have inertia move the cursor faster. You get better results with a slower, steadier approach. Euphonix suggests a sensitivity setting of around 33, using the 10x multiplier soft key when you want to accelerate mouse/trackball movement.
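The interaction between sensitivity and the multiplier key can be modeled roughly like this – a hypothetical sketch, not Euphonix’s actual scaling math; treating the suggested setting of 33 as a 1:1 baseline is my own assumption:

```python
# Hypothetical trackball-to-cursor scaling: on-screen travel is the raw
# optical delta scaled by the sensitivity setting, with an optional 10x
# multiplier soft key engaged for coarse moves.

def cursor_travel(raw_delta, sensitivity=33, multiplier=False):
    scale = sensitivity / 33.0       # treat the suggested 33 as 1:1 (assumed)
    if multiplier:
        scale *= 10                  # the 10x multiplier soft key
    return raw_delta * scale

print(cursor_travel(100))                    # 100.0 at the suggested setting
print(cursor_travel(100, sensitivity=50))    # faster travel at a setting of 50
print(cursor_travel(100, multiplier=True))   # 1000.0 with 10x engaged
```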

Another tool favored by colorists is Curves. This requires a mouse or a pen to place points along the curve graph. MC Color lets you turn the center trackball into a conventional trackball mouse. You can use this to navigate around the curves and to add and adjust points along a curve. Even though MC Color controls Apple Color well, I’m not sure I would use it exclusively without a mouse or keyboard. At times, I found it simply faster to click or move something with the mouse than to use a soft key or trackball. Bear in mind that I approach it with a video editor’s mentality and the design of MC Color reflects input from a number of professional colorists.


Euphonix’s Artist panels are top-notch controllers. They are well designed and well-constructed. Light, but not light-weight. One good reason to buy a surface is to ease the wear and tear on your wrist from the repetitive stress of long-term mouse use. An even bigger reason is to be faster and more productive. You mix better when you can grab more than one fader at a time. You fly through color grading when you can use both hands to adjust multiple parameters simultaneously. This is something mixers and colorists have known for years.

Each of these panels is designed with different tasks and working styles in mind. I’m a big keyboard user, so I prefer using it for transport control – a throwback to the linear days, I suppose. I hate to mix automation passes with the mouse; therefore, MC Mix holds more attraction than MC Transport or MC Control. If I were doing daily color grading sessions, MC Color would definitely be a “must have” accessory. Thanks to the small form factor of the Artist Series panels, I could easily fit both of these panels side-by-side on my desk. They would neatly fit between my keyboard and the two computer displays of my system. Obviously another editor might choose to mix and match panels in a different configuration. The good news is that Euphonix is offering a lot of power at a very attractive price. Even adding all four panels costs less than many of the other items purchased for a professionally-equipped Final Cut Studio suite.

NOTE: This review was written prior to the announcement at NAB and completion of Avid’s acquisition of Euphonix. The Artist panels currently work with Pro Tools under Mackie emulation, but one can only assume that down the road, Avid’s audio and video products will integrate the EuCon protocol. At this time it is unknown whether a panel like MC Color will eventually work with Media Composer, Symphony or DS. According to comments from Avid personnel, it is their intention to see the Artist Series panels continue to work with as many systems – including competitors’ – as possible.

Written for Videography magazine (NewBay Media, LLC).

©2010 Oliver Peters


Production music is a subjective decision. You can never have enough resources to satisfy clients. I routinely use a variety of options, including SmartSound, Adobe’s tracks for Soundbooth, Apple’s tracks for Soundtrack Pro and the whole range of music from Killer Tracks, FirstCom and others.

Now editors have a new option: MyMusicSource, which comes complete with a new plug-in for Apple Final Cut Pro. The plug-in was developed in partnership with and marketed through BorisFX. Right up front, let me disclose that I know the principals, have a little stock in the company and have been involved in some consulting and beta testing. MyMusicSource is the brainchild of Michael Redman – a veteran composer, producer, recording engineer, facility owner and software entrepreneur. In addition to Final Cut Pro, MyMusicSource is also actively developing other import plug-ins for various NLEs and DAWs.

Getting started

The release of the FCP import plug-in is of interest to Final Cut editors, of course, but anyone can use MyMusicSource with or without this plug-in. It’s a web-based, online resource for production music, so you can access, search, license, purchase and download music tracks using any regular web browser. The beauty of the FCP plug-in is that you can start and end the process from inside the FCP interface, but it isn’t essential. The plug-in itself is a free download, as is establishing an account with MyMusicSource. The company makes its money licensing music for productions.

Here’s a quick overview of how the plug-in and process work. Once you install the MyMusicSource plug-in (downloaded from BorisFX), an option for MyMusicSource is added to FCP’s import menu, alongside XML, Sony XDCAM, EDL, etc. Select this and it launches your default web browser to the start page for FCP users. Log in using your established account and you are off and running. At this point the process is similar to other online music services. You can select and preview music by various search criteria in different genres. As you browse clips, add them to a project cart for later review.

One key difference from other companies is that MyMusicSource is upfront about licensing costs. Their whole approach is to “pre-clear” the music before you can download. At the beginning of your search, you should establish the intended production use for the music, before you add a track to your project cart. As a producer, you may purchase tracks with a Preview License for $.99 per track. This allows you to purchase and download a full-length, full-quality track and temporarily use it within your production (in-house preview use only).

Once a final set of tracks has been decided upon and the correct use established, you may purchase an upgrade to the license for legal use of that music. If you know in advance what the target use will be for the production, such as non-commercial web, you have the option to select that license rate instead. Each cut of music will display a price based on the selected licensing, so you instantly know what it will cost as you browse through the inventory. Non-commercial rates for personal use start at $5.

Project carts may be shared with your clients. If you’ve selected a handful of possible tracks for a client’s review, then share the cart and the client can access and preview these tracks. As with any shopping cart system, finalize your choices and proceed through checkout. Once you’ve paid, move on to the download center, where you find three options: Send to Final Cut Pro, Zip and Download NOW or Zip and Email. The last two options are the same as if you accessed the site without the FCP plug-in. Option one is enabled if you have the FCP plug-in installed.

You may also select between MP3 and 48K AIFF audio file formats. MP3 files are a faster download, but require a render in your FCP timeline. AIFF files will take a bit longer and are larger files, but work fine inside FCP. One option is to download MP3 files (using method 2 or 3) and then drag them into FCP via Digital Heaven’s Loader application, which converts the MP3 files into 48K AIFF. Another option is to convert MP3s using QuickTime Player Pro. These last two approaches work fine, but they mean a tad more work and obviously detour away from the roundtrip magic. I normally opt for the AIFF files. One issue I’ve found is that the Send to Final Cut Pro feature has some access issues with Firefox, so use Safari 4 if you encounter these with this method.

The last step of the roundtrip is back into FCP. A MyMusicSource media folder (containing the downloaded tracks) is placed into the same folder as your active FCP project file. A bin with the tracks is imported into the FCP project and shows up in your FCP browser. If you have more than one project open, you’ll receive a prompt to let the plug-in script know which project to use. Another handy feature of MyMusicSource is that when tracks are downloaded, you will also receive a PDF of the actual licensing information. This is great for the end of the project when you have to turn in music cue sheets and clearance information. It’s all right there from the very start!

OK, so the process is simple and straightforward, but what about the music itself? As I said at the start, music is subjective. The choices are good, but a big difference with the MyMusicSource inventory is an attempt to have a very contemporary sound. The selections are more artist-centric than I tend to see in the competition. There are also more vocal selections. A popular production trend is to use songs instead of just scores. That can get very expensive if you try to license songs that you’ve heard on the radio or on iTunes. In my opinion, MyMusicSource offers a wider selection of good vocal tunes than other libraries, so if your production needs the catchy sound of some indie, alt-rock band, then you’ve got plenty of options to choose from!

©2010 Oliver Peters

Sitting in the Mix


Like most video editors, I wouldn’t call audio mixing my forte, but there are plenty of projects where I end up “playing a mixer on TV”. I’ll be the first to recommend that – budget permitting – you should have an experienced audio editor/mixer handle the sound portion of your project. I work with several and they aren’t all equal. Some work best with commercials that grab your attention and others are better suited for the nuance of long-form projects. But they all have one thing in common: the ears to turn out a great mix.

Unfortunately there are plenty of situations where you are going to have to do it yourself “in the box”. Generally, these are going to be projects involving basic voice-overs, sound effects and music, which is typical of most commercials and corporate videos. The good news is that you have all the tools you need at your disposal. I’d like to offer some ideas to use for the next time that the task falls to you.

Most NLEs today have a decent toolset for audio. Sony Vegas Pro is by far the best, because the application started life as a multitrack DAW and still has those tools at its core. Avid Media Composer is much weaker, probably in large part because Avid has put all the audio emphasis on Pro Tools. Most other NLEs fall somewhere in between. If you purchased Apple’s Final Cut Studio or one of the Adobe bundles, then you have excellent audio editing and mixing software in the form of Soundtrack Pro or Soundbooth.

Mixing a commercial track that cuts through the clutter employs all the same elements as creating a winning song. It’s more than simply setting the level of the announcer against the music. Getting the voice to sound right is part of what’s called getting it to “sit right in the mix”. It’s the same concept as getting a singer’s voice or solo lead instrument to cut through the background music within the overall mix.


1. Selection

The most important choice is the proper selection of the vocal talent and the music to be used. Most often you are going to use needledrop music from one of the many CD or online libraries. As you audition music, be mindful of what works with the voice qualities of the announcer. Think of it like the frequency ranges of an instrument. The music selected should have a frequency “hole” that is in the range of the announcer’s voice. The voice functions as an instrument, so a male announcer with a deep bass voice is going to sound better against a track that lets his voice shine. A female voice is going to be higher pitched and often softer, so it may not work with a heavy metal track. Think of the two in tandem and don’t force a square peg into a round hole.


Soundtrack Pro, Soundbooth, GarageBand and SmartSound Sonicfire Pro are all options you may use to create your own custom score. One of the useful features in the SmartSound and Soundbooth scores is that you can adjust the intensity of arrangements to better fit under vocals. These two apps each use a different approach, but they both permit the kind of tailoring that isn’t possible with standard needledrop music.


2. Comping the VO track

It’s rare that a single read of a voice-over is going to nail the correct inflection for each and every phrase or word. The standard practice is to record multiple takes of the complete spot and also multiple takes of each sentence or phrase. As the editor, don’t settle for one overall “best” read, but edit together a composite track, so each phrase comes through with meaning. At times this will involve making edits within the word – using the front half from one take and the back half from another. Using a pro audio app instead of an NLE will help to make such edits smooth and seamless.


3. Pen tools and levels

I personally like to mix with an external fader controller, but there are times when you just have to get in with the pen tool and add specific keyframes to properly adjust levels. For instance, on a recent track, our gravelly-voiced announcer read the word “dreamers”. The inflection was great, but the “ers” portion simply trailed off and was getting buried by the music. This is clearly a case where surgical level correction is needed. Adding specific keyframes to bump up the level of “ers” versus “dream” solved the issue.


4. EQ

Equalizers are a good tool to shape the timbre of your talent’s voice. Basic EQs are used to accentuate or reduce the low, middle or high frequencies of the sound. Adding mids and highs can “brighten” a muddy-sounding voice. Adding lows can add some gravity to a standard male announcer. Don’t get carried away, though. Look through your effects toolset for an EQ that does more than the basics by splitting the frequency range into more than just three bands.


5. Dynamics

The two tools used most often to control dynamics are compressors and limiters. These are often combined into a single tool. Most vocals sound better in a commercial mix with some compression, but don’t get carried away. All audio filters are “controlled distortion devices”, as a past chief engineer was fond of saying! Limiters simply stop peaks from exceeding a given level. This is referred to as “brick wall” limiting. A compressor is more appropriate for the spoken voice, but is also the trickiest to handle for the first-time user.

Compressors are adjusted using three main controls: threshold, ratio and gain. Threshold is the level at which gain reduction kicks in. Ratio is the amount of reduction to be applied. A 2:1 ratio means that for every 2dB of level above the threshold setting, the compressor will give you 1dB of output above that threshold. Higher ratios mean more aggressive level reduction. As you get more aggressive, the audible output is lower, so then the gain control is used to bring up the average volume of the compressed signal. Other controls, like attack and release times and knee, determine how quickly the compressor works and how “rounded” or how “harsh” the application of the compression is. Extreme settings of all of these controls can result in the “pumping” effect that is characteristic of over-compression. That’s when the noise floor is quickly made louder in the silent spaces between the announcer’s audio.
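That 2:1 math is easier to see with numbers. Here’s a simple level calculator (the threshold and input levels in dBFS are my own example values, not defaults from any particular plug-in):

```python
# Worked example of compression: levels above the threshold are reduced so
# that every 2 dB of input over the threshold yields only 1 dB of output
# over it (a 2:1 ratio); make-up gain then raises the overall result.

def compress_db(level_db, threshold_db=-20.0, ratio=2.0, makeup_db=0.0):
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

print(compress_db(-12.0))               # 8 dB over -> 4 dB over: -16.0
print(compress_db(-30.0))               # below threshold: untouched, -30.0
print(compress_db(-12.0, makeup_db=6))  # same peak, plus make-up gain: -10.0
```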


6. Effects

The selective use of effects filters is the “secret sauce” that makes a VO sparkle. I’ll judiciously use reverb units, de-essers and exciters. Let me again emphasize subtlety. Reverb adds just a touch of “liveness” to a very dry vocal. You want to pick a reverb sound that is appropriate to the voice and the situation. The better reverb filters base their presets on room geometry, so a “church” preset will sound different than a “small hall” preset. One will have more echo than the other, based on the simulated time that it would take for audio to bounce off of a wall in a room of that size.

Reverbs are pretty straightforward, but the other two may not be. De-essers are designed to reduce the sibilance in a voice. Essentially a de-esser acts as a multi-band EQ/compressor that deals with the frequency ranges of sibilant sounds, like the letter “s”. An exciter works by increasing the harmonic overtones present in all audio. Sometimes these two may be complementary and at other times they will conflict. An exciter will help to brighten the sound and add a feeling of openness, while the de-esser will reduce natural and added sibilance.

The exact mixture of EQ, compression and effects becomes the combination that will help you make a better vocal track, as well as give a signature sound to your mixes.


7. Sound design

Let’s not forget sound effects. Among the many gigabytes of data installed with Final Cut Studio are tons of sound effects. Soundbooth includes an online link to Adobe’s Resource Central, where you can audition and download a wealth of SFX right inside the Soundbooth interface. Targeted use of sound effects for ambience or punctuation can add an interesting element to your project.

In a recent spot that I cut, all the visuals were based on the scenario of a surfer at the beach. This was filmed MOS, so the spot’s audio consisted of voice-over and music. To spruce up the mix, it was a simple matter of using the Soundtrack Pro media browser to search for beach, wave and seagull SFX – all content that’s part of the stock Final Cut Studio installation. Soundtrack Pro makes it easy to search, import and mix, all within the same interface.

Being a better editor means paying attention to sound as well as picture. The beauty of all of these software suites is that you have many more audio tools at your disposal than a decade ago. Don’t be afraid to use them!

© 2009 Oliver Peters

Scoring with Sonicfire Pro


Music choices are very subjective and can often be the most difficult part of finishing a production. There is no replacement for a true custom score that’s right on the money, but rarely do clients have a budget to support that, especially in the world of corporate video. I’ve frequently built videos with music changes every :30 or so. I’m essentially scoring the video without the help of a composer. That takes a lot of time to audition cues online through a needledrop library like Killer Tracks and often clients don’t have the budget to pay for 20 or 30 cues on a longer production. This is where royalty-free music sources can really shine. There are various options, including the music cues that come with Apple Soundtrack Pro or Adobe Soundbooth, but neither of these options is as comprehensive as SmartSound.




SmartSound is really two entities – the Sonicfire Pro music customization software and the supporting SmartSound music libraries. In order to get the best out of Sonicfire Pro, you really need to use SmartSound music. To build up my own library, I treat myself to a few new discs each Christmas and whenever a new project can support it!




The Sonicfire Pro software lets you non-destructively change the variation (arrangement), “mood” (orchestration), tempo and length of these music selections. It can do this because each song file is made up of blocks. When you change the duration or pick a different variation of a song, Sonicfire Pro intelligently rearranges the blocks to avoid something that sounds overly repetitive. The newer offerings in the library include “mood mapping”. This means that the discs are multilayered with instruments mixed into stems – rhythm, lead, percussion, etc. Inside Sonicfire Pro, you can change the “mood” by selecting from a preset or by changing the relative mix of these stems.




Compositions are based on a fixed tempo measured in BPM (beats per minute). If setting the duration doesn’t quite nail the length you need, you can also alter the tempo. This changes the metadata for the cue and speeds up or slows down the song by altering the BPM. This is a non-destructive process, so tempo files can be deleted and the original native BPM restored. Remember that these song arrangements are made up of real instruments and real compositions, not simply a series of loops in the way that a music cue might be created using Apple’s GarageBand.
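The tempo/duration trade-off is simple arithmetic: the number of beats in a cue is fixed, so retiming it scales the running time by the ratio of the tempos. A quick sketch (the function name and numbers are my own example):

```python
# A cue's length scales inversely with tempo: the beats are fixed, so
# new duration = old duration * old BPM / new BPM.

def retimed_duration(seconds, old_bpm, new_bpm):
    return seconds * old_bpm / new_bpm

print(retimed_duration(30.0, 120, 110))  # slowing to 110 BPM -> about 32.7 s
print(retimed_duration(30.0, 120, 135))  # speeding to 135 BPM -> about 26.7 s
```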




The Sonicfire Pro interface works in two parts: the SmartSound Express Track window and the project (timeline or tracksheet) window. Express Track is a smart media browser that lets you search, audition and modify your music selections. It also lets you browse SmartSound’s online library for additional music. Express Track also displays composer and license information that may be used for music licensing cue sheets. Many editors mistakenly believe that they have unlimited rights to use royalty-free compositions for any purpose. SmartSound music covers most typical productions, but not everything. For example, you are covered for regional TV spots, web videos, film festivals or cable networks like Discovery; but, you are not covered if your show runs on HBO or if Warner Bros. buys your film and distributes prints to thousands of theaters. Check with SmartSound if you have licensing inquiries.




Most of the actual music adjustment is done in the project window. Music is inserted onto a track and the song’s length can be adjusted by dragging the end of the selection. Each song is composed of an arrangement of separate thematic subsections that are each made up of blocks. You can opt to protect certain of these arrangements as you lengthen or shorten a track. This prevents Sonicfire Pro from completely changing the arrangement based on its internal algorithms. The subsections change at keyframed points and the keyframes can be slid earlier or later, thus changing when one subsection transitions to another within the same song.




You build up your score by assembling various music cues onto separate tracks, complete with crossfades, and you can even add “hits”, like a cymbal crash, bell effects and more. A multilayered or “mood-mapped” cue can be split into sections on the same track to change the mood within that music cue. For example, if your video starts and ends with a high energy visual montage that bookends a talking-head spokesperson in the middle section, simply add a mood change at the start and end of this dialogue sequence. Choose a more subdued variation in the mood setting or manually reduce the level of some of the more intrusive instruments by using the property sliders or the appropriate volume envelopes in the tracksheet.


What about Final Cut Pro?


So far this has been a quick overview of how Sonicfire Pro 5 Scoring Edition works in general. SmartSound has worked with numerous NLE manufacturers over the years and in fact, has already developed timeline integration with Avid Media Composer, using MetaSync tracks. New since NAB 2009 is Sonicfire Pro 5.1 with the Final Cut Pro plug-in.




To use Sonicfire Pro with Final Cut, simply edit your FCP timeline as you normally would, adding markers to indicate music changes or the start/end points for a music cue. Once you’ve edited the sequence, save it and leave FCP open. Now launch Sonicfire Pro. There are two ways to work. The first is simply to select individual songs and customize them to length. To do this, choose “Import from Final Cut Pro” in the Express Track window’s right-hand slide-out drawer. Your FCP markers will be displayed with timing information. By highlighting a marker or a range of markers, Sonicfire Pro will customize the length of the song to fit that duration.




When you are done, choose “Export Soundtrack/Video”, check Final Cut Pro in the dialogue box and save the file (now a rendered/flattened AIFF). It will appear in your FCP browser, so drop it on the timeline and continue. If you need to roundtrip back to Sonicfire Pro, make sure you have set up your FCP preferences so that audio files use Sonicfire Pro as the external editor. If so, right-click the clip on the timeline and select “Open in Editor”, which sends you to Sonicfire Pro. The app knows to pull up that song and you are ready to make adjustments. Make the changes, export again (replacing the previously exported file) and FCP will reconnect to the newer media.




The second and more fun way to work with the two apps is to use Sonicfire Pro to score your complete FCP sequence. Choose “Import from Final Cut Pro” twice: once in the Express Track window and again into a new project (under the File menu). If you exported a QuickTime reference file of your FCP sequence, you can also open this in Sonicfire Pro to sync the timeline and video. When you do so, the FCP markers will also show up in your Sonicfire Pro project timeline. You can also import the soundtrack from your video (presumably dialogue) and do a final mix right inside of Sonicfire Pro. In any case, this is a great way to make sure your music is perfectly customized to match your FCP sequence. When you are done, “Export Soundtrack/Video” (choosing FCP again) and your mix appears inside your FCP browser.




Over the years, I’ve been a happy SmartSound user. I like the quality of their compositions and I really appreciate that they continue to add to the available library. Although the newest selections are multilayered, that doesn’t obsolete the older, non-layered music. In fact, I still use QuickTracks, which originally came bundled with Premiere 6.5! As I said at the beginning, music is very subjective, so there’s no guarantee that a large SmartSound library is really going to cover every client’s need – but it sure helps. The new FCP plug-in simply makes it easier to use Sonicfire Pro and FCP together.




SmartSound has provided a number of video tutorials on their website that offer a better look at how the whole process works.


© 2009 Oliver Peters

PluralEyes – Help for that Syncing Feeling


If you’ve ever edited multi-cam shows where the production crew’s attitude seemed to have been, “Sync! We don’t need any stinkin’ sync!” – then this software is for you. Without a doubt, every editor friend I ran into at NAB that happened to pass by Singular Software’s booth raved about this product. “You have GOT to see it,” was the comment I often heard about PluralEyes during that week.


What’s the need?


When you work multi-cam shows, proper sync is essential to line up the camera iso recordings in post. Obviously timecode is ideal, but this only works properly when all cameras were fed from a genlocked master timecode generator. In the case of digital run-and-gun projects (like low-budget rock concerts or reality TV productions), the cameras are running wild and not necessarily synced to each other.


Under the best of circumstances you might be able to get the crew to internally sync the cameras to time-of-day timecode at the beginning of the day or get them to occasionally shoot a large LED timecode display somewhere within a concert venue. In a film-style shoot, they might have started each take with clapsticks. More often than not, this doesn’t go according to plan once in the thick of the production – or the timecode starts out close and drifts out. The latter often happens when a camera gets powered down and back up in the course of the production.


Most modern NLEs have multi-cam editing tools. Typically these let you sync clips by matching timecode or by marking an in-point at some common event and aligning the source clips accordingly. In the film example, the point where the sticks clap shut provides a good in-point. In the absence of either clapsticks or valid timecode, you often find yourself looking for things that can serve as a common reference point – for instance, the same frame in each camera angle, just where the singer touches his nose in a unique way! Obviously this can become incredibly time-consuming and often isn’t very frame-accurate.




Enter PluralEyes


PluralEyes is designed for Final Cut Pro editors. It synchronizes clips based on common audio. It’s not a plug-in, but a standalone application that works in tandem with FCP. The software analyzes the waveform of a clip’s audio track and processes sync based on the commonality of the tracks among clips from different camera angles. In order to work, you must have in-camera sound, even if it’s only a scratch track. Picture-only (MOS) recordings still have to be lined up manually or by timecode.


On the other hand, audio-only clips can be synchronized. If you are shooting a concert, the wild audio from the cameras can be synchronized to the clean audio recording fed from the mixer to an audio recorder. PluralEyes won’t adjust for any sync drift, so such audio tracks still need to be properly recorded. In the examples I’ve seen and tested, the camera tracks could be pretty distorted, yet PluralEyes still performed the analysis with those less-than-pristine audio tracks, as long as it could sufficiently interpret the waveform to establish sync.


The bottom line is that PluralEyes gives you a way to quickly and accurately sync cameras without the use of timecode or manual reference marks. This makes it possible to use smaller, prosumer camcorders in multi-cam projects without creating a synchronization nightmare in post. It also lets you use full-blown pro camcorders in situations where establishing common timecode sync is impractical.




Working with PluralEyes


The way PluralEyes works is so simple for the editor that it takes longer to explain “why” than to explain “how”. You start out by importing or ingesting all the clips into a Final Cut project. PluralEyes can synchronize clips in a sequence or in a bin, but the key is that the target to be analyzed – either the bin or the sequence, whichever you want synced – has to be named “pluraleyes” and the project must have been saved for PluralEyes to work.




The most common approach would be to sync a sequence. To do this, place all your camera clips at the start of the timeline. Make sure there are no in or out marks. Stack the different camera angles (with audio) onto ascending video tracks. Camera 1 goes to V1/A1-2, Camera 2 to V2/A3-4, Camera 3 to V3/A5-6 and continuing up with more cameras. All cameras should be lined up at the head of the sequence and on separate tracks.




If you stopped and restarted the camera recordings during the production, then you can place all clips from a single camera onto the same video/audio tracks. I haven’t seen any Singular documentation that addresses this, but I was successful when I did this on a test project. In other words, Cam 1 clips can stay back-to-back on V1, Cam 2 clips on V2 and so on. I also haven’t seen any mention of a limitation as to the number of cameras. My tests included 2-4 cameras, but I’ve seen other internet posts where six cameras were used.




Once you’ve created the sequence to be synced and have saved the project, launch the PluralEyes application and select “sync”. The software takes a few minutes to analyze and process the tracks and to create a new synced sequence, as well as multi-clip groups. Singular’s short, downloadable sample project (3 cameras, 1 minute clips) only took several seconds to sync. Another project that I tested, which was a 3-camera, half-hour interview show, took a couple of minutes. These tests were both on a MacBook Pro.




Once PluralEyes is done, return to FCP and you will have a new synced sequence and source multi-clip groups. In my interview show test, the studio crew recorded it in segments, so each section was broken into a separate multi-clip group by PluralEyes. In the tests I’ve done so far, syncing has been fast and successful in each case. I have had one editor tell me it didn’t work when he tested it, but I have no idea if he was doing everything correctly. In any case, Singular lets you download a trial version to see for yourself.




I will offer one caveat about sync. Since the clips are aligned based on the audio, there is no guarantee that the audio recorded by the camera itself will be in perfect sync with its own video. For instance, if you are recording audio of a concert and the cameras are only picking up the ambient audio from the PA system, it’s quite likely that each camera will be visually out of sync by a frame or two (or more) when synced against a master audio recording from the board feed. This is due to the natural delay inherent in such live venues. Fortunately Final Cut offers some quick functions to adjust clip sync, either for the master clip itself or when trimming clips later during the edit.
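To get a feel for how big that offset can be, here is a quick back-of-the-envelope sketch (my own illustration, not part of PluralEyes) that converts the acoustic delay from a PA stack to a camera microphone into video frames. The 75-foot distance and 29.97 fps frame rate are assumed example values.

```python
# Back-of-the-envelope: how far out of sync is a camera whose mic hears
# the PA system instead of the board feed? Sound travels roughly
# 1,125 feet per second at room temperature.
SPEED_OF_SOUND_FT_PER_S = 1125.0

def delay_in_frames(distance_ft, fps=29.97):
    """Acoustic delay, in video frames, for a mic distance_ft from the PA."""
    delay_s = distance_ft / SPEED_OF_SOUND_FT_PER_S
    return delay_s * fps

# A hypothetical camera 75 feet back in the house at NTSC frame rates:
print(round(delay_in_frames(75), 1))  # about 2 frames of offset
```

So even a modest camera position can put picture a frame or two ahead of a board-feed sync reference, which is exactly the kind of error the FCP slip/sync adjustments mentioned above are there to fix.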


Using PluralEyes is a no-brainer for any editor who works with multi-cam projects in Final Cut. There’s also an interesting bundling deal right now with the folks at CoreMelt and they’ve even done a quick tutorial showing how the two products might be used in conjunction with each other. Check it out.


On another note, Singular is also working with post solutions for the Canon 5D Mark II, which can be found here and here.


© 2009 Oliver Peters

Proper Monitoring

Just like good lighting and camerawork are fundamentals of quality production, good monitoring provides some of the same important building blocks for post-production. Without high quality video and audio monitors, as well as waveform monitors and vectorscopes, it is impossible to correctly assess the quality of the video and audio signals with which you are working. Few if any instruments tell an editor or mixer how signals degrade as they travel through the system better than the human eyes, ears and brain. You cannot read the amount of compression applied to a digital file from some fancy device, but the eye can quickly detect compression artifacts in the image.


Such subjective quality evaluations are only valid when you are using professional, calibrated monitoring that shows you the good with the bad. The point of broadcast grade video monitors and studio grade audio monitors is not to show you a pleasing picture or great sounding mix, but rather to show you what’s actually there, so that you can adjust it and make it better. You want the truth and you won’t get that from a consumer video monitor or TV or from a set of discount boombox speakers.


Video Monitors


Let’s start with the picture. A proper post-production suite should have a 19- or 20-inch broadcast grade monitor for video evaluation. Smaller monitors can be used if budgets are tight, but larger is better. Most people tend to use Sonys, but there are also good choices from Panasonic and Barco. In the Sony line, you can choose between the BVM (broadcast) and the PVM (professional, i.e. “prosumer”) series. The BVMs are expensive but offer truer colors because of the phosphors used in the picture tube, but most people who work with properly calibrated PVM monitors are quite happy with the results. In no case at this point in time would I recommend flat panel monitors as your definitive QC video monitor – especially if you do any color-correction with your editing.


The monitor you use should have both component analog (or SDI) and composite analog feeds from your edit system. Component gives you a better image, but most of your viewers are still looking at the end product (regardless of source) via a composite input to a TV or monitor of some type. Frequently things can look great in component and awful in composite, so you should be able to check each type of signal. If you are using a component video feed, make sure your connections are solid and the cable lengths are equal, because reduced signal strength or unequal timing on any of the three cables can result in incorrect colorimetry when the video is displayed. This may be subtle enough to go unnoticed until it is too late.


Properly calibrated monitors should show a true black-and-white image, meaning that any image that is totally B&W should not appear to be tinted with a cast of red, blue or green. Color bars should appear correct. I won’t go into it here, but there are plenty of resources which describe how to properly set up your monitor using reference color bars. Once the monitor is correctly calibrated, do not change it to make a bad picture look better! Fix the bad image!




Video monitors provide the visual feedback an editor needs, but waveform monitors and vectorscopes provide the technical feedback. These are the editor’s equivalent to the cinematographer’s light meter. The waveform monitor displays information about luminance (brightness, contrast and gamma) while the vectorscope displays information about color saturation and hue. The waveform can also tell you about saturation but not hue. Most nonlinear editing applications include software-based scopes, but these are pretty inadequate when compared to the genuine article. Look for products from Tektronix, Leader, Videotek or Magni. Their products include both traditional (CRT-based) self-contained units, as well as rack-mounted modules that send a display to a separate video monitor or computer screen. Both types are accurate. Like your video display, scopes can be purchased that take SDI, component analog or composite analog signals. SDI scopes are the most expensive and composite the least. Although I would recommend SDI scopes as the first choice, the truth of the matter is that monitoring your composite output using a composite waveform monitor or vectorscope is more than adequate to determine proper levels for luma and chroma.


Audio Monitors


Mixers, power amps and speakers make up this chain. It’s possible to set up a fine NLE suite with no mixer at all, but most people find that a small mixer provides a handy signal router for the various audio devices in a room. When I work in an AES/EBU-capable system (digital audio), I will use that path to go between the decks and the edit system. Then I only use the mixer for monitoring. On the other hand, in an analog environment, the mixer becomes part of the signal chain. There are a lot of good choices out there, but most frequently you’ll find Mackie, Behringer or Alesis mixers. Some all-digital rooms use the Yamaha digital mixers, but that generally seems to be overkill.


The choice of speakers has the most impact on your perception of the mix. You can get either powered speakers (no separate amp required) or purchase a separate power amp, depending on which speakers you purchase. Power amps seem to have less of an effect, but good buys are Yamaha, Crown, Alesis, Carvin and Hafler. The point is to match the amp with the speakers so that you provide plenty of power at low volumes in order to efficiently drive the speaker cones.


Picking the right speaker is a very subjective choice. Remember that you want something that tells you the ugly truth. Clarity and proper stereo imaging is important. Most edit suites are near-field monitoring environments, so huge speakers are pointless. You will generally want a set of two-way speakers, each with an eight-inch woofer and a tweeter. There are plenty of good choices from JBL, Alesis, Mackie, Behringer, Tannoy and Eastern Acoustic Works, but my current favorite is from SLS Loudspeakers (Superior Line Source). In the interest of full disclosure, I own stock in SLS and have them at home, but they are truly impressive speakers sporting innovative planar ribbon technology in their tweeter assembly.


VU Metering


VU, peak and PPM meters are the audio equivalent of waveform monitors. What these meters tell you is often hard to interpret because of the industry’s change from analog to digital processing. An analog VU scale places desirable audio at around 0 VU, with peaks hitting no more than +3 dB. Digital scales have a different range: 0 is the absolute top of the scale, and anything at or above 0 results in harsh digital distortion. The spot on this scale equivalent to analog’s 0 VU is –12, –14 or –20 dB. In effect, you can have up to 20 dB of headroom before distortion, as compared to analog’s 3 to 6 dB of headroom. The reason for the ambiguity in the nominal reference value is that many digital systems calibrate their VU scales differently. Most current applications set the 0 VU reference at –20 dB on the digital scale.
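The arithmetic relating the two scales can be sketched as follows. This is only an illustration of the common alignment described above, assuming the –20 dB digital reference; the function names are my own.

```python
# Sketch of the analog-to-digital metering arithmetic described above.
# Assumes the common alignment where analog 0 VU sits at -20 dB
# on the digital full-scale meter.
DIGITAL_REFERENCE_DB = -20.0  # digital meter reading that equals analog 0 VU

def headroom_db(signal_db=DIGITAL_REFERENCE_DB):
    """Headroom, in dB, before hitting 0 on the digital scale (clipping)."""
    return 0.0 - signal_db

def analog_vu_to_digital_db(vu_level, reference_db=DIGITAL_REFERENCE_DB):
    """Convert an analog VU reading to its digital-meter equivalent."""
    return reference_db + vu_level

print(headroom_db())                 # 20 dB of headroom at reference level
print(analog_vu_to_digital_db(3.0))  # a +3 VU analog peak reads -17 digital
```

The same functions with a –12 or –14 dB reference show why a mix aligned for one system can meter “wrong” on another, even though nothing about the audio has changed.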


Mixing with software VUs can be quite frustrating because the meters are instantly responsive to peaks. You see more peaks than a mechanical, analog VU meter would ever show you. As a result, it is quite easy to end up with a mix that is really too low when you go to a tape machine. I generally fix this by setting the VTR inputs to a point where the level is in the right place for the VTR. Then I may change the level of the reference tone at the head of my sequence to match the proper level as read at the VTR. This may seem backwards, but it’s a real world workaround that works quite well.


© 2004 Oliver Peters