Edit Suite Design, Part III


Last year I wrote about designing a cost-effective HD edit suite and compared three budget ranges for both an Avid and an FCP suite. This year I’ve put together an updated spreadsheet (download here). It’s interesting to see that over the course of this past year, some of the numbers have come down and you can assemble a very functional room for even less money.


About the numbers

What’s cost-effective to me might not be so to others. Let’s take a look at the bottom line. My total came in at about $70K for an FCP room. Bear in mind that this includes nearly everything: workstation, consoles, racks, storage, acoustic treatments, etc. Since I used a variety of online resources, actual prices will fluctuate. The room is built around an AJA Kona LHi. This card covers most of your analog and digital i/o needs and is good for nearly all FCP jobs, short of 2K and 4:4:4 work. Add some room construction (or maybe none at all) and you are good to go.

My number also includes a 5% contingency, a 15% labor estimate (installation, wiring, integration), an extra $1,000 for last minute items (more connectors, cables, tools) and an extra $2,500 for miscellaneous software, like plug-ins. This means that if you are really frugal and are a total DIY-er, you might get the same room done for a bit over $55K.
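
For a quick sanity check on that DIY figure, here’s the arithmetic, assuming the contingency and labor percentages apply to the base equipment cost and the two allowances are flat additions (my reading of the estimate, not a published formula):

```python
# Back out the bare equipment cost from the $70K bottom line.
total, last_minute, misc_software = 70_000, 1_000, 2_500
contingency, labor = 0.05, 0.15

base = (total - last_minute - misc_software) / (1 + contingency + labor)
print(f"${base:,.0f}")  # ~$55.4K -> "a bit over $55K" for the pure DIY-er
```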

On the other hand, I have not included shipping, sales tax, VTRs/decks/readers or any room construction costs. These will vary depending on state laws, vendors used and support needs, but suffice it to say that sales tax and shipping – plus room construction – could easily add $20K to this total. In other words, the realistic range for this same set of items is from $55K to under $100K, PLUS decks.


Software and systems

My estimate is for a fully-equipped Final Cut Studio room, but I have also included the Adobe Production Premium bundle. Although that package is centered around Premiere Pro, I opted for the bundle because Photoshop and After Effects are essential in many editorial workflows. These two apps alone justify the bundle, so you get the rest nearly for free. If you don’t see that working for you and just need basic graphic design and photo correction tools, then replace the bundle with something basic, like Adobe Photoshop Elements, Corel Paint Shop Pro or Pixelmator. Instead of $1,700, you would then only spend between $50 and $150.

I’m a fan of control surfaces and one big feature of the new Final Cut Studio update is support for the Euphonix EuCon protocol. I’ve loaded this estimate with three of their panels for mixing and color grading. Unfortunately these don’t work with Adobe or Avid NLE software. If you prefer to build an Adobe-centric room, then simply drop the Final Cut software and the Euphonix panels and save nearly $4,000.

Avid requires a little more mix-and-match. You’d make the same deletions as for Adobe, but also drop the AJA Kona LHi card and breakout box (another $2,000 deduction). You would replace this with either the Avid Media Composer Mojo DX or Nitris DX hardware/software bundle. Mojo DX is a digital unit. If you need analog i/o, then you would either have to pay the higher price for Nitris DX or augment the Mojo DX with external conversion. Options include the AJA FS1 or smaller AJA and/or Blackmagic Design converters. In any case, the uppermost number would be that of the Avid Media Composer Nitris DX bundle at approximately $15K.

To summarize, a Premiere Pro room, using all the same numbers, would bring this total into the mid-$60K range, and a Media Composer Nitris DX room would top out at about $80K. When you build a room like this, it is realistic to plan on a three-year initial investment before the next major upgrade (not counting routine software updates and support). The difference between a system costing $65K and one costing $80K isn’t that big for a productive business. In short, your choice of NLE will be based on personal preference more than the actual cost of just the software application itself.


Equipment selection

1. Monitors – CRTs are still important, but LCDs are catching up. You still need a CRT for checking interlacing issues, but I don’t think it really needs to be the prime monitor in the edit suite any longer. I put a CRT in the rack, but not in the suite, to keep the clutter down. One big issue for an LCD or plasma in a suite is how well it displays standard definition video. Color/image reproduction is obviously important, but if you do a lot of SD work, it doesn’t matter how good the image looks in HD if SD footage looks poor on it. Most of the professional LCDs handle both well, including models from Panasonic, JVC, FSI, TV Logic and others.

2. Scopes – This is an item that people are reluctant to spend money on – especially since many apps have built-in software scopes. I plugged in a Blackmagic Design UltraScope, because I think it fills a good niche at a relatively low cost. Remember that your built-in software scope won’t show the output of the i/o card nor the levels on a VTR. Avid doesn’t display software scopes outside of the color correction mode. FCP’s performance is often challenged when its scopes are active. These issues make an external scope desirable.

3. Audio – Edit suite audio signals are rarely passed through an analog mixer for any purpose other than simple monitoring. Unless you happen to have a personal preference for one brand or another, go with a reasonable, inexpensive brand for speakers and mixers. Behringer is one such brand, but there are others. Make sure the preamps are decent if you record voice-overs through the mixer, but otherwise, nearly any low-cost mixer will do the job.

4. Racks / terminal gear – The particular configuration I’ve spelled out includes a set of prewired racks that are designed to accommodate several VTRs of various formats. The world is going tapeless, so many people are skipping the purchase of a dedicated VTR. Many rent the deck they need based on each project. That’s fine, but you have to be ready to integrate it into your system, which means having an available location to put the rental deck (like an open shelf in the rack) and a wiring harness that is ready to go. Whether you own or rent a deck or a tapeless device like an XDCAM or P2 player, this estimate includes cables, connectors and patching to make any permanent or ad hoc installation totally plug-and-play.

It’s also worth noting that this rack estimate includes enough spare capacity for the addition of a second room. If you decide to add a second suite later for edit, graphics or audio, then that installation will cost less. There will be less rack and wiring to buy, not to mention a lower labor cost.


Room layout

There are plenty of ways to design rooms and obviously this is going to be based on the space available. Most people don’t have the luxury of a blank slate. A typical office arrangement of two adjacent 12’ x 15’ rooms provides enough space to accommodate an edit console, a client desk, equipment racks and a small voice-over booth. More space is nice, of course, but if most of the time only one editor is present in this space, then it will be just right.

The idea behind the two adjacent rooms is that it permits the equipment to be located outside of the edit suite (keeps the noise down), yet close enough for the editor to get to it when needed. Only a few longer cables are needed, so there is no major expense in removing the gear from the room. If the distance is greater than just the other side of a wall, Cat5-to-DVI extenders will let you place the computers some distance away from their displays.


Acoustics and power

You are building a functional edit suite – not a recording studio. If you have the bucks for the latter, then go for it, but that’s not the situation most are in. Modern edit suites are found in all sorts of home and general office environments. The installed power and HVAC is usually adequate; so, even though it’s not ideal, there’s no real reason to spend tons of money on new circuits, A/C units or other upgrades just to install one basic edit suite. The common 15-amp and 20-amp circuits you run into will power the gear in this spreadsheet. I swear by beefy UPS systems, however. These condition the power by evening out frequency and voltage fluctuations, which adds life to your gear and reduces file corruption. UPS systems also give you sufficient time to properly save, exit and shut down in the event of a building power failure.

How you handle room acoustics depends on: a) whether you are trying to keep out exterior sounds (like traffic noise); b) whether you are trying to keep edit noise inside the suite (speaker volume and privacy); or c) trying to just reduce the natural reverberation within the room. I have included sound treatment kits in the estimate designed to cover the issue of room reverberation (item c), as well as provide some “deadness” for a vocal booth.

If you intend to do more construction, then here are some additional tips to deal with all three circumstances.

1. Parallel walls – Most studios are built with non-parallel surfaces. Check out some of the showcase rooms at Walters-Storyk Design Group and you’ll get some “wow!” ideas based on many of the premier studio facilities in the world. If you are building your own, modest room, then add a slight angle to the walls wherever possible. This reduces “slap echo” – sound that bounces back and forth between two opposite walls. Typically this means that the editor end of the room will be narrower than the client end of the room.


2. Soundproofing walls – Most principles for soundproofing walls are based on density. Check out some of these ideas from Acoustic Sciences Corporation.

Quick fixes for the DIY-er:

– Double the sheetrock on each side of the wall (4 sheets of gypsum board or 2 + 2 sheets of sound board)

– Caulk all sheetrock seams

– Screw the sheetrock to the studs; don’t use nails

– Add a plastic vapor barrier in the wall, adhered to the studs

– Stuff the wall with insulation


3. Ceilings – Unless you can build an enclosed room-within-a-room, you will have to contend with drop ceilings in an office space. Sound will be transmitted over the walls through the ceiling. If you have no other option, then the best remedy is simply to load up the ceiling with very thick rolled insulation on top of the ceiling tiles. Make sure the drop ceiling will support this, since the weight adds up.

4. Windows – I like rooms with outside light, but these can be a sound issue. The best approach is double-paned glass. Recording studios tackle this by installing custom-designed windows using two thick, angled glass panes. In the case of an edit suite, upgraded commercial windows will do the trick.

5. Doors – If you’ve done all of the above, then the doors will be the remaining source of sound transmission. Recording studios install massive doors and even sound locks, but this isn’t practical or warranted for most edit suites. Two easy fixes are solid-core, hardwood doors and weather stripping. Solid doors provide mass to stop the sound. The weather-stripping trim around the door and along the bottom of the door will help to seal off sound passing through these air gaps.

© 2009 Oliver Peters

Sitting in the Mix


Like most video editors, I wouldn’t claim audio mixing as my forte, but there are plenty of projects where I end up “playing a mixer on TV”. I’ll be the first to recommend that – budget permitting – you should have an experienced audio editor/mixer handle the sound portion of your project. I work with several and they aren’t all equal. Some work best with commercials that grab your attention and others are better suited for the nuance of long-form projects. But they all have one thing in common: the ears to turn out a great mix.

Unfortunately there are plenty of situations where you are going to have to do it yourself “in the box”. Generally, these are going to be projects involving basic voice-overs, sound effects and music, which is typical of most commercials and corporate videos. The good news is that you have all the tools you need at your disposal. I’d like to offer some ideas to use for the next time that the task falls to you.

Most NLEs today have a decent toolset for audio. Sony Vegas Pro is by far the best, because the application started life as a multitrack DAW and still has those tools at its core. Avid Media Composer is much weaker, probably in large part because Avid has put all the audio emphasis on Pro Tools. Most other NLEs fall somewhere in between. If you purchased Apple’s Final Cut Studio or one of the Adobe bundles, then you have excellent audio editing and mixing software in the form of Soundtrack Pro or Soundbooth.

Mixing a commercial track that cuts through the clutter employs all the same elements as creating a winning song. It’s more than simply setting the level of the announcer against the music. Getting the voice to sound right is part of what’s called getting it to “sit right in the mix”. It’s the same concept as getting a singer’s voice or solo lead instrument to cut through the background music within the overall mix.


1. Selection

The most important choice is the proper selection of the vocal talent and the music to be used. Most often you are going to use needledrop music from one of the many CD or online libraries. As you audition music, be mindful of what works with the voice qualities of the announcer. Think of it like the frequency ranges of an instrument. The music selected should have a frequency “hole” that is in the range of the announcer’s voice. The voice functions as an instrument, so a male announcer with a deep bass voice is going to sound better against a track that lets his voice shine. A female voice is going to be higher pitched and often softer, so it may not work with a heavy metal track. Think of the two in tandem and don’t force a square peg into a round hole.


Soundtrack Pro, Soundbooth, GarageBand and SmartSound Sonicfire Pro are all options you may use to create your own custom score. One of the useful features in the SmartSound and Soundbooth scores is that you can adjust the intensity of arrangements to better fit under vocals. These two apps each use a different approach, but they both permit the kind of tailoring that isn’t possible with standard needledrop music.


2. Comping the VO track

It’s rare that a single read of a voice-over is going to nail the correct inflection for each and every phrase or word. The standard practice is to record multiple takes of the complete spot and also multiple takes of each sentence or phrase. As the editor, don’t settle for one overall “best” read, but edit together a composite track, so each phrase comes through with meaning. At times this will involve making edits within a word – using the front half from one take and the back half from another. Using a pro audio app instead of an NLE will help to make such edits smooth and seamless.


3. Pen tools and levels

I personally like to mix with an external fader controller, but there are times when you just have to get in with the pen tool and add specific keyframes to properly adjust levels. For instance, on a recent track, our gravelly-voiced announcer read the word “dreamers”. The inflection was great, but the “ers” portion simply trailed off and was getting buried by the music. This is clearly a case where surgical level correction is needed. Adding specific keyframes to bump up the level of “ers” versus “dream” solved the issue.


4. EQ

Equalizers are a good tool for shaping the timbre of your talent’s voice. Basic EQs are used to accentuate or reduce the low, middle or high frequencies of the sound. Adding mids and highs can “brighten” a muddy-sounding voice. Adding lows can add some gravity to a standard male announcer. Don’t get carried away, though. Look through your effects toolset for an EQ that does more than the basics by splitting the frequency ranges into more than just three bands.


5. Dynamics

The two tools used most often to control dynamics are compressors and limiters. These are often combined into a single tool. Most vocals sound better in a commercial mix with some compression, but don’t get carried away. All audio filters are “controlled distortion devices”, as a past chief engineer was fond of saying! Limiters simply stop peaks from exceeding a given level. This is referred to as “brick wall” limiting. A compressor is more appropriate for the spoken voice, but it is also the trickiest for the first-time user to handle.

Compressors are adjusted using three main controls: threshold, ratio and gain. Threshold is the level at which gain reduction kicks in. Ratio is the amount of reduction to be applied. A 2:1 ratio means that for every 2dB of level above the threshold setting, the compressor will give you 1dB of output above that threshold. Higher ratios mean more aggressive level reduction. As you get more aggressive, the audible output is lower, so the gain control is then used to bring up the average volume of the compressed signal. Other controls, like attack and release times and knee, determine how quickly the compressor works and how “rounded” or how “harsh” the application of the compression is. Extreme settings of all of these controls can result in the “pumping” effect that is characteristic of over-compression. That’s when the noise floor is quickly made louder in the silent spaces between the announcer’s audio.
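
If the threshold/ratio/gain interplay is easier to follow in code, here is a minimal sketch of the static gain math only – a real compressor adds attack, release and knee smoothing on top of this:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=2.0, makeup_db=0.0):
    """Output level in dB for a given input level in dB (downward compression)."""
    if level_db <= threshold_db:
        out_db = level_db                        # below threshold: untouched
    else:
        excess = level_db - threshold_db         # amount over the threshold
        out_db = threshold_db + excess / ratio   # 2:1 -> 2 dB in, 1 dB out
    return out_db + makeup_db                    # make-up gain restores volume

print(compress_db(-14.0))  # 6 dB over a -20 dB threshold at 2:1 -> -17.0
```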


6. Effects

The selective use of effects filters is the “secret sauce” that makes a VO sparkle. I’ll judiciously use reverb units, de-essers and exciters. Let me again emphasize subtlety. Reverb adds just a touch of “liveness” to a very dry vocal. You want to pick a reverb sound that is appropriate to the voice and the situation. The better reverb filters base their presets on room geometry, so a “church” preset will sound different than a “small hall” preset. One will have more echo than the other, based on the simulated times that it would take for audio to bounce off of a wall in a room of that size.

Reverbs are pretty straightforward, but the other two may not be. De-essers are designed to reduce the sibilance in a voice. Essentially a de-esser acts as a multi-band EQ/compressor that deals with the frequency ranges of sibilant sounds, like the letter “s”. An exciter works by increasing the harmonic overtones present in all audio. Sometimes these two may be complementary and at other times they will conflict. An exciter will help to brighten the sound and add a feeling of openness, while the de-esser will reduce natural and added sibilance.

The exact mixture of EQ, compression and effects becomes the combination that will help you make a better vocal track, as well as give a signature sound to your mixes.


7. Sound design

Let’s not forget sound effects. Among the many GBs of data installed with Final Cut Studio are tons of sound effects. Soundbooth includes an online link to Adobe’s Resource Central. Here you can audition and download a wealth of SFX right inside the Soundbooth interface. Targeted use of sound effects for ambience or punctuation can add an interesting element to your project.

In a recent spot that I cut, all the visuals were based on the scenario of a surfer at the beach. This was filmed MOS, so the spot’s audio consisted of voice-over and music. To spruce up the mix, it was a simple matter of using the Soundtrack Pro media browser to search for beach, wave and seagull SFX – all content that’s part of the stock Final Cut Studio installation. Soundtrack Pro makes it easy to search, import and mix, all within the same interface.

Being a better editor means paying attention to sound as well as picture. The beauty of all of these software suites is that you have many more audio tools at your disposal than a decade ago. Don’t be afraid to use them!

© 2009 Oliver Peters

Canon EOS 5D Mark II in the real world


A case study on dealing with Canon 5D Mk2 footage on actual productions.

You could say that it started with Panasonic and Nikon, but it wasn’t until professional photographer Vincent Laforet posted his ground-breaking short film Reverie that the idea of shooting video with a DSLR (digital single lens reflex) camera caught everyone’s imagination. The concept of shooting high definition video with a relatively simple digital still camera was enough to prompt Red Digital Cinema Camera Company to announce the dawn of the DSMC (digital still and motion camera) and to retool the concepts for its much anticipated Scarlet.

The Scarlet has yet to be released, but nevertheless, people have been busy shooting various projects with the Canon EOS 5D Mark II like the one used by Laforet. Check out these projects by directors of photography Philip Bloom and Art Adams. To meet the demand, companies like Red Rock Micro and Zacuto have been busy manufacturing a number of accessories designed specifically for the Canon 5D in order to make it a friendlier rig for the operator shooting moving video.


Frame from Reverie

Why use a still camera for video?

The HOW and WHY are pretty simple. Digital camera technology has advanced to the point that full-frame-rate video is possible using the miniaturized circuitry of a digital still photography camera. Nearly all DSLRs provide real-time video feedback to the LCD display on the back of the camera. Canon was able to use this concept to record the “live view” signal as a file to its memory card. The 5Dmk2 uses a large “full frame 35mm” 21.1 MP sensor, which is bigger than the RED One’s sensor or a 35mm motion picture film frame. Raw or JPEG stills captured with the camera are 5616×3744 pixels in a 3:2 aspect ratio. The video view used for the live display is a downsampled image from the same sensor, which is recorded as a 1920×1080 high-def file. This is a compressed file (H264 codec) at a data rate of about 40Mbps. 16:9 is wider than 3:2, so the file for the moving image is cropped on the top and bottom compared with a comparable still photo.
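
The framing math is easy to verify, using only the numbers above:

```python
still_w, still_h = 5616, 3744
print(still_w / still_h)         # 1.5  -> the stills are 3:2
print(1920 / 1080)               # 1.78 -> the video is 16:9

crop_h = still_w * 9 / 16        # height of a 16:9 slice of the full width
print(crop_h, still_h - crop_h)  # 3159 px kept, ~585 px trimmed top+bottom
```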

The true beauty of the camera is its versatility. A photographer can shoot both still images and motion video with the same camera and at the same settings. When JPEG images are recorded, then the same colorimetry, exposure and balance will be applied to both. Alternatively, one could opt for camera raw stills, in which case the photos can still be adjusted with great latitude after the fact, since this data would not be “baked in” as it is with the video. Stills from the camera use the full resolution of this large sensor, so photographs from the Canon 5D are much better than any stills extracted from an HD camera, including the RED One.


Frame from Reverie

Videographers have long used various film lens adapters to gain the lens selection and shallow depth-of-field advantages enjoyed by film DPs. The Canon 5D gives them the advantage of a wide range of glass that many may already own. The camera has a relatively small footprint compared to the typical video and film camera – even with added accessories – so it becomes a very interesting option in run-and-gun situations, like documentaries. Last but not least, the camera body (no lenses) costs under $3K. So, compared with a Sony EX3 or a RED One, the 5Dmk2 starts to look even more attractive to low-budget filmmakers.

What you lose in the deal

As always, there are some trade-offs and the Canon EOS 5D Mark II is no exception. The first issue is recording time. The Canon 5D uses CF (CompactFlash) memory cards. These are formatted as FAT32 and have a 4GB file limit. Due to this limit, the maximum clip length for a single file recorded by the 5Dmk2 is about 12 minutes. Unlike P2 or EX, there is no provision for file spanning. The second issue is that the camera records at a true 30fps – not a video-friendly 29.97 and not the highly desirable film rate of 23.98 or 24fps.
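
That 12-minute figure follows almost directly from the file limit and data rate. This is a rough upper bound; the real ceiling is lower once audio and container overhead are added and the bitrate varies with content:

```python
limit_bits = 4 * 1024**3 * 8   # the FAT32 4GB file limit, in bits
video_bps = 40_000_000         # ~40Mbps H264 from the camera

seconds = limit_bits / video_bps
print(seconds / 60)            # ~14.3 min theoretical -> ~12 min in practice
```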

Audio is considered passable, but for serious projects, double-system, film-style sound is recommended. This workflow would be the same as if you were shooting on film. Traditional slates and/or software like PluralEyes (Singular Software) or FCPauxTC Reader (VideoToolshed) make post syncing picture and sound a lot easier.


Example of the rolling shutter effects used for interesting results

One major limitation cited by many is the rolling shutter that causes the so-called “jello” effect. The Canon 5D uses a single CMOS sensor and nearly all CMOS cameras have the same problem to some degree. This includes the RED One. This image artifact arises because the sensor is not globally exposed at a single point in time, the way a frame of 35mm film is. Instead, portions of the sensor are sequentially exposed. This means that fast motion of an image or the camera translates into the image appearing to wobble or skew. In the worst case, the object in the frame takes on a certain rubbery quality – hence the “jello” name. It can also show up with strobes and flashes. For example, I’ve seen it on strobe light and gunshot footage from a Sony EX3. In this case, the rolling shutter caused half of the frame to be exposed and the other half to be dark.

Skew or wobble becomes most obvious when there are distinct vertical lines within the frame, such as a lamp post or the edge of some furniture. Fast panning motion of the camera or subject can cause it, but it’s also quite visible in just the normal shakiness of handheld shots. Look at many of the short films on the web and you’ll notice the camera is almost always stationary, tripod-mounted or moving very slowly. In addition, lens stabilization circuitry can also exacerbate the appearance of these artifacts. Yet, in other instances, it helps reduce the severity.


Note the skew on the passing subway cars

High-end CMOS cameras are engineered in ways that the effect is less noticeable, except in extreme circumstances. On the other hand, the Canon 5D competitor – the Nikon D90 – gained a bit of a reputation specifically for this artifact. To combat this issue, The Foundry recently announced RollingShutter, an After Effects and Nuke plug-in designed to tackle these image distortion problems.

Don’t let this all scare you away, though. Even a camera that is more subject to the phenomenon will turn out great images when the subject is organic in nature and care is taken with the camera movement. Check out some of the blog posts, like those from Stu Maschwitz, about these issues.


Frame from My Room video

But, how do you post it?

Like my RED blog post, I’ve given you a rather long-winded intro, so let’s take a look at a real-life project I recently posted that was shot using the Canon EOS 5D Mark II. Toby Phillips is a renowned international director, director of photography and Steadicam operator with tons of credits on commercials, music videos and feature films. I’ve worked with him on numerous spots where his medium of choice is 35mm film. Toby is also an avid photographer and Canon owner (including a 5D Mark II). We recently had a chance to use his 5Dmk2 for a good cause – a pro bono fundraiser for My Room, an Australian charity that assists the Children’s Cancer Centre at the Royal Children’s Hospital in Melbourne. Toby needed to shoot his scenes with minimal fuss in the ward. This became an ideal situation in which to test the capabilities of the Canon and to see how the concept translated into a finished piece in the real world.


Frame from My Room video

Toby has a definite shooting style. It typically involves keeping the camera in motion and pulling focus to just hit a point that’s optimally in focus at the sweet spot of the camera move. That made this project a good test bed for the Canon 5D in production. Lighting was good and the images had a warm and appealing quality. The footage generally turned out well, but Toby did express to me that shooting in this style – and shooting handheld without any of the Red Rock or Zacuto accessories or a focus puller – was tough to do. Remember that still camera lenses are not mechanically engineered like a motion picture lens. Focus and zoom ranges are meant to be set and left, not smoothly adjusted during the exposure time.


Posting footage from the 5Dmk2 is relatively easy, but you have to take the right steps, depending on what you want to end up with. The movie files recorded by the camera are QuickTime files using the H264 codec, so any Mac or PC QuickTime-compatible application can deal with the files. They are a true 30fps, so you can choose to work natively in 30fps (FCP) or first convert them to 29.97fps (for FCP or Avid). That speed change is minor, so there are no significant sync or pitch issues with the onboard audio. If you opt to edit with Media Composer, simply import the camera movies into a 29.97 project, using the RGB import settings, and the result will be standard Avid media files. The camera shoots in progressive scan, so footage converted to 29.97 looks like footage shot with any video camera in a 30p mode.
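
To put a number on how minor that speed change is (assuming pitch scales with playback speed):

```python
import math

speed = 29.97 / 30               # 0.999 -> a 0.1% slowdown
print(1200 * math.log2(speed))   # ~ -1.7 cents of pitch shift, far below audibility
```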

Canon 5D and Final Cut Pro

I edited the My Room project in Final Cut. Although I could have cut these natively (H264 at 30fps), I decided to first convert the files out of H264 for a smoother edit. I received the raw footage on a FireWire drive containing the clips copied from the CF cards. This included 150 motion clips for a total of about one hour of footage (18GB). The finished video would use a mixture of motion footage and moves on stills, so I also received another 152 stills from the 5Dmk2 plus 242 stills from a Canon G10 still camera.

Step one was file conversion to ProRes at 1920×1080. Apple Compressor on a MacBook Pro took under five hours for this step. Going to ProRes increased the storage needs from 18GB to 68GB.

Step two was frame rate conversion. The target audience is in Australia, so we decided to alter the speed to 25fps. This gives all shots a slight slomo quality as if the footage was shot in an overcranked setting. The 5Dmk2 by itself isn’t capable of variable frame rates or off-speed shooting, so any speed changes have to be handled in post. Although a frame rate change is possible in the Compressor setting (step 1), I opted to do it in Cinema Tools using the conform function. When you conform a file in Cinema Tools, you are altering the metadata information of that file. This tells a QuickTime-compatible application to play the file at a specific speed, such as 25fps instead of 30fps. I could also have used this to conform the rate to 29.97 or 23.98. Because only the metadata was changed, the time needed to conform a batch of 150 clips was nearly instantaneous.
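
In numbers, the conform works out like this – note the same ~83% speed figure that comes up in the audio fix below:

```python
frames = 300                    # e.g. a 10-second clip recorded at 30fps
shot_fps, conform_fps = 30, 25

print(frames / shot_fps)        # 10.0 s as recorded
print(frames / conform_fps)     # 12.0 s after the conform - same frames
print(conform_fps / shot_fps)   # 0.833 -> everything plays at ~83% speed
```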

Step three – pitch. Changing the frame rate through conform slows the clips, but it also affects the sync sound by making it slower and lowering the pitch. Our video was cut to a music track so that was no big deal; however, we did have one sync dialogue line. I decided to fix just the one line by using Soundtrack Pro. I went back to the original 30fps camera file and used STP’s TimeStretch. This let me adjust the sync speed (approximately 83% of the original) to 25fps, yet maintain the proper pitch.

Step four – stills. I didn’t want to deal with the stills in their full size within FCP. This would have been incredibly taxing on the system and generally overkill, even for an HD job. I created Photoshop actions to automate the conversion of the stills. The 152 5Dmk2 JPEG stills were converted from 5616×3744 to 3500×2333. The stills from the G10 come in a 4:3 aspect ratio (4416×3312) and were intended to be used as black-and-white portrait shots. Another Photoshop action made quick work of downsampling these to 3000×2250 and also converting them to black-and-white. Photoshop CS4 has a nice black-and-white adjustment tool, which generates slightly more pleasing results than a simple desaturation. These images were further cropped to 16:9 inside FCP during the edit.
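
A scripted equivalent of those Photoshop actions is easy to sketch. Here is a hypothetical version using Python’s Pillow library – the folder names are invented, and a plain grayscale convert is cruder than Photoshop’s black-and-white adjustment tool:

```python
from pathlib import Path
from PIL import Image

src, dst = Path("g10_stills"), Path("g10_bw")      # hypothetical folders
dst.mkdir(exist_ok=True)

for jpg in src.glob("*.jpg"):
    img = Image.open(jpg)
    img = img.resize((3000, 2250), Image.LANCZOS)  # 4416x3312 -> 3000x2250
    img.convert("L").save(dst / jpg.name)          # simple desaturation
```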


Frame from My Room video


Once I had completed these conversions, the edit was pretty straightforward. The project was like any other PAL-based HD job (1920×1080, 25fps, ProRes). The Canon 5D creates files that are actually easier for an editor to deal with than RED, P2 or EX files. Naming follows the same convention most DSLRs use for stills, with file names such as MVI_0240.mov. There is no in-camera SMPTE timecode and all imported clips start from zero. File organization over a larger project would require a definite process, but on the other hand, you aren’t fighting something being done for you by the camera! There are no cryptic file names and copying the files from the card to other storage is as simple as any other QuickTime file. There is also no P2-style folder hierarchy to maintain, since the media is not MXF-based.

Singular Software and Glue Tools are both developing FCP-related add-ons to deal with native camera files from the Canon 5D. Singular offers an Easy Set-up for the camera files, whereas Glue Tools has announced a Log and Transfer plug-in. The latter will take the metadata from the file and apply the memory card ID number as a reel name. It uses the camera’s time-of-day stamp as a timecode starting point and interpolates clip timecode for the file. Thus, all clips in a 24-hour period would have a unique SMPTE timecode value, as long as they are imported using Log and Transfer.


Frame from My Room video

My final FCP sequence was graded in Apple Color. Not really because I had to, but rather to see how the footage would react. Canon positioned the 5Dmk2 in that niche between the high-end amateur and the entry level professional photographer, so it tends to have more automatic control than most pros would like. In fact, a recent firmware update added back some manual exposure control. In general, the camera tends to make good-looking images with rich saturation and contrast. Not necessarily ideal for grading, but Stu at ProLost offers this advice. Nevertheless, I really didn’t have any shots that presented major problems – especially given the nature of this shoot, which was closer to a documentary than a commercial shoot. I could have easily graded this with my standard “witches brew” of FCP plug-ins, but the roundtrip through Color was flawless.

As a first time out with the Canon EOS 5D Mark II, I think the results were pretty successful (click here to view). I certainly didn’t see any major compression artifacts to speak of and although the footage wasn’t immune from the “jello” effect, I don’t think it got in the way of the emotion we were trying to convey. A filmmaker who was serious about using this as the principal camera on a project could certainly deliver results on par with far more expensive HD cameras. To do that successfully, a) they would need to invest in some of the rigs and accessories needed to utilize the camera in a motion picture environment; and b) they would need to shoot carefully and adhere to set-ups that steer away from some of the problems.


What about 24fps?

25fps worked for us, but until Canon adds 24fps to the 5Dmk2 or a successor, filmmakers will continue to clamor for ways to get 24p footage out of the camera. Philip Bloom and others have posted innovative post “recipes” to achieve this.

I tested one of these solutions on my cut and was amazed at the results. If I needed to maintain sync dialogue on a project, yet wanted the “film look” of 24fps, this is the method I would use. It’s based on Bloom’s blog post (watch his tutorial video). Here are the steps if you are cutting with Final Cut Pro:

1. Edit your video at the native 30fps camera speed.
(Write down the accurate sequence duration in FCP.)

2. Export a self-contained QuickTime file.

3. Conform that exported file to 23.98fps in Cinema Tools.
(This will result in a longer, slowed down file.)

4. Bring the file into Compressor and create and apply a setting to convert the file, but leave the target frame rate at 23.98fps (or same as current file).

5. Click the applied setting to modify it in the Inspector window.

6. Enable Frame Controls and change the duration from “100% of source” to a new duration. Enter the exact original duration of the 30fps sequence (step 1). (Best results are achieved – but with the longest render times – when Rate Conversion is set to “Best – high quality motion compensated”.)

7. Import the converted file into FCP and edit it to a 23.98 fps timeline. This should match perfectly to a mixed version of the audio from the original 30fps sequence.

I was able to achieve a perfect conversion from 30fps to 23.98fps using these steps. There were no obvious optical flow artifacts or frame blending. This utilizes Compressor’s standards conversion technology, so even edited cuts in the self-contained file stayed clean without blending. Of course, your mileage may vary.

The edited video segment was 1:44 at 30fps and 2:10 at the slower 23.98fps rate. The retiming conversion necessary to get back to a 1:44-long 23.98 file took two hours on my MacBook Pro. This would be time-prohibitive if you wanted to process all of the raw footage first. Using it only on an edited piece definitely takes away the pain and leaves you with excellent results.
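
Those durations check out against the frame math:

```python
cut_30 = 104                    # the 1:44 sequence at 30fps
frames = cut_30 * 30            # 3120 frames in the edited master

print(frames / (24000 / 1001))  # ~130.1 s -> the 2:10 conformed file
# Compressor's retime then synthesizes new 23.976fps frames to land back
# on the original 104 s, which is why sync with the 30fps mix holds.
```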

Cameras like the Canon EOS 5D Mark II are just the beginning of this DSMC journey. I don’t think Canon realized what they had until the buzz started. I’m sure you’ll soon see more of these cameras from Canon and Nikon, not to mention Panasonic and even Sony, too. Once RED finally starts shipping Scarlet, it will be interesting to see whether this concept really has legs. In any case, from an editor’s perspective, these formats aren’t your tape of old, but they also shouldn’t be feared.

©2009 Oliver Peters

Reliving the Zoetrope tradition – Walter Murch and Tetro


Age can sometimes be an impediment to inspired filmmaking, but Francis Ford Coppola, who recently turned 70, has tackled his latest endeavor with the enthusiasm and creativity of a young film school graduate. The film Tetro opened June 11th in New York and Los Angeles and will enter wider distribution in the weeks that follow. Coppola set up camp in a two-story house in Buenos Aires and much of the film was produced in Argentina. This house became the film’s headquarters for production and post in the same approach to filmmaking that the famed director adopted on Youth Without Youth (2007) in Romania.




Tetro is Francis Ford Coppola’s first original screenplay since The Conversation (1974) and is very loosely based on the dynamics within his own family. It is not intended to be autobiographical, but explores classic themes of sibling rivalry, as well as the competition between father and son. Coppola’s own father, Carmine (who died in 1991), was a respected musician and composer who also scored a number of his son’s films. One key figure in Tetro is the family patriarch Carlo (Klaus Maria Brandauer), an acclaimed symphony conductor, who moved as a young music student from the family home in Argentina to Berlin and then to New York. Carlo’s younger son Bennie (Alden Ehrenreich) decides to head back to Buenos Aires in search of his older brother, the brooding poet Tetro (Vincent Gallo) – only to discover a different person than he’d expected.




Coppola put together a team of talented Argentine actors and crew, but also brought back key collaborators from his previous films, including Mihai Malaimare, Jr. (director of photography), Osvaldo Golijov (composer) and Walter Murch (editor and re-recording mixer). I caught up with Walter Murch via phone in London, where he spoke at the 1st Annual London Final Cut Pro User Group SuperMeet.


Embracing the American Zoetrope tradition


Tetro has a definite style and vision that sets it apart from current studio fare. According to Walter Murch, “Francis funded Tetro in the same fashion as his previous film Youth Without Youth. He has personal money in it from his Napa Valley winery, as well as that of a few other investors. This lets him make the film the way he wants to, without studio interference. Francis’s directing style is process-oriented – he likes to let the film evolve during the production – to make serendipitous discoveries based on the actors, the sets, the atmosphere of a new city. Many directors work this way, but Francis embraces it more than any other. In Coppola’s own words: ‘The director is the ringmaster of a circus that is inventing itself.’ I think that’s why, at age 69, he was enthusiastic about jumping into a country that was new to him and working with talented young local filmmakers.”




This filmmaking approach is reminiscent of Coppola’s original concept for American Zoetrope Studios. There Coppola pioneered early concepts in electronic filmmaking, hallmarked by the “Silverfish”, an Airstream trailer that provided on-set audio and editing support. Murch continued, “Ideally everything needed to make a Zoetrope film on location should be able to be loaded into two vans. The Buenos Aires building that was our base of operations reminded me of the Zoetrope building in San Francisco 40 years ago. The central idea was to break down the separation between tasks and to be as efficient and collaborative as possible. In other words, to operate more like a film-school crew. Zoetrope also has always embraced new technology – the classic ‘early adopter’ profile. Our crew in Buenos Aires was full of young, enthusiastic local film technicians and artists and on a number of occasions, rounding a corner, I felt like I was bumping into a 40-year-younger version of myself.”


A distinctive visual style


Initial Tetro reviews have commented on the striking visual style of the film. All modern day scenes are in 2.35 wide-screen black-and-white, while flashbacks appear in more classically-formatted 1.77 color. This is Coppola’s second digital film and it followed a similar workflow to that used on Youth Without Youth, shooting with two of the director’s own Sony F900 CineAlta HD cameras. As in the earlier film, the signals from both F900s were recorded onto one Sony SRW field recorder in the HDCAM-SR format. This deck recorded two simultaneous 4:2:2 video streams onto a single tape, which functioned as the “digital negative” for both the A and B cameras.


Simultaneously, another backup recording was made in the slightly more compressed 3:1:1 HDCAM format, using the onboard recorders of the Sony cameras. These HDCAM tapes provided safety backup as well as the working copies to be used for ingest by the editorial team. The HDCAM-SR masters, on the other hand, were set aside until the final assembly at the film’s digital intermediate finish at Deluxe.




Did the fact that this was a largely black-and-white film impact Murch’s editing style? “Not as much as I would have thought,” Murch replied. “The footage was already desaturated before I started cutting, so I was always looking at black-and-white material. However, a few times when I’d match-frame a shot, the color version of the source media would pop up and then that was quite a shock! But the collision between color and black-and-white ultimately provoked the decision to frame the color material with black borders and in a different ‘squarer’ aspect ratio – 1.77 vs. 2.35.”




Changes in the approach


Walter Murch continued to describe the post workflow, “It was similar to our methods in Romania on Youth Without Youth, although with a couple of major differences. Tetro was assembled and screened in 720p ProRes, instead of DV. We had done a ‘bake-off’ of different codecs to see which looked the best for screening without impacting the system’s responsiveness. We compared DVCPRO HD 720 and 1080 with ProRes 720 and 1080, as well as the HQ versions of ProRes. Since I was cutting on Final Cut Pro, we felt naturally drawn to the advantages of ProRes, and as it turned out for our purposes, the 720 version of ProRes seemed to give us the best quality balanced against rendering time. My cutting room also doubled as the screening room and, as we were using the Sim2 digital projector, I had the luxury of being able to cut and look at a 20-foot wide screen as I did so. Another change for me was that my son [Walter Slater Murch] was my first assistant editor. Sean Cullen, my assistant since 2000, was in Paris cutting a film for the first time as the primary editor. Ezequiel Borovinsky and Juan-Pablo Menchon from Buenos Aires rounded out the editorial department as second assistant and apprentice respectively.”


The RED camera has had all the buzz of late, so I asked Murch if Coppola had considered shooting the film with RED, instead of his own Sonys. Murch replied, “Francis is very happy with the look of his cameras, and of course, he owns them, so there’s also a budget consideration. Mihai [Malaimare, DP] brought in a RED for a few days when we needed to shoot with three cameras. The RED material integrated well with the Sony footage, but there is a significantly different workflow, because the RED is a tapeless camera. In the end, I would recommend shooting with one camera or the other if possible. A production shouldn’t mix up workflows unnecessarily.”




Walter Murch discusses future technology


It’s hard to talk film with Walter Murch and not discuss trends, philosophy and technology. He’s been closely associated with a number of digital advances, so I wondered if he saw a competitor coming to challenge either Avid Media Composer or Apple Final Cut Pro for film editing. “It’s hard to see into the future more than about three years,” he answered. “Avid is an excellent system and studios and rental operations have capital investment in equipment, so for the foreseeable future, I think Avid and Final Cut will continue to be the two primary editing tools. Four years from now, who knows? I see more possibility for sooner changes in the area of sound editing and mixing. I’ve done some promising work with Apple’s Soundtrack Pro. The Nuendo-Euphonix combination is also very interesting; but, for Tetro it seemed best to stay compatible with what the sound team was familiar using. Also, [fellow re-recording mixer] Pete Horner and I mixed on the ICON and that’s designed to work with Pro Tools.”


Murch continued, “I’d really like to see some changes in how timelines are handled. I’ve used a Filemaker database for all of my notes now for more than twenty years, starting back when I was still cutting on film. I tweak the database a bit with each film as the needs change. Tetro was the first film where I was able to get the script supervisor – Anahid Nazarian in this case – to also use Filemaker. That was great, because all of the script and camera notes were incorporated into the same Filemaker database from the beginning. Thinking into the future, I’d love to see the Filemaker workshare approach applied to Final Cut Pro. If that were the case, the whole team – picture and sound editors and visual effects – could have access to the same sequence simultaneously. If I was working in one area of the timeline, for example, I could put a ‘virtual dike’ around the section I was editing. The others would not be able to access it for changes, but would see its status prior to my current changes. Once I was done and removed the ‘dike’ the changes would ripple through, the timeline would be updated and everyone could see and work with the new version.”


Stereoscopic 3D is all the rage now, but you may not know that Walter Murch also worked on one of the iconic 3D short films, Captain Eo, starring Michael Jackson. Francis Ford Coppola directed Eo for the Disney theme parks in 1986. It’s too early to tell whether the latest 3D trend will be sustained, but Murch offered his take. “3D certainly has excellent box office numbers right now, but there is still a fundamental perceptual problem with it: Through millions of years of evolution, our brains have been wired so that when we look at an object, the point where our eyes converge and where they focus is one and the same. But with 3D film we have to converge our eyes at the point of the illusion (say five feet in front of us) and simultaneously focus at the plane of the screen (perhaps sixty feet away). We can do this, obviously, but doing it continuously for two hours is one of the reasons why we get headaches watching 3D. If we can somehow solve this problem and if filmmakers use 3D in interesting ways that advance the story – and not just as gimmicks – then I think 3D has a very promising long-term future.”


Written for Videography magazine (NewBay Media, LLC)


© 2009 Oliver Peters

FxFactory adds diversity to your toolkit


For the past few posts I’ve been looking at a number of new plug-ins and applications designed to augment an editor’s toolset. I’m going to round off this “Plug-in Summer” with a fresh look at FxFactory. Noise Industries was one of the first developers to leverage the power of Apple’s Core Image technology for real-time filter application – first with Factory Tools for Avid (AVX) and then FxFactory for Apple’s FxPlug architecture. They found the most success with FCP editors and have focused primarily on FxFactory, but current versions of Factory Tools can still be purchased for Avid systems.




FxFactory operates with the three primary FxPlug hosts (Final Cut Express, Final Cut Pro and Motion), as well as Adobe After Effects CS3 and CS4. It actually installs as two components – the FxFactory filter management application and a package of plug-ins. The FxFactory application isn’t used to apply filters. Instead, this is where you control license registrations, hide filters you don’t want to use and disable trial versions. It also provides one place to get a quick visual overview and access to user instructions for all the effects. Last but not least, adventurous editors can use this as a portal for Apple’s Quartz Composer in order to develop their own custom plug-ins. That’s a unique part of FxFactory not offered by any other plug-in developer.




Noise Industries has developed their business through a partnership with various plug-in developers, who design specific filters to work with the FxFactory engine. These developers currently include idustrial revolution, yanobox, Boinx Software, SUGARfx, Futurismo Zugakousaku, DVShade and, of course, Noise Industries itself. In its most basic form, FxFactory is a free download. This means that you get the FxFactory application, a few free plug-ins and 15-day trial versions of the other filter packages. This is a great way to get started, because if you only care to buy the yanobox Motype title animation generator or the DVShade color correction EasyLooks filter, then that’s all you have to pay for. If you want a more comprehensive package, then get FxFactory Pro, which includes over 140 filters, generators and transitions, as well as the other trial packages. You also get a free 15-day trial period with the Pro package.




ParticleMetrix example




Boinx example


This partnership arrangement is an interesting aspect of the Noise Industries approach. Most plug-in vendors develop their filters with an in-house programming staff, resulting in a similar style and focus to the plug-ins that are developed. Since FxFactory plug-ins come from a variety of different programmers – each with a different vision of what they’d like to create – the total sum of filters provides more diverse choices than the competition. For instance, there are lots of glow filters on the market, but I’ve rarely seen anything as organic as idustrial revolution’s Volumetrix 2 package. FxFactory didn’t include particle effects until idustrial revolution came out with ParticleMetrix and Boinx Software was added as a partner. Now there are two of the most gorgeous particle packages under the same umbrella.




Much of this expansion has happened in the past year, giving you a lot to choose from in 2009. For instance, Final Cut Pro 7 will introduce alpha transitions, but idustrial revolution has been there for at least a year or more with SupaWipe. The new Final Cut Studio package will drop LiveType, so if you don’t want to do the effects in Motion 4 (or an older version of LiveType), yanobox Motype is a good alternative. Motype offers a wealth of presets with tons of customization so you can create very graceful title animations, all within a single track and single application of an effects generator. Remember, all of this installs into the Final Cut Studio apps, as well as After Effects, so editors who like to do their heavy lifting in After Effects can maintain filter compatibility.




It’s hard to cover the whole breadth of what’s possible with these effects in one single post. A relative newcomer is DVShade, whose EasyLooks provides FxFactory with a color corrector. This filter is deceptively simple, because it shows up as a single filter in the palette. Nevertheless, it includes a slider-based 3-way corrector, diffusion, gradient and vignette tools and a ton of preset looks. Unlike other 3-ways, target colors selected for the low/gamma/high color wells are used to tint those color ranges in an additive or subtractive fashion. This approach yields some interesting results. Like all the Noise Industries filters, if you are confused about its use, simply click on the logo at the top of the filter control pane to launch a PDF help guide. In the past year, Noise Industries has added a number of video tutorials to its website to further improve the customer experience.




As you look through the many options for filters, generators and transitions, it’s hard to decide which product is best if you assume you can only purchase one package. Noise Industries offers some diverse and powerful options, but remember that it’s not “all or nothing”. Many companies are breaking down their comprehensive packages into smaller sets of filters. That’s great for the user – allowing you to get color correction filters from Company A, titling tools from Company B, keyers from Company C and so on. It’s a model that Noise Industries helped to start and one that lets users customize their ideal working environment.


©2009 Oliver Peters

A little mocha in your video?


Tracking is the key to believable visual effects and one of the leaders in this technology is Imagineer Systems. Mocha, a 2D standalone tracker, is one of their better known products. If you purchased one of Adobe’s Creative Suite 4 bundles that included After Effects, then you already own mocha for After Effects, whether you know it or not. This year Imagineer released mocha for Final Cut Pro, bringing the same tracking power to FCP editors.


Many software packages already include tracking technology. Avid Media Composer, Apple Motion and Adobe After Effects all include built-in trackers. So, why buy another? All of these trackers are “point” trackers. You isolate one or more obvious targets on an image and position a tracker over it. Usually this is an area of a few pixels with a high contrast difference, like a clear logo or sign in the frame that moves with the object you are tracking. As the object moves through the frame, the tracker hopefully stays “locked” onto this target area while the software does an analysis pass of the video clip. If the tracked point moves off screen or out of focus, most point trackers will have trouble following and new tracker targets have to be picked where the first track leaves off. Often tracks have to be manually adjusted.




The information generated by tracking results in keyframe data that can be applied to stabilize shots, to corner pin objects or to drive moving masks for filters and other effects. More accurate tracking is achieved by adding more point trackers. Corner pinning – used to replace one logo with another – generally requires four trackers. Mocha differs from these other tracking systems because it is a planar tracker. Instead of tracking isolated points, you draw a spline shape around an object and mocha will analyze all the pixels within that shape. This results in a more accurate track, even when part of the tracked area moves off screen, goes out of focus or when a foreground object briefly cuts across part of the tracked area.
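
Four trackers are the magic number for corner pinning because a planar perspective transform has eight unknowns, and each tracked point supplies two equations. A minimal numpy sketch of that textbook math (not mocha’s internals):

```python
import numpy as np

def corner_pin(src, dst):
    """Solve the 3x3 perspective transform mapping 4 src corners to 4 dst corners."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

logo = [(0, 0), (640, 0), (640, 360), (0, 360)]         # logo corners
sign = [(212, 88), (855, 120), (848, 470), (205, 430)]  # tracked sign corners
H = corner_pin(logo, sign)                              # pins the logo onto the sign
```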




Mocha for After Effects and for Final Cut are not true plug-ins, but are separate applications. The difference between them is the export module. To work in either version, simply import the clip to be tracked. At this point you are in mocha, which is basically the same as the full-blown, standalone version. Once you have completed the track, made adjustments and are satisfied with the results, you are ready to export the data. This last stage is where the plug-in versions differ. Both generate either basic motion information (translate, scale, rotate) or distort (corner pinning) values. The After Effects version generates text files that can be copied-and-pasted into AE as keyframes. The Final Cut version exports XML files that can be imported into FCP.


The data can also be inverted during the export. For example, if you are using the tracking data to stabilize an image, you’ll want to invert it, so that the image itself is stable and stationary while the frame around it appears to move. If you intend to use this tracking data in Apple Motion, you first have to import the XML into FCP and then use “Send to a Motion project.”
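
To illustrate the inversion, here is a small Python sketch assuming a simple translate/scale/rotate model: inverting each keyframe yields the counter-move that holds the subject still. The keyframe values are hypothetical and this is not mocha’s actual export format.

```python
# Sketch of inverting tracked motion for stabilization. Applying the
# inverse of each keyframe cancels the camera move for the subject.
import math

# Hypothetical per-frame track data (translate x/y, scale, rotation in deg).
track = [
    {"tx": 0.0, "ty": 0.0, "scale": 1.00, "rot": 0.0},
    {"tx": 3.2, "ty": -1.1, "scale": 1.01, "rot": 0.4},
    {"tx": 6.5, "ty": -2.0, "scale": 1.02, "rot": 0.9},
]

def invert(kf):
    # For T(p) = s * R(theta) * p + t, the inverse is
    # T^-1(p) = (1/s) * R(-theta) * p - (1/s) * R(-theta) * t.
    s = 1.0 / kf["scale"]
    a = math.radians(-kf["rot"])
    tx = -(kf["tx"] * math.cos(a) - kf["ty"] * math.sin(a)) * s
    ty = -(kf["tx"] * math.sin(a) + kf["ty"] * math.cos(a)) * s
    return {"tx": tx, "ty": ty, "scale": s, "rot": -kf["rot"]}

stabilized = [invert(kf) for kf in track]   # counter-move keyframes
```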


When you import the XML file, the clip comes in with motion tab data applied to it. Depending on which data you exported, this will be either scale/position/rotation keyframes or distort keyframes for each corner. The keyframe data can be copied-and-pasted (paste attributes) onto a logo or mask. To place a new logo into a shot: cut the clip onto V1, then highlight the clip and copy. Cut the logo onto V2 and paste attributes (which came from the V1 clip). Then remove the attributes from the clip on V1. If you used corner pinning, the distort keyframes are occupied, so you can still adjust scale/position/rotation of the V2 logo for a better fit; if you applied basic motion data instead, you can still adjust the corner positions (distort).




I recently used both versions of mocha (AE and FCP) on a commercial for several shot repairs. The clips were of a large stadium video screen, above which was an LED sign reading “Kansas City”. Unfortunately, on the day of the shoot the panel containing the middle “S” was not completely working. The production couldn’t be held up, so the decision was made to fix it in post. When I first saw the shots, I thought the fix would be a piece of cake inside FCP: simply duplicate the clip so the same clip is on V1 and V2, then offset and crop the V2 clip so that the second “S” overlapped and hid the first. Both clips would be moving in sync and the two letters would match perfectly. So much for theory! These were low-angle Steadicam shots with a continuous right-to-left move, which created enough optical difference between the positions of the two letters to make a simple fix impossible.


The next approach was tracking. Let me point out that mocha is a 2.5D planar tracker. In the real world, it does a good job with objects that stay on the same plane relative to the lens, even through perspective changes, but you won’t be immune from problems created by arcing or trucking 3D camera moves. The polished demos and tutorials are usually done with moving subjects inside static camera shots; rarely are both moving.




Another consideration is film and 3:2 pulldown. These spots were shot on 35mm film and transferred to Digital Betacam. As with most NTSC footage, I had to contend with the whole-frame/split-field-frame cadence of film transfers. Although mocha can track across these split-field frames, the resulting data doesn’t necessarily composite well back in Motion or Final Cut. My solution was to first remove the pulldown in After Effects – one of the best tools for that. Then simply render out a 24fps progressive-frame file.
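
For anyone unfamiliar with the cadence being removed here, this little Python sketch maps four film frames onto ten video fields the way 2:3 pulldown does, which makes the two split-field frames obvious:

```python
# Sketch of the 2:3 pulldown cadence: four film frames (A B C D) are
# spread across five interlaced video frames (ten fields), so two of
# the five video frames mix fields from two different film frames.
film = ["A", "B", "C", "D"]

fields = []
for frame, count in zip(film, [2, 3, 2, 3]):   # A->2 fields, B->3, C->2, D->3
    fields.extend([frame] * count)

video_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
print(video_frames)
# [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
# ('B','C') and ('C','D') are the split-field frames. Pulldown removal
# reverses this mapping, reassembling whole progressive frames at 23.976,
# and re-adding pulldown (as done later for the finished spots) applies it.
```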


( Note: Most video apps express the video-friendly version of 24p as 23.98. That’s a rounded-up value; the true rate is 24000/1001, or roughly 23.976. After Effects uses the 3-digit version, 23.976. Most apps make no distinction and use the same math, but in AE there is a difference between 23.976 and 23.98. So use 23.976 in AE and you’ll still be OK as 23.98 back in FCP. )


Step one done. My clips were now progressive-frame media at 24fps (23.976). Now for the fun part. I pulled a single frame into Photoshop, fixed the “S” and cut out the sign to form the new foreground element, which I would track onto the clip to replace the original sign. To get the most rock-solid lock, I ended up using a whole slew of tracking solutions, including Motion, After Effects and both versions of mocha, and I exported both motion data and corner pinning data to try it both ways.


In all cases, I found the track from mocha to be more precise than After Effects or Motion, but that didn’t always translate into the best compositing results. Corner pinning data can be too precise: the object appears to jitter in the composite because of the minute changes in each corner at each of the many keyframes. On the other hand, motion data results in an object that appears to float too much and doesn’t look as locked as you’d like. As I said, mocha provided a great track, but that doesn’t mean the keyframe values are precisely interpreted in the host compositor.
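
My fix here was to fall back to basic motion data, but a generic remedy for this kind of keyframe jitter, in any host, is to smooth the exported values slightly before applying them. A minimal sketch, with a hypothetical corner track and a window size you would tune by eye:

```python
# Sketch of smoothing jittery corner-pin keyframes with a moving average.
# Values and window size are hypothetical; too large a window and the
# pin starts to float instead of jitter.
def smooth(values, window=5):
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# Hypothetical x positions of one corner across frames:
corner_x = [310.0, 310.6, 309.8, 310.9, 310.1, 310.7]
print(smooth(corner_x))
```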


The last shot gave me the most fits, even though it looked the easiest. The sign was large in the frame and tracking points stayed in frame. Any of the trackers should have done well, but they didn’t. As the camera moved, someone in the foreground crowd was clapping and his hand intersected one of the corners for a few frames. I couldn’t get a good track with any of the point trackers. In this situation mocha shined. The analysis ignored the hand, since the larger spline area covered the entire shape of the sign. Instead of corner pinning, I used basic motion data and composited the shot in FCP.


( Note: After I was done tracking these shots, I stumbled upon this quick tutorial by Mathias Mohl, which combines his After Effects MochaImport script and Red Giant Software’s Warp to better deal with such perspective distortion issues. )


Once I had fixed all of the shots and had new 24p media, I brought these files back into After Effects. There I rendered new 29.97 clips with new 3:2 pulldown, so that the clips could be cleanly cut back into the spots.


Although no single solution provides the silver bullet to fix some of these issues, Imagineer Systems’ mocha goes much farther than the built-in solutions. If tracking is something you need to do often, then mocha for FCP is a pretty cost-effective answer.


© 2009 Oliver Peters

nVeil – the origami of video


If you are looking for a plug-in to give you a unique and different look for striking visual effects, then Storek Studio’s nVeil filter fits the bill. nVeil is an FxPlug filter for Final Cut Pro, Final Cut Express and Motion and provides yet another tool that leverages the power of OpenGL and the FxPlug architecture.




Describing what it does creatively is a bit harder than explaining what happens technically. That’s because the results you can achieve are more like video artwork than simply stylizing clips with various effects filters. In short, nVeil uses scalable vector graphics (SVG files) to slice the image into polygons, which are then rendered using OpenGL and powered by the computer’s GPU. The SVG files act as “veils” (as in a curtain) whose “cells” have portions of the image “projected” onto them. The company has tested nVeil on a range of graphics cards and Macs. I’m on a 15” MacBook Pro with the nVidia GeForce 8600M GT card; it was fine up to 720p projects, but I did receive a render warning when I applied nVeil on a 1080i timeline. Nevertheless, unrendered real-time effects played smoothly on this machine.
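
As a loose illustration of the “cell” idea (not how nVeil is actually implemented, which is GPU-based and per-frame), here is a Python/Pillow sketch that flat-fills hand-coded polygons with color sampled from a still frame, roughly approximating the blurred, stained-glass extreme of the Source Scale slider:

```python
# Sketch of projecting an image into polygon "cells". The file name and
# polygon coordinates are hypothetical stand-ins for shapes that nVeil
# would parse from an SVG veil.
from PIL import Image, ImageDraw

frame = Image.open("frame.png").convert("RGB")   # hypothetical still frame
canvas = Image.new("RGB", frame.size, "black")
draw = ImageDraw.Draw(canvas)

cells = [
    [(0, 0), (200, 40), (160, 220)],
    [(200, 40), (420, 10), (380, 260), (160, 220)],
]

for poly in cells:
    # Sample the source at the cell's centroid and flat-fill the cell,
    # a crude take on the kaleidoscope / stained-glass look.
    cx = int(sum(p[0] for p in poly) / len(poly))
    cy = int(sum(p[1] for p in poly) / len(poly))
    draw.polygon(poly, fill=frame.getpixel((cx, cy)))

canvas.save("veiled.png")
```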




nVeil ships with a library of about 60 SVG files. These can also be created or modified using Adobe Illustrator, so feel free to create your own. The user guide and tutorials on the nVeil website provide concise descriptions about how to generate new vector files. SVG images can include line art as well as text.




In FCP, simply drop the filter onto a clip and select an SVG file from the filter tab. The stock SVG files are installed in Applications / nVeil / SVG Veil Library. You won’t see any effect at first, so adjust Source Scale as a starting point. Sliding the Source Scale slider to one extreme blurs the image, so that your vector graphic is filled with fuzzy colors, much like a kaleidoscope or a stained glass window. Slide it in the opposite direction and the image becomes a series of crisp multiple images, like an insect-eye effect.




From there it’s a matter of adjusting the Source and Veil Transform sliders to get the look you want. Since the nVeil filter is being applied to moving video, the natural changes of objects and color in the video create a vibrant effect.




You can set keyframes for each slider value, so nVeil filters can change over the length of the clip and may be used for interesting transition effects. Furthermore, as with any other FCP or Motion filter, you can stack filters for other effects. For example, place a blur, glow or vignette filter upstream of the nVeil filter and the adjustments are visible inside the segments of the veil graphic.




There are a few key settings that control how the veil and source clip are composited. The Add SVG Bounds toggle (Veil Generation) determines whether the outer shape is a rectangle or the drawn edges of the graphic. With Add SVG Bounds unchecked, a dragon graphic holds the shape of the dragon; with it checked, the dragon graphic appears inside the edges of the rectangular file boundary.




At the bottom of the filter pane is the Background Mode: Pass Through, Projected or Matte. Pass Through leaves the original clip untouched in the background with the veil effect on top. Projected applies Source Transforms, but no veil parsing, to the source clip to create the background. Matte leaves a black background. As yet, there are no provisions to change the matte color or for multi-layer effects. You can’t place a clip with a veil effect on V2 and see a clip on V1 as the background.




Storek’s nVeil is yet another example of how innovative designers have taken the groundwork created by Apple’s FxPlug to give you new tools that can enrich your productions. Check out the site for motion examples of what can be done with nVeil.


© 2009 Oliver Peters