Sitting in the Mix

blog_mix_1

Like most video editors, I wouldn’t call audio mixing my forte, but there are plenty of projects where I end up “playing a mixer on TV”. I’ll be the first to recommend that – budget permitting – you have an experienced audio editor/mixer handle the sound portion of your project. I work with several and they aren’t all equal. Some work best with commercials that grab your attention, while others are better suited to the nuance of long-form projects. But they all have one thing in common: the ears to turn out a great mix.

Unfortunately, there are plenty of situations where you are going to have to do it yourself “in the box”. Generally, these are projects involving basic voice-overs, sound effects and music – typical of most commercials and corporate videos. The good news is that you have all the tools you need at your disposal. I’d like to offer some ideas for the next time the task falls to you.

Most NLEs today have a decent toolset for audio. Sony Vegas Pro is by far the best, because the application started life as a multitrack DAW and still has those tools at its core. Avid Media Composer is much weaker, probably in large part because Avid has put all the audio emphasis on Pro Tools. Most other NLEs fall somewhere in between. If you purchased Apple’s Final Cut Studio or one of the Adobe bundles, then you have excellent audio editing and mixing software in the form of Soundtrack Pro or Soundbooth.

Mixing a commercial track that cuts through the clutter employs all the same elements as creating a winning song. It’s more than simply setting the level of the announcer against the music. Getting the voice to sound right is part of what’s called getting it to “sit right in the mix” – the same concept as getting a singer’s voice or a solo lead instrument to cut through the background music within the overall mix.

blog_mix_2

1. Selection

The most important choice is the proper selection of the vocal talent and the music to be used. Most often you are going to use needledrop music from one of the many CD or online libraries. As you audition music, be mindful of what works with the voice qualities of the announcer. Think of it like the frequency ranges of an instrument. The music selected should have a frequency “hole” that is in the range of the announcer’s voice. The voice functions as an instrument, so a male announcer with a deep bass voice is going to sound better against a track that lets his voice shine. A female voice is going to be higher pitched and often softer, so it may not work with a heavy metal track. Think of the two in tandem and don’t force a square peg into a round hole.

blog_mix_9

Soundtrack Pro, Soundbooth, GarageBand and SmartSound Sonicfire Pro are all options you may use to create your own custom score. One of the useful features in the SmartSound and Soundbooth scores is that you can adjust the intensity of arrangements to better fit under vocals. These two apps each use a different approach, but they both permit the kind of tailoring that isn’t possible with standard needledrop music.

blog_mix_3

2. Comping the VO track

It’s rare that a single read of a voice-over is going to nail the correct inflection for each and every phrase or word. The standard practice is to record multiple takes of the complete spot and also multiple takes of each sentence or phrase. As the editor, don’t settle for one overall “best” read, but edit together a composite track, so each phrase comes through with meaning. At times this will involve making edits within the word – using the front half from one take and the back half from another. Using a pro audio app instead of an NLE will help to make such edits smooth and seamless.

blog_mix_8

3. Pen tools and levels

I personally like to mix with an external fader controller, but there are times when you just have to get in with the pen tool and add specific keyframes to properly adjust levels. For instance, on a recent track, our gravelly-voiced announcer read the word “dreamers”. The inflection was great, but the “ers” portion simply trailed off and was getting buried by the music. This is clearly a case where surgical level correction is needed. Adding specific keyframes to bump up the level of “ers” versus “dream” solved the issue.

blog_mix_4

4. EQ

Equalizers are a good tool to shape the timbre of your talent’s voice. Basic EQs are used to accentuate or reduce the low, middle or high frequencies of the sound. Adding mids and highs can “brighten” a muddy-sounding voice. Adding lows can add some gravity to a standard male announcer. Don’t get carried away. Look through your effects toolset for an EQ that does more than the basics by splitting the frequency range into more than just three bands.

blog_mix_5

5. Dynamics

The two tools used most often to control dynamics are compressors and limiters. These are often combined into a single tool. Most vocals sound better in a commercial mix with some compression, but don’t get carried away. All audio filters are “controlled distortion devices”, as a past chief engineer was fond of saying! Limiters simply stop peaks from exceeding a given level. This is referred to as “brick wall” limiting. A compressor is more appropriate for the spoken voice, but is also the trickiest to handle for the first-time user.

Compressors are adjusted using three main controls: threshold, ratio and gain. Threshold is the level at which gain reduction kicks in. Ratio is the amount of reduction to be applied. A 2:1 ratio means that for every 2dB of level above the threshold setting, the compressor will give you 1dB of output above that threshold. Higher ratios mean more aggressive level reduction. As you get more aggressive, the audible output is lower, so then the gain control is used to bring up the average volume of the compressed signal. Other controls, like attack and release times and knee, determine how quickly the compressor works and how “rounded” or how “harsh” the application of the compression is. Extreme settings of all of these controls can result in the “pumping” effect that is characteristic of over-compression. That’s when the noise floor is quickly made louder in the silent spaces between the announcer’s audio.
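
The threshold/ratio/gain relationship described above boils down to simple dB arithmetic. Here is a minimal sketch of that math (a hypothetical illustration with made-up settings, not any particular plug-in’s algorithm – real compressors also smooth the gain change over the attack, release and knee settings):

```python
# Minimal sketch of downward compressor gain math. Levels are in dBFS.
def compressed_level(input_db, threshold_db=-20.0, ratio=2.0, makeup_db=6.0):
    """Return the output level for a given input level.

    Below the threshold the signal passes unchanged; above it, every
    `ratio` dB of input yields only 1 dB of output above the threshold.
    Makeup gain then raises the overall level back up.
    """
    if input_db <= threshold_db:
        out = input_db
    else:
        out = threshold_db + (input_db - threshold_db) / ratio
    return out + makeup_db

# A -4 dB peak over a -20 dB threshold at 2:1: the 16 dB of overshoot
# becomes 8 dB, i.e. -12 dB, plus 6 dB of makeup gain = -6 dB.
print(compressed_level(-4.0))   # -6.0
```

Quiet passages below the threshold only get the makeup gain, which is exactly why over-aggressive settings raise the noise floor in the gaps between phrases.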

blog_mix_6

6. Effects

The selective use of effects filters is the “secret sauce” that makes a VO sparkle. I’ll judiciously use reverb units, de-essers and exciters. Let me again emphasize subtlety. Reverb adds just a touch of “liveness” to a very dry vocal. You want to pick a reverb sound that is appropriate to the voice and the situation. The better reverb filters base their presets on room geometry, so a “church” preset will sound different than a “small hall” preset. One will have more echo than the other, based on the simulated time it would take for audio to bounce off the walls of a room that size.

Reverbs are pretty straightforward, but the other two may not be. De-essers are designed to reduce the sibilance in a voice. Essentially a de-esser acts as a multi-band EQ/compressor that deals with the frequency ranges of sibilant sounds, like the letter “s”. An exciter works by increasing the harmonic overtones present in all audio. Sometimes these two may be complementary and at other times they will conflict. An exciter will help to brighten the sound and add a feeling of openness, while the de-esser will reduce natural and added sibilance.

The exact mixture of EQ, compression and effects becomes the combination that will help you make a better vocal track, as well as give a signature sound to your mixes.

blog_mix_7

7. Sound design

Let’s not forget sound effects. Among the many gigabytes of data installed with Final Cut Studio are tons of sound effects. Soundbooth includes an online link to Adobe’s Resource Central, where you can audition and download a wealth of SFX right inside the Soundbooth interface. Targeted use of sound effects for ambience or punctuation can add an interesting element to your project.

In a recent spot that I cut, all the visuals were based on the scenario of a surfer at the beach. This was filmed MOS, so the spot’s audio consisted of voice-over and music. To spruce up the mix, it was a simple matter of using the Soundtrack Pro media browser to search for beach, wave and seagull SFX – all content that’s part of the stock Final Cut Studio installation. Soundtrack Pro makes it easy to search, import and mix, all within the same interface.

Being a better editor means paying attention to sound as well as picture. The beauty of all of these software suites is that you have many more audio tools at your disposal than a decade ago. Don’t be afraid to use them!

© 2009 Oliver Peters

Canon EOS 5D Mark II in the real world

blg_canon5d_11

A case study on dealing with Canon 5D Mk2 footage on actual productions.

You could say that it started with Panasonic and Nikon, but it wasn’t until professional photographer Vincent Laforet posted his ground-breaking short film Reverie that the idea of shooting video with a DSLR (digital single lens reflex) camera caught everyone’s imagination. The concept of shooting high definition video with a relatively simple digital still camera was enough for Red Digital Cinema Camera Company to announce the dawn of the DSMC (digital still and motion camera) and push it to retool the concepts for its much anticipated Scarlet.

The Scarlet has yet to be released, but nevertheless, people have been busy shooting various projects with the Canon EOS 5D Mark II like the one used by Laforet. Check out these projects by directors of photography Philip Bloom and Art Adams. To meet the demand, companies like Red Rock Micro and Zacuto have been busy manufacturing a number of accessories designed specifically for the Canon 5D in order to make it a friendlier rig for the operator shooting moving video.

blg_canon5d_3

Frame from Reverie

Why use a still camera for video?

The HOW and WHY are pretty simple. Digital camera technology has advanced to the point that full-frame-rate video is possible using the miniaturized circuitry of a digital still photography camera. Nearly all DSLRs provide real-time video feedback to the LCD display on the back of the camera. Canon was able to use this concept to record the “live view” signal as a file to its memory card. The 5Dmk2 uses a large “full frame 35mm” 21.1 MP sensor, which is bigger than the RED One’s sensor or a 35mm motion picture film frame. Raw or JPEG stills captured with the camera are 5616×3744 pixels in a 3:2 aspect ratio. The video view used for the live display is a downsampled image from the same sensor, which is recorded as a 1920×1080 high-def file. This is a compressed file (H264 codec) at a data rate of about 40Mbps. 16:9 is slightly wider than 3:2, so the file for the moving image is cropped on the top and bottom compared with a comparable still photo.
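
The crop can be checked with quick aspect-ratio arithmetic (a sketch of the geometry only; Canon’s actual downsampling pipeline is not publicly documented):

```python
# Aspect-ratio arithmetic for the 5D Mk II sensor vs. its 16:9 video frame.
still_w, still_h = 5616, 3744          # full-resolution still
print(still_w / still_h)               # 1.5, i.e. 3:2

video_aspect = 16 / 9                  # ≈ 1.78, wider than 3:2
# Filling the full sensor width at 16:9 uses only this much of the height:
used_h = still_w / video_aspect
print(round(used_h))                   # ≈ 3159 of the 3744 lines
```

In other words, roughly 585 lines of the sensor (about 16% of its height) fall outside the 16:9 video frame.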

The true beauty of the camera is its versatility. A photographer can shoot both still images and motion video with the same camera and at the same settings. When JPEG images are recorded, then the same colorimetry, exposure and balance will be applied to both. Alternatively, one could opt for camera raw stills, in which case the photos can still be adjusted with great latitude after the fact, since this data would not be “baked in” as it is with the video. Stills from the camera use the full resolution of this large sensor, so photographs from the Canon 5D are much better than any stills extracted from an HD camera, including the RED One.

blg_canon5d_4

Frame from Reverie

Videographers have long used various film lens adapters to gain the lens selection and shallow depth-of-field advantages enjoyed by film DPs. The Canon 5D gives them the advantage of a wide range of glass that many may already own. The camera has a relatively small footprint compared to the typical video or film camera – even with added accessories – so it becomes a very interesting option in run-and-gun situations, like documentaries. Last but not least, the camera body (no lenses) costs under $3K. So, compared with a Sony EX3 or a RED One, the 5Dmk2 starts to look even more attractive to low-budget filmmakers.

What you lose in the deal

As always, there are some trade-offs and the Canon EOS 5D Mark II is no exception. The first issue is recording time. The Canon 5D uses CF (CompactFlash) memory cards. These are formatted as FAT32 and have a 4GB file limit. Due to this limit, the maximum clip length for a single file recorded by the 5Dmk2 is about 12 minutes. Unlike P2 or EX, there is no provision for file spanning. The second issue is that the camera records at a true 30fps – not a video friendly 29.97 and not the highly desirable film rate of 23.98 or 24fps.
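
The ~12-minute figure follows directly from the 4GB file cap and the camera’s data rate. A back-of-the-envelope check (assuming the ~40Mbps figure quoted earlier; the camera’s actual bitrate varies with scene content and includes audio overhead):

```python
# Rough clip-length ceiling implied by FAT32's 4 GB file limit.
file_limit_bytes = 4 * 1000**3        # 4 GB cap (decimal gigabytes)
bitrate_bps = 40 * 1000**2            # ≈ 40 Mbps H.264 stream
seconds = file_limit_bytes * 8 / bitrate_bps
print(seconds / 60)                   # ≈ 13.3 minutes of recording
```

That theoretical ceiling of roughly 13 minutes lines up with the ~12 minutes seen in practice once audio and container overhead are accounted for.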

Audio is considered passable, but for serious projects, double-system, film-style sound is recommended. This workflow would be the same as if you were shooting on film. Traditional slates and/or software like PluralEyes (Singular Software) or FCPauxTC Reader (VideoToolshed) make post syncing picture and sound a lot easier.

blg_canon5d_1

Example of the rolling shutter effects used for interesting results

One major limitation cited by many is the rolling shutter that causes the so-called “jello” effect. The Canon 5D uses a single CMOS sensor and nearly all CMOS cameras have the same problem to some degree. This includes the RED One. This image artifact arises because the sensor is not globally exposed at the same point in time, like exposing a frame of 35mm film. Instead, portions of the sensor are sequentially exposed. This means that fast motion of an image or the camera translates into the image appearing to wobble or skew. In the worst case, the object in the frame takes on a certain rubbery quality, hence the name the “jello” effect. It can also show up with strobes and flashes. For example, I’ve seen it on strobe light and gunshot footage from a Sony EX3. In this case, the rolling shutter caused half of the frame to be exposed and the other half to be dark.

Skew or wobble becomes most obvious when there are distinct vertical lines within the frame, such as a lamp post or the edge of some furniture. Fast panning motion of the camera or subject can cause it, but it’s also quite visible in just the normal shakiness of handheld shots. If you look at many of the short films on the web, you’ll notice the camera is almost always stationary, tripod-mounted or moving very slowly. In addition, lens stabilization circuitry can exacerbate the appearance of these artifacts. Yet, in other instances, it helps reduce the severity.

blg_canon5d_2

Note the skew on the passing subway cars

High-end CMOS cameras are engineered in ways that the effect is less noticeable, except in extreme circumstances. On the other hand, the Canon 5D competitor – the Nikon D90 – gained a bit of a reputation specifically for this artifact. To combat this issue, The Foundry recently announced RollingShutter, an After Effects and Nuke plug-in designed to tackle these image distortion problems.

Don’t let this all scare you away, though. Even a camera that is more subject to the phenomenon will turn out great images when the subject is organic in nature and care is taken with the camera movement. Check out some of the blog posts, like those from Stu Maschwitz, about these issues.

blg_canon5d_8

Frame from My Room video

But, how do you post it?

Like my RED blog post, I’ve given you a rather long-winded intro, so let’s take a look at a real-life project I recently posted that was shot using the Canon EOS 5D Mark II. Toby Phillips is a renowned international director, director of photography and Steadicam operator with tons of credits on commercials, music videos and feature films. I’ve worked with him on numerous spots where his medium of choice is 35mm film. Toby is also an avid photographer and Canon owner (including a 5D Mark II). We recently had a chance to use his 5Dmk2 for a good cause – a pro bono fundraiser for My Room, an Australian charity that assists the Children’s Cancer Centre at the Royal Children’s Hospital in Melbourne. Toby needed to shoot his scenes with minimal fuss in the ward. This became an ideal situation in which to test the capabilities of the Canon and to see how the concept translated into a finished piece in the real world.

blg_canon5d_5

Frame from My Room video

Toby has a definite shooting style. It typically involves keeping the camera in motion and pulling focus to just hit a point that’s optimally in focus at the sweet spot of the camera move. That made this project a good test bed for the Canon 5D in production. Lighting was good and the images had a warm and appealing quality. The footage generally turned out well, but Toby did express to me that shooting in this style – and shooting handheld without any of the Red Rock or Zacuto accessories or a focus puller – was tough to do. Remember that still camera lenses are not mechanically engineered like a motion picture lens. Focus and zoom ranges are meant to be set and left, not smoothly adjusted during the exposure time.

blg_canon5d_10

Posting footage from the 5Dmk2 is relatively easy, but you have to take the right steps, depending on what you want to end up with. The movie files recorded by the camera are QuickTime files using the H264 codec, so any Mac or PC QuickTime-compatible application can deal with the files. They are a true 30fps, so you can choose to work natively in 30fps (FCP) or first convert them to 29.97fps (for FCP or Avid). That speed change is minor, so there are no significant sync or pitch issues with the onboard audio. If you opt to edit with Media Composer, simply import the camera movies into a 29.97 project using the RGB import settings, and the result will be standard Avid media files. The camera shoots in progressive scan, so footage converted to 29.97 looks like footage shot with any video camera in 30p mode.

Canon 5D and Final Cut Pro

I edited the My Room project in Final Cut. Although I could have cut these natively (H264 at 30fps), I decided to first convert the files out of H264 for a smoother edit. I received the raw footage on a FireWire drive containing the clips copied from the CF cards. This included 150 motion clips for a total of about one hour of footage (18GB). The finished video would use a mixture of motion footage and moves on stills, so I also received another 152 stills from the 5Dmk2 plus 242 stills from a Canon G10 still camera.

Step one was file conversion to ProRes at 1920×1080. Apple Compressor on a MacBook Pro took under five hours for this step. Going to ProRes increased the storage needs from 18GB to 68GB.

Step two was frame rate conversion. The target audience is in Australia, so we decided to alter the speed to 25fps. This gives all shots a slight slomo quality as if the footage was shot in an overcranked setting. The 5Dmk2 by itself isn’t capable of variable frame rates or off-speed shooting, so any speed changes have to be handled in post. Although a frame rate change is possible in the Compressor setting (step 1), I opted to do it in Cinema Tools using the conform function. When you conform a file in Cinema Tools, you are altering the metadata information of that file. This tells a QuickTime-compatible application to play the file at a specific speed, such as 25fps instead of 30fps. I could also have used this to conform the rate to 29.97 or 23.98. Because only the metadata was changed, the time needed to conform a batch of 150 clips was nearly instantaneous.

Step three – pitch. Changing the frame rate through conform slows the clips, but it also affects the sync sound, making it slower and lowering the pitch. Our video was cut to a music track, so that was no big deal; however, we did have one sync dialogue line. I decided to fix just that one line in Soundtrack Pro. I went back to the original 30fps camera file and used STP’s TimeStretch. This let me adjust the sync speed (approximately 83% of the original) to 25fps, yet maintain the proper pitch.
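
The speed and pitch numbers behind the 30 → 25 conform are simple ratios (this sketches the math only, not Cinema Tools itself, which merely rewrites the file’s playback-rate metadata):

```python
import math

# Playback ratios for a 30 fps clip conformed to 25 fps.
source_fps, target_fps = 30.0, 25.0
speed = target_fps / source_fps        # playback speed factor
print(speed)                           # ≈ 0.833 — the "approximately 83%" above
print(1 / speed)                       # clips run ~1.2× longer

# Audio conformed along with the picture drops in pitch by the same
# ratio; expressed in musical semitones:
semitones = 12 * math.log2(speed)
print(round(semitones, 2))             # ≈ -3.16 semitones lower
```

A pitch drop of more than three semitones is very audible on dialogue, which is why the one sync line needed TimeStretch rather than a plain conform.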

Step four – stills. I didn’t want to deal with the stills in their full size within FCP. This would have been incredibly taxing on the system and generally overkill, even for an HD job. I created Photoshop actions to automate the conversion of the stills. The 152 5Dmk2 JPEG stills were converted from 5616×3744 to 3500×2333. The stills from the G10 come in a 4:3 aspect ratio (4416×3312) and were intended to be used as black-and-white portrait shots. Another Photoshop action made quick work of downsampling these to 3000×2250 and also converting them to black-and-white. Photoshop CS4 has a nice black-and-white adjustment tool, which generates slightly more pleasing results than a simple desaturation. These images were further cropped to 16:9 inside FCP during the edit.

blg_canon5d_6

Frame from My Room video

Editing

Once I had completed these conversions, the edit was pretty straightforward. The project was like any other PAL-based HD job (1920×1080, 25fps, ProRes). The Canon 5D creates files that are actually easier for an editor to deal with than RED, P2 or EX files. Naming follows the same convention most DSLRs use for stills, with file names such as MVI_0240.mov. There is no in-camera SMPTE timecode and all imported clips start from zero. File organization over a larger project would require a definite process, but on the other hand, you aren’t fighting something being done for you by the camera! There are no cryptic file names and copying the files from the card to other storage is as simple as any other QuickTime file. There is also no P2-style folder hierarchy to maintain, since the media is not MXF-based.

Singular Software and Glue Tools are both developing FCP-related add-ons to deal with native camera files from the Canon 5D. Singular offers an Easy Set-up for the camera files, whereas Glue Tools has announced a Log and Transfer plug-in. The latter will take the metadata from the file and apply the memory card ID number as a reel name. It uses the camera’s time-of-day stamp as a timecode starting point and interpolates clip timecode for the file. Thus, all clips in a 24-hour period would have a unique SMPTE timecode value, as long as they are imported using Log and Transfer.
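
To illustrate the idea of deriving timecode from a time-of-day stamp, here is a sketch of the mapping (my own illustration of the concept; this is not Glue Tools’ actual implementation, and real plug-ins must also handle drop-frame counting):

```python
# Map a clip's time-of-day stamp to a SMPTE-style start timecode.
def tod_to_timecode(hh, mm, ss, fps=30):
    """Convert hours/minutes/seconds of the day into HH:MM:SS:FF."""
    total_frames = ((hh * 3600) + (mm * 60) + ss) * fps
    f = total_frames % fps
    s = (total_frames // fps) % 60
    m = (total_frames // (fps * 60)) % 60
    h = (total_frames // (fps * 3600)) % 24
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# A clip stamped 2:23:05 PM would start at:
print(tod_to_timecode(14, 23, 5))   # 14:23:05:00
```

Because each clip’s start maps to a unique moment of the day, every clip shot within a 24-hour period gets a distinct timecode value, which is exactly what makes the scheme useful for logging.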

blg_canon5d_7

Frame from My Room video

My final FCP sequence was graded in Apple Color – not really because I had to, but rather to see how the footage would react. Canon positioned the 5Dmk2 in that niche between the high-end amateur and the entry-level professional photographer, so it tends to have more automatic control than most pros would like. In fact, a recent firmware update added back some manual exposure control. In general, the camera tends to make good-looking images with rich saturation and contrast. Not necessarily ideal for grading, but Stu at ProLost offers this advice. Nevertheless, I really didn’t have any shots that presented major problems – especially given the nature of this shoot, which was closer to a documentary than a commercial shoot. I could have easily graded this with my standard “witches brew” of FCP plug-ins, but the roundtrip through Color was flawless.

As a first time out with the Canon EOS 5D Mark II, I think the results were pretty successful (click here to view). I certainly didn’t see any major compression artifacts to speak of and although the footage wasn’t immune from the “jello” effect, I don’t think it got in the way of the emotion we were trying to convey. A filmmaker who was serious about using this as the principal camera on a project could certainly deliver results on par with far more expensive HD cameras. To do that successfully, a) they would need to invest in some of the rigs and accessories needed to utilize the camera in a motion picture environment; and b) they would need to shoot carefully and adhere to set-ups that steer away from some of the problems.

blg_canon5d_9

What about 24fps?

25fps worked for us, but until Canon adds 24fps to the 5Dmk2 or a successor, filmmakers will continue to clamor for ways to get 24p footage out of the camera. Philip Bloom and others have posted innovative post “recipes” to achieve this.

I tested one of these solutions on my cut and was amazed at the results. If I needed to maintain sync dialogue on a project, yet wanted the “film look” of 24fps, this is the method I would use. It’s based on Bloom’s blog post (watch his tutorial video). Here are the steps if you are cutting with Final Cut Pro:

1. Edit your video at the native 30fps camera speed.
(Write down the accurate sequence duration in FCP.)

2. Export a self-contained QuickTime file.

3. Conform that exported file to 23.98fps in Cinema Tools.
(This will result in a longer, slowed down file.)

4. Bring the file into Compressor and create and apply a setting to convert the file, but leave the target frame rate at 23.98fps (or same as current file).

5. Click the applied setting to modify it in the Inspector window.

6. Enable Frame Controls and change the duration from “100% of source” to a new duration. Enter the exact original duration of the 30fps sequence (step 1). (Best results are achieved – but with the longest render times – when Rate Conversion is set to “Best – high quality motion compensated”.)

7. Import the converted file into FCP and edit it to a 23.98 fps timeline. This should match perfectly to a mixed version of the audio from the original 30fps sequence.

I was able to achieve a perfect conversion from 30fps to 23.98fps using these steps. There were no obvious optical flow artifacts or frame blending. This utilizes Compressor’s standards conversion technology, so even edited cuts in the self-contained file stayed clean without blending. Of course, your mileage may vary.

The edited video segment was 1:44 at 30fps and 2:10 at the slower 23.98fps rate. The retiming conversion necessary to get back to a 1:44-long 23.98 file took two hours on my MacBook Pro. This would be time-prohibitive if you wanted to process all of the raw footage first. Using it only on an edited piece definitely takes away the pain and leaves you with excellent results.
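
Those durations check out with a quick calculation (using the exact 24000/1001 rate behind the “23.98” label):

```python
# Verify the 1:44 → 2:10 duration change from the 23.98 conform.
source_fps = 30.0
conform_fps = 24000 / 1001            # the exact rate behind "23.98 fps"
cut_seconds = 1 * 60 + 44             # 1:44 cut, edited at 30 fps

# A conform keeps every frame but plays them more slowly:
conformed = cut_seconds * source_fps / conform_fps
print(round(conformed))               # ≈ 130 s, i.e. the 2:10 slowed-down file
```

The retiming pass in Compressor then compresses those 130 seconds back into the original 104, synthesizing the in-between frames with motion compensation.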

Cameras like the Canon EOS 5D Mark II are just the beginning of this DSMC journey. I don’t think Canon realized what they had until the buzz started. I’m sure you’ll soon see more of these cameras from Canon and Nikon, not to mention Panasonic and even Sony, too. Once RED finally starts shipping Scarlet, it will be interesting to see whether this concept really has legs. In any case, from an editor’s perspective, these formats aren’t your tape of old, but they also shouldn’t be feared.

©2009 Oliver Peters

Reliving the Zoetrope tradition – Walter Murch and Tetro

blg_tetro_1

Age can sometimes be an impediment to inspired filmmaking, but Francis Ford Coppola, who recently turned 70, has tackled his latest endeavor with the enthusiasm and creativity of a young film school graduate. The film Tetro opened June 11th in New York and Los Angeles and will enter wider distribution in the weeks that follow. Coppola set up camp in a two-story house in Buenos Aires and much of the film was produced in Argentina. This house became the film’s headquarters for production and post in the same approach to filmmaking that the famed director adopted on Youth Without Youth (2007) in Romania.

blg_tetro_4

Tetro is Francis Ford Coppola’s first original screenplay since The Conversation (1974) and is very loosely based on the dynamics within his own family. It is not intended to be autobiographical, but explores classic themes of sibling rivalry, as well as the competition between father and son. Coppola’s own father, Carmine (who died in 1991), was a respected musician and composer who also scored a number of his son’s films. One key figure in Tetro is the family patriarch Carlo (Klaus Maria Brandauer), an acclaimed symphony conductor, who moved as a young music student from the family home in Argentina to Berlin and then to New York. Carlo’s younger son Bennie (Alden Ehrenreich) decides to head back to Buenos Aires in search of his older brother, the brooding poet Tetro (Vincent Gallo) – only to discover a different person than he expected.

blg_tetro_2

Coppola put together a team of talented Argentine actors and crew, but also brought back key collaborators from his previous films, including Mihai Malaimare, Jr. (director of photography), Osvaldo Golijov (composer) and Walter Murch (editor and re-recording mixer). I caught up with Walter Murch via phone in London, where he spoke at the 1st Annual London Final Cut Pro User Group SuperMeet.

Embracing the American Zoetrope tradition

Tetro has a definite style and vision that sets it apart from current studio fare. According to Walter Murch, “Francis funded Tetro in the same fashion as his previous film Youth Without Youth. He has personal money in it from his Napa Valley winery, as well as that of a few other investors. This lets him make the film the way he wants to, without studio interference. Francis’s directing style is process-oriented – he likes to let the film evolve during the production – to make serendipitous discoveries based on the actors, the sets, the atmosphere of a new city. Many directors work this way, but Francis embraces it more than any other. In Coppola’s own words: ‘The director is the ringmaster of a circus that is inventing itself.’ I think that’s why, at age 69, he was enthusiastic about jumping into a country that was new to him and working with talented young local filmmakers.”

blg_tetro_6

This filmmaking approach is reminiscent of Coppola’s original concept for American Zoetrope Studios. There Coppola pioneered early concepts in electronic filmmaking, hallmarked by the “Silverfish”, an AirStream trailer that provided on-set audio and editing support. Murch continued, “Ideally everything needed to make a Zoetrope film on location should be able to be loaded into two vans. The Buenos Aires building that was our base of operations reminded me of the Zoetrope building in San Francisco 40 years ago. The central idea was to break down the separation between tasks and to be as efficient and collaborative as possible. In other words, to operate more like a film-school crew. Zoetrope also has always embraced new technology – the classic ‘early adopter’ profile. Our crew in Buenos Aires was full of young, enthusiastic local film technicians and artists and on a number of occasions, rounding a corner, I felt like I was bumping into a 40-year-younger version of myself.”

A distinctive visual style

Initial Tetro reviews have commented on the striking visual style of the film. All modern day scenes are in 2.35 wide-screen black-and-white, while flashbacks appear in more classically-formatted 1.77 color. This is Coppola’s second digital film and it followed a similar workflow to that used on Youth Without Youth, shooting with two of the director’s own Sony F900 CineAlta HD cameras. As in the earlier film, the signals from both F900s were recorded onto one Sony SRW field recorder in the HDCAM-SR format. This deck recorded two simultaneous 4:2:2 video streams onto a single tape, which functioned as the “digital negative” for both the A and B cameras.

Simultaneously, another backup recording was made in the slightly more compressed 3:1:1 HDCAM format, using the onboard recorders of the Sony cameras. These HDCAM tapes provided safety backup as well as the working copies to be used for ingest by the editorial team. The HDCAM-SR masters, on the other hand, were set aside until the final assembly at the film’s digital intermediate finish at Deluxe.

blg_tetro_3

Did the fact that this was a largely black-and-white film impact Murch’s editing style? “Not as much as I would have thought,” Murch replied. “The footage was already desaturated before I started cutting, so I was always looking at black-and-white material. However, a few times when I’d match-frame a shot, the color version of the source media would pop up and then that was quite a shock! But the collision between color and black-and-white ultimately provoked the decision to frame the color material with black borders and in a different ‘squarer’ aspect ratio – 1.77 vs. 2.35.”


blg_tetro_7


Changes in the approach


Walter Murch continued to describe the post workflow, “It was similar to our methods in Romania on Youth Without Youth, although with a couple of major differences. Tetro was assembled and screened in 720p ProRes, instead of DV. We had done a ‘bake-off’ of different codecs to see which looked the best for screening without impacting the system’s responsiveness. We compared DVCPRO HD 720 and 1080 with ProRes 720 and 1080, as well as the HQ versions of ProRes. Since I was cutting on Final Cut Pro, we felt naturally drawn to the advantages of ProRes, and as it turned out for our purposes, the 720 version of ProRes seemed to give us the best quality balanced against rendering time. My cutting room also doubled as the screening room and, as we were using the SIM2 digital projector, I had the luxury of being able to cut while looking at a 20-foot-wide screen as I did so. Another change for me was that my son [Walter Slater Murch] was my first assistant editor. Sean Cullen, my assistant since 2000, was in Paris cutting a film for the first time as the primary editor. Ezequiel Borovinsky and Juan-Pablo Menchon from Buenos Aires rounded out the editorial department as second assistant and apprentice respectively.”


The RED camera has had all the buzz of late, so I asked Murch if Coppola had considered shooting the film with RED, instead of his own Sonys. Murch replied, “Francis is very happy with the look of his cameras, and of course, he owns them, so there’s also a budget consideration. Mihai [Malaimare, DP] brought in a RED for a few days when we needed to shoot with three cameras. The RED material integrated well with the Sony footage, but there is a significantly different workflow, because the RED is a tapeless camera. In the end, I would recommend shooting with one camera or the other if possible. A production shouldn’t mix up workflows unnecessarily.”


blg_tetro_5


Walter Murch discusses future technology


It’s hard to talk film with Walter Murch and not discuss trends, philosophy and technology. He’s been closely associated with a number of digital advances, so I wondered if he saw a competitor coming to challenge either Avid Media Composer or Apple Final Cut Pro for film editing. “It’s hard to see into the future more than about three years,” he answered. “Avid is an excellent system and studios and rental operations have capital investment in equipment, so for the foreseeable future, I think Avid and Final Cut will continue to be the two primary editing tools. Four years from now, who knows? I see more potential for earlier changes in the area of sound editing and mixing. I’ve done some promising work with Apple’s Soundtrack Pro. The Nuendo-Euphonix combination is also very interesting; but, for Tetro it seemed best to stay compatible with what the sound team was familiar with. Also, [fellow re-recording mixer] Pete Horner and I mixed on the ICON and that’s designed to work with Pro Tools.”


Murch continued, “I’d really like to see some changes in how timelines are handled. I’ve used a FileMaker database for all of my notes now for more than twenty years, starting back when I was still cutting on film. I tweak the database a bit with each film as the needs change. Tetro was the first film where I was able to get the script supervisor – Anahid Nazarian in this case – to also use FileMaker. That was great, because all of the script and camera notes were incorporated into the same FileMaker database from the beginning. Thinking into the future, I’d love to see the FileMaker workshare approach applied to Final Cut Pro. If that were the case, the whole team – picture and sound editors and visual effects – could have access to the same sequence simultaneously. If I were working in one area of the timeline, for example, I could put a ‘virtual dike’ around the section I was editing. The others would not be able to access it for changes, but would see its status prior to my current changes. Once I was done and removed the ‘dike’, the changes would ripple through, the timeline would be updated and everyone could see and work with the new version.”
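Murch’s “virtual dike” is, in software terms, a section lock on a shared document: each editor fences off a span, others keep seeing the last published state of that span, and lifting the fence publishes the pending changes to everyone. Nothing like this exists in Final Cut Pro; as a purely illustrative sketch (all names invented), the behavior he describes could be modeled like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Dike:
    """A locked span of the timeline, in seconds."""
    owner: str
    start: float
    end: float

@dataclass
class SharedTimeline:
    """Toy model of the 'virtual dike' idea: edits inside a dike stay
    private drafts until the dike is removed, then 'ripple through'."""
    published: dict = field(default_factory=dict)  # region -> clip, visible to all
    dikes: list = field(default_factory=list)
    drafts: dict = field(default_factory=dict)     # owner -> pending edits

    def place_dike(self, owner, start, end):
        # Two editors may not fence overlapping spans.
        if any(not (end <= d.start or start >= d.end) for d in self.dikes):
            raise RuntimeError("span overlaps an existing dike")
        self.dikes.append(Dike(owner, start, end))

    def edit(self, owner, region, clip):
        # Only the dike's owner may change material inside it.
        if not any(d.owner == owner and d.start <= region[0] and region[1] <= d.end
                   for d in self.dikes):
            raise RuntimeError("edit outside your diked span")
        self.drafts.setdefault(owner, {})[region] = clip

    def remove_dike(self, owner):
        # Removing the dike publishes the drafts for everyone to see.
        self.published.update(self.drafts.pop(owner, {}))
        self.dikes = [d for d in self.dikes if d.owner != owner]
```

Until `remove_dike` is called, other collaborators reading `published` still see the pre-change state of the fenced section, which is exactly the status-quo view Murch describes.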


Stereoscopic 3D is all the rage now, but you may not know that Walter Murch also worked on one of the iconic 3D short films, Captain EO, starring Michael Jackson. Francis Ford Coppola directed EO for the Disney theme parks in 1986. It’s too early to tell whether the latest 3D trend will be sustained, but Murch offered his take. “3D certainly has excellent box office numbers right now, but there is still a fundamental perceptual problem with it: Through millions of years of evolution, our brains have been wired so that when we look at an object, the point where our eyes converge and where they focus is one and the same. But with 3D film we have to converge our eyes at the point of the illusion (say five feet in front of us) and simultaneously focus at the plane of the screen (perhaps sixty feet away). We can do this, obviously, but doing it continuously for two hours is one of the reasons why we get headaches watching 3D. If we can somehow solve this problem and if filmmakers use 3D in interesting ways that advance the story – and not just as gimmicks – then I think 3D has a very promising long-term future.”


Written for Videography magazine (NewBay Media, LLC)


© 2009 Oliver Peters