Documentary Editing Tips

Of the many projects I work on, documentaries and documentary-style productions are my favorite. I find these often more entertaining and certainly more enlightening than many dramatic features and shows. It’s hard to beat reality. Documentaries present challenges for the editor, but in no other form does the editor play more of a role in shaping the final outcome. Many of them truly typify an editor’s function as the “writer” through shot selection and construction.

Structure and style

There are different ways you can build a documentary, but ultimately the objective is a film that tells an engaging story in a way the audience can comprehend. Structurally, a documentary tends to take one of these forms:

–       Interview sound bites completely tell the story

–       The “voice of God” narrator guides you through

–       The “slice of life” story, where the viewer is a hidden observer

–       Re-enactments of events through acted scenes or readings, à la The Civil War or The Blues

–       The filmmaker as a first person guide, such as Werner Herzog

Sometimes, the best approach is a combination of all of these. You may set out to have the complete story told only through assembled sound bites, yet the story never gets fully fleshed out. In that case, pieces of scripted narration will help clarify the story and bind disparate elements and thoughts together.

Story arc and character

The persons on screen are real, but to the audience they are no less characters in a film than a role performed by a dramatic actor. As an editor, the way you select sound bites and put them together – and the order in which these are presented throughout the film – establish not only a story arc, but also perceived heroes and villains in the minds of the audience. Viewers want a film with a logical start, building tension and ultimate resolution. Even when there is no happy ending, the editor should strive to build a story that leaves the audience with some answers or conclusion.

Remember to balance out your characters. In many interview-based stories, the same questions are posed to the various interviewees as the interviews are conducted. This is helpful to the editor, because you can balance out the different on-camera appearances by mixing up whose response you choose to use. That way, the same subject isn’t always the go-to person and you don’t lean too heavily on any single interviewee. Sometimes it’s best to have one person start a thought or a statement and then conclude with another, assuming the two segments are complementary.

Objectivity

This is one of the myths taught in some film and journalism schools. The truth is that almost every documentary (and many a news story) is approached from the point-of-view and biases of the writer, producer, director and editor. You can try to portray all sides fairly, but the choice of who is interviewed or which bites are selected reflects an often subconscious bias of the person making that decision. It can also appear lopsided simply based on which subjects decided to participate.

Sometimes the effects are subtle and harmless, as in reality TV shows, where the aim is to tell the most entertaining story. At the other extreme, it can become borderline propaganda for the agenda of the filmmaker. I’m not telling you what type of film to make – just to be aware of the inevitable. If there’s a subjective point-of-view, then don’t try to hide it. Rather, make it clearly a personal statement so the audience isn’t tricked into believing the filmmakers gave a fair shake to all sides.

The art of the interview

If your documentary tale is built out of interview clips, then a lot of your time as an editor will go into organizing the material and playing with story structure. That is, editing and re-arranging sound bites in a way that tells a complete story without the need for a narrator. Often this requires that you assemble sound bites in a way that’s quite different from the way they were recorded in linear time.

Enter the “Frankenbite”. That’s a term editors apply to two types of sound bite construction: a) splicing together parts of two or more sound bite snippets to create a new, concise statement; or b) editing a word or phrase from another part of the interview to get the right inflection, such as making a statement sound like the end of a sentence, when in fact the original part was really in mid-thought.

Personally I have no problem with any of this, but I draw the line at dishonesty. It’s very important to listen to the interviews in their entirety and make sure that the elements you are splicing together aren’t taken out of context. You don’t want to create the impression that what is being said is the exact opposite of what the speaker meant to say. The point of this splicing is to collapse time and get the point across succinctly without presenting a full and possibly rambling answer. Be true to the intent and you’ll be fine.

Typically such edits are covered by cutaway shots to hide the jump cut, though some directors stylistically prefer to show the jump cut that such edits produce. This can give a certain interesting rhythm to the cut that might not otherwise be there. It also clearly tells the audience that an edit was made. It’s a stylistic approach, so pick a path and stick with it.

The beauty of the HDSLR revolution brought about by Canon is that it’s easier (and cheaper) than ever to field two-camera shoots. This is especially useful for documentary interviews. Often directors will set up two 5D or 7D cameras – one facing the subject and the other at an angle. This gives the editor two camera angles to cut with and it’s often possible to assemble edited sound bites using cuts between the two cameras at these edit points. This lets you splice together thoughts and still appear like a live switch in a TV show – totally seamless without an obvious jump cut. I’ve been able to build short shows this way working 100% from the interviews without a single cutaway shot and still have the end result appear to the audience as completely contiguous and coherent.

Mine the unrehearsed responses. Naturally that depends on the talent of the interviewer and how much he or she can get out of the interviewee. The best interviewers will warm up their subject first, go through the pro forma questions and then circle back for more genuine answers, once the interviewee is less nervous with the process. This is usually where you’ll get the better responses, so often the first half of the recording tends to be less useful. If the interviewer asks at the end, “Is there anything else you’d like to add?” – that’s where you frequently get the best answers, especially if the subject is someone who is interviewed a lot. Those folks are used to giving stock answers to all the standard questions. If their answers can be more freeform, then you’ll tend to get more unique and thoughtful points-of-view.

Organizing non-timecoded source material

Archival footage frequently used in documentaries comes from a variety of sources, such as old home movies (on various film formats), VHS tapes and more. Before you ever start editing from these, they should be transferred with the best possible quality to a mastering format, such as Digital Betacam (for NTSC or PAL), HDCAM/HDCAM-SR (for HD) or high-quality QuickTime files (DNxHD, ProRes or uncompressed).

The point is to get these into a format that can be organized and tracked through the stages of the edit. This usually means some format that allows timecode, reel numbers or other file name coding to make footage easy to find if the project takes years to complete. Remember that timecode and a 4-digit reel (or source) number let you find any single frame within 10,000 hours of footage. To make this material easier to use during the offline editing stage of the project, you may elect to make low-cost/low-res copies for editing – for example, DVCAM if on tape, or ProRes Proxy or DNxHD 36 for files. Doing so means that timecode and source/reel info MUST correspond perfectly between the low-res and hi-res versions.
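The reel-plus-timecode math can be sketched in a few lines of Python. This is my own illustration – the function name and the assumption of roughly an hour of footage per reel are mine, not from any particular edit system:

```python
# Sketch: why a 4-digit reel number plus timecode uniquely addresses any
# frame in 10,000 hours of footage (assuming up to an hour per reel).

def frame_address(reel: int, timecode: str, fps: int = 30) -> int:
    """Map (reel, "HH:MM:SS:FF" timecode) to a unique frame number."""
    hh, mm, ss, ff = (int(part) for part in timecode.split(":"))
    frames_into_reel = ((hh * 60 + mm) * 60 + ss) * fps + ff
    # A reel's timecode can span a full 24 hours, so reserve that much
    # address space per reel to guarantee uniqueness.
    frames_per_reel = 24 * 3600 * fps
    return reel * frames_per_reel + frames_into_reel

# Reels 0000-9999 at roughly an hour of footage each = 10,000 hours,
# yet any single frame still resolves to one unambiguous address.
```

The same logic is why the low-res proxies must carry identical timecode and reel info: the address only works if both versions answer to the same numbers.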

Your still photo strategy

Photography and artwork are the visual lifeblood of documentaries that lack supporting film or video content. Ken Burns has elevated the technique of camera moves on still images to an art form. He’s a filmmaker known to the general public as much for the effect Apple branded with his name as for his award-winning films. Yet the technique clearly predates him and has gone by many terms over the years. A company I once worked for frequently called it “pictography”. Regardless of origin – the use of stills requires two elements: organization and motion.

There are numerous photo and still image organizing and manipulation applications, including Adobe Lightroom, Bridge, Apple iPhoto and Aperture. Each of these provides a method to catalog, rate and sort the photos. You’ll need an application with a good manipulation toolset to properly crop, color correct and/or fix damaged images. Lightroom is my personal preference, but they all get the job done.

Moves on stills can be accomplished in several ways: animated moves in software, a computer-assisted motion control camera stand, or simply a human operator doing real camera moves. Often the last method is the simplest, fastest and best looking. If that’s your choice, print large versions of the stills, put them on an easel and set up a video camera. Then record a variety of moves at different speeds, which will become source “video” for your edit session.

Another popular method is to separate components of the image into Photoshop layers. Then bring these into After Effects and design perspective moves in which the foreground elements move or grow at a different rate than the background layer. This method was popularized in The Kid Stays in the Picture. The trick to pulling this off successfully is that the Photoshop artist must fill in the background layer to replace the portion cut out for the foreground person or object. Otherwise you see a repeated section of the foreground image or possibly the cut-out area.

Edit system organization

There are plenty of tools at your disposal, regardless of whether you prefer Avid, FCP 7, FCP X or something else. If this project takes several years with several editors and a potpourri of formats, then Media Composer is a good bet; however, Final Cut also has its share of fans among documentary editors. Make liberal use of subclips and markers to keep yourself straight. Tools like Boris Soundbite (formerly Get) and Avid ScriptSync and PhraseFind are essential to the editors who embrace them.

I tend to not use transcripts as the basis for my edits. Nevertheless, having an electronic and/or paper transcript of interviews available to you (with general timecode locations) makes it easy to find alternatives. That can be as simple as having a copy open in Word on the same computer and using the Find function. My point is that modern tools make it very easy to tackle a wealth of content without getting buried by the footage.

The value of the finishing process

I feel that, even more so than on dramatic features, documentaries benefit from high-quality finishing services. These range from simple online editing to format conversion to color grading. Since original sources often vary so widely in quality, it’s important to get the polish that a trained online/finishing editor and/or colorist can provide. The same goes for audio. Use the services of talented sound designers, editors and mixers to bring the mix up a notch. Nothing screams “bad” like a substandard soundtrack, no matter how striking the images are.

Clearances

It is important for the editor to keep track of the sources and usage of stock images and music. These aren’t free. Many documentary producers seem to feel they can “sweet-talk” the rights holder into donating content out of a sense of interest or altruism. That’s almost never successful. So understand the licensing issues and be wary of using images and music – even on a temporary basis – that you know will be hard to clear or too expensive to purchase.

Make sure that you have an adequate system for tracking and reporting the use of stock material, so that it can be properly bought and cleared when the film is being finished. During the rough cut, stock footage and images will usually be low-res versions with a “burn-in” or watermark. When the time comes to purchase the final high-res images, most companies require that you request the exact range of the material used based on timecode. That material will be provided as files or on tape, but there’s no guarantee that the timecode will match. Be prepared to eye-match each shot if that’s the case.
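An adequate tracking system can be as simple as a running log of used clips that gets grouped into a per-vendor purchase request at finishing time. Here’s a minimal sketch of that idea – the field names and vendor data are hypothetical, not taken from any NLE export format:

```python
# Sketch: collect the stock clips used in the cut and group the exact
# timecode ranges by vendor and source ID for the license request.

from collections import defaultdict

def usage_report(clips):
    """Group used timecode ranges by (vendor, source ID)."""
    report = defaultdict(list)
    for clip in clips:
        report[(clip["vendor"], clip["source_id"])].append(
            (clip["tc_in"], clip["tc_out"])
        )
    # Sort ranges so the purchase request reads in source order.
    return {key: sorted(ranges) for key, ranges in report.items()}

# Hypothetical clips logged during the rough cut:
cut = [
    {"vendor": "StockCo", "source_id": "SC-0412",
     "tc_in": "01:05:00:00", "tc_out": "01:05:03:00"},
    {"vendor": "StockCo", "source_id": "SC-0412",
     "tc_in": "01:02:10:00", "tc_out": "01:02:14:12"},
]
report = usage_report(cut)
```

However you build the log, the point is the same: when the stock house asks for the exact ranges used, the answer should be a lookup, not an archaeology project.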

©2011 Oliver Peters

A milestone

Over the past 24 hours, this blog crossed the 1,000,000-view mark since its start in March of 2008. Quite a few of you have commented to me during this time about how helpful you find a lot of these posts. I appreciate the feedback and thanks for the comments. I’m certainly glad that these musings are useful as you navigate the confusion that often surrounds post production. From its inception this blog has served as a repository for things that I write elsewhere, as well as additional thoughts, ideas and tips that are best presented in this forum. And I intend to continue along those same lines. Here’s wishing you the best for the upcoming Thanksgiving and the holiday season. Cheers!

A note to iOS5 iPad readers

This is a quick housekeeping note. If you follow this blog on an iPad, you already know that some WordPress-hosted sites, such as this one, use an alternative format for mobile devices, like the iPad. The Home page displays a small grouping of headers for the latest posts. When you click on one, it loads an optimized screen in the foreground. That was working fine in iOS4, but apparently for now is broken in the iOS5 version of Safari. The first post selected will flash but still load; however, subsequent selections never load.

I recommend the following solution if this happens on your iPad. At the bottom of the Home page is a toggle to View iPad Site or View Standard Site. Select View Standard Site. Safari will remember this setting when you return to the blog. Since the theme I use employs a page width and column structure that is a bit awkward on iPads, use the Safari Reader function. Select a post from the sidebar and when it loads, click the READER button in the URL bar. That post will open in a format that’s easy to read on the iPad.

Thank you.

RED post for My Fair Lidy

I’ve worked on various RED projects, but a recent interesting example is My Fair Lidy, an independent film produced through the Valencia College Film Production Technology program. This was a full-blown feature shot entirely with RED One cameras. In this program, professional filmmakers with real projects in hand partner with a class of eager students seeking to learn the craft of film production. I’ve edited two of these films produced through the program and assisted in various aspects of post on many others. My Fair Lidy – a quirky comedy directed by program director Ralph Clemente – was shot in 17 days this summer at various central Florida locations. Two RED Ones were used – one handled by director of photography Ricardo Galé and the second by student cinematographers. My Fair Lidy was produced by SandWoman Films and stars Christopher Backus and Leigh Shannon.

There are many ways to handle the post production of native RED media and I’ve covered a number of them in these earlier posts. There is no single “best way” to handle these files, because each production is often best-served by a custom solution. Originally, I felt the way to tackle the dailies was to convert the .r3d camera files into ProRes 4444 files using the RedLogFilm profile. This gives you a very flat look, and a starting point very similar to ARRI ALEXA files shot with the Log-C profile. My intention would have been to finish and grade straight from the QuickTimes and never return to the .r3d files, unless I needed to fix some problems. Neutral images with a RedLogFilm gamma setting are very easy to grade and they let the colorist swing the image for different looks with ease. However, after my initial discussions with Ricardo, it was decided to do the final grade from the native camera raw files, so that we had the most control over the image, plus the ability to zoom in and reframe using the native 4K files as a source.

The dailies and editorial flow

My Fair Lidy was lensed with a 16 x 9 aspect ratio, with the REDs set to record 4096 x 2304 (at 23.98fps). In addition to a RED One and a healthy complement of grip, lighting and electrical gear, Valencia College owns several Final Cut Pro post systems and a Red Rocket accelerator card. With two REDs rolling most of the time, the latter was a godsend on this production. We had two workstations set up – one as the editor’s station with a large Maxx Digital storage array and the other as the assistant’s station. That system housed the Red Rocket card. My two assistants (Kyle Prince and Frank Gould) handled all data back-up and conversion of 4K RED files to 1920 x 1080 ProResHQ for editorial media. Using ProResHQ was probably overkill for cutting the film (any of the lower ProRes codecs would have been fine for editorial decisions) but this gave us the best possible image for any potential screenings, trailers, etc.

Redcine-X was our tool for .r3d media organization and conversion. All in-camera settings were left alone, except the gamma adjustment. The Red Rocket card handles the full-resolution debayering of the raw files, so conversion time is close to real time. The two stations were networked via AFP (Apple’s file-sharing protocol), which permitted the assistant to handle his tasks without slowing down the editor. In addition, the assistant would sync and merge audio from the double-system sound, multi-track audio recordings and enter basic scene/take descriptions. Each shoot day had its own FCP project, so when done, project files and media (.r3d, ProRes and audio) were copied over to the editor’s Maxx array. Master clips from these daily FCP projects were then copied-and-pasted (and media relinked) into a single “master edit” FCP project.
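The data back-up step is worth automating. Below is a hedged sketch of the kind of checksum verification an assistant might run after copying a card – the function names, directory layout and choice of MD5 are illustrative, not what we actually ran on the film:

```python
# Sketch: confirm a card's camera originals copied intact by comparing
# checksums between the source card and the backup copy.

import hashlib
import pathlib

def checksum(path, algo="md5", chunk=1 << 20):
    """Hash a file incrementally so large .r3d files don't fill RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_backup(card_dir, backup_dir):
    """Return relative paths whose backup copy is missing or differs."""
    bad = []
    card = pathlib.Path(card_dir)
    for src in card.rglob("*"):
        if src.is_file():
            dst = pathlib.Path(backup_dir) / src.relative_to(card)
            if not dst.exists() or checksum(src) != checksum(dst):
                bad.append(str(src.relative_to(card)))
    return bad
```

An empty list back from `verify_backup` is the signal that it’s safe to reuse the card; anything else means re-copy before the media is wiped.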

For reasons of schedule and availability, I split the editing responsibilities with a second film editor, Patrick Tyler. My initial role was to bring the film to its first cut and then Patrick handled revisions with the producer and director. Once the picture was locked, I rejoined the project to cover final finishing and color grading. My Fair Lidy was on a very accelerated schedule, with sound design and music scoring running on a parallel track. In total, post took about 15 weeks from start to finish.

Finishing and grading

Since we didn’t use FCP’s Log and Transfer function nor the in-camera QuickTime reference files as edit proxies, there was no easy way to get Apple Color to automatically relink clips to the original .r3d files. You can manually redirect Color to link to RED files, but this must be done one shot at a time – not exactly desirable for the 1300 or so shots in the film.

The recommended workflow is to export an XML from FCP 7, which is then opened in Redcine-X. It will correctly reconnect to the .r3d files in place of the QuickTime movies. From there you export a new XML, which can be imported into Color. Voila! A Color timeline that matches the edit using the native camera files. Unfortunately for us, this is where reality came crashing in – literally. No matter what we did, using both XMLs and EDLs, everything that we attempted to import into Color crashed the application. We also tried ClipFinder, another free application designed for RED media. It didn’t crash Color, but a significant number of shots were incorrectly linked. I suspect some internal confusion because of the A and B camera situation.
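When a roundtrip misbehaves like this, it can help to inspect the exported XML itself before blaming the grading app – for instance, by listing every linked media path and flagging any that still point at QuickTime proxies rather than .r3d files. The sketch below uses the pathurl element from the FCP 7 XMEML layout in broad strokes; treat the details and the sample data as assumptions:

```python
# Sketch: scan an exported sequence XML for linked media paths and flag
# anything that hasn't been relinked to a .r3d camera original.

import xml.etree.ElementTree as ET

def list_linked_media(xml_text):
    """Return (all linked paths, paths still pointing at non-.r3d media)."""
    root = ET.fromstring(xml_text)
    paths = [el.text for el in root.iter("pathurl") if el.text]
    proxies = [p for p in paths if not p.lower().endswith(".r3d")]
    return paths, proxies

# Hypothetical fragment in the general shape of an FCP 7 XML export:
sample = """<xmeml version="4"><sequence><media><video><track>
<clipitem><file><pathurl>file:///media/A001_C003.R3D</pathurl></file></clipitem>
<clipitem><file><pathurl>file:///proxies/A001_C004.mov</pathurl></file></clipitem>
</track></video></media></sequence></xmeml>"""

paths, proxies = list_linked_media(sample)
```

A check like this won’t fix a crashing import, but it quickly tells you whether the relink step actually did its job or whether some shots – say, from the B camera – slipped through still pointing at proxies.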

On to Plan B. Since Redcine-X correctly links to the media and includes not only controls for the raw settings, but also a healthy toolset for primary color correction, then why not use it for part of the grading process? Follow that up with a pass through Color to establish the stylistic “look”. This ended up working extremely well for us. Here are the basic steps I followed.

Step 1. We broke the film into ten reels and exported an XML file for each reel from FCP 7.

Step 2. Each reel’s XML was imported into Redcine-X as a timeline. I changed all the camera color metadata for each shot to create a neutral look and to match shots to each other. I used RedColor (slightly more saturated than RedColor2) and RedGamma2 (not quite as flat as RedLogFilm), plus adjusted the color temp, tint and ISO values to get a neutral white balance and match the A and B camera angles. The intent was to bring the image “within the goalposts” of the histogram. Occasionally I would make minor exposure and contrast adjustments, but for the most part, I didn’t touch any of the other color controls.

My objective was to end up with a timeline that looked consistent but preserved dynamic range. Essentially that’s the same thing I would do as the first step using the primary tab within Color. The nice part about this is that once I matched the settings of the shots, the A and B cameras looked very consistent.

Step 3. Each timeline was exported from Redcine-X as a single ProResHQ file with these new settings baked in. We had moved the Red Rocket card into the primary workstation, so these 1920 x 1080 clips were rendered with full resolution debayering. As with the dailies, rendering time was largely real-time or somewhat slower. In this case, approximately 10-20 minutes per reel.

Step 4. I imported each rendered clip back into FCP and placed it onto video track two over the corresponding clips for that reel to check the conforming accuracy and sync. Using the “next edit” keystroke, I quickly stepped through the timeline and “razored” each edit point on the clip from Redcine-X. This may sound cumbersome, but only took a couple of minutes for each reel. Now I had an FCP sequence from a single media clip, but with each cut split as an edit point. Doing this creates “notches” that are used by the color correction software for cuts between corrections. That’s been the basis for all “tape-to-tape” color correction since DaVinci started doing it and the new Resolve software still includes a similar automatic scene detection function today.
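The “notching” idea in this step boils down to splitting one continuous clip at the original sequence’s edit points. A minimal sketch of that logic, with illustrative frame numbers:

```python
# Sketch: split a single rendered clip of a given length into segments
# at each edit point, so the grading app sees one event per shot.

def notch(clip_length, cut_points):
    """Split the range [0, clip_length) into segments at each cut point."""
    bounds = (
        [0]
        + sorted(p for p in cut_points if 0 < p < clip_length)
        + [clip_length]
    )
    return [(a, b) for a, b in zip(bounds, bounds[1:])]

segments = notch(300, [72, 145, 210])
# -> [(0, 72), (72, 145), (145, 210), (210, 300)]
```

Whether you razor by hand in the NLE or let scene detection find the cuts, the result is the same set of boundaries: one segment per shot, each ready to take its own correction.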

Step 5. I sent my newly “notched” timeline to Color and graded as I normally would. By using the Redcine-X step as a “pre-grade”, I had done the same thing to the image as I would have done using the RED tab within Color, thus keeping with the plan to grade from the native camera raw files. I do believe the approach I took was faster and better than trying to do it all inside Color, because of the inefficiency of bouncing in and out of the RED tab in Color for each clip. Not to mention that Color really bogs down when working with 4K files, even with a Red Rocket card in place.

Step 6. The exception to this process was any shot that required a blow-up or repositioning. For these, I sent the ProRes file from dailies in place of the rendered shot from Redcine-X. In Color, I would then manually reconnect to the .r3d file and resize the shot in Color’s geometry room, thus using the file’s full 4K size to preserve resolution at 1080 for the blow-up.

Step 7. The last step was to render in Color and then “Send to FCP” to complete the roundtrip. In FCP, the reels were assembled for the full movie and then married to the mixed soundtrack for a finished film.

© 2011 Oliver Peters

Improving FCP X

A short while ago I started a thread at Creative COW entitled, “What would it take?” My premise is that Final Cut Pro X has enough tantalizing advantages that many “pro users” (whatever that means) would adopt it, if only it had a few extra features. I’m not talking about turning it into FCP 8. I think that’s pretty unrealistic and I believe Apple is going in a different direction. The point is that there are a number of elements that could be added and stay within the FCP X paradigm, which would quell some of the complaints. The thread sparked some interesting suggestions, but here are a few of mine in no particular order of priority.

1. Make audio trimming and transitions as easy as and comparable to video trimming. Currently audio seems to take a back seat to video editing when it comes to trims and transitions.

2. Add “open in Motion” or “send to Motion” functions for clips. Motion 5 is quite powerful and it fills in many gaps that exist in FCP X. For example, drawing mattes. A “send to” roundtrip function would help.

3. Either add track-based mixing or add a “send to Logic” function. I feel audio without tracks is a pretty tough way to mix. Assuming the next version of Logic isn’t as drastic a change as FCP 7 to FCP X was, it would be nice to offer the option of sending your FCP X project audio to Logic for mixing.

4. Add modifiers to give you some user-defined control over the magnetic timeline. More than just the position tool. Time to tame the magnetic timeline.

5. Add user-defined controls for more track-like behavior. Such as expanded use/behavior of additional storylines. I’m not sure what form this would take, but the desire is to get the best of both worlds.

6. Add a “save as” function.

7. Add event/project management to open/hide projects and media. This exists in Assisted Editing’s Event Manager X, but it should be a direct function within FCP X.

8. Add a choice to not see the event thumbnail/filmstrip when you click on it. Even in list view, when you click on an event clip it is refreshed in the single visible filmstrip at the top. This slows down the response of the system. I’d like to see a true list-only view for faster response when I’m entering data.

9. Remember clip in/out points.

10. Add some user control over window layouts. FCP 7’s workspace customization was great and it’s a shame we lost it.

11. Add some way to see a second window as a source/record (2-up) view.

12. Bring back copy/paste/remove attributes.

13. Bring back the equivalent to the Track Tool.

14. Import legacy FCP sequences. I realize some third-party developer will likely create an XML to FCP XML translator, but it sure would make sense if Apple solved this issue. Even if it means only a simple sequence without effects, speed ramps or audio levels.

©2011 Oliver Peters