Tips for Production Success – Part 1

Throughout this blog, I’ve written numerous tips about how to produce projects, notably indie features, with a successful outcome in mind. I’ve tried to educate on issues of budget and schedule. In these next two entries, I’d like to tackle 21 tips that will make your productions go more smoothly, finish on time, and avoid disaster during the post production phase. Although I’ve framed the discussion around indie features, the same tips apply to commercials, music videos, corporate presentations, and videos for the web.

Avoid white. Modern digital cameras handle white elements within a shot much better than in the past, but hitting a white shirt with a lot of light complicates your life when it comes to grading and directing the eye of the viewer. This is largely an issue of art direction and wardrobe. The best way to handle this is simply to replace whites with off-whites, bone, or beige colors. The sitcom Barney Miller, which earned DP George Spiro Dibie recognition for getting artful looks out of his video cameras, is said to have had the white shirts washed in coffee to darken them a bit. Once the cameras were set up and exposed, the shirts still read as white on screen. The objective in all of this is to get the overall brightness into a range that is more controllable during color correction and to avoid clipping.

Expose to the right. When you look at a signal on a histogram, the brightest part is on the righthand side of the scale. By pushing your camera’s exposure toward a brighter, slightly over-exposed image (“to the right”), you’ll end up with a better looking image after grading (color correction). That’s because when you have to brighten an image by bringing up highlights or midtones, you are accentuating the sensor noise from the camera. If the image starts out brighter and the correction is to lower the levels, then you end up with a cleaner final image. Since most modern digital cameras use some sort of log or hyper gamma encoding to record a flatter signal, which preserves latitude, opening up the exposure usually won’t run the risk of clipping the highlights. In the end, a look that stretches the shadows and mids to expose more detail to the eye gives you a more pleasing and informative image than one that places emphasis on the highlight portion.
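The noise argument behind exposing to the right can be sketched numerically. This Python snippet is my own illustration, not from the article: it assumes a simplified sensor model with a constant read-noise floor, and the exposure gain values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = 0.18 * np.ones(100_000)   # a midtone gray patch, linear light
read_noise = 0.01                  # hypothetical fixed sensor noise floor

def capture(exposure_gain):
    # the sensor records the scene scaled by exposure, plus constant read noise
    signal = scene * exposure_gain
    return signal + rng.normal(0.0, read_noise, scene.shape)

# underexpose by a stop, then brighten in post: the noise is doubled too
underexposed = capture(0.5) * 2.0
# expose to the right by a stop, then pull down in post: the noise is halved
ettr = capture(2.0) / 2.0

snr = lambda img: img.mean() / img.std()
print(f"underexposed, gained up: SNR {snr(underexposed):.1f}")
print(f"ETTR, pulled down:       SNR {snr(ettr):.1f}")
```

Both images land at the same final brightness, but with a fixed noise floor, gaining up in post multiplies the noise along with the signal, while pulling a brighter exposure down divides it, which is the whole point of exposing to the right.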

Blue vs. green-screen. Productions almost ubiquitously reach for green paint, but that choice shouldn’t be automatic. Each paint color has a different luminance value. Green is brighter and should be reserved for composites where the talent should appear to be outside. Blue works best when the composited image is an interior. Paint matters, too. The correct choice is still the proper version of Ultimatte blue or green paint, but many people try to cut corners on cost. I’ve even had producers go so far as to rig up a silk with a blue lighting wash and expect me to key it! When you light the subject, move them as far away from the wall as possible to avoid contamination of the color onto their hair and wardrobe. This also means: don’t have your talent stand on a green or blue floor unless you intend to see the floor or frame them head to toe.

Rim lighting. Images stand out best when your talent has some rim lighting to separate them from the background. Even in a dark environment, seek to create a lighting scheme that achieves this rimming effect around their head and shoulders.

Tonal art direction. The various “blockbuster” looks are popular – particularly the “orange and teal” look. This style pushes skin tones warm for a slight orange appearance, while many darker background elements pick up green/blue/teal/cyan casts. Although this can be accentuated in grading, it starts with proper art direction in the set design and costuming. Whatever tonal characteristic you want to achieve, start by looking at the art direction and controlling this from step one.

Rec. 709 vs. Log. Digital cameras have nearly all adopted some method of recording an image with a flat gamma profile that is intended to preserve latitude until final grading. This doesn’t mean you have to use this mode. If you have control over your exposure and lighting, there’s nothing wrong with recording Rec. 709 and nailing the final look in-camera. I highly recommend this for “talking head” interviews, especially ones shot on green or blue-screen.

Microphone direction/placement. Every budding recording engineer working in music and film production learns that proper mic placement is critical to good sound. Pay attention to where mics are positioned relative to where the person is when they speak. For example, if you have two people in an interview situation wearing lavaliere mics on their lapels, the proper placement would be on each person’s inner lapel – the side closer to the other person. That’s because each person will turn toward the other to address them as they speak and thus talk over that shoulder. Having the mic on this side means they are speaking into the mic. If it were on the outer lapel, they would be speaking away from the mic and the audio would tend to sound hollow. For the same reasons, when you use a boom or fish pole overhead mic, the operator needs to point the mic in the direction of the person talking. They will need to shift the mic’s direction as the conversation moves from one person to the next to follow the sound.

Multiple microphones/iso mics. When recording dialogue for a group of actors, it’s best to record their audio with individual microphones (lavs or overhead booms) and to record each mic on an isolated track. Cameras typically feature on-board recording of two to four audio channels, so if you have more mics than that, use an external multi-channel recorder. When external recording is used, be sure to still record a composite track to your camera for reference.

Microphone types. There are plenty of styles and types of microphones, but the important factors are size, tonal quality, range, and the axis of pick-up. Make sure you select the appropriate mic for the task. For example, if you are recording an actor with a deep bass voice using a lavaliere, you’d do best to use a type that delivers a full-spectrum recording, rather than one that favors only the low end.

Sound sync. There are plenty of ways to sync sound to picture in double-system sound situations. Synchronizing by matched timecode is ideal, but even there, issues can arise. Make sure the camera’s and sound recorder’s timecode generators don’t drift during the day – or use a single, common, external timecode generator for both. It’s generally best to also include a clapboard and, when possible, to record reference audio to the camera. If you plan to sync by audio waveforms (PluralEyes, FCP X, Premiere Pro CC), then make sure the reference signal on the camera is of sufficient quality to make synchronization possible.
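Conceptually, waveform-sync tools locate the offset between the camera’s reference track and the recorder’s track by cross-correlating the two and picking the lag with the strongest match. Here’s a toy Python sketch of that idea – my own illustration, not how PluralEyes actually works internally; real tools are far more robust and would use FFT-based correlation on full 48 kHz audio.

```python
import numpy as np

def find_offset(reference, recording, sample_rate):
    """Estimate, in seconds, how far `recording` lags behind `reference`
    by locating the peak of their cross-correlation."""
    ref = reference - reference.mean()
    rec = recording - recording.mean()
    corr = np.correlate(rec, ref, mode="full")
    lag = corr.argmax() - (len(ref) - 1)   # convert index to signed lag
    return lag / sample_rate

# toy example at a low 1 kHz rate to keep np.correlate fast; the
# "recorder" track is the camera scratch track delayed by 0.25 seconds
sr = 1000
rng = np.random.default_rng(1)
camera = rng.normal(size=sr * 2)                        # 2 s of scratch audio
recorder = np.concatenate([np.zeros(sr // 4), camera])  # 0.25 s delay
print(find_offset(camera, recorder, sample_rate=sr))    # ≈ 0.25
```

The recovered offset is what an NLE uses to slip the external recording into sync with the camera clip, which is why the camera’s reference track needs to be clean enough for the correlation peak to stand out.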

Record wild lines on set. When location audio is difficult to understand, ADR (automatic dialogue replacement, aka “looping”) is required. This happens because the location recording was not of high quality due to outside factors, like special effects, background noise, etc. Not all actors are good at ADR and it’s not uncommon to watch a scene with ADR dialogue and have it jump out at you as the viewer. Since ADR requires extra recording time with the actor, this drives up cost on small films. One workaround in some of these situations is for the production team to recapture the lines separately – immediately after the scene was shot – if the schedule permits. These lines would be recorded wild and may or may not be in sync. The intent is to get the right sonic environment and emotion while you are still there on site. Since these situations are often fast-paced action scenes, sync might not have to be perfect. If close enough, the sound editors can edit the lines into place with an acceptable level of sync so that viewers won’t notice any issues. When it works, it saves ADR time down the road and sounds more realistic.


©2015 Oliver Peters

Filmmaking Pointers

If you want to be a good indie filmmaker, you have to understand some of the basic principles of telling interesting visual stories and driving the audience’s emotions. These six ideas transcend individual components of filmmaking, like cinematography or editing. Rather, they are concepts that every budding director should understand and weave into the entire structure of how a film is approached.

1. Get into the story quickly. Films are not books and don’t always need a lengthy backstory to establish characters and plot. Films are a journey and it’s best to get the characters on that road as soon as possible. Most scripts are structured as three-act plays, so with a typical 90-100 minute running time, you should be through act one at roughly one third of the way into the film. If not, you’ll lose the interest of the audience. If you are 20 minutes into the film and you are still establishing the history of the characters without having advanced the story, then look for places to start cutting.

Sometimes this isn’t easy to tell and an extended start may indeed work well, because it does advance the story. One example is There Will Be Blood. The first reel is a tour de force of editing, in which editor Dylan Tichenor builds a largely dialogue-free montage that quickly takes the audience through the first part of Daniel Plainview’s (Daniel Day-Lewis) history in order to bring the audience up to the film’s present day. It’s absolutely instrumental to the rest of the film.

2. Parallel story lines. A parallel story structure is a great device to show the audience what’s happening to different characters at different locations, but at more or less the same time. With most scripts, parallel actions are designed to eventually converge as related or often unrelated characters ultimately end up in the same place for a shared plot. An interesting take on this is Cloud Atlas, in which an ensemble cast plays different characters spread across six different eras and locations – past, present and future.

The editing style pulled off by Alexander Berner is quite different from traditional parallel story editing. A set of characters might start a scene in one era. Halfway through the scene – through some type of abrupt cut, such as walking through a door – the characters, location, and era shift to somewhere else. However, the story and the editing are such that you clearly understand how the story continues for the first half of that scene, as well as how it led into the second half. This is all without explicitly shooting those parts of each scene. Scene A/era A informs your understanding of scene B/era B and vice versa.

3. Understand camera movement. When a camera zooms, moves or is used in a shaky, handheld manner, this elicits certain emotions from the audience. As a director or DP, you need to understand when each style is appropriate and when it can be overdone. Zooming into a close-up while an actor delivers a line should be done intentionally. It tells the audience, “Listen up. This is important.” If you shoot handheld footage, like most of the Bourne series, it drives a level of documentary-style, frenetic action that should be in keeping with the concept.

The TV series NYPD Blue is credited with introducing TV audiences to the “shaky-cam” style of camera work. Many pros thought it was overdone, with movement often being introduced in an unmotivated fashion. Yet, the original Law & Order series also made extensive use of handheld photography. As this was more in keeping with a subtle documentary style, few complained about its use on that show.

4. Color palettes and art direction. Many new filmmakers often feel that you can get any look you want through color grading. The reality is that it all starts with art direction. Grading should enhance what’s there, not manufacture something that isn’t. To get that “orange & teal” look, you need to have a set and wardrobe that has some greens and blues in it. To get a warm, earthy look, you need a set and wardrobe with browns and reds.

This even extends to black & white films. To get the right contrast and tonal values in black & white, you often have to use set/wardrobe color choices that are not ideal in a color world. That’s because different colors carry differing luminance and midrange values, which becomes very obvious once you eliminate the color information from the picture. Make sure you take that into account if you plan to produce a black & white film.

5. Score versus sound design. Music should enhance and underscore a film, but it does not have to be wall-to-wall. Some films, like American Hustle and The Wolf of Wall Street, are driven by a score of popular tunes. Others are composed with an original score. However, often the “score” consists of sound design elements and simple musical drones designed to heighten tension and otherwise manipulate emotion. The absence of score in a scene can achieve the same effect. Sound effects elements with stark simplicity may have more impact on the audience than music. Learn when to use one or the other or both. Often less is more.

6. Don’t tell too much story. Not every film requires extensive exposition. As I said at the top, a film is not a book. Visual cues are as important as the spoken word and will often tell the audience a lot more in shorthand than pages and pages of script. The audience is interested in the journey your film’s characters are on and frequently needs very little backstory to get an understanding of the characters. Don’t shy away from shooting enough of that sort of detail, but also don’t be afraid to cut it out when it becomes superfluous.

©2014 Oliver Peters

Film editing stages – Sound

Like picture editing, the completion of sound for a film also goes through a series of component parts. These normally start after “picture lock” and are performed by a team of sound editors and mixers. On small, indie films, a single sound designer/editor/mixer might cover all of these roles. On larger films, specific tasks are covered by different individuals. Depending on whether it’s one individual or a team, sound post can take anywhere from four weeks to several months to complete.

Location mixing – During original production, the recording of live sound is handled by the location mixer. This is considered mixing, because originally, multiple mics were mixed “on-the-fly” to a single mono or stereo recording device. In modern films with digital location recordings, the mixer tends to record what is really only a mixed reference track for the editors, while simultaneously recording separate tracks of each isolated microphone to be used in the actual post production mix.

ADR – automatic dialogue replacement or “looping”. ADR is the recording of replacement dialogue in sync with the picture. The actors do this while watching their performance on screen. Sometimes this is done during production and sometimes during post. ADR will be used when location audio has technical flaws. Sometimes ADR is also used to record additional dialogue – for instance, when an actor has his or her back turned. ADR can also be used to record “sanitized” dialogue to remove profanity.

Walla or “group loop” – Additional audio is recorded for groups of people. This is usually for background sounds, like guests in a restaurant. The term “walla” comes from the fact that actors were (and often still are) instructed to say “walla, walla, walla” instead of real dialogue. The point is to create a sound effect of a crowd murmuring, without any recognizable dialogue line being heard. You don’t want anything distinctive to stand out above the murmur, other than the lead actors’ dialogue lines.

Dialogue editing – When the film editor (i.e. the picture editor) hands over the locked cut to the sound editors, it generally will include all properly edited dialogue for the scenes. However, this is not prepared for mixing. The dialogue editor will take this cut and break out all individual mic tracks. They will make sure all director’s cues are removed and they will often add room tone and ambience to smooth out the recording. In addition, specific actor mics will be grouped to common tracks so that it is easier to mix and apply specific processing, as needed, for any given character.

Sound effects editing/sound design – Sound effects for a film come from a variety of sources, including live recordings, sound effects libraries and sound synthesizers. Putting this all together is the role of the sound effects editor(s). Because many have elevated the art, by creating very specific senses of place, the term “sound designer” has come into vogue. For example, the villain’s lair might always feature certain sounds that are identifiable with that character – e.g. dripping water, rats squeaking, a distant clock chiming, etc. These become thematic, just like a character’s musical theme. The sound effects editors are the ones that record, find and place such sound effects.

Foley – Foley is the art of live sound effects recording. This is often done by a two-person team consisting of a recordist and a Foley walker, who is the artist physically performing these sounds. It literally IS a performance, because the walker does this in sync to the picture. Examples of Foley include footsteps, clothes rustling, punches in a fight scene and so on. It is usually faster and more appropriate-sounding to record live sound effects than to use library cues from a CD.

In addition to standard sound effects, additional Foley is recorded for international mixes. When an actor delivers a dialogue line over a sound recorded as part of a scene – a door closing or a cup being set on a table – that sound will naturally be removed when English dialogue is replaced by foreign dialogue in international versions of the film. Therefore, additional sound effects are recorded to fill in these gaps. Having a proper international mix (often called “fully filled”) is usually a deliverable requirement by any distributor.

Music – In an ideal film scenario, a composer creates all the music for a film. He or she is working in parallel with the sound and dialogue editors. Music is usually divided between source cues (e.g. the background songs playing from a jukebox at a bar) and musical score.

Recorded songs may also be used as score elements during montages. Sometimes different musicians, other than the composer, will create songs for source cues or for use in the score. Alternatively, the producers may license affordable recordings from unsigned artists. Rarely is recognizable popular music used, unless the production has a huge budget. It is important that the producers, composer and sound editors communicate with each other, to define whether items like songs are to be treated as a musical element or as a background sound effect.

The best situation is when an experienced film composer delivers all completed music that is timed and synced to picture. The composer may deliver the score in submixed, musical stems (rhythm instruments separated from lead instruments, for instance) for greater control in the mix. However, sometimes it isn’t possible for the composer to provide a finished, ready-to-mix score. In that case, a music editor may get involved, in order to edit and position music to picture as if it were the score.

Laugh tracks – This is usually a part of sitcom TV production and not feature films. When laugh tracks are added, the laughs are usually placed by sound effects editors who specialize in adding laughs. The appropriate laugh tracks are kept separate so they can be added or removed in the final mix and/or as part of any deliverables.

Re-recording mix – Since location recording is called location mixing, the final, post production mix is called a re-recording mix. This is the point at which divergent sound elements – dialogue, ADR, sound effects, Foley and music – all meet and are mixed in sync to the final picture. On a large film, these various elements can easily take up 150 or more tracks and require two or three mixers to man the console. With the introduction of automated systems and the ability to completely mix “in the box”, using a DAW like Pro Tools, smaller films may be mixed by one or two mixers. Typically the lead mixer handles the dialogue tracks and the second and third mixers control sound effects and music. Mixing most feature films takes one to two weeks, plus the time to output various deliverable versions (stereo, surround, international, etc.).

The deliverable requirements for most TV shows and features are to create a so-called composite mix (in several variations), along with separate stems for dialogue, sound effects and music. A stem is a submix of just a group of component items, such as a stereo stem for only dialogue. The combination of the stems should equal the mix. By having stems available, the distributors can easily create foreign versions and trailers.
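The stem math is easy to verify: the summed dialogue/music/effects stems must null against the composite, and dropping the dialogue stem yields the music-and-effects bed used for foreign versions. A hypothetical numpy sketch (the “audio” here is just random samples standing in for real tracks):

```python
import numpy as np

samples = 48_000  # one second of a 48 kHz mix
rng = np.random.default_rng(2)

# hypothetical stems: dialogue, music and effects submixes
dialogue = rng.normal(scale=0.3, size=samples)
music    = rng.normal(scale=0.2, size=samples)
effects  = rng.normal(scale=0.1, size=samples)

composite = dialogue + music + effects

# the deliverable check: summed stems should null against the composite
residual = composite - (dialogue + music + effects)
print(np.abs(residual).max())  # 0.0 – the stems reassemble the mix exactly

# a foreign version replaces only the dialogue stem, so the distributor
# starts from the "M&E" (music & effects) bed
mne = composite - dialogue
```

This null test is exactly what a mixer means by “the stems equal the mix”: play the composite against the inverted sum of the stems and you should hear silence.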

©2013 Oliver Peters

Film editing stages – Picture


While budding filmmakers have a good idea of what happens during the production phase of shooting a film, most have little idea about what happens in post. Both picture and sound go through lengthy and separate editorial processes. These often become a rude awakening for new directors when it pertains to the time and budget requirements. These are the basic steps every modern film goes through in getting to the finish line.

First cut – This stage goes by many names – first cut, first assembly, editor’s cut, etc. In general, this is the first version of the fully-assembled film, including all the scenes edited according to the script. Depending on the editor and the post schedule, this cut may be very rough – or it might be a reasonably polished edit. If the editing happens concurrent to the studio and location filming, then often there will be a “first assembly” and a subsequent “editor’s cut”. The former is a quick turnaround version, so that everyone can make sure the coverage is adequate. The latter is a more refined version.

Some productions employ an on-set editor who is the person generating this “first assembly”. That editor is then often replaced by the main film editor, who starts after all production is completed. In that situation, the “editor’s cut” might be completely different in style, pace and technique from the first version. No matter how you get there, the intent of this step is to properly represent the intention of the script without concern for length or solving any content or script challenges.

Director’s cut – Once the editor has completed the first cut of the film, then the director steps in. He or she works with the editor to complete the cut of the film. Directors often deviate from the written scene. Sometimes this is sufficiently communicated to the editor to show up that way in the first cut. Sometimes it isn’t, because it lives in the director’s mind as the production proceeds. During the “director’s cut” phase, the director and editor work closely to adjust the cut to reflect the director’s vision.

Many directors and editors repeatedly work together on films and form a partnership of sorts. In these situations, the editor has a good idea of what the director wants, and often the director only needs to give notes and review the cut periodically. Other directors like to be very “hands on” and will work closely with the editor, reviewing every take and making adjustments as needed.

Depending on the film and whether or not the director is DGA (Directors Guild), this stage will take a minimum of 20 days (DGA low budget) or 10 weeks (DGA standard) or longer. The goal is for the director and editor to come up with the best film possible, without interference from outside parties, including the producers. At this point, the film may go through severe changes, including shortening, losing and/or re-arranging scenes and even the addition of new content, like insert shots and new voice-over recordings.

Producer’s cut – After the director has a shot at the film, now it’s time to make adjustments according to studio notes, producer comments and feedback from official and unofficial test screenings. If the director hasn’t yet brought the film into line – both story-wise and length-wise – now is the time to do that. Typically most indie films are targeted at the 90-100 minute range. If your first cut or director’s cut is 120 minutes or longer, then it’s going to have to be cut down by a significant amount.

Typically you can shorten a film by 10% through trimming and shortening scenes. A reduction of 25% or more means that shots and whole scenes have to go. This can be a painful experience for the director, who has suffered through the agony, time and expense of getting these scenes and shots recorded. The editor, on the other hand, has no such emotional investment and can be more objective. Whichever way the process moves forward, the point is to get the cut to its final form.

Depending on the production, this version of the film might also include temporary sound effects, music and visual effects that have been added by the editor and/or assistants. Often this is needed to fully appreciate the film when showing it in test screenings.

Locked picture – The goal of these various editing steps is to explore all creative options in order to end up with a film that will not go through any further editing changes. This means, no revisions that change time or selected shots. The reason for a “locked picture” is so that the sound editing team and the visual effects designers can proceed with their work without the fear that changes will undo some of their efforts. Although large budget films have the luxury of making editorial changes after this point, it is unrealistic for smaller indie films. “Locking the cut” is absolutely essential if you want to get the best effort out of the post team, as well as stay within your budget.

Visual effects – If your film requires any visual effects shots, these are best tackled after picture lock. The editors will hand off the required source elements to the visual effects company or designers so they can do their thing. Editors are typically not involved in visual effects creation, other than to communicate the intent of any temp effects that have been created and to make sure the completed VFX shots integrate properly back into the picture.

Sound editorial – This will be covered in depth in the next blog post. It has its own set of steps and usually takes several weeks to several months to complete.

Conform and grade – Prior to this step, all editing has been performed with “proxy” media. During the “finishing” stage of the film, the original camera media is “conformed” to the locked cut that was handed over from the film editor. This conform step is typically run by an online editor who works in tandem with the colorist. Sometimes this is performed by the colorist and not a separate individual. On very low budget films, the film editor, online editor and colorist might all be the same person. During conforming, the objective is to frame-accurately re-create the edit, including all reframing, speed ramps and to integrate all final visual effects shots. From this point the film goes to color correction for final grading. Here the colorist matches all shots to establish visual consistency, as well as to add any subjective looks requested by the director or director of photography. The last process is to marry the sound mix back to the picture and then generate the various deliverable masters.

©2013 Oliver Peters

Anatomy of editing a two camera scene


With the increase in shooting ratios and shortened production schedules, many directors turn to shooting their project with two cameras for the entire time. Since REDs and Canon HDSLRs are bountiful and reasonably priced to buy or rent, even a low budget indie film can take advantage of this. Let me say from the beginning that I’m not a big fan of shooting with two cameras. Too many directors view it as a way to get through their shooting schedule more quickly; but, in fact, they often shoot more footage than needed. Often the B-camera coverage is only 25% useful, because it was not properly blocked or lit. However, there are situations where shooting with two cameras works out quite well. The technique is at its most useful when shooting a dramatic dialogue scene with two or three principal actors.

Synchronization

The most critical aspect is maintaining proper sync with audio and between the two cameras. In an ideal world, this is achieved with matching timecode among the cameras and the external sound recorder. Reality often throws a curve ball: timecode drifts throughout the day, the cameras weren’t properly jam-synced, or some other issue crops up. The bottom line is that by the time it gets to the editor, you often cannot rely on timecode for all elements to be in sync. That’s why “old school” techniques like a slate with a clapstick are ESSENTIAL. This means roll all three devices and slate both cameras. If you have to stand in front of the B-camera for a separate B-camera slate and clap, then you MUST do it.

When this gets to post, the editor or assistant first needs to sync audio and video for both the A-camera and B-camera for every take. If your external sound recorder saved broadcast WAV files, then usually you’ll have one track with the main mix and additional tracks for each isolated microphone used on set. Ideally, the location mixer will have also fed reference audio to both cameras. This means you now have three ways to sync – timecode, slate/clapstick and/or common audio. If the timecode does match, most NLEs have a syncing function to create merged clips with the combined camera file and external audio recording. FCP X can also sync by matching audio waveforms (if reference audio is present on the camera files). For external syncing, there’s Sync-N-Link and Sync-N-Link X (matching timecode) and PluralEyes (matching audio).

These are all great shortcuts, but there are times when none of the automatic solutions work. That’s when the assistant or editor has to manually mark the visual clap on the camera files and audio spike of the clap on the sound file and sync the two elements based on these references. FCP X adds an additional feature, which is the ability to open a master clip in its own timeline (“open in timeline” command). You can then edit directly “inside” the master clip. This is useful with external audio, because you have now embedded the external audio tracks inside the master clip for that camera file and they travel together from then on. This has an advantage over FCP X’s usual synchronized clip method, in that it retains the camera source’s timecode. Synchronized clips reset the timecode of that clip to 00:00:00:00.

Directing strategy

In standard film productions, a scene will be shot multiple times – first a “master” and then various alternate angles; sometimes alternative line readings; as well as pick-ups for part of a scene or cutaways and inserts showing items around the set. The “master” of the scene gets a scene number designation, such as Scene 101, Take 1, Take 2, etc. Whenever the camera is reframed or repositioned – or alternative dialogue is introduced – those recordings get a letter suffix, such as 101A, 101B and so on. With two cameras, there’s also the A and B camera designation, which is usually part of the actual camera file name or embedded metadata.

In blocking a simple dialogue scene with two actors, the director would set up the master with a wide shot for the entire scene on the A-camera and maybe a medium on the lead actor within that scene on the B-camera. The B-cam may be positioned next to A-cam or on the opposite side (without crossing the line). That’s Scene 101 and typically, two or three takes will be recorded.

Next, the director will set up two opposing OTS (over the shoulder) angles of the two speaking actors for 101A. After that, opposing CU (close-up) angles for 101B. Often there’s a fourth set-up (101C) for additional items. For example, if the scene takes place in a bar, there may be extra coverage that sets up the environment, such as patrons at the bar in the background of the scene. In this example with four set-ups (101-101C) – assuming the director rolled three takes on each set-up – two-camera coverage automatically gives you 24 clips to choose from when editing this scene.
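The arithmetic above is easy to verify by enumerating the coverage. This short Python sketch builds slate-style designations for every clip (the name formatting is my own shorthand, not an industry standard):

```python
from itertools import product

scene, num_setups, takes, cameras = 101, 4, 3, ("A", "B")

def designation(scene, setup_index):
    # The master set-up has no letter; each reframe gets A, B, C...
    return str(scene) if setup_index == 0 else f"{scene}{chr(64 + setup_index)}"

clips = [f"{designation(scene, s)} Take {t} ({c}-cam)"
         for s, t, c in product(range(num_setups), range(1, takes + 1), cameras)]

print(len(clips))                  # 4 set-ups x 3 takes x 2 cameras = 24
print(clips[0], "...", clips[-1])  # 101 Take 1 (A-cam) ... 101C Take 3 (B-cam)
```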

Editing strategy

When you mention two-camera coverage, many will think of multi-cam editing routines. I never use those for this purpose, because for me, an A-cam or B-cam angle of the same take still functions as a uniquely separate take. However, I do find that the first pass at the scene works best when the A-cam and B-cam are grouped together. Although a director might pick a certain take as his best or “circle” take, I assume that all takes have some value for individual lines. I might start with the circle take of the scene’s master, but I usually end up editing in bits and pieces of other takes, as well. The following method works best when the actors stick largely to the script, with minimal ad libs and improvisation.

Step one is to edit the A-cam circle take of the scene master to the timeline, complete with slate. Next, edit the matching B-cam clip on top, using the slate’s clap to match the two angles. (Timecode also works, of course, if A-cam and B-cam have matching timecode.) The exact way I do this varies with the NLE that I am using. In FCP X, the B-cam clip is a connected clip, while in FCP 7, Media Composer and Premiere Pro, the B-cam is on V2 and the accompanying audio is on the tracks below those from the A-cam clip. The point is to have both angles stacked and in sync. Lastly, I’ll resize the B-cam clip so I see it as a PIP (picture-in-picture effect) over the A-cam image. Now, I can play through this scene and see what each camera angle of the master offers.

Step two is to do the first editing pass on the scene. I use the blade tool (or add edit) to cut across all tracks/layers/clips at each edit point. Obviously, I’ll add a cut at the start of the action so I can remove the slate and run-up to the actual start of the scene. As I play through, I am making edit selections, as if I were switching cameras. The audio is edited as well – often in the middle of a line or even a word. This is fine. Once these edits are done, I will delete the front and back of these takes. Then I will select all of the upper B-cam shots (plus audio) that I don’t want to use and delete those. Finally, I remove the transform effects to restore the remaining B-cam clips to full screen.

At this stage I will usually move the B-cam clips down to the main track. In FCP X, I use the “overwrite to primary storyline” command to edit the B-cam clips (with audio) onto the storyline, thus replacing the A-cam clip segments that were there. This will cause the embedded external audio from the overwritten A-cam clip segments to be pushed down as connected clips. Highlight and delete it. In a track-based NLE, I may leave the B-cam clips on V2 or overwrite to V1. I’ll also highlight and delete/lift the unwanted, duplicate A-cam audio. In all cases, what you want to end up with is a scene edit that checkerboards the A-cam and B-cam clips – audio and video.
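For those who like to think of the cut abstractly, the blade-and-delete pass amounts to converting a set of cut points plus a camera choice per segment into a checkerboarded segment list. A minimal sketch (times in seconds; the function is purely illustrative, not any NLE’s API):

```python
def checkerboard(cut_points, choices, scene_length):
    """Turn blade cuts and per-segment camera choices into a
    checkerboarded list of (camera, in, out) segments."""
    bounds = [0] + list(cut_points) + [scene_length]
    return [(cam, bounds[i], bounds[i + 1]) for i, cam in enumerate(choices)]

# Cuts at 4 s and 9 s in a 15 s scene, switching A -> B -> A:
print(checkerboard([4, 9], ["A", "B", "A"], 15))
# [('A', 0, 4), ('B', 4, 9), ('A', 9, 15)]
```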

Step three is to find other coverage. So far in this example, I’ve only used the circle take for the master of the scene. As I play the scene, I will want to replace certain line readings with better takes from the other coverage (e.g. 101A, B, C), including OTS and CU shots. One reason is to use the best acting performance. Another is to balance the emotion of the scene and create the best arc. Typically in a dramatic scene, the emotion rises as you get later into the scene. To emphasize this visually, I want to use tighter shots as I get further into the scene – focusing mainly on eyes and facial expressions.

I work through the scene, seeking to replace some of the master A-cam or B-cam clip segments. I will mark the timeline section to delete/extract, find a better version of that line in another angle/take, and insert/splice it into that position. FCP X has a replace function designed for this, but I find it slow and inconsistent. A keystroke combo of marking the clip and timeline, then pressing delete followed by insert, is significantly faster. Regardless of the specific keystrokes used, the point is to build and control the emotion of the scene in ways that improve the drama and combine the best performances of each actor.

Step four is to tighten the scene. At this point, you are primarily working in the trim mode of your NLE. With FCP X, expand the audio/video so you can independently trim both elements for J-cuts and L-cuts. As you begin, you’ll have some sloppy edits. Words may be slightly clipped or the cadence of speech doesn’t sound right. You now have to fix this by trimming clips and adjusting audio and video edit points. FCP X is especially productive here, because the waveform display makes it easy to see where the same words from adjacent clips align.

You want the scene to flow – dramatically and technically. How natural-sounding is the delivered dialogue as a result of your edit choices? You should also be mindful of continuity, such as actors’ eye lines, body positions and actions. Often actors will add dramatic pauses, long stares and verbal stumbles to the performance. This may be valid dramatic emphasis; but it can also be over-acting, or even the actor’s equivalent of saying “um” while trying to remember the next line. Your job as an editor (in support of the director) is to decide which it is and edit the scene so that it comes across in the best way possible. You can cut out this “air” by trimming the edits and/or by using tricks. For example, slip part of a line out of sync and play it under an OTS or reaction shot to tighten a pause.

Step five is to embellish the scene. This is where you add non-sync reactions, inserts and cutaways. The goal is to enhance the scene by giving it a sense of place, covering mismatched continuity and improving the drama. Your elements are the extra coverage around the set (like our patrons at the bar) or an actor’s nod, smile, head turn or grimace in response to the dialogue delivered by their acting counterpart. The aim is to maintain the viewer’s emotional involvement and help tell the story through visuals other than simply seeing a person talk. You want the viewer to follow both sides of the conversation through dialogue and visual cues.

While I have written this post with film and television drama in mind, the same techniques apply, whether it’s a comedy, documentary or simple corporate interview. It’s a strategy for getting the most efficiency out of your edit.

Click here for more film editing tips.

©2013 Oliver Peters