Final Cut Pro X Batch Export

One of the “legacy” items that editors miss when switching to Final Cut Pro X is the batch export function. For instance, you might want to encode H.264 versions of numerous ProRes files from your production, in order to upload raw footage for client review. While FCP X can’t do it directly, there is a simple workaround that will give you the same results. It just takes a few steps.

Step one. The first thing to do is to find the clips that you want to batch export. In my example images, I selected all the bread shots from a grocery store commercial. These have been grouped into a keyword collection called “bread”. Next, I have to edit these clips into a new sequence (FCP X project) in order to export. They can be in any order and should include the full clips. Once the clips are in the project, export an FCPXML from that project.

Step two. I’m going to use the free application ClipExporter to work the magic. Launch it and open the FCPXML for the sequence of bread shots. ClipExporter can be used for a number of different tasks, like creating After Effects scripts, but in this case we are using it to create QuickTime movies. Make sure that all of the other icons are not lit. If you toggle the Q icon (QuickTime) once, you will generate new self-contained files, but these might not be in the format you want. If you toggle the Q twice, the icon displays as QR, which means you are now ready to export QuickTime reference files – also something useful from the past. ClipExporter will generate a new QuickTime file (self-contained or reference) for each clip in the FCP X project. These will be copied into the target folder location that you designate.

Step three. ClipExporter places each new QuickTime clip into its own subfolder, which is a bit cumbersome. Here’s a neat trick that will help. Use the Finder window’s search bar to locate all files that end with the .mov extension. Make sure you limit the search to your target folder and not the entire hard drive. Once the clips have been selected, copy-and-paste them to a new location or drag them directly into your encoding application. If you created reference files, copying them will go quickly and not take up additional hard drive space.
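If you prefer to script this gathering step instead of using a Finder search, here is a minimal sketch. The folder locations are hypothetical – substitute your own ClipExporter output folder and destination.

```python
from pathlib import Path
import shutil

# Hypothetical paths -- adjust to your own export and destination folders.
export_root = Path("/Volumes/Media/ClipExporter_Output")
destination = Path("/Volumes/Media/ForCompressor")
destination.mkdir(parents=True, exist_ok=True)

# Gather every QuickTime movie that ClipExporter wrote into its subfolders.
movs = sorted(export_root.rglob("*.mov"))

for mov in movs:
    # Copy each movie into a single flat folder for easy drag-and-drop encoding.
    shutil.copy2(mov, destination / mov.name)

print(f"Copied {len(movs)} clips to {destination}")
```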

Step four. Drop your selected clips into Compressor or whatever other encoding application you choose. (It will need to be able to read QuickTime reference movies.) Apply your settings and target destination and encode.

Step five. Since many encoding presets append a suffix to the file name, you may want to alter or remove this on the newly encoded files. I use Better Rename to do this. It’s a batch utility for file name manipulation.

There you go – five easy steps (fewer if you skip some of the optional tasks) to restore batch exports to FCP X.

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached – when their clients will demand 4K masters. 4K acquisition has been with us for a while and has generally proven to be useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite different from working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is only theoretically 2x better. That’s because resolution is measured based on the vertical dimension and is a function of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.
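A quick bit of arithmetic shows where the two figures come from:

```python
# Pixel count versus measurable resolution, UHD compared to 1080p HD.
uhd_w, uhd_h = 3840, 2160
hd_w, hd_h = 1920, 1080

area_ratio = (uhd_w * uhd_h) / (hd_w * hd_h)   # 4.0 -- four times the pixels
linear_ratio = uhd_h / hd_h                    # 2.0 -- but only twice the vertical line count
print(area_ratio, linear_ratio)
```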

That brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs, like Adobe Premiere Pro CC and Apple Final Cut Pro X, make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras works with highly compressed codecs and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding its own color interpretation. It also keeps the quality highest by avoiding further decompression/recompression cycles, as well as variations among the debayering methods used.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted it to the following formats with these results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.
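A rough sanity check of the image-sequence totals (assuming the clip runs about 3.2 minutes) lines up with the measured figures:

```python
# Back-of-the-envelope check of the DPX total quoted above.
frames = 3.2 * 60 * 23.976        # roughly 4,600 frames in the clip
dpx_frame_mb = 38                 # measured size of one 10-bit 4K DPX frame
total_gb = frames * dpx_frame_mb / 1024
print(f"{frames:.0f} frames -> about {total_gb:.0f} GB")   # ~171 GB, close to the measured 173GB
```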

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so if we use the UHD ProRes HQ example above, 30x that 3 min. clip would give us the count for a typical feature. That figure comes out to 480GB.
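Sketched out, that projection looks like this:

```python
# Projecting the ~16GB / 3 min. UHD ProRes HQ clip out to feature length.
clip_minutes, clip_gb = 3, 16
feature_minutes = 90
feature_gb = (feature_minutes / clip_minutes) * clip_gb
print(feature_gb)    # 480 GB, versus roughly 110-120GB for the same feature mastered in 1080p
```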

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in the camera original raw form. If this same media were converted to ProRes 4444, never mind uncompressed, your storage requirements just increased by an additional 16-38TB. Mind you, this is all 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x the storage demand of 24p.

The other element is system performance. Compressed codecs work when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near-ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial approach, since creative editorial generally requires multiple streams of simultaneous media.

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.
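To put that last point in perspective, the quoted record time implies a data rate along these lines (rough figures, derived only from the numbers above):

```python
# Approximate Canon 4K raw data rate implied by the Odyssey 7Q example above.
ssd_gb = 2 * 512        # two 512GB SSDs
record_minutes = 62     # quoted record time at 24fps
gb_per_minute = ssd_gb / record_minutes
mb_per_second = gb_per_minute * 1024 / 60
print(f"~{gb_per_minute:.1f} GB per minute, or ~{mb_per_second:.0f} MB/s of sustained transfer")
```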

Naturally there are ways to make 4K post efficient and not as painful as it might otherwise be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, like DV and even HD have been. That’s why you still see Autodesk Smoke, Quantel Pablo Rio and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUTs). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which, in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage either with a straight Rec709 conversion or with a custom look applied on top. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow is from the log-C image, to the Rec709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab, using the CDL color correction tools. Therefore it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the footage is sufficiently flat.

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both the QuickTime and DPX files among the clips I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F3 (S-log) and Canon 1DC (4K Canon-log). Remember that the software is designed to correct flat, log-C images, so you probably don’t want to use it with images that were already encoded with vibrant Rec709 colors.
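For reference, the standard ASC CDL applies those three parameters per color channel as out = (in × slope + offset) ^ power, with an overall saturation adjustment handled as a separate step afterward.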

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT in a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply the LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C and FCP X automatically corrects these to Rec709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
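If you are curious what is inside one of these .cube files, it is just a text file: a size declaration followed by rows of output RGB triples. Here’s a minimal sketch that writes an identity (no-op) 3D LUT, assuming the usual .cube convention of the red index varying fastest:

```python
# Write an identity 3D LUT in the .cube text format (a common size is 33).
size = 33
rows = [f"LUT_3D_SIZE {size}"]
for b in range(size):
    for g in range(size):
        for r in range(size):               # red varies fastest in the .cube convention
            rows.append(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}")

with open("identity.cube", "w") as f:
    f.write("\n".join(rows) + "\n")
```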

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.

©2014 Oliver Peters

Using FCP X with Adobe CC

While the “battle” rages on between the proponents of using either Apple Final Cut Pro X or Adobe Premiere Pro CC as the main edit axe, there is less disagreement about the other Adobe applications. Certainly many users like Motion, Aperture and Logic, but it’s pretty clear that most editors favor Adobe solutions over the others. I have encountered very few power users of Motion, as compared with After Effects wizards – nor have I met many graphic designers who can get by without touching Illustrator or Photoshop. This post isn’t intended to change anyone’s opinion, but rather to offer a few pointers on how to productively use some of the Adobe Creative Cloud (or CS6) applications to complement your FCP X workflows. (Click images below for an expanded view.)

Photoshop

For many editors, Adobe Photoshop is the title tool of choice. FCP X has some nice text tools, but Photoshop is significantly better – especially for logo creation. When you import a layered Photoshop file into FCP X, it comes in as a special layered graphics file. Layers can be adjusted, animated or disabled when you “open in timeline”. Photoshop layer effects, like a drop shadow, glow or emboss, do not show up correctly inside FCP X. If you drop the imported Photoshop file onto the timeline, it becomes a self-contained title clip. Although you cannot “open in editor” to modify the file, there is a workaround.

To re-edit the Photoshop file in Adobe Photoshop, select the clip in FCP X and “reveal in Finder”. From the Finder window, open the file in Photoshop. Now you can make any changes you like. Once saved, the changes are updated in FCP X. There is one caveat that I’ve noticed. All changes that you make have to be made within the existing layers. New, additional layers do not update back inside FCP X. However, if you create layer effects and then merge that layer to bake in the effects, the update is successful in FCP X and the effects become visible.

This process is very imperfect because of FCP X’s interpretation of the Photoshop files. For example, layers that align correctly in Photoshop may be misaligned in FCP X. All layers must have some content. You cannot create blank layers and later add content to them. If you do, the updates will not be recognized in FCP X.

Audition

Sound mixing is still a weak link in Final Cut Pro X. All mixing is clip-based; there is no proper mixing pane, like most other NLEs have. There are methods (X2Pro Audio Convert) to send the timeline audio to Pro Tools, but many editors don’t use Pro Tools. Likewise, sending an FCPXML to Logic X works better than before, but why buy an extra application if you already own Adobe Audition? I tested a few options, like using X2Pro to get an AAF into Premiere Pro and then into Audition, but none of this worked. What does work is using XML.

First, duplicate the sequence and work from the copy for safety. Review your edited sequence in FCP X and detach/delete any unused audio elements, such as muted audio associated with connected clips that are used as video-only B-roll. Next, break apart any compound clips. I recommend detaching the desired audio, but that’s optional. Now export an FCPXML for that sequence. Open the FCPXML in the Xto7 application and save the audio tracks as a new XML file.

Launch Audition and import the new XML file. This will populate your multitrack mixing window with the sequence and clips. At this stage, all clips that were inside FCP X Libraries will be offline. Select these clips and use the “link media” command. The good news is that the dialogue window will allow you to see inside the Library file and let you navigate to the correct file. Unfortunately, the correct name match will not be bolded. Since these files are typically date/time-stamped, make sure to read the names carefully when you select the first clip. The rest will relink automatically. Note that level changes and fades that were made in FCP X do not come across into Audition.

Now you can mix the session. When done, export a stereo (or other) mixed master file. Import that into FCP X and attach as a connected clip to the head of your sequence. Make sure to delete, disable (make “invisible”) or mute all previous audio.

After Effects

For many editors, Adobe After Effects is the finishing tool of choice – not just for graphics and effects, but also color correction and other embellishments. Thanks to the free ClipExporter application, it’s easy to go from FCP X to After Effects.

Similar to the Audition step, I recommend detaching/deleting all audio. Some folks like to have audio inside After Effects, but most of the time it’s in the way for me. Break apart all compound clips. You might as well remove any FCP X titles and effects filters/transitions, since these don’t translate into After Effects. Lastly, I recommend selecting all connected clips and using the “overwrite to storyline” command. This will place everything onto the primary storyline and result in a straightforward cascade of layers once inside After Effects.

Export an FCPXML file for the sequence. Open ClipExporter and select the AE conversion tab. Import the FCPXML file. An important feature is that ClipExporter supports FCP X’s retiming function, but only for AE exports. Now run ClipExporter and save the resultant After Effects script file.

Launch Adobe After Effects and, from the File/Scripts pulldown menu, select the saved script file created by ClipExporter. The script will run and load the clips and your sequence as a new composition. Each individual shot is stashed into its own mini-composition and these are then placed into a stack of layers for the timeline of the main AE composition. Should you need to trim/slip the media for a shot, all available media can be accessed and adjusted within the shot’s individual mini-comp. If a shot has been retimed in FCP X, those adjustments also appear in the mini-comp and not in the main composition.

Build your effects and render a flattened file with everything baked in. Import that file into FCP X and add it as a connected clip to the top of your sequence. Disable all other video clips.

©2014 Oliver Peters

NLE Tips – Week 4

Apple FCP X and Lined Scripts

Feature film editing is facilitated by the information coming from the script supervisor’s notes and adjusted script. This is frequently called a “lined script” because the supervisor will draw vertical lines with notations that indicate which angles and takes cover specific sections of every scene. In addition, editors developed another notation of horizontal lines that separate the dialogue. This was the basis of the original Ediflex Script Mimic process that eventually found its way into Avid as Script Integration and Script Sync. (Click on any image for an expanded view.)

There are a couple of simple ways to adapt this concept to Apple Final Cut Pro X. A few methods have been proposed, but the easiest and fastest method for me is to use markers. The first step is to take the printed script with the script supervisor’s notations and add the horizontal line notation that splits up the dialogue.

Start at line 1 on page one and you’ll eventually end up with 1,000 or more at the end of the last page. Other numbering conventions are fine. Ideally this could be added to the script by the supervisor before the start of the production. If not, you or the assistant editor (if you are lucky enough to have one) will need to do this. You can add as many lines as you want to, depending on how granular you want the division of the dialogue to be. This could be with every carriage return of the printed script or it could be just between every paragraph.
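If you have a plain-text copy of the script, the running numbers can even be generated automatically. A minimal sketch, assuming a hypothetical file called script.txt and one number per non-blank line:

```python
# Print the script with running line numbers for the horizontal "lining" notation.
with open("script.txt") as f:
    number = 0
    for line in f:
        text = line.rstrip()
        if text:                 # number every non-blank line
            number += 1
            print(f"{number:4d}  {text}")
        else:
            print()              # keep blank lines for readability
```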

When the camera files are imported and logged into the FCP X event, you’ll need to add scene and take information. This can be done by renaming the clip, by entering the information into the scene/take columns (which I prefer) or both. As each clip is reviewed, add markers at every point within the clip that matches the position of the horizontal divisions made to the script dialogue. Rename these markers to match the numbers assigned on the written script.

When you’ve gone through this process for each file that covers the scene, you will have scene/take information that matches the supervisor’s vertical lines and markers that align with the horizontal separation. Under each clip, there’s now a list of markers, which you’ve labelled to match the script lines. By clicking on one of these, you can instantly jump to that point in the dialogue within any given clip.

In a lengthy scene, if you want to see all the coverage options that are available for a particular line of dialogue somewhere in the middle of the scene, all you have to do is go to the corresponding numbered marker closest to that line of dialogue. If that number is “201” for example, simply click on the marker labelled “201” within each clip and you can successively review each angle and take at that point.

Naturally you can leverage FCP X’s capabilities by creating favorites and smart collections based on these choices, but script lining and using markers is a good and easy starting point.

©2014 Oliver Peters

NLE Tips – Week 3

The Avid – Resolve Roundtrip Workflow

Avid Media Composer has always been regarded as the best offline editing tool and its heritage was built upon a strong offline-to-online workflow. The file-based world has complicated things and various camera formats have made life even more complex for editors. Many have become quite fond of using Blackmagic Design’s DaVinci Resolve as a great companion to Media Composer. It’s cross-platform and even the free version will do most of what you need. Here’s a step-by-step example of how you might use the combo. Relinking varies a bit, based on file metadata and might need to be modified for your particular circumstances. This workflow is great with ARRI ALEXA files and will most likely work well with other similar camera formats. (Click images for an expanded view.)

Creating edit proxy files with Resolve – ALEXA files are usually Apple ProRes 4444 or ProRes HQ QuickTime files that have been recorded with a Log-C gamma profile. So, they are big files with a flat appearance. To start, launch Resolve, load the ProRes camera clips into the Media Pool (Media or Edit tab) and select/edit all of the full clips to a new timeline. In the Color tab, select “track” instead of “clip” and apply a single node. In that node, apply an ARRI Log-C-to-Rec709 LUT. Go to the Deliver tab and pick the Avid roundtrip Easy Set-up. Make sure “Individual Source Clips” is selected (not a single file), define a render location and decide whether or not to add a file name prefix or suffix (not required). Render using the DNxHD 36 codec choice.

Moving to Media Composer for the creative cut – When the render process has been completed, you’ll have a folder containing Avid MXF media and a corresponding AAF file. This media has the LUT “baked in” and has been rendered with the very lightweight DNxHD 36 codec. Drag the AAF file out of this folder to another location. Now drag this complete folder into your Avid MediaFiles/MXF folder. Unless you’ve already added extra folders there, you will typically find one existing subfolder (with Avid’s default label of “1”) that contains MXF media. Change the label of the new folder (the one that you’ve just dragged in) to another number, such as “2”.
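If you do this often, the drag-and-rename step can be scripted. A minimal sketch, assuming hypothetical locations for the Resolve render folder and the Avid media drive:

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust for your own render location and media drive.
render_folder = Path("/Volumes/Media/Resolve_Renders/DNxHD36_pass")
mxf_root = Path("/Volumes/Media/Avid MediaFiles/MXF")

# Pull the AAF file out of the render folder first.
for aaf in render_folder.glob("*.aaf"):
    shutil.move(str(aaf), str(render_folder.parent / aaf.name))

# Then drop the media folder in under the next free number ("2", "3", ...).
existing = [int(p.name) for p in mxf_root.iterdir() if p.is_dir() and p.name.isdigit()]
next_number = max(existing, default=0) + 1
shutil.move(str(render_folder), str(mxf_root / str(next_number)))
print(f"Media installed as {mxf_root / str(next_number)}")
```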

Launch Media Composer, create a new project, open the first bin and import the AAF file that was created by Resolve. This bin will become populated by the color-corrected DNxHD 36 files created by Resolve. Voila, you are ready to edit your Oscar-winner! Cut until the project is locked. When you are done and are ready to move to the online or finishing phase of the edit, export an AAF file from Media Composer. Select “AAF Edit Protocol” and “Link to” media in the AAF options.

Returning to Resolve for the final grade – Launch Resolve and start a new project. Import the AAF file that you exported from Media Composer. You’ll end up with a timeline that matches your Avid cut and it will be linked to the DNxHD 36 media. You will want to relink the files back to the original camera media – the ProRes HQ or ProRes 4444 files. To do this, delete all the media in the Resolve Media Pool (Edit tab), which will make the timeline clips appear offline. Now, navigate to the folder with the original camera files and bring those into the Media Pool. Your timeline clips will now be relinked to this original camera media. You’ll recognize this because the clips on the timeline will be back to their original, flat, Log-C appearance. In some instances, Resolve may see some files as duplicates and might possibly relink to the wrong file. In that case, you’ll see an error icon on the timeline clip. Click on it and Resolve will present a dialogue window with the possible alternate media options. Pick the correct one and the clip should then be linked to the right shot. Color correct your timeline with the desired grade and any reframing.

Returning to Media Composer to complete the edit – When you’ve completed the color grading, go to the Deliver tab and pick the Avid roundtrip Easy Set-up again, but this time pick a higher-quality codec (like DNxHD 175x). Make sure to set handle lengths (usually 2-5 sec.) and render (as “Individual Source Clips” again). The result will be a new folder of rendered MXF media with the “baked in” grade, plus a new corresponding AAF file. As before, drag out this AAF file and drag the folder of rendered media into the Avid MediaFiles/MXF folder. Relabel the folder of this new Resolve media with a different number (such as “3”).

Launch Media Composer, open your existing project and create a new bin. Import the new AAF file, which will now populate this bin with the high-quality media. This bin will also include the sequence that you sent over to Resolve, but now linked to the high-resolution media files. In many cases, you would simply use this sequence for any final effects, titles and other adjustments.

Relinking the sequence in Media Composer – If for some reason the sequence that was “round-tripped” does not correctly reflect the edited cut as built in the offline stage, then you will need to relink a copy of that sequence to the new media. To do so, duplicate the sequence from your DNxHD 36 edit and move that copy into the bin with the 175x media. Close all other bins, except the 175x bin. Right-click the sequence and select “Relink” from the menu. Set your options to “Select Items In All Open Bins” and relink by “Timecode – Start” and “Source Name – Tape Name or Source File ID”. This will cause the sequence to be relinked to the new 175x final-quality media.

If everything worked correctly, you will have done a complete offline (creative cut) and online (finishing) workflow between Media Composer and Resolve, without the need for Avid’s traditional import or newer AMA processes!

©2014 Oliver Peters

Film editing stages – Sound

Like picture editing, the completion of sound for a film also goes through a series of component parts. These normally start after “picture lock” and are performed by a team of sound editors and mixers. On small, indie films, a single sound designer/editor/mixer might cover all of these roles. On larger films, specific tasks are covered by different individuals. Depending on whether it’s one individual or a team, sound post can take anywhere from four weeks to several months to complete.

Location mixing – During original production, the recording of live sound is handled by the location mixer. This is considered mixing, because originally, multiple mics were mixed “on-the-fly” to a single mono or stereo recording device. In modern films with digital location recordings, the mixer tends to record what is really only a mixed reference track for the editors, while simultaneously recording separate tracks of each isolated microphone to be used in the actual post production mix.

ADR – automatic dialogue replacement or “looping”. ADR is the recording of replacement dialogue in sync with the picture. The actors do this while watching their performance on screen. Sometimes this is done during production and sometimes during post. ADR will be used when location audio has technical flaws. Sometimes ADR is also used to record additional dialogue – for instance, when an actor has his or her back turned. ADR can also be used to record “sanitized” dialogue to remove profanity.

Walla or “group loop” – Additional audio is recorded for groups of people. This is usually for background sounds, like guests in a restaurant. The term “walla” comes from the fact that actors were (and often still are) instructed to say “walla, walla, walla” instead of real dialogue. The point is to create a sound effect of a crowd murmuring, without any recognizable dialogue line being heard. You don’t want anything distinctive to stand out above the murmur, other than the lead actors’ dialogue lines.

Dialogue editing – When the film editor (i.e. the picture editor) hands over the locked cut to the sound editors, it generally will include all properly edited dialogue for the scenes. However, this is not prepared for mixing. The dialogue editor will take this cut and break out all individual mic tracks. They will make sure all director’s cues are removed and they will often add room tone and ambience to smooth out the recording. In addition, specific actor mics will be grouped to common tracks so that it is easier to mix and apply specific processing, as needed, for any given character.

Sound effects editing/sound design – Sound effects for a film come from a variety of sources, including live recordings, sound effects libraries and sound synthesizers. Putting this all together is the role of the sound effects editor(s). Because many have elevated the art by creating very specific senses of place, the term “sound designer” has come into vogue. For example, the villain’s lair might always feature certain sounds that are identifiable with that character – e.g. dripping water, rats squeaking, a distant clock chiming, etc. These become thematic, just like a character’s musical theme. The sound effects editors are the ones who record, find and place such sound effects.

Foley – Foley is the art of live sound effects recording. This is often done by a two-person team consisting of a recordist and a Foley walker, who is the artist physically performing these sounds. It literally IS a performance, because the walker does this in sync to the picture. Examples of Foley include footsteps, clothes rustling, punches in a fight scene and so on. It is usually faster and more appropriate-sounding to record live sound effects than to use library cues from a CD.

In addition to standard sound effects, additional Foley is recorded for international mixes. When an actor delivers a dialogue line over a sound recorded as part of a scene – a door closing or a cup being set on a table – that sound will naturally be removed when English dialogue is replaced by foreign dialogue in international versions of the film. Therefore, additional sound effects are recorded to fill in these gaps. Having a proper international mix (often called “fully filled”) is usually a deliverable requirement by any distributor.

Music – In an ideal film scenario, a composer creates all the music for a film. He or she works in parallel with the sound and dialogue editors. Music is usually divided between source cues (e.g. the background songs playing from a jukebox at a bar) and musical score.

Recorded songs may also be used as score elements during montages. Sometimes different musicians, other than the composer, will create songs for source cues or for use in the score. Alternatively, the producers may license affordable recordings from unsigned artists. Rarely is recognizable popular music used, unless the production has a huge budget. It is important that the producers, composer and sound editors communicate with each other, to define whether items like songs are to be treated as a musical element or as a background sound effect.

The best situation is when an experienced film composer delivers all completed music that is timed and synced to picture. The composer may deliver the score in submixed, musical stems (rhythm instruments separated from lead instruments, for instance) for greater control in the mix. However, sometimes it isn’t possible for the composer to provide a finished, ready-to-mix score. In that case, a music editor may get involved, in order to edit and position music to picture as if it were the score.

Laugh tracks – This is usually a part of sitcom TV production and not feature films. When laugh tracks are added, the laughs are usually placed by sound effects editors who specialize in adding laughs. The appropriate laugh tracks are kept separate so they can be added or removed in the final mix and/or as part of any deliverables.

Re-recording mix – Since location recording is called location mixing, the final, post production mix is called a re-recording mix. This is the point at which divergent sound elements – dialogue, ADR, sound effects, Foley and music – all meet and are mixed in sync to the final picture. On a large film, these various elements can easily take up 150 or more tracks and require two or three mixers to man the console. With the introduction of automated systems and the ability to completely mix “in the box”, using a DAW like Pro Tools, smaller films may be mixed by one or two mixers. Typically the lead mixer handles the dialogue tracks and the second and third mixers control sound effects and music. Mixing most feature films takes one to two weeks, plus the time to output various deliverable versions (stereo, surround, international, etc.).

The deliverable requirements for most TV shows and features are to create a so-called composite mix (in several variations), along with separate stems for dialogue, sound effects and music. A stem is a submix of just a group of component items, such as a stereo stem for only dialogue. The combination of the stems should equal the mix. By having stems available, the distributors can easily create foreign versions and trailers.

©2013 Oliver Peters