Preparing Digital Camera Files


The modern direction in file-based post-production workflows is to keep your camera files native throughout the entire pipeline. While this might work within a closed loop, like a self-contained Avid, Adobe or Apple workflow, it breaks down when you have to move your project across multiple applications. It’s common for an editor to send files to a Pro Tools studio for the final mix and to a colorist running Resolve, Baselight, etc. for the final grade. In doing so, you have to ensure that editorial decisions aren’t incorrectly translated in the process, because the NLE might handle a native camera format differently than the mixer’s or colorist’s tool. To keep the process solid, I’ve developed some disciplines in how I like to handle media. The applications I mention are for Mac OS, but most of these companies offer Windows versions, too. If not, you can easily find equivalents.

Copying media

The first step is to get the media from the camera cards to a reliable hard drive. It’s preferable to have at least two copies (from the location) and to make the copies using software that verifies the backup. This is a process often done on location by the lowly “data wrangler” under less than ideal conditions. A number of applications, such as Imagine Products’ ShotPut Pro and Adobe Prelude, let you do this task, but my current favorite is Red Giant’s Offload. It uses a dirt-simple interface permitting one source and two target locations. It has the sole purpose of safely transferring media with no other frills.

Processing media on location

With the practice of shooting footage with a flat-looking log gamma profile, many productions also like to see the final, adjusted look on location. This often involves some on-site color grading to create either a temporary look or even the final look. Usually this task falls to a DIT (digital imaging technician). Several applications are available, including DaVinci Resolve, Pomfort Silverstack and Redcine-X Pro. Some newer applications, specifically designed for field use, include Red Giant’s BulletProof and Catalyst Browse/Prepare from Sony Creative Software. Catalyst Browse is free and designed for all Sony cameras, whereas Catalyst Prepare is a paid application that covers Sony cameras, but also other brands, including Canon and GoPro. Depending on the application, these tools may be used to add color correction, organize the media, transcode file formats, and even prepare simple rough assemblies of selected footage.

All of these tools add a lot of power, but frankly, I’d prefer that the production company leave these tasks up to the editorial team and allow more time in post. In my testing, most of the aforementioned apps work as advertised; however, BulletProof continues to have issues with the proper handling of timecode.

Transcoding media

I’m not a big believer in always using native media for the edit, unless you are in a fast-turnaround situation. To get the maximum advantage when interchanging files between applications, it is ideal to end up in one of several common media formats, if that isn’t how the original footage was recorded. You also want every file to have unique and consistent metadata, including file names, reel IDs and timecode. The easiest common media format is QuickTime, using the .mov wrapper and encoded with either the Apple ProRes, Panasonic AVC-Intra, Sony XDCAM or Avid DNxHD codecs. These are generally readable in most applications running on Mac or PC. My preference is to first convert all files into QuickTime using one of these codecs, if they originated as something else. At that point the file is relatively malleable and doesn’t require a rigid external folder structure.

Applications like BulletProof and Catalyst can transcode camera files into another format. Of course, there are dedicated batch encoders like Sorenson Squeeze, Apple Compressor, Adobe Media Encoder and Telestream Episode. My personal choice for a tool to transcode camera media is either MPEG Streamclip (free) or Divergent Media’s EditReady. Both feature easy-to-use batch processing interfaces, but EditReady adds the ability to apply LUTs, change file names and export to multiple targets. It also reads formats that MPEG Streamclip doesn’t, such as C300 files (Canon XF codec wrapped as .mxf). If you want to generate a clean master copy preserving the log gamma profile, as well as a second lower resolution editorial file with a LUT applied, then EditReady is the right application.
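For readers who prefer the command line, the same batch-transcode step can be sketched with ffmpeg. This is a hypothetical alternative to the tools named above, not how EditReady works internally; the file names, LUT name and the choice of ProRes 422 HQ are illustrative assumptions.

```python
# Sketch: build ffmpeg commands that mirror an EditReady-style batch -- a ProRes
# .mov master, plus an optional editorial copy with a .cube LUT applied.
# Assumes ffmpeg with the prores_ks encoder and lut3d filter is installed.
from pathlib import Path

def prores_command(src, out_dir, lut=None):
    """Return an ffmpeg command list that transcodes src to a ProRes .mov."""
    dst = Path(out_dir) / (Path(src).stem + ".mov")
    cmd = ["ffmpeg", "-i", str(src)]
    if lut:  # bake a viewing LUT into the editorial copy only
        cmd += ["-vf", f"lut3d={lut}"]
    cmd += ["-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes 422 HQ
            "-c:a", "pcm_s16le", str(dst)]
    return cmd
```

Run once per camera file (e.g. via `subprocess.run`) to get the clean log master, then again with `lut=` for the graded offline copy.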

Altering your media

I will go to extra lengths to make sure that files have proper names, timecode and source/tape/reel ID metadata. Most professional video cameras will correctly embed that information. Others, like the Canon 5D Mark III, might encode a non-standard timecode format, allow duplicated file names, and not add reel IDs.

Once the media has been transcoded, I will use two applications to adjust the file metadata. For timecode, I rely on VideoToolShed’s QtChange. This application lets you alter QuickTime files in a number of ways, but I primarily use it to strip off unnecessary audio tracks and bad timecode. Then I use it to embed proper reel IDs and timecode. Because it does this by altering header information, processing a lot of files happens quickly. The second tool in this mix is Better Rename, which is a batch renaming utility. I use it frequently for adding, deleting or changing all or part of the file name for a batch of files. For instance, I might append a production job number to the front of a set of Canon 5D files. The point in doing all of this is so that you can easily locate the exact same point within any file using any application, even several years apart.
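The renaming step is simple enough to script yourself. Here is a minimal sketch in the spirit of Better Rename; the folder layout and the job number are hypothetical examples.

```python
# Prepend a production job number to every QuickTime file in a folder,
# e.g. MVI_0001.mov -> 1523_MVI_0001.mov. Safe to re-run: files that
# already carry the prefix are skipped.
from pathlib import Path

def prepend_job_number(folder, job):
    """Rename each .mov in folder from NAME.mov to JOB_NAME.mov."""
    for clip in sorted(Path(folder).glob("*.mov")):
        if not clip.name.startswith(job + "_"):
            clip.rename(clip.with_name(f"{job}_{clip.name}"))
```

Do this before the first edit, so every application that ever touches the project sees the same unique file names.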

Speed is a special condition. Most NLEs handle files with mixed frame rates within the same project and sequences, but such timelines often do not translate correctly from one piece of software to the next. Edit lists are interchanged using EDL, XML, FCPXML and AAF formats, and each company has its own variation of the format it uses. Some formats, like FCPXML, require third-party utilities to translate the list, adding another variable. Round-tripping, such as going from NLE “A” (for offline) to Color Correction System “B” (for grading) and then to NLE “C” (for finishing), often involves several translations. Apart from effects, speed differences in native camera files can be a huge problem.

A common mixed frame rate situation in the edit is combining 23.98fps and 29.97fps footage. If both of these were intended to run in real-time, then it’s usually OK. However, if the footage was recorded with the intent to overcrank for slomo (59.94 or 29.97 native for a timebase of 23.98) then you start to run into issues. As long as the camera properly flags the file, so that every application plays it at the proper timebase (slowed), then things are fine. This isn’t true of DSLRs, where you might shoot 720p/59.94 for use as slomo in a 1080p/29.97 or 23.98 sequence. With these files, my recommendation is to alter the speed of the file first, before using it inside the NLE. One way to do this is to use Apple Cinema Tools (part of the defunct Final Cut Studio package, but can still be found). You can batch-conform a set of 59.94fps files to play natively at 23.98fps in very short order. This should be done BEFORE adding any timecode with QtChange. Remember that any audio will have its sample rate shifted, which I’ve found to be a problem with FCP X. Therefore, when you do this, also strip off the audio tracks using QtChange. They play slow anyway and so are useless in most cases where you want overcranked, slow motion files.
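The arithmetic behind a Cinema Tools-style conform is worth spelling out. The frames themselves are untouched; only the playback timebase changes, so a 59.94fps clip conformed to 23.98 plays exactly 2.5x slower, and any audio is slowed (and pitch/sample-rate shifted) by the same factor, which is why stripping the audio tracks afterward is the safe move. A small sketch of that math:

```python
# Conform math: NTSC "59.94" and "23.98" are really 60000/1001 and 24000/1001,
# so the slowdown factor is exactly 2.5. The sample-rate figure shows why the
# conformed audio is unusable: it no longer plays at a standard rate.

def conform(duration_s, shot_fps, timebase_fps, sample_rate=48000):
    """Return (new duration, effective audio sample rate) after a conform."""
    factor = shot_fps / timebase_fps        # e.g. 59.94 / 23.98 -> 2.5
    return duration_s * factor, sample_rate / factor
```

A 10-second 59.94fps clip conformed to 23.98 becomes 25 seconds of smooth slow motion, with its 48kHz audio effectively dropping to 19.2kHz.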

Audio in your NLE

The last point to understand is that not all NLEs deal with audio tracks in the same fashion. Often camera files are recorded with multiple mono audio sources, such as a boom and a lav mic on channels 1 and 2. These may be interpreted either as stereo or as dual mono, depending on the NLE. Premiere Pro CC in particular sees these as stereo when imported. If you edit them to the timeline as a single stereo track, you will not be able to correct this in the sequence afterwards by panning. Therefore, it’s important to remember to first set up your camera files with a dual mono channel assignment before making the first edit. This same issue crops up when round-tripping files through Resolve. It may not properly handle audio, depending on how it interprets these files, so be careful.
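If you would rather fix the channel assignment in the file itself than rely on each NLE's interpretation, one hedged option is to split the recorded pair into two discrete mono tracks with ffmpeg's channelsplit filter. The sketch below only builds the command; the file names are examples and this is one possible approach, not the only one.

```python
# Build an ffmpeg command that remaps a stereo pair (boom on ch1, lav on ch2)
# into two separate mono tracks, copying the video untouched. Assumes an
# ffmpeg build with the channelsplit audio filter.
def dual_mono_command(src, dst):
    """ffmpeg command splitting the stereo track of src into two mono tracks."""
    return ["ffmpeg", "-i", src,
            "-filter_complex",
            "[0:a]channelsplit=channel_layout=stereo[boom][lav]",
            "-map", "0:v", "-map", "[boom]", "-map", "[lav]",
            "-c:v", "copy", "-c:a", "pcm_s16le", dst]
```

With two true mono tracks in the file, any NLE or Resolve round-trip sees boom and lav as independent sources from the start.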

These steps add a bit more time at the front end of any given edit, but are guaranteed to give you a better editing experience on complex projects. The results will be easier interchange between applications and more reliable relinking. Finally, when you revisit a project a year or more down the road, everything should pop back up, right where you left it.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Camerama 2015


The design of a modern digital video camera comes down to the physics of the sensor and shutter, the software to control colorimetry and smart industrial design to optimize the ergonomics for the operator. Couple that with a powerful internal processor and recording mechanism and you are on your way. Although not exactly easy, these traits no longer require skills that are limited to the traditional camera manufacturers. As a result, innovative new cameras have been popping up from many unlikely sources.

The newest of these is AJA, which delivered the biggest surprise of NAB 2014 in the form of their CION 4K/UltraHD/2K/HD digital camera. Capitalizing on a trend started by ARRI, the CION records directly to the edit-ready Apple ProRes format, using AJA Pak solid state media. The CION features a 4K APS-C sized CMOS sensor with a global shutter to eliminate rolling-shutter artifacts. AJA claims 12 stops of dynamic range and uses a PL mount for lenses designed for Super 35mm. The CION is also capable of outputting AJA camera raw at frame rates up to 120fps. It can send out 4K or UHD video from its four 3G-SDI outputs to the AJA Corvid Ultra for replay and center extraction during live events.

The darling of the film and high-end television world continues to be ARRI Digital with its line of ALEXA cameras. These now include the Classic, XT, XT Plus, XT M and XT Studio configurations. They vary based on features and sensor size. The Classic cameras have a maximum active photosite area of 2880 x 2160, while the XT models go as high as 3414 x 2198. Another difference is that the XT models allow in-camera recording of ARRIRAW media. The ALEXA introduced ProRes recording and all current XT models permit Apple ProRes and Avid DNxHD recording.

The ALEXA has been joined by the newer, lighter AMIRA, which is targeted at documentary-style shooting with smaller crews. The AMIRA is tiered into three versions, with the Premium model offering 2K recording in all ProRes flavors at up to 200fps. ARRI has added 4K capabilities to both the ALEXA and AMIRA lines by utilizing the full sensor area in their Open Gate mode. In the AMIRA, this 3.4K image is internally scaled by a factor of 1.2 to record a UHD file at up to 60fps to its in-camera CFast 2.0 cards. The ALEXA uses a similar technique, but only records the 3.4K signal in-camera, with scaling to be done later in post.

To leapfrog the competition, ARRI also introduced its ALEXA 65, which is available through the ARRI Rental division. This camera is a scaled up version of the ALEXA XT and uses a sensor that is larger than a 5-perf 65mm film frame. That’s an Open Gate resolution of 6560 x 3102 photosites. The signal is captured as uncompressed ARRIRAW. Currently the media is recorded on ALEXA XR Capture drives at a maximum frame rate of 27fps.

Blackmagic Design had been the most unexpected camera developer a few years ago, but has since grown its DSLR-style camera line into four models: Studio, Production 4K, Cinema and Pocket Cinema. These vary in cosmetic style and size, in which formats they are able to record and in the lens mounts they use. The Pocket Cinema Camera is essentially a digital equivalent of a Super 16mm film camera, but in a point-and-shoot, small camera form factor. The Cinema and Production 4K cameras feature a larger, Super 35mm sensor. Each of these three incorporates ProRes and/or CinemaDNG raw recording. The Studio Camera is designed as a live production camera. It features a larger viewfinder, housing, accessories and connections designed to integrate this camera into a television studio or remote truck environment. There is an HD and a 4K version.

The biggest Blackmagic news was the introduction of the URSA. Compared to the smaller form factors of the other Blackmagic Design cameras, the URSA is truly a “bear” of a camera. It is a rugged 4K camera built around the idea of user-interchangeable parts. You can get EF, PL and broadcast lens mounts, but you can also operate it without a lens as a standalone recording device. It’s designed for UltraHD (3840 x 2160), but can record up to 4,000 pixels wide in raw. Recording formats include CinemaDNG raw (uncompressed and 3:1 compressed), as well as Apple ProRes, with speeds up to 80fps. There are two large displays, one on each side of the camera, which can be used for monitoring and operating controls. It has a 10” fold-out viewfinder and a built-in liquid cooling system. As part of the modular design, users can replace mounts and even the sensor in the field.

Canon was the most successful company out of the gate when the industry adopted HD-video-capable DSLR cameras as serious production tools. Canon has expanded these offerings with its Cinema EOS line of small production cameras, including the C100, C100 Mark II, C300 and C500, which all share a similar form factor. Also included in this line-up is the EOS-1D C, a 4K camera that retains its DSLR body. The C300 and C500 cameras both use a Super 35mm sized sensor and come in EF or PL mount configurations. The C300 is limited to HD recording using the Canon XF codec. The C500 adds 2K and 4K (4096 cinema and 3840 UHD) recording capabilities, but this signal must be externally recorded using a device like the Convergent Design Odyssey 7Q+. HD signals are recorded internally as Canon XF, just like the C300. The Canon EOS C100 and C100 Mark II share the design of the C300, except that they record to AVCHD instead of Canon XF. In addition, the Mark II can also record MP4 files. Both C100 models record to SD cards, whereas the C300/C500 cameras use CF cards. The Mark II features improved ergonomics over the base C100 model.

The Canon EOS-1D C is included because it can record 4K video. Since it is also a still photography camera, the sensor is an 18MP full-frame sensor. When recording 4K video, it uses a Motion JPEG codec, but for HD it can also use the AVCHD codec. The big plus over the C500 is that the 1D C records 4K onboard to CF cards, so it is better suited to hand-held work. The DSLR cameras that started the craze for Canon continue to be popular, including the EOS 5D Mark III, the new EOS 7D Mark II and the consumer-oriented Rebel versions. All are outstanding still cameras. The 5D features a 22.3MP CMOS sensor and records HD video as H.264 MOV files to onboard CF cards. Thanks to the sensor size, the 5D is still popular with videographers who want extremely shallow depth-of-field shots from a handheld camera.

Digital Bolex has become a Kickstarter success story. These out-of-the-box thinkers coupled the magic of a venerable name from the film era with innovative design and marketing to produce the D16 Cinema Camera. Its form factor mimics older, smaller, handheld film camera designs, making it ideal for run-and-gun documentary production. It features a Super 16mm sized CCD sensor with a global shutter and claims 12 stops of dynamic range. The D16 records in 12-bit CinemaDNG raw to internal SSDs, but media is offloaded to CF cards or via USB3.0 for media interchange. The camera comes with a C-mount, but EF, MFT and PL lens mounts are available. Currently the resolutions include 2048 x 1152 (“S16mm mode”), 2048 x 1080 (“S16 EU”) and HD (“16mm mode”). The D16 records 23.98, 24 and 25fps frame rates, but variable rates up to 32fps in the S16mm mode are coming soon. To expand on the camera’s attractiveness, Digital Bolex also offers a line of accessories, including Kish/Bolex 16mm prime lens sets. These fixed aperture F4 lenses are C-mount for native use with the D16 camera. Digital Bolex also offers the D16 in an MFT mount configuration and in a monochrome version.

The sheer versatility and disposable quality of GoPro cameras has made the HERO line a staple of many productions. The company continues to advance this product with the HERO4 Black and Silver models as its latest. These are both 4K cameras and have similar features, but if you want full video frame rates in 4K, then the HERO4 Black is the correct model. It will record up to 30fps in 4K, 50fps in 2.7K and 120fps in 1080p. As a photo camera, it uses a 12MP sensor and is capable of 30 frames per second in burst mode and time-lapse intervals from .5 to 60 seconds. The video signal is recorded as an H.264 file with a high-quality mode that’s up to 60 Mb/s. MicroSD card media is used. HERO cameras have been popular for extreme point-of-view shots and the waterproof housing is good to 40 meters. This new HERO4 series offers more manual control, new night time and low-light settings, and improved audio recording.

Nikon actually beat Canon to market with HD-capable DSLRs, but lost the momentum when Canon capitalized on the popularity of the 5D. Nevertheless, Nikon has its share of supportive videographers, thanks in part to the quantity of Nikon lenses in general use. Nikon’s range of high-quality still photo and video-enabled cameras falls under the D-series product family. The Nikon D800/800E camera has been updated to the D810, which is the camera of most interest to professional videographers. It’s a 36.3MP still photo camera that can also record 1920 x 1080 video in 24/30p modes internally and 60p externally. It can also record up to 9,999 images in a time-lapse sequence. A big plus for many is its optical viewfinder. It records H.264/MPEG-4 media to onboard CF cards. Other Nikon video cameras include the D4S, D610, D7100, D5300 and D3300.

Panasonic used to own the commercial HD camera market with the original VariCam HD camera. They’ve now reimagined that brand in the new VariCam 35 and VariCam HS versions. The new VariCam uses a modular configuration, with each of these two cameras using the same docking electronics back. In fact, a customer can purchase one camera head and back and then only needs to purchase the other head, thus owning both the 35 and the HS models for less than the total cost of two cameras. The VariCam 35 is a 4K camera with wide color gamut and wide dynamic range (14+ stops are claimed). It features a PL lens mount, records from 1 to 120fps and supports dual-recording. For example, you can simultaneously record a 4K log AVC-Intra master to the main recorder (expressP2 card) and 2K/HD Rec 709 AVC-Intra/AVC-Proxy/Apple ProRes to a second internal recorder (microP2 card) for offline editing. VariCam V-Raw camera raw media can be recorded to a separate Codex V-RAW recorder, which can be piggybacked onto the camera. The Panasonic VariCam HS is a 2/3” 3MOS broadcast/EFP camera capable of up to 240fps of continuous recording. It supports the same dual-recording options as the VariCam 35 using AVC-Intra and/or Apple ProRes codecs, but is limited to HD recordings.

With interest in DSLRs still in full swing, many users’ interest in Panasonic veers to the Lumix GH4. This camera records 4K cinema (4096) and 4K UHD (3840) sized images, as well as HD. It uses SD memory cards to record in MOV, MP4 or AVCHD formats. It features variable frame rates (up to 96fps), HDMI monitoring and a professional 4K audio/video interface unit. The latter is a dock that fits to the bottom of the camera and includes XLR audio and SDI video connections with embedded audio and timecode.

RED Digital Cinema started the push for 4K cameras and camera raw video recording with the original RED One. That camera is now only available in refurbished models, as RED has advanced the technology with the EPIC and SCARLET. Both are modular camera designs that are offered with either the Dragon or the Mysterium-X sensor. The Dragon is a 6K, 19MP sensor with 16.5+ stops of claimed dynamic range. The Mysterium-X is a 5K, 14MP sensor that claims 13.5 stops, but up to 18 stops using RED’s HDRx (high dynamic range) technology. The basic difference between the EPIC and the SCARLET, other than cost, is that the EPIC features more advanced internal processing and this computing power enables a wider range of features. For example, the EPIC can record up to 300fps at 2K, while the SCARLET tops out at 120fps at 1K. The EPIC is also sold in two configurations: EPIC-M, which is hand-assembled using machined parts, and the EPIC-X, which is a production-run camera. With the interest in 4K live production, RED has introduced its 4K Broadcast Module. Coupled with an EPIC camera, you could record a 6K file for archive, while simultaneously feeding a 4K and/or HD live signal for broadcast. RED is selling studio broadcast configurations complete with camera, modules and support accessories as broadcast-ready packages.

Sony has been quickly gaining ground in the 4K market. Its CineAlta line includes the F65, PMW-F55, PMW-F5, PMW-F3, NEX-FS700R and NEX-FS100. All are HD-capable and use Super 35mm sized image sensors, with the lower-end FS700R able to record 4K raw to an external recorder. At the highest end is the 20MP F65, which is designed for feature film production. The camera is capable of 8K raw recording, as well as 4K, 2K and HD variations. Recordings must be made on a separate SR-R4 SR MASTER field recorder. For most users purchasing from Sony, the F55 will be the high-end choice. It permits onboard recording in four formats: MPEG-2 HD, XAVC HD, SR File and XAVC 4K. With an external recorder, 4K and 2K raw recording is also available. High speeds up to 240fps (2K raw with the optional, external recorder) are possible. The F5 is the F55’s smaller sibling. It’s designed for onboard HD recording (MPEG-2 HD, XAVC HD, SR File); 4K and 2K recordings require an external recorder.

The Sony camera that has caught everyone’s attention is the PXW-FS7. It’s designed as a lightweight, documentary-style camera with a form factor and rig that’s reminiscent of an Aaton 16mm film camera. It uses a Super 35mm sized sensor and delivers 4K resolution using onboard XAVC recording to XQD memory cards. XDCAM MPEG-2 HD recording is available now, with ProRes support coming in a future upgrade. Raw output to an outboard recorder will also be possible.

Sony has also not been left behind by the DSLR revolution. The A7s is a full-frame, mirrorless 12.2MP camera that’s optimized for 4K and low light. It can record up to 1080p/60 (or 720p/120) onboard (50Mbps XAVC S) or feed uncompressed HD and/or 4K (UHD) out via its HDMI port. It records onboard audio and sports such pro features as Sony’s S-Log2 gamma profile.

With any overview, there’s plenty that we can’t cover. If you are in the market for a camera, remember many of these companies offer a slew of other cameras ranging from consumer to ENG/EFP offerings. I’ve only touched on the highlights. Plus there are others, like Grass Valley, Hitachi, Samsung and Ikegami that make great products in use around the world every day. Finally, with all the video-enabled smart phones and tablets, don’t be surprised if you are recording your next production with an iPhone or iPad!

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Adobe Anywhere and Divine Access


Editors like the integration of Adobe’s software, especially Dynamic Link and Direct Link between creative applications. This sort of approach is applied to collaborative workflows with Adobe Anywhere, which permits multiple stakeholders, including editors, producers and directors, to access common media and productions from multiple, remote locations. One company that has invested in the Adobe Anywhere environment is G-Men Media of Venice, California, who installed it as their post production hub. By using Adobe Anywhere, Jeff Way (COO) and Clay Glendenning (CEO) sought to improve the efficiency of the filmmaking process for their productions. No science project – they have now tested the concept in the real world on several indie feature films.

Their latest film, Divine Access, produced by The Traveling Picture Show Company in association with G-Men Media, is a religious satire centering on reluctant prophet Jack Harriman. Forces both natural and supernatural lead Harriman down a road to redemption, culminating in a final showdown with his longtime foe, Reverend Guy Roy Davis. Steven Chester Prince (Boyhood, The Ringer, A Scanner Darkly) moves behind the camera as the film’s director. The entire film was shot in Austin, Texas during May of 2014, but the processing of dailies and all post production was handled back at the Venice facility. Way explains, “During principal photography we were able to utilize our Anywhere system to turn around dailies and rough cuts within hours after shooting. This reduced our turnaround time for review and approval, thus reducing budget line items. Using Anywhere enabled us to identify cuts and mark them as viable the same day, reducing the need for expensive pickup shoots later down the line.”

The production workflow

Director of Photography Julie Kirkwood (Hello I Must Be Going, Collaborator, Trek Nation) picked the ARRI ALEXA for this film and scenes were recorded as ProRes 4444 in 2K. An on-set data wrangler would back up the media to local hard drives and then a runner would take the media to a downtown upload site. The production company found an Austin location with 1Gbps upload speeds. This enabled them to upload 200GB of data in about 45 minutes. Most days only 50-80GB were uploaded at one time, since uploads happened several times throughout each day.

Way says, “We implemented a technical pipeline for the film that allowed us to remain flexible. Adobe’s open API platform made this possible. During production we used an Amazon S3 instance in conjunction with Aspera to get the footage securely to our system and also act as a cloud back-up.” By uploading to Amazon and then downloading the media into their Anywhere system in Venice, G-Men now had secure, full-resolution media in redundant locations. Camera LUTs were also sent with the camera files, which could be added to the media for editorial purposes in Venice. Amazon will also provide a long-term archive of the 8TB of raw media for additional protection and redundancy. This Anywhere/Amazon/Aspera pipeline was supervised by software developer Matt Smith.

Back in Venice, the download and ingest into the Anywhere server and storage was an automated process that Smith programmed. Glendenning explains, “It would automatically populate a bin named for that day with the incoming assets. Wells [Phinny, G-Men editorial assistant] would be able to grab from subfolders named ‘video’ and ‘audio’ to quickly organize clips into scene subfolders within the Anywhere production that he would create from that day’s callsheet. Wells did most of this work remotely from his home office a few miles away from the G-Men headquarters.” Footage was synced and logged for on-set review of dailies and on-set cuts the next day. Phinny effectively functioned as a remote DIT in a unique way.

Remote access in Austin to the Adobe Anywhere production for review was made possible through an iPad application. Way explains, “We had close contact with Wells via text message, phone and e-mail. The iPad access to Anywhere used a secure VPN connection over the Internet. We found that a 4G wireless data connection was sufficient to play the clips and cuts. On scenes where the director had concerns that there might not be enough coverage, the process enabled us to quickly see something. No time was lost to transcoding media or to exporting a viewable copy, which would be typical of the more traditional way of working.”

Creative editorial mixing Adobe Anywhere and Avid Media Composer

Once principal photography was completed, editing moved into the G-Men mothership. Instead of editing with Premiere Pro, however, Avid Media Composer was used. According to Way, “Our goal was to utilize the Anywhere system throughout as much of the production as possible. Although it would have been nice to use Premiere Pro for the creative edit, we believed going with an editor that shared our director’s creative vision was the best for the film. Kindra Marra [Scenic Route, Sassy Pants, Hick] preferred to cut in Media Composer. This gave us the opportunity to test how the system could adapt already existing Adobe productions.” G-Men has handled post on other productions where the editor worked remotely with an Anywhere production. In this case, since Marra lived close by in Santa Monica, it was simpler just to set up the cutting room at their Venice facility. At the start of this phase, assistant editor Justin (J.T.) Billings joined the team.

Avid has added subscription pricing, so G-Men installed the Divine Access cutting room using a Mac Pro and “rented” the Media Composer 8 software for a few months. The Anywhere servers are integrated with a Facilis Technology TerraBlock shared storage network, which is compatible with most editing applications, including both Premiere Pro and Media Composer. The Mac Pro tower was wired into the TerraBlock SAN and was able to see the same ALEXA ProRes media as Anywhere. According to Billings, “Once all the media was on the TerraBlock drives, Marra was able to access these in the Media Composer project using Avid’s AMA-linking. This worked well and meant that no media had to be duplicated. The film was cut solely with AMA-linked media. External drives were also connected to the workstations for nightly back-ups as another layer of protection.”

Adobe Anywhere at the finish line

Once the cut was locked, an AAF composition for the edited sequence was sent from Media Composer to DaVinci Resolve 11, which was installed on an HP workstation at G-Men. This unit was also connected to the TerraBlock storage, so media instantly linked when the AAF file was imported. Freelance colorist Mark Todd Osborne graded the film on Resolve 11 and then exported a new AAF file corresponding to the rendered media, which now also existed on the SAN drives. This AAF composition was then re-imported into Media Composer.

Billings continues, “All of the original audio elements existed in the Media Composer project and there was no reason to bring them into Premiere Pro. By importing Resolve’s AAF back into Media Composer, we could then double-check the final timeline with audio and color corrected picture. From here, the audio and OMF files were exported for Pro Tools [sound editorial and the mix is being done out-of-house]. Reference video of the film for the mix could now use the graded images. A new AAF file for the graded timeline was also exported from Media Composer, which then went back into Premiere Pro and the Anywhere production. Once we get the mixed tracks back, these will be added to the Premiere Pro timeline. Final visual effects shots can also be loaded into Anywhere and then inserted into the Premiere Pro sequence. From here on, all further versions of Divine Access will be exported from Premiere Pro and Anywhere.”

Glendenning points out that, “To make sure the process went smoothly, we did have a veteran post production supervisor – Hank Braxtan – double check our workflow. He and I have done a lot of work together over the years and he has more than a decade of experience overseeing an Avid house. We made sure he was available whenever there were Avid-related technical questions from the editors.”

Way says, “Previously, on post production of [the indie film] Savageland, we were able to utilize Anywhere for full post production through to delivery. Divine Access has allowed us to take advantage of our system on both sides of the creative edit including principal photography and post finishing through to delivery. This gives us capabilities through entire productions. We have a strong mix of Apple and PC hardware and now we’ve proven that our Anywhere implementation is adaptable to a variety of different hardware and software configurations. Now it becomes a non-issue whether it’s Adobe, Avid or Resolve. It’s whatever the creative needs dictate; plus, we are happy to be able to use the fastest machines.”

Glendenning concludes, “Tight budget projects have tight deadlines and some producers have missed their deadlines because of post. We installed Adobe Anywhere and set up the ecosystem surrounding it because we feel this is a better way that can save time and money. I believe the strategy employed for Divine Access has been a great improvement over the usual methods. Using Adobe Anywhere really let us hit it out of the park.”

Originally written for DV magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Unbroken

df0315_unbroken_7_sm

Some films might be hard to believe if it weren’t for the fact that the stories are true. Such is the case with Unbroken, Angelina Jolie’s largest studio film to date as a director. Unbroken tells the amazing true-life story of Louis Zamperini, who participated in the 1936 Olympics in Berlin and then went on to serve as a bombardier on a B-24 in the Pacific during World War II. The plane went down and Zamperini plus two other crew members (one of whom subsequently died) were adrift at sea for 47 days. They were picked up by the Japanese and spent two years in brutal prisoner-of-war camps until the war ended. Zamperini came home to a hero’s welcome, but struggled with what we now know as post-traumatic stress disorder. Through his wife, friends and attending a 1949 Billy Graham crusade in Los Angeles, Zamperini was able to turn his life around by finding the path of forgiveness, including forgiving his former prison guards.
df0315_unbroken_8_sm

Since Zamperini’s full life story could easily take up several films, Unbroken focuses on his early years culminating with his heroic return home. Jack O’Connell (300: Rise of an Empire, Starred Up) plays the lead role as Louis Zamperini. Jolie pulled in a “dream team” of creative professionals to help her realize the vision for the film, including Roger Deakins (Prisoners, Skyfall, True Grit) as director of photography, Alexandre Desplat (The Imitation Game, The Grand Budapest Hotel, The Monuments Men) to compose the score and Joel and Ethan Coen (Inside Llewyn Davis, True Grit, No Country for Old Men) to polish the final draft of the screenplay. ILM was the main visual effects house and rounding out this team were Tim Squyres (Life of Pi, Lust, Caution, Syriana) and William Goldenberg (The Imitation Game, Transformers: Age of Extinction, Argo) who co-edited Unbroken.

Australia to New York

df0315_unbroken_3_sm
The film was shot entirely in Australia during a 67-day period starting in October 2013. The crew shot ARRIRAW on Alexa XT cameras and dailies were handled by EFILM in Sydney. Avid DNxHD 115 files were encoded and sent to New York via Aspera for the editors. Dailies for the studio were sent via the PIX system and to Roger Deakins using the eVue system.

Tim Squyres started with the project in October as well, but was based in New York, after spending the first week in Australia. According to Squyres, “This was mainly a single camera production, although about one-third of the film used two cameras. Any takes that used two cameras to cover the same action were grouped in the Avid. Angie and Roger were very deliberate in what they shot. They planned carefully, so there was a modest amount of footage.” While in New York, Squyres and his assistants used three Avid Media Composer 7 systems connected to Avid Isis shared storage. This grew to five stations when the production moved to the Universal lot in Los Angeles and eventually eight when William Goldenberg came on board.

df0315_unbroken_6_sm
During production Squyres was largely on his own to develop the first assembly of the film. He continues, “I really didn’t have extensive contact with Angie until after she got back to LA. After all, directing on location is a full-time job, plus there’s a big time zone difference. Before I was hired, I had a meeting with her to discuss some of the issues surrounding shooting in a wave tank, based on my experiences with Life of Pi. This was applicable to the section of Unbroken when they are lost at sea in a life raft. While they were in Australia, I’d send cut scenes and notes to her over PIX. Usually this would be a couple of versions of a scene. Sometimes I’d get feedback, but not always. This is pretty normal, since the main thing the director wants to know is if they have the coverage and to be alerted if there are potential problems.”

Editor interaction

The first assembly of the film was finished in February and Squyres continued to work with Jolie to hone and tighten the film. Squyres says, “This was my first time working with an actor/director. I really appreciated how much care she took with performances. That was very central to her focus. Even though there’s a lot of action in some of the scenes, it never loses sight of the characters. She’s also a very smart director and this film has hundreds of visual effects. She quickly picked up on how effects could be used to enhance a shot and how an effect, like CG water, needed to be tweaked to make it look realistic.”

df0315_unbroken_2_sm
William Goldenberg joined the team in June and stayed with it until October 2014. This was Tim Squyres’ first time working with a co-editor and since the film was well past the first cut stage, the two handled their duties differently than two editors might on other films. Goldenberg explains, “Since I wasn’t on from the beginning, we didn’t split up the film between us, with one editor doing the action scenes and the other the drama. Instead, we both worked on everything very collaboratively. I guess I was brought in to be another set of eyes, since this was such a huge film. I’d review a section and tweak the cut and then run it by Tim. Then the two of us would work through the scene with Angie, kicking around ideas. She’s a very respectful person and wanted to make sure that everyone’s opinion was heard.” Squyres adds, “Each editor would push the other to make it better.”

This is usually the stage at which a film might be completely rearranged in the edit and deviate significantly from the script. That wasn’t the case with Unbroken. Goldenberg explains, “The movie is partly told through flashbacks, but these were scripted and not a construct created during editing. We experimented with rearranging some of the scenes, but in the end, they largely ended up back as they were scripted.”

The art of cutting dialogue scenes

df0315_unbroken_1_sm
Both editors are experienced Media Composer users and one unique Avid tool used on Unbroken was Script Integration, often referred to as ScriptSync. This feature enables the editor to see the script as text in a bin with every take synced to the individual dialogue lines. Squyres says, “It had already been set up when I came on board to cut a previous film and was handy for me there to learn the footage. So I decided to try it on this film. Although handy, I didn’t find it essential for me, because of how I go through the dailies. I typically build a sequence out of pieces of each take, which gives me a set of ‘pulls’ – my favorite readings from each setup strung together in script order. Then I’ll cut at least two versions of a scene based on structure or emotion – for example, a sad version and an angry version. When I’ve cut the first version, I try not to repeat shots in building the second version. By using the same timeline and turning on dupe detection, it’s easy to see if you’ve repeated something. Assembling is about getting a good cut, but it’s also about learning all the options and getting used to the footage. By having a few versions of each scene, you are already prepared when the director wants to explore alternatives.”

Goldenberg approaches the raw footage in a similar fashion. He says, “I build up my initial sequences in a way that accomplishes the same thing that ScriptSync does. The sequence will have each line reading from each take in scene order going from wide to tight. This results in a long sequence for a scene, with each line repeated, but I can immediately see what all the coverage is for that scene. When a director wants to see alternates for any reading, I can go back to this sequence and then it’s easy to evaluate the performance options.”

Building up visual effects and the score

df0315_unbroken_5_sm
Unbroken includes approximately 1100 visual effects shots, ranging from water effects, composites and set extensions to invisible split screens used to adjust performance timing. Squyres explains, “Although the editing was very straightforward, we did a lot of temporary compositing. I would identify a shot and then my assistant would do the work in either Media Composer or After Effects. The split screens used for timing are a common example. We had dozens of those. On the pure CG shots, we had pre-vis clips in the timeline and then, as more and more polished versions of the effects came in, these would be replaced in the sequence.” Any temp editorial effects were redone in final form by one of the VFX suppliers or during the film’s digital intermediate finishing at EFILM in Hollywood.

In addition to temp effects, the editors also used temp music. Squyres explains, “At the beginning we used a lot of temp music pulled from other scores. This helps you understand what a scene is going to be, especially if it’s a scene designed for music. As we got farther into the cut, we started to receive some of Alexandre’s temp cues. These were recorded with synth samples, but still sounded great and gave you a good sense of the music. The final score was recorded using an orchestra at Abbey Road Studios, which really elevated the score beyond what you can get with samples.”

Finding the pace

In a performance-driven film, the editing is all about drama and pacing. Naturally, Squyres and Goldenberg have certain scenes that stand out to them. Squyres says, “A big scene in the film is when Louis meets the head guard for the first time. You have to set up that slow burn as they trade lines. Later in the film, there’s a second face-off, but now the dynamic between them is completely different. It’s a large, complex scene without spoken dialogue, so you have to get the pacing right, based on feel, not on the rhythm of the dialogue. The opening scene is an action air battle. It was shot in a very measured fashion, but you had to get the right balance between the story and the characters. You have to remember that the film is personal and you can’t lose sight of that.”

For Goldenberg, the raft sequence was the most difficult to get right. He says, “You want it to feel epic, but in reality – being lost at sea on a raft – there are long stretches of boredom and isolation, interrupted by some very scary moments. You have to find the right balance without the section feeling way too long.”

Both editors are fans of digital editing. Goldenberg puts it this way, “Avid lets you be wrong a lot of the time, because you can always make another version without physical constraints. I feel lucky, though, in having learned to work in the ‘think before you edit’ era. The result – more thinking – means you know where you are going when you start to cut a scene. You are satisfied that you know the footage well. I like to work in reels to keep the timeline from getting too cumbersome and I do keep old versions, in case I need to go back and compare or restore something.”

The film was produced with involvement by Louis Zamperini and his family, but unfortunately he died before the film was completed. Goldenberg sums it up, “It was a thrill. Louie was the type of person you hope your kids grow up to be. We were all proud to be part of telling the story for Louie.” The film opened its US release run on Christmas Day 2014.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Audio Tools Update

df_audiotools_7_sm

Most video editors can get by with the audio editing tools that are built into their NLE. But if you want that extra audio finesse, then you really need some dedicated audio applications and plug-ins. 2014 is closing out nicely with new offerings from Sony Creative Software and iZotope.

Sony Creative Software – Sound Forge Expands

df_audiotools_1
Sony’s software arm – known for Acid, Vegas and Sound Forge, to name a few – has expanded its Mac audio offerings. Although Sony’s audio applications have traditionally been Windows-based, Sony previously ventured into the Mac ecosystem with its 1.0 version of Sound Forge Pro for the Mac. This year version 2.0 was released, which includes more features, power and support for 64-bit plug-ins. All Sound Forge Mac versions are architected for OS X and not simply a port from Windows.

As before, Sound Forge Pro continues to be a file-based editor and not a multi-track DAW designed for mixing. It supports high-resolution files up to 24-bit/192 kHz. Although file-based, it can handle up to 32-channel files and is capable of recording, as well as editing and mastering. The Pro version includes a number of Sony and iZotope plug-ins designed for mastering (EQ, reverb, multi-band compressor, limiter, imager and exciter), post processing (sample rate and bit depth conversion) and restore/repair (declicker, denoiser and declipper).

A new addition is the inclusion of the iZotope Nectar Elements plug-in. The Nectar series is a channel strip, all-in-one filter that combines a number of the processes that a recording engineer would place in the signal chain when recording vocals. However, it can still be used on music and mixed tracks without issue. Nectar Elements is the “lite” version of the full Nectar filter and is being bundled with a number of the Sony applications, including Vegas Pro. Another included filter is the Zplane élastique Pro time stretch and pitch shift plug-in.

df_audiotools_2
The Convrt batch tool – a freestanding utility for mass file conversions – comes with Sound Forge Pro Mac 2. Load your files, set up the script and the rest is automated. If you purchase Sound Forge Pro as part of the Audio Mastering Bundle, you also get SpectraLayers Pro 2, an audio spectrum editing tool. Even without it, Sound Forge Pro Mac 2 now enables data interoperability between it and SpectraLayers Pro 2.

Loudness compliance is a big concern for broadcasters, so Sound Forge Pro Mac 2 now includes CALM Act compliant metering. Unfortunately, the read-out is by numbers and meters without the visual eye candy of an Insight or Radar-style meter; however, it does provide true peak values.

Mid-year, Sony also released Sound Forge for the Mac App Store. It doesn’t have all of the bells and whistles of the Pro version and, due to the App Store’s sandboxing policies, has other minor differences. Convrt and the iZotope plug-ins are not included; however, most everything else is. Both versions support 64-bit AU and VST plug-ins. Both include real-time previewing. Both have generally the same tech specs. One extra that comes with Sound Forge is Wave Hammer, Sony’s own mastering compressor. It is not included in the Pro version. Lastly, both versions come with a large amount of downloadable sound effects content, which is available to users as a separate download, once they’ve registered the software.

There are a lot of audio tools on the market, but I find Sound Forge or Sound Forge Pro Mac 2 to be definite must-haves for the video editor serious about audio. The Sound Forge interface is clean and customizable and the operation is very intuitive. With nearly every spot I cut in FCP X, I’ll bounce the mix out to Sound Forge Pro for a mastering pass. Same if I need to modify a TV mix for a radio spot.

iZotope – Nectar

df_audiotools_4
iZotope is one of the top audio plug-in developers and you’ll find their products bundled with a number of applications, including those from Sony and Adobe. One cool plug-in is the Nectar product, which is marketed in three versions: Nectar 2, Nectar Elements and Nectar Production Suite. These are compatible with most audio and video hosts that support AAX (Pro Tools), RTAS/AudioSuite, VST and AU plug-ins.

Nectar 2 is an all-in-one plug-in that combines eleven tools, including plate reverb, pitch, FX, delay, de-esser, saturation, compressors, gate, EQ and limiter. It functions a lot like a very sophisticated channel strip in a mixing console, except with a lot more processes. Although designed with vocal recording in mind, it can easily be used for music and/or mastering. The interface presents you with an overview and easy controls for all tools, but then you can open each individual tool for more precise adjustments. It includes over 150 presets. You can switch between tracking and mixing modes for low-latency processing.

Nectar Elements is a reduced-feature version of Nectar 2. The controls tend to be more specific for vocal recording and the needs of home enthusiasts. On the other side is Nectar 2 Production Suite, which bundles the filter with a Pitch Editor and Breath Control plug-in for Nectar 2.

iZotope – RX 4

df_audiotools_5
The biggest iZotope news is the release of the RX 4 audio repair and enhancement tool – the latest in iZotope’s RX series. RX 4 comes in a standard and advanced version and both include the RX 4 standalone application, as well as a set of RX 4 plug-ins that are compatible with a wide range of hosts. Built-in tools include declip, declick, hum removal, denoise, spectral repair, deconstruct (advanced), dereverb (advanced), leveler (advanced), EQ match (advanced), ambience match (advanced), time & pitch (advanced), loudness (advanced), gain, EQ, channel operations, resample and dither. Most, but not all of these, are also installed as RX 4 real-time plug-ins. Third-party plug-ins can also be accessed and used in the standalone RX 4 application. The breadth of what RX 4 offers makes it the biggest gun in the arsenal of most dialogue editors and sound designers, who are tasked with cleaning up challenging location recordings.

df_audiotools_3
A powerful, new feature is RX Connect, which is a special “conduit” plug-in. It sets up a roundtrip between your host audio application and the RX 4 standalone application. For example, if you edit in Pro Tools, Audition or Sound Forge, highlight a range or a set of audio clips and select the RX Connect plug-in. (Where and how you select it will differ with each application). This opens an RX 4 window where you choose to send the selection to the RX 4 application. There you can process the clip as needed and send it back to the host application. In this way, you can use the individual effects as real-time plug-ins inside your audio host, or use the advanced processing power of the standalone application via the RX Connect roundtrip.

df_audiotools_6
In addition to loudness processing, RX 4 Advanced also includes the Insight metering suite. This is iZotope’s extensive set of audio analysis and metering tools. It can be used for troubleshooting or to assure broadcast loudness compliance. Two other Advanced tools – EQ Match and Ambience Match – are ideal for the dialogue editor. EQ Match is exactly what the name implies. Here you send both a reference clip and a clip to be processed to the RX 4 application. The second clip is then analyzed and “matched” to the sonic qualities of the first. A common video editing practice is to cut out unwanted audio in your dialogue tracks, such as director’s cues, background noises, etc. This leaves gaps of silence in your track that need to be filled with ambient sound. In Ambience Match, RX 4 samples areas of background sound between the spoken dialogue and creates a sound print from the quiet areas. RX 4 uses this to fill in gaps “automagically” between clips.

Finally, the standalone RX 4 application comes with built-in batch processing. There, you can set up a series of processing steps and the output location, naming and file formats. Add a set of files, apply the batch steps and process the files. iZotope’s RX 4 repair suite is a unique tool that is hard to beat when struggling with difficult audio that you want to make pristine. It’s a product that keeps getting better and, with the addition of the new RX Connect plug-in, provides better interoperability than ever before.

Originally written for Digital Video magazine / CreativePlanetNetwork

©2014 Oliver Peters

Color LUTs for the Film Aesthetic

df_lut_1_sm

Newer cameras offer the ability to record in log gamma profiles. Those manufactured by Sony, ARRI and Canon have become popular, and with them, so have a new class of color correction filters used by editors. Color look-up tables, known as LUTs, have long been used to convert one color space to another, but now are increasingly used for creative purposes, including film stock emulation. A number of companies offer inexpensive plug-ins to import and apply common 3D color LUTs within most NLEs, grading software and compositors.

While many of these developers include their own film look LUTs, it is also easy to create your own LUTs that are compatible with these plug-ins. A commonly used LUT format is .cube, which can easily be generated by a knowledgeable editor using DaVinci Resolve, AMIRA Color Tool or FilmConvert, to name a few.
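Part of the reason .cube files travel so well between applications is that the format is plain text: a short header followed by a grid of RGB output values. As a rough illustration of that structure (the title, filename and grid size here are arbitrary), a short Python sketch that writes out an identity 3D LUT:

```python
# Sketch: write a minimal identity 3D LUT in the .cube text format.
# A .cube file lists RGB output triples for a regular grid of input
# values, with the red axis varying fastest.

def write_identity_cube(path, size=17):
    with open(path, "w") as f:
        f.write('TITLE "Identity"\n')
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write(f"{r/(size-1):.6f} "
                            f"{g/(size-1):.6f} "
                            f"{b/(size-1):.6f}\n")

write_identity_cube("identity.cube")
```

An identity LUT like this leaves the image untouched; a real look LUT simply stores shifted values at each grid point.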

Most LUTs are created with a particular color space in mind, which means you actually need two LUTs. The first, known as a camera profile patch, adjusts for a specific model of camera and that manufacturer’s log values. The second LUT provides the desired “look”. Depending on the company, you may have a single filter that combines the look with the camera patch or you may have to apply two separate filters. LUTs are a starting point, so you will still have to color correct (grade) the shots to get the right look. Often in a chain of filters, you will want the LUT as the last effect and do all of your grading ahead of that filter. This way you aren’t trying to recover highlights or shadow detail that might have been compressed by the LUT values.
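To make the lookup mechanics concrete, here is a minimal sketch of how an application might push a pixel through a 3D LUT table. Real grading tools interpolate trilinearly between grid points; nearest-neighbor is used here only to keep the idea visible. The table layout and function names are illustrative, not any vendor’s actual API:

```python
# Sketch: apply a parsed 3D LUT to a normalized RGB pixel using
# nearest-neighbor lookup (real tools interpolate between grid points).

def apply_lut(pixel, table, size):
    # table is a flat list of (r, g, b) tuples, red axis varying fastest
    idx = [min(size - 1, round(c * (size - 1))) for c in pixel]
    r, g, b = idx
    return table[b * size * size + g * size + r]

# A hypothetical 2x2x2 identity LUT for illustration:
size = 2
table = [(r, g, b) for b in (0.0, 1.0)
                   for g in (0.0, 1.0)
                   for r in (0.0, 1.0)]

graded = apply_lut((1.0, 0.0, 1.0), table, size)  # -> (1.0, 0.0, 1.0)
```

Chaining a camera patch LUT and then a look LUT, as described above, is just two of these lookups in sequence, which is also why grading adjustments belong before the LUT in the filter chain.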

Color Grading Central’s LUT Utility

df_lut_2a_sm
The LUT Utility has become a “go to” tool for Final Cut Pro X editors who want to use LUTs. It installs with eleven basic LUTs, which include a number of camera log to Rec 709 patches, as well as several film looks for Fuji, Kodak, 2 Strip and 3 Strip emulation. LUT Utility installs as a Motion template and also appears as a System Preferences pane. You may use this pane to install additional LUTs; however, you can also install them by placing the LUT file into the correct Motion template folder. Since it uses LUTs in the .cube format, any application that generates a 3D LUT in that format can be used to create a new LUT that is compatible with LUT Utility. When you apply a LUT Utility filter to a clip or an adjustment layer inside FCP X, the inspector pane for the filter gives you access to all recognized LUTs through a pulldown menu. The only control is a slider to adjust the amount of the LUT that is applied.

Color Grading Central also has a partnership with visionColor to bundle the Osiris film emulation filters separately or together with LUT Utility. The Osiris set includes nine film emulations that cover a number of stocks and stylized looks. They are provided in the .cube format. Although both the Color Grading Central and Osiris filters are offered for FCP X, it’s worth noting that the LUTs themselves are compatible with Avid Media Composer, Adobe Creative Cloud, Autodesk Smoke and DaVinci Resolve. One thing to be careful of in FCP X is that Apple also includes its own log processing for various cameras, such as the ARRI ALEXA, and often applies this automatically. When you apply a LUT to log footage in FCP X, make sure you are not double-processing the image. Either use a LUT designed for Rec 709 imagery, or set the FCP X log processing for the clip to “none”, when using a LUT designed for log color space.

In addition to Osiris, visionColor and Color Grading Central developed the ImpulZ LUT series. ImpulZ comes in a Basic, Pro and Ultimate set of LUTs, based on the camera profiles you typically need to work with. It covers a mixture of stock brands and types, including negative print and still film stocks. The Ultimate collection includes about 1950 different LUT files. In addition to camera profiles, these LUTs also cover four gamma profiles, including film contrast, film print, VisionSpace and Cineon Log (Ultimate only). The VisionSpace profile is their own flatter curve that is more conducive to further grading.

Koji Color

df_lut_2b_sm
Another LUT package just released for Final Cut Pro X – but also compatible with other applications – is Koji Color. This is a partnership between noted color timer Dale Grahn (Predator, Saving Private Ryan, Dracula) and plug-in developer Crumplepop. This partnership previously resulted in the Dale Grahn Color iPad app. Unlike many other film emulation packages that attempt to apply a very stylized look, Koji Color is designed to provide an accurate emulation of a number of popular print stocks.

As implied by the name (Koji appears to be a mash-up of “Kodak” and “Fuji”), three key stocks from each brand are covered, including Kodak 2383, 2393, 2302 (B&W) and Fuji 3514, 3521, 3523. Each print stock has specific contrast and color characteristics, which these LUTs seek to duplicate. In FCP X, you select and apply the correct version of the filter based on camera type. Then from the inspector, select the film stock. There are extra sliders to tweak saturation and exposure (overall, hi, mid, shadow). This helps you dial in the look more precisely.

Koji Color comes in three product packages. The basic Koji DSLR is a set of filters that you would apply to standard HD cameras shooting in video mode rather than log mode. The output format is Rec 709 video. If you shoot with a lot of log profile cameras, then you’ll want Koji Log, which also includes Koji DSLR. This package includes the same LUTs, but with filters that have built-in camera patches for each of the various camera models that shoot log. Again, the output format is Rec 709.

The most expensive bundle is Koji Studio, which also includes the other two packages. The main difference is that Studio also supports output in the DCI-P3 color space. This is intended for digital intermediate color correction, which goes beyond the needs of most video editors.

SpeedLooks

df_lut_3_sm
LookLabs is the development side of Canadian post facility Jump Studios. As an outgrowth of their work for clients and shows, the team developed a set of film looks, which they branded under the name SpeedLooks. If you use Adobe Creative Cloud, then you know that SpeedLooks comes bundled for use in Adobe Premiere Pro CC and SpeedGrade CC. Like the other film emulation LUTs, SpeedLooks are based on certain film stocks, but LookLabs decided to make their offerings more stylized. There are specific bundles covering different approaches to color, including Clean, Blue, Gold and others.

SpeedLooks come in both .cube and Adobe’s .look formats, so you are not limited to only using these with Adobe products. LookLabs took a slightly different approach to cameras, by designing their looks based on the starting point of their own virtual log space. This way they can adjust for the differences in the log spaces of the various cameras. A camera patch converts the camera’s log or Rec 709 profile into LookLabs’ own log profile. When using SpeedLooks, you should first apply a camera profile patch as one filter and then the desired look filter as another.

If you use Premiere Pro CC, all you need to do is apply the Lumetri color correction effect. A standard OS dialogue opens to let you navigate to the right LUT file. Need to change LUTs? Simply click the set-up icon in the effect control window and select a different file. If you use SpeedGrade CC, then you apply the camera patch at the lowest level and the film look LUT at the highest level, with primary and secondary grading layers in between. LookLabs also offers a version of SpeedLooks for editors. This lower-cost package supplies the same film look LUTs, but is intended for cameras that are already in the Rec 709 color space. All of these filters can be used in a number of applications, as well as in FCP X through Color Grading Central’s LUT Utility.

Like all of these companies, LookLabs has taken time to research how to design LUTs that match the response of film. One area that they take pride in is the faithful reproduction of skin tones. The point is to shift color across wide ranges without producing unnatural skin tones. Another area is in highlight and shadow detail. LUTs are created by applying curve values to the image that compress the highlight and shadow portion of the signal. By altering the RGB values of the curve points, you get color in addition to contrast changes. With SpeedLooks, as highlights are pushed to the top and shadows to the bottom, there is still a level of detail that’s retained. Images aren’t harshly clipped or crushed, which leaves you with a more filmic response that rolls off in either direction.
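That rolloff behavior can be pictured as an s-curve that eases values toward black and white instead of clipping them hard. The sketch below is purely illustrative (a simple tanh-based shoulder, not LookLabs’ actual, proprietary curves):

```python
import math

# Sketch: a soft highlight/shadow rolloff curve, in the spirit of the
# filmic response described above. Illustrative only -- not any
# vendor's actual math. Values ease toward 0 and 1 instead of
# clipping hard at the extremes.

def filmic_rolloff(x, strength=1.5):
    # Map x in [0, 1] through an s-curve centered at 0.5;
    # higher strength means a steeper midsection and softer shoulders.
    t = (x - 0.5) * 2.0 * strength
    return 0.5 + 0.5 * math.tanh(t) / math.tanh(strength)
```

Middle gray stays put (an input of 0.5 maps to 0.5), while values near the top and bottom compress gradually, which is the “rolls off in either direction” behavior described above.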

FilmConvert

df_lut_4_sm
FilmConvert (an arm of Rubber Monkey Software) is now in the 2.0 version of its popular film emulation application and plug-ins. You may purchase it as a standalone tool or as filters for popular NLEs. Unlike the others that use a common LUT format, like .cube, FilmConvert does all of its processing internally. The plug-ins not only provide film emulation, but are also full-fledged, 3-way color correction filters. In fact, you could use a FilmConvert filter as the sole grading tool for all of your work. FilmConvert filters are available for Adobe, Avid, Final Cut and in the OFX format for Resolve, Vegas and Scratch. The Resolve version doesn’t include the 3-way color correction function.

You may keep your setting generic or select specific camera models. If you don’t have that camera profile installed, the filter will prompt you to download the appropriate camera file from their website. Once installed, you are good to go. FilmConvert offers a wide range of stock types for emulation. These span multiple brands and also include still photo stocks, such as Polaroid. The FilmConvert emulations are based on color negative film (or slide transparencies) and not release print stocks, like those of Koji Color.

In addition to grading and emulating certain stocks, FilmConvert lets you apply grain in a variety of sizes. From the control pane, dial in the amount of the film color, curve and grain, which is separate from the adjustments made with the 3-way color correction tool. New in this updated version is that you can generate a 3D LUT from your custom look. In doing so, you can create a .cube file ready for application elsewhere. This file will carry the color information, though not the grain. The standalone version is a more complete grading environment, complete with XML roundtrips and accelerated rendering.

Originally written for Digital Video magazine / CreativePlanetNetwork

©2014, 2015 Oliver Peters

Birdman

It’s rare, but exhilarating, when you watch a movie with a unique take on film’s visual language, without the crutch of extensive computer-generated imagery. That’s precisely the beauty of Birdman or (The Unexpected Virtue of Ignorance). The film is directed and co-written by Alejandro González Iñárritu (Biutiful, Babel, 21 Grams) and features a dynamic ensemble cast dominated by Michael Keaton’s lead performance as Riggan Thomson. While most films are constructed of intercutting master shots, two-shots and singles, Birdman is designed to look like a continuous, single take. While this has been done before in films, approximately 100 minutes out of the two-hour movie appear as a completely seamless composite of lengthy Steadicam and hand-held takes.

Riggan Thomson (Keaton) is a movie star who rode to fame as the comic book super hero Birdman; but, it’s a role that he walked away from. Searching for contemporary relevance, Riggan has decided to mount a Broadway play, based on the real-life Raymond Carver short story, What We Talk About When We Talk About Love. The film takes place entirely at the historic St. James Theater near Times Square and the surrounding area in New York. Principal photography occurred over a 30-day period, both at the real theater and Times Square, as well as at Kaufman Astoria Studios. The soundstage sets were for the backstage and dressing room portions of the theater. Throughout the film, Riggan struggles with the challenges of getting the play to opening day and dealing with his fellow cast members, but more notably confronting his alter ego Birdman, seen in person and heard in voice-over. This is, of course, playing out in Riggan’s imagination. The film, like the play within the film, wrestles with the overall theme of the confusion between love and affection.

Bringing this ambitious vision to life fell heavily upon the skills of the director of photography and the editors. Emmanuel Lubezki, known as Chivo, served as DoP. He won the 2014 Cinematography Oscar for Gravity, a film that was also heralded for its long, seemingly continuous shots. Stephen Mirrione (The Monuments Men, Ocean’s Thirteen, Traffic) and Douglas Crise (Arbitrage, Deception, Babel) teamed up for the edit. Both editors had worked together before, as well as with the director. Mirrione started during the rehearsal process. At the time of production, Crise handled the editing in New York, while Mirrione, who was back in LA by then, was getting dailies, checking in on the cut and sending scenes back and forth with changes every day.

It starts with preparation

Stephen Mirrione explains, “When I first saw what they wanted to do, I was a bit skeptical that it could be pulled off successfully. Just one scene that didn’t work would ruin the whole film. Everything really had to align. One of the things that was considered was to tape and edit all of the rehearsals. This was done about two months before principal photography was set to start. The rehearsals were edited together, which allowed Alejandro to adjust the script, pacing and performances. We could see what would work and what wouldn’t. Before cameras even rolled, we had an assembly made up of the rehearsal footage and some of the table read. So, together with Alejandro, we could begin to gauge what the film would look and sound like, where a conversation was redundant, where the moves would be. It was like a pre-vis that you might create for a large-scale CGI or animated feature.”

Once production started in New York, Douglas Crise picked up the edit. Typically, the cast and crew would rehearse the first half of the day and then tape during the second half. ARRI ALEXA cameras were used. The media was sent to Technicolor, who would turn around color corrected Avid DNxHD36 dailies for the next day. The team of editors and assistants used Avid Media Composer systems. According to Crise, “I would check the previous day’s scenes and experiment to see how the edit points would ‘join’ together. You are having to make choices based on performance, but also how the camera work would edit together. Alejandro would have to commit to certain editorial decisions, because those choices would dictate where the shot would pick up on the next day. Stephen would check in on the progress during this period and then he picked up once the cut shifted to visual effects.”

Naturally the editing challenge was to make the scenes flow seamlessly in both a figurative and literal sense. “The big difference with this film was that we didn’t have the conventional places where one scene started and another ended. Every scene walks into the next one. Alejandro described it as going down a hill and not stopping. There wasn’t really a transition. The characters just keep moving on,” Crise says.

“I think we really anticipated a lot of the potential pitfalls and really prepared, but what we didn’t plan on were all the speed changes,” Mirrione adds. “At certain points, when the scene was not popping for us, if the tempo was a little off, we could actually dial up the pace or slow it down as need be without it being perceptible to the audience and that made a big difference.”

Score and syncopation

To help drive pace, much of the track uses a drum score composed and performed by Mexican drummer Antonio Sanchez. In some scenes within the film, the camera also pans over to a drummer with a kit who just happens to be playing in an alley or even in a backstage hallway. Sanchez and Iñárritu went into a studio and recorded sixty improvised tracks based on the emotions that the film needed. Mirrione says, “Alejandro would explain the scene to the drummer in the studio and then he’d create it.” Crise continues, “Alejandro had all these drum recordings and he told me to pick six of my favorites. We cut those together so that he could have a track that the drummer could mimic when they shot that scene. He had the idea for the soundtrack from the very beginning and we had those samples cut in from the start, too.”

“And then Martín [Hernández, supervising sound editor] took it to another level. Once there was a first pass at the movie, with a lot of those drum tracks laid in as an outline, he spent a lot of time working with Alejandro to strip layers away, add some in, trying a lot of different beats. Obviously, in every movie, music will have an impact on point of view and mood and tone. But with this, I think it was especially important, because the rhythm is so tied to the camera and you can’t make those kinds of cadence adjustments with as much flexibility as you can with cuts. We had to lean on the music a little more than normal at times, to push back or pull forward,” Mirrione says.

The invisible art

The technique of this seamless sequence of scenes really allows you to get into the head of Riggan more so than other films, but the editors are reserved in discussing the actual editing technique. Mirrione explains, “Editing is often called the ‘invisible art’. We shape scenes and performances on every film. There has been a lot of speculation over the internet about the exact number and length of shots. I can tell you it’s more than most people would guess. But we don’t want that to be the focus of the discussion. The process is still the same of affecting performance and pace. It’s just that the dynamic has been shifted, because much of the effort was front-loaded in the early days. Unlike other films, where the editing phase after production focuses on shaping the story, on Birdman it was about fine-tuning.”

Crise continues, “Working on this film was a different process and a different way to come up with new ideas. It’s also important to know that most of the film was shot practically. Michael [Keaton] really is running through Times Square in his underwear. The shots are not comped together with green screen actors against CGI buildings.” That said, quite a lot of visual effects were used to enhance and augment the transitions from one shot to the next to make them seamless. On the other hand, when Riggan’s Birdman delusions come to life on screen, we also see more recognizable visual effects, such as a brief helicopter and creature battle playing out over the streets of New York.

Winking at the audience

The film is written as a black comedy with quite a few insider references. Clearly, the casting of Michael Keaton alludes to his real experience launching the Batman film franchise and, in many ways, the whole super hero film genre. However, there was also a conscious effort during rehearsals and tapings to adjust the dialogue in ways that kept these references as current as possible. Crise adds, “Ironically, in the scenes on the rooftop there was a billboard in the background behind Emma Stone and Edward Norton, with a reference to Tom Hanks. We felt that audiences would believe that we created it on purpose, when in fact it was a real billboard. It was changed in post, just to keep from appearing to be an insider reference that was too obvious.”

The considerations mandated during the edit by a seamless film presented other challenges, too. For example, there were simple concerns, like where to structure reel breaks and how to hand off shots for visual effects. Mirrione points out, “Simple tasks such as sending out shots for VFX, color correction, or even making changes for international distribution requirements were complicated by the fact that once we finished, there weren’t individual ‘shots’ to work with – just one long, never-ending strand. It meant inventing new techniques along the way. Steven Scott, the colorist, worked with Chivo at Technicolor LA to perfect all the color and lighting and had to track all of these changes across the entire span of the movie. The same way we found places to hide stitches between shots, they had to hide these color transitions, which are normally changed at the point of a cut from one shot to the next.”

In total, the film was in production and post for about a year, starting with rehearsals in early 2013. Birdman was mixed and completed by mid-February 2014. While it posed a technical and artistic challenge from the start, everything amazingly fell into place, highlighted by perfect choreography of cast, crew and the post production team. It will be interesting to see how Birdman fares during awards season, because it succeeds on so many levels.

Originally written for Digital Video magazine / CreativePlanetNetwork

©2014, 2015 Oliver Peters