Sorenson Squeeze 10


Video formats don’t hold still for long, and neither do the video codecs used for file encoding. As the industry looks at 4K delivery over the web and internet-based television streaming services, we now have more codecs to consider, too. Target delivery had seemed to settle on H.264 and MPEG-2 for a while, but now there’s growing interest in HEVC/H.265 and VP9, thanks to their improved encoding efficiency. Greater efficiency means that you can maintain image quality in 4K files without creating inordinately large file sizes. Sorenson Media’s Squeeze application has long been an industrial-strength encoding utility that many pros rely on. With the release of Sorenson Squeeze 10, pros have a new tool designed to accommodate the beyond-HD resolutions that are in our future.

Sorenson Squeeze comes in server and desktop versions. Squeeze Desktop 10 includes three variations: Lite, Standard and Pro. Squeeze Lite covers a wide range of input formats, but video output is limited to FLV, M4V, MP4 and WebM formats. Essentially, Lite is designed for users who primarily need to encode files for use on the web. Desktop Standard adds VP9 and Multi-Rate Bundle Encoding. The latter creates a package with multiple files of different bitrates, which is a configuration used by many web streaming services. Standard also includes 4K presets (H.264 only) and a wide range of output codecs. The Pro version adds support for HEVC, professional decoding and encoding of Avid DNxHD and Apple ProRes (Mac only), and closed caption insertion.

All three models have added what Sorenson calls Simple Format Conversion. This is a preset available in some of the format folders. The source size, frame rate, and quality are maintained, but the file is converted into the target media format. It’s available for MP4/WebM with Lite, and MP4/MOV/MKV with Standard and Pro. Squeeze is supposed to take advantage of CUDA acceleration when you have certain NVIDIA cards installed, which speeds up MainConcept H.264/AVC encoding. I have a Quadro 4000 with the latest CUDA drivers installed in my Mac Pro running Yosemite (10.10.1). Unfortunately, Squeeze Pro 10 doesn’t recognize the driver as a valid CUDA driver. When I asked Sorenson about this, they explained that MainConcept has dropped CUDA support for its H.264 codec. “It will not support the latest cards and drivers. If you do use the CUDA feature you will likely see little-to-no speedup, maybe even a speed decrease, and your output video will have decreased quality compared to H.264 encoded with the CPU.”

As a test, I took a short (:06) 4096×2160 clip that was shot on a Canon EOS 1DC camera. It was recorded using the QuickTime Photo-JPEG codec and is 402MB in size. I’m running a 2009 8-core Mac Pro (2.26GHz), 28GB RAM and the Quadro 4000 card. To encode, I picked the default Squeeze HEVC 4K preset. It encodes using a 1-pass variable bitrate at a target rate of 18,000Kbps. It also resizes to a UHD size of 3840×2160; however, it is set to maintain the same aspect ratio, so the resulting file was actually 3840×2024. Of course, the preset’s values can be edited to suit your requirements.
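As a quick sanity check on that resize – my own illustrative arithmetic, not Sorenson’s actual scaler – fitting a 4096-wide frame to a 3840-pixel width while preserving aspect ratio lands almost exactly on the height Squeeze produced:

```python
# Aspect-preserving resize from DCI 4K (4096 x 2160) to a 3840-pixel width.
# Illustrative arithmetic only; the exact rounding rule is an assumption.
src_w, src_h = 4096, 2160
target_w = 3840

scale = target_w / src_w          # 0.9375
target_h = src_h * scale          # 2025.0

# Encoders typically round the height down to an even value for 4:2:0 chroma,
# which is how a 3840 x 2024 file can result.
even_h = int(target_h) - (int(target_h) % 2)
print(target_w, even_h)           # 3840 2024
```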

It took 3:32 (min/sec) to encode the file with the HEVC/H.265 codec and the resulting size was 13MB. I compared this to an encode using the H.264 preset, which uses the same values. It encoded in 1:43 and resulted in a 14.2MB file. Both files are wrapped as .MP4 files, but I honestly couldn’t tell much difference in quality between the two codecs. They both looked good. Unfortunately, there aren’t many players that will decode and play the HEVC codec yet – at least not on the Mac. To play the HEVC file, I used an updated version of VLC, which includes an HEVC component. Of course, most machines aren’t yet optimized for this new codec.
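For what it’s worth, those file sizes line up with the preset’s target bitrate. A rough back-of-the-envelope estimate (my own arithmetic, ignoring container overhead, audio and VBR variation):

```python
# Expected file size for a 6-second clip at a target of 18,000 Kbps.
target_kbps = 18_000
duration_s = 6

bits = target_kbps * 1000 * duration_s
megabytes = bits / 8 / 1_000_000
print(round(megabytes, 1))   # 13.5 -- in the same ballpark as the 13MB and 14.2MB results
```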

Other features of Squeeze aren’t new, but are still worth mentioning. For example, the presets are grouped in two ways – by format and workflow. Favorites can be assigned for quick access to the few presets that you might use most often. Squeeze enables direct capture from a camera input or batch encoding of files in a monitored watch folder. In addition to video, various audio formats can also be exported.

Encoding presets can include a number of built-in filters, as well as any VST audio plug-in installed on your computer. Finally, you can add publishing destinations, including YouTube, Akamai, Limelight and Amazon S3 locations. Another publishing location is Squeeze Stream, the free account included with a purchase of Squeeze (Standard or Pro versions only). Thanks to all of these capabilities, Sorenson Media’s Squeeze Desktop 10 will continue to be the tool many editors choose for professional encoding.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Camerama 2015


The design of a modern digital video camera comes down to the physics of the sensor and shutter, the software that controls colorimetry, and smart industrial design that optimizes the ergonomics for the operator. Couple that with a powerful internal processor and recording mechanism and you are on your way. Although not exactly easy, these traits no longer require skills that are limited to the traditional camera manufacturers. As a result, innovative new cameras have been popping up from many unlikely sources.

The newest of these is AJA, which delivered the biggest surprise of NAB 2014 in the form of their CION 4K/UltraHD/2K/HD digital camera. Capitalizing on a trend started by ARRI, the CION records directly to the edit-ready Apple ProRes format, using AJA Pak solid state media. The CION features a 4K APS-C sized CMOS sensor with a global shutter to eliminate rolling-shutter artifacts. AJA claims 12 stops of dynamic range and uses a PL mount for lenses designed for Super 35mm. The CION is also capable of outputting AJA camera raw at frame rates up to 120fps. It can send out 4K or UHD video from its four 3G-SDI outputs to the AJA Corvid Ultra for replay and center extraction during live events.

The darling of the film and high-end television world continues to be ARRI Digital with its line of ALEXA cameras. These now include the Classic, XT, XT Plus, XT M and XT Studio configurations. They vary based on features and sensor size. The Classic cameras have a maximum active sensor area of 2880 x 2160 photosites, while the XT models go as high as 3414 x 2198. Another difference is that the XT models allow in-camera recording of ARRIRAW media. The ALEXA introduced ProRes recording and all current XT models permit Apple ProRes and Avid DNxHD recording.

The ALEXA has been joined by the newer, lighter AMIRA, which is targeted at documentary-style shooting with smaller crews. The AMIRA is tiered into three versions, with the Premium model offering 2K recording in all ProRes flavors at up to 200fps. ARRI has added 4K capabilities to both the ALEXA and AMIRA lines by using the full sensor area in their Open Gate mode. In the AMIRA, this 3.4K image is internally scaled by a factor of 1.2 to record a UHD file at up to 60fps to its in-camera CFast 2.0 cards. The ALEXA uses a similar technique, but only records the 3.4K signal in-camera, with scaling to be done later in post.

To leapfrog the competition, ARRI also introduced its ALEXA 65, which is available through the ARRI Rental division. This camera is a scaled-up version of the ALEXA XT and uses a sensor that is larger than a 5-perf 65mm film frame. That’s an Open Gate resolution of 6560 x 3102 photosites. The signal is captured as uncompressed ARRIRAW. Currently the media is recorded on ALEXA XR Capture drives at a maximum frame rate of 27fps.

Blackmagic Design had been the most unexpected camera developer a few years ago, but has since grown its DSLR-style camera line into four models: Studio, Production 4K, Cinema and Pocket Cinema. These vary in cosmetic style and size, which formats they are able to record and the lens mounts they use. The Pocket Cinema Camera is essentially a digital equivalent of a Super 16mm film camera, but in a point-and-shoot, small camera form factor. The Cinema and Production 4K cameras feature a larger, Super 35mm sensor. Each of these three incorporates ProRes and/or CinemaDNG raw recording. The Studio Camera is designed as a live production camera. It features a larger viewfinder, housing, accessories and connections designed to integrate this camera into a television studio or remote truck environment. There is an HD and a 4K version.

The biggest Blackmagic news was the introduction of the URSA. Compared to the smaller form factors of the other Blackmagic Design cameras, the URSA is, true to its name, a “bear” of a camera. It is a rugged 4K camera built around the idea of user-interchangeable parts. You can get EF, PL and broadcast lens mounts, but you can also operate it without a lens as a standalone recording device. It’s designed for UltraHD (3840 x 2160), but can record up to 4,000 pixels wide in raw. Recording formats include CinemaDNG raw (uncompressed and 3:1 compressed), as well as Apple ProRes, with speeds up to 80fps. There are two large displays, one on each side of the camera, which can be used for monitoring and operating controls. It has a 10” fold-out viewfinder and a built-in liquid cooling system. As part of the modular design, users can replace mounts and even the sensor in the field.

Canon was the most successful company out of the gate when the industry adopted HD-video-capable DSLR cameras as serious production tools. Canon has expanded these offerings with its Cinema EOS line of small production cameras, including the C100, C100 Mark II, C300 and C500, which all share a similar form factor. Also included in this line-up is the EOS-1D C, a 4K camera that retains its DSLR body. The C300 and C500 cameras both use a Super 35mm sized sensor and come in EF or PL mount configurations. The C300 is limited to HD recording using the Canon XF codec. The C500 adds 2K and 4K (4096 cinema and 3840 UHD) recording capabilities, but this signal must be externally recorded using a device like the Convergent Design Odyssey 7Q+. HD signals are recorded internally as Canon XF, just like the C300. The Canon EOS C100 and C100 Mark II share the design of the C300, except that they record to AVCHD instead of Canon XF. In addition, the Mark II can also record MP4 files. Both C100 models record to SD cards, whereas the C300/C500 cameras use CF cards. The Mark II features improved ergonomics over the base C100 model.

The Canon EOS-1D C is included because it can record 4K video. Since it is also a still photography camera, the sensor is an 18MP full-frame sensor. When recording 4K video, it uses a Motion JPEG codec, but for HD it can also use the AVCHD codec. The big plus over the C500 is that the 1D C records 4K onboard to CF cards, so it is better suited to hand-held work. The DSLR cameras that started the craze for Canon continue to be popular, including the EOS 5D Mark III, the new EOS 7D Mark II and the consumer-oriented Rebel models. All are outstanding still cameras. The 5D features a 22.3MP CMOS sensor and records HD video as H.264 MOV files to onboard CF cards. Thanks to the sensor size, the 5D is still popular for videographers who want extremely shallow depth-of-field shots from a handheld camera.

Digital Bolex has become a Kickstarter success story. These out-of-the-box thinkers coupled the magic of a venerable name from the film era with innovative design and marketing to produce the D16 Cinema Camera. Its form factor mimics older, smaller, handheld film camera designs, making it ideal for run-and-gun documentary production. It features a Super 16mm sized CCD sensor with a global shutter and claims 12 stops of dynamic range. The D16 records in 12-bit CinemaDNG raw to internal SSDs, but media is offloaded to CF cards or via USB 3.0 for interchange. The camera comes with a C-mount, but EF, MFT and PL lens mounts are available. Current resolutions include 2048 x 1152 (“S16mm mode”), 2048 x 1080 (“S16 EU”) and HD (“16mm mode”). The D16 records at 23.98, 24 and 25fps frame rates, with variable rates up to 32fps in the S16mm mode coming soon. To expand on the camera’s attractiveness, Digital Bolex also offers a line of accessories, including Kish/Bolex 16mm prime lens sets. These fixed-aperture F4 lenses are C-mount for native use with the D16 camera. Digital Bolex also offers the D16 in an MFT mount configuration and in a monochrome version.

The sheer versatility and disposable quality of GoPro cameras has made the HERO line a staple of many productions. The company continues to advance this product with the HERO4 Black and Silver models as their latest. These are both 4K cameras and have similar features, but if you want full video frame rates in 4K, then the HERO4 Black is the correct model. It will record up to 30fps in 4K, 50fps in 2.7K and 120fps in 1080p. As a photo camera, it uses a 12MP sensor and is capable of 30 frames per second in burst mode and time-lapse intervals from .5 to 60 seconds. The video signal is recorded as an H.264 file with a high-quality mode of up to 60 Mb/s. MicroSD card media is used. HERO cameras have been popular for extreme point-of-view shots and the waterproof housing is good to 40 meters. This new HERO4 series offers more manual control, new nighttime and low-light settings, and improved audio recording.

Nikon actually beat Canon to market with HD-capable DSLRs, but lost the momentum when Canon capitalized on the popularity of the 5D. Nevertheless, Nikon has its share of supportive videographers, thanks in part to the quantity of Nikon lenses in general use. The Nikon range of high-quality still photo and video-enabled cameras falls under Nikon’s D-series product family. The Nikon D800/800E camera has been updated to the D810, which is the camera of most interest to professional videographers. It’s a 36.3MP still photo camera that can also record 1920 x 1080 video in 24/30p modes internally and 60p externally. It can also record up to 9,999 images in a time-lapse sequence. A big plus for many is its optical viewfinder. It records H.264/MPEG-4 media to onboard CF cards. Other Nikon video cameras include the D4S, D610, D7100, D5300 and D3300.

Panasonic used to own the commercial HD camera market with the original VariCam HD camera. They’ve now reimagined that brand in the new VariCam 35 and VariCam HS versions. The new VariCam uses a modular configuration, with each of these two cameras using the same docking electronics back. In fact, a customer can purchase one camera head and back and then only needs to purchase the other head, thus owning both the 35 and the HS models for less than the total cost of two cameras. The VariCam 35 is a 4K camera with wide color gamut and wide dynamic range (14+ stops are claimed). It features a PL lens mount, records from 1 to 120fps and supports dual-recording. For example, you can simultaneously record a 4K log AVC-Intra master to the main recorder (expressP2 card) and 2K/HD Rec 709 AVC-Intra/AVC-Proxy/Apple ProRes to a second internal recorder (microP2 card) for offline editing. VariCam V-Raw camera raw media can be recorded to a separate Codex V-RAW recorder, which can be piggybacked onto the camera. The Panasonic VariCam HS is a 2/3” 3MOS broadcast/EFP camera capable of up to 240fps of continuous recording. It supports the same dual-recording options as the VariCam 35 using AVC-Intra and/or Apple ProRes codecs, but is limited to HD recordings.

With interest in DSLRs still in full swing, many users’ interest in Panasonic veers to the Lumix GH4. This camera records 4K cinema (4096) and 4K UHD (3840) sized images, as well as HD. It uses SD memory cards to record in MOV, MP4 or AVCHD formats. It features variable frame rates (up to 96fps), HDMI monitoring and a professional 4K audio/video interface unit. The latter is a dock that fits to the bottom of the camera. It includes XLR audio and SDI video connections with embedded audio and timecode.

RED Digital Cinema started the push for 4K cameras and camera raw video recording with the original RED One. That camera is now only available in refurbished models, as RED has advanced the technology with the EPIC and SCARLET. Both are modular camera designs that are offered with either the Dragon or the Mysterium-X sensor. The Dragon is a 6K, 19MP sensor with 16.5+ stops of claimed dynamic range. The Mysterium-X is a 5K, 14MP sensor that claims 13.5 stops, but up to 18 stops using RED’s HDRx (high dynamic range) technology. The basic difference between the EPIC and the SCARLET, other than cost, is that the EPIC features more advanced internal processing and this computing power enables a wider range of features. For example, the EPIC can record up to 300fps at 2K, while the SCARLET tops out at 120fps at 1K. The EPIC is also sold in two configurations: EPIC-M, which is hand-assembled using machined parts, and the EPIC-X, which is a production-run camera. With the interest in 4K live production, RED has introduced its 4K Broadcast Module. Coupled with an EPIC camera, you could record a 6K file for archive, while simultaneously feeding a 4K and/or HD live signal for broadcast. RED is selling studio broadcast configurations complete with camera, modules and support accessories as broadcast-ready packages.

Sony has been quickly gaining ground in the 4K market. Its CineAlta line includes the F65, PMW-F55, PMW-F5, PMW-F3, NEX-FS700R and NEX-FS100. All are HD-capable and use Super 35mm sized image sensors, with the lower-end FS700R able to record 4K raw to an external recorder. At the highest end is the 20MP F65, which is designed for feature film production. The camera is capable of 8K raw recording, as well as 4K, 2K and HD variations. Recordings must be made on a separate SR-R4 SR MASTER field recorder. For most users purchasing from Sony, the F55 is going to be the high-end choice. It permits onboard recording in four formats: MPEG-2 HD, XAVC HD, SR File and XAVC 4K. With an external recorder, 4K and 2K raw recording is also available. High speeds up to 240fps (2K raw with the optional, external recorder) are possible. The F5 is the F55’s smaller sibling. It’s designed for onboard HD recording (MPEG-2 HD, XAVC HD, SR File). 4K and 2K recordings require an external recorder.

The Sony camera that has caught everyone’s attention is the PXW-FS7. It’s designed as a lightweight, documentary-style camera with a form factor and rig that’s reminiscent of an Aaton 16mm film camera. It uses a Super 35mm sized sensor and delivers 4K resolution using onboard XAVC recording to XQD memory cards. XDCAM MPEG-2 HD recording (now) and ProRes (with a future upgrade) will also be possible, as will raw output to an outboard recorder.

Sony has also not been left behind by the DSLR revolution. The A7S is a full-frame, mirrorless 12.2MP camera that’s optimized for 4K and low light. It can record up to 1080p/60 (or 720p/120) onboard (50Mbps XAVC S) or feed uncompressed HD and/or 4K (UHD) out via its HDMI port. It will record onboard audio and sports such pro features as Sony’s S-Log2 gamma profile.

With any overview, there’s plenty that we can’t cover. If you are in the market for a camera, remember that many of these companies offer a slew of other cameras ranging from consumer to ENG/EFP offerings. I’ve only touched on the highlights. Plus there are others, like Grass Valley, Hitachi, Samsung and Ikegami, that make great products in use around the world every day. Finally, with all the video-enabled smart phones and tablets, don’t be surprised if you are recording your next production with an iPhone or iPad!

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Gone Girl

David Fincher is back with another dark tale of modern life, Gone Girl – the film adaptation of Gillian Flynn’s 2012 novel. Flynn also penned the screenplay. It is the story of Nick and Amy Dunne (Ben Affleck and Rosamund Pike) – writers who have been hit by the latest downturn in the economy and are living in America’s heartland. Except that Amy is now mysteriously missing under suspicious circumstances. The story is told from each of their subjective points of view. Nick’s angle is revealed through present events, while Amy’s story is told through her diary in a series of flashbacks. Through these we learn that theirs is less than the ideal marriage we see from the outside. But whose story tells the truth?

To pull the film together, Fincher turned to his trusted team of professionals, including director of photography Jeff Cronenweth, editor Kirk Baxter and post production supervisor Peter Mavromates. Like Fincher’s previous films, Gone Girl has blazed new digital workflows and pushed new boundaries. It is the first major feature to use the RED EPIC Dragon camera, racking up 500 hours of raw footage. That’s the equivalent of 2,000,000 feet of 35mm film. Much of the post, including many of the visual effects, was handled in-house.

Kirk Baxter co-edited David Fincher’s The Curious Case of Benjamin Button, The Social Network and The Girl with the Dragon Tattoo with Angus Wall – films that earned the duo two Best Editing Oscars. Gone Girl was a solo effort for Baxter, who had also cut the first two episodes of House of Cards for Fincher. This film now becomes the first major feature to have been edited using Adobe Premiere Pro CC. Industry insiders consider this Adobe’s Cold Mountain moment. That refers to when Walter Murch used an early version of Apple Final Cut Pro to edit the film Cold Mountain, instantly raising the application’s profile among the editing community as a viable tool for long-form post production. Now it’s Adobe’s turn.

In my conversation with Kirk Baxter, he revealed, “In between features, I edit commercials, like many other film editors. I had been cutting with Premiere Pro for about ten months before David invited me to edit Gone Girl. The production company made the decision to use Premiere Pro, because of its integration with After Effects, which was used extensively on the previous films. The Adobe suite works well for their goal to bring as much of the post in-house as possible. So, I was very comfortable with Premiere Pro when we started this film.”

It all starts with dailies

Tyler Nelson, assistant editor, explained the workflow, “The RED EPIC Dragon cameras shot 6K frames (6144 x 3072), but the shots were all framed for a 5K center extraction (5120 x 2133). This overshoot allowed reframing and stabilization. The .r3d files from the camera cards were ingested into a FotoKem nextLAB unit, which was used to transcode edit media, view dailies, archive the media to LTO data tape and transfer to shuttle drives. For offline editing, we created down-sampled ProRes 422 (LT) QuickTime media, sized at 2304 x 1152, which corresponded to the full 6K frame. The Premiere Pro sequences were set to 1920 x 800 for a 2.40:1 aspect. This size corresponded to the same 5K center extraction within the 6K camera files. By editing with the larger ProRes files inside of this timeline space, Kirk was only viewing the center extraction, but had the same relative overshoot area to enable easy repositioning in all four directions. In addition, we also uploaded dailies to the PIX system for everyone to review footage while on location. PIX also lets you include metadata for each shot, including lens choice and camera settings, such as color temperature and exposure index.”
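The numbers Nelson describes fit together neatly. Here is the arithmetic worked out – my own illustration of the relationships, not part of their actual pipeline:

```python
# The 2304 x 1152 ProRes proxies are a uniform downscale of the full 6K frame,
# and the 1920 x 800 Premiere sequence is that same downscale of the 5K extraction.
full_6k    = (6144, 3072)
extract_5k = (5120, 2133)   # 5K center extraction, 2.40:1
proxy      = (2304, 1152)

factor = full_6k[0] / proxy[0]          # ~2.667x downscale
print(round(factor, 3))                 # 2.667

seq_w = extract_5k[0] / factor
seq_h = extract_5k[1] / factor
print(round(seq_w), round(seq_h))       # 1920 800 -- the timeline size

# A clip placed 1:1 in the 1920 x 800 timeline shows only the center extraction,
# leaving (2304 - 1920) / 2 proxy pixels of slide room on either side.
print((proxy[0] - 1920) // 2)           # 192
```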

Kirk Baxter has a very specific way that he likes to tackle dailies. He said, “I typically start in reverse order. David tends to hone in on the performance with each successive take until he feels he’s got it. He’s not like other directors who may ask for completely different deliveries from the actors with each take. With David, the last take might not be the best, but it’s the best starting point from which to judge the other takes. Once I go through a master shot, I’ll cut it up at the points where I feel the edits will be made. Then I’ll have the assistants repeat these edit points on all takes and string out the line readings back-to-back, so that the auditioning process is more accurate. David is very gifted at blocking and staging, so it’s rare that you don’t use an angle that was shot for a scene. I’ll then go through this sequence and lift my selected takes for each line reading up to a higher track on the timeline. My assistants take the selects and assemble a sequence of all the angles in scene order. Once it’s hyper-organized, I’ll send it to David via PIX and get his feedback. After that, I’ll cut the scene. David stays in close contact with me as he’s shooting. He wants to see a scene cut together before he strikes a set or releases an actor.”

Telling the story

The director’s cut is often where the story gets changed from what works on paper to what makes a better film. Baxter elaborated, “When David starts a film, the script has been thoroughly vetted, so typically there isn’t a lot of radical story re-arrangement in the cutting room. As editors, we got a lot of credit for the style of intercutting used in The Social Network, but truthfully that was largely in the script. The dialogue was tight and very integral to the flow, so we really couldn’t deviate a lot. I’ve always found the assembly the toughest part, due to the volume and the pressure of the ticking clock. Trying to stay on pace with the shoot involves some long days. The shooting schedule was 106 days and I had my first cut ready about two weeks after the production wrapped. A director gets around ten weeks for a director’s cut and with some directors, you are almost starting from scratch once the director arrives. With David, most of that ten-week period involves adding finesse and polish, because we have done so much of the workload during the shoot.”

He continued, “The first act of Gone Girl uses a lot of flashbacks to tell Amy’s side of the story and with these, we deviated a touch from the script. We dropped a couple of scenes to help speed things along and reduced the back and forth of the two timelines by grouping flashbacks together, so that we didn’t keep interrupting the present day; but, it’s mostly executed as scripted. There was one scene towards the end that I didn’t feel was in the right place. I kept trying to move it, without success. I ended up taking another pass at the cut of the scene. Once we had the emotion right in the cut, the scene felt like it was in the right place, which is where it was written to be.”

“The hardest scenes to cut are the emotional scenes, because David simplifies the shooting. You can’t hide in dynamic motion. More complex scenes are actually easier to cut and certainly quite fun. About an hour into the film is the ‘cool girls’ scene, which rapidly answers lots of question marks that come before it. The scene runs about eight minutes long and is made up of about 200 set-ups. It’s a visual feast that should be hard to put together, but was actually dessert from start to finish, because David thought it through and supplied all the exact pieces to the puzzle.”

Music that builds tension

Composers Trent Reznor and Atticus Ross of Nine Inch Nails fame are another set of Fincher regulars. Reznor and Ross have typically supplied Baxter with an album of preliminary themes scored with key scenes in mind. These are used in the edit and then later enhanced by the composers with the final score at the time of the mix. Baxter explained, “On Gone Girl we received their music a bit later than usual, because they were touring at the time. When it did arrive, though, it was fabulous. Trent and Atticus are very good at nailing the feeling of a film like this. You start with a piece of music that has a vibe of ‘this is a safe, loving neighborhood’ and throughout three minutes it sours to something darker, which really works.”

“The final mix is usually the first time I can relax. We mixed at Skywalker Sound and that was the first chance I really had to enjoy the film, because now I was seeing it with all the right sound design and music added. This allows me to get swallowed up in the story and see beyond my role.”

Visual effects

The key factor in using Premiere Pro CC was its integration with After Effects CC via Adobe’s Dynamic Link feature. Kirk Baxter explained how he uses this feature, “Gone Girl doesn’t seem like a heavy visual effects film, but there are quite a lot of invisible effects. First of all, I tend to do a lot of invisible split screens. In a two-shot, I’ll often use a different performance for each actor. Roughly one-third of the timeline contains such shots. About two-thirds of the timeline has been stabilized or reframed. Normally, this type of in-house effects work is handled by the assistants who are using After Effects. Those shots are replaced in my sequence with an After Effects composition. As they make changes, my timeline is updated.”

“There are other types of visual effects, as well. David will take exteriors and do sky replacements, add flares, signage, trees, snow, breath, etc. The shot of Amy sinking in the water, which has been used in the trailers, is an effects composite. That’s better than trying to do multiple takes with the real actress by drowning her in cold water. Her hair and the water elements were created by Digital Domain. This is also a story about the media frenzy that grows around the mystery, which meant a lot of TV and computer screen comps. That content is as critical in the timing of a scene as the actors who are interacting with it.”

Tyler Nelson added his take on this, “A total of four assistants worked with Kirk on these in-house effects. We were using the same ProRes editing files to create the composites. In order to keep the system performance high, we would render these composites for Kirk’s timeline, instead of using unrendered After Effects composites. Once a shot was finalized, then we would go back to the 6K .r3d files and create the final composite at full resolution. The beauty of doing this all internally is that you have a team of people who really care about the quality of the project as much as everyone else. Plus the entire process becomes that much more interactive. We pushed each other to make everything as good as it could possibly be.”

Optimization and finishing

A custom pipeline was established to make the process efficient. This was spearheaded by post production consultant Jeff Brue, CTO of Open Drives. The front-end storage for all active editorial files was a 36TB RAID-protected storage network built with SSDs. A second RAID built with standard HDDs was used for the .r3d camera files and visual effects elements. The hardware included a mix of HP and Apple workstations running with NVIDIA K6000 or K5200 GPU cards. Use of the NVIDIA cards was critical to permit as much real-time performance as possible during the edit. GPU performance was also a key factor in the de-Bayering of .r3d files, since the team didn’t use any of the RED Rocket accelerator cards in their pipeline. The Macs were primarily used for the offline edit, while the PCs tackled the visual effects and media processing tasks.

In order to keep the Premiere Pro projects manageable, the team broke down the film into eight reels with a separate project file per reel. Each project contained roughly 1,500 to 2,000 files. In addition to Dynamic Linking of After Effects compositions, most of the clips were multi-camera clips, as Fincher typically shoots scenes with two or more cameras for simultaneous coverage. This massive amount of media could have potentially been a huge stumbling block, but Brue worked closely with Adobe to optimize system performance over the life of the project. For example, project load times dropped from about six to eight minutes at the start down to 90 seconds at best towards the end.

The final conform and color grading were handled by Light Iron on their Quantel Pablo Rio system run by colorist Ian Vertovec. The Rio was also configured with NVIDIA Tesla cards to facilitate this 6K pipeline. Nelson explained, “In order to track everything, I used a custom FileMaker Pro database as the codebook for the film. This contained all the attributes for each and every shot. By using an EDL in conjunction with the codebook, it was possible to access any shot from the server. Since we were doing a lot of the effects in-house, we essentially ‘pre-conformed’ the reels and then turned those elements over to Light Iron for the final conform. All shots were sent over as 6K DPX frames, which were cropped to 5K during the DI in the Pablo. We also handled the color management of the RED files. Production shot these with the camera color metadata set to RedColor3, RedGamma3 and an exposure index of 800. That’s what we offlined with. These were then switched to RedLogFilm gamma when the DPX files were rendered for Light Iron. If, during the grade, it was decided that one of the raw settings needed to be adjusted for a few shots, then we would change the color settings and re-render a new version for them.” The final mastering was in 4K for theatrical distribution.

As with his previous films, director David Fincher has not only told a great story in Gone Girl, but also set new standards in digital post production workflows. Seeking to retain creative control without breaking the bank, Fincher has pushed to handle as many services in-house as possible. His team has made effective use of After Effects for some time now, but the new Creative Cloud tools, with Premiere Pro CC as the hub, bring the power of this suite to the forefront. Fortunately, team Fincher has been very eager to work with Adobe on product advances, many of which are evident in the new application versions previewed by Adobe at IBC in Amsterdam. With a film as complex as Gone Girl, it’s clear that Adobe Premiere Pro CC is ready for the big leagues.

Kirk Baxter closed our conversation with these final thoughts about the experience. He said, “It was a joy from start to finish making this film with David. Both he and Cean [Chaffin, producer and David Fincher’s wife] create such a tight knit post production team that you fall into an illusion that you’re making the film for yourselves. It’s almost a sad day when it’s released and belongs to everyone else.”

Originally written for Digital Video magazine / CreativePlanetNetwork.

_________________________________

Needless to say, Gone Girl has received quite a lot of press. Here are just a few additional discussions of the workflow:

Adobe panel discussion with the post team

PostPerspective

FxGuide

HDVideoPro

IndieWire

IndieWire blog

ICG Magazine

RedUser

Tony Zhou’s Vimeo take on Fincher 

©2014 Oliver Peters

More 4K


I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people and in terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That really doesn’t make a lot of difference, because these are close enough to be the same. There’s so much hype around it, though, that you really have to wonder if it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution; it’s a frame dimension. As such, 4K is not four times the resolution of HD. That’s a measurement of area and not resolution. True resolution is usually measured in the vertical direction based on the ability to resolve fine detail (regardless of the number of pixels) and, therefore, 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.

A lot of arguments have been made that 4K cameras using a single CMOS sensor with a Bayer-style color filter pattern don’t even deliver the resolution they claim. The reason is that in many designs 50% of the pixels are green versus 25% each for red and blue. Green is used for luminance, which determines detail, so you do not have a 1:1 pixel relationship between green and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and it’s why Sony uses an 8K sensor (F65) to deliver a 4K image.
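To put a number on the Bayer argument, here is a simplified count of green photosites – an illustration only, since real debayering recovers more detail than a naive per-channel tally:

```python
# In a standard RGGB Bayer pattern, 2 of every 4 photosites are green -- the
# channel that carries most of the luma (detail) information.
def green_sites(width, height):
    return width * height // 2

uhd_sensor = (3840, 2160)           # a hypothetical 4K Bayer sensor
six_k      = (6144, 3160)           # approximate 6K-class sensor dimensions

print(green_sites(*uhd_sensor))     # 4,147,200 green samples for ~8.3M output pixels
print(green_sites(*six_k))          # ~9.7M green samples -- closer to honest 4K detail
```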

The perceived image quality is also not all about total pixels. The pixels of the sensor, called photosites, are the light-receiving elements of the sensor. There’s a loose correlation between pixel size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small pixels or with fewer, but larger, pixels. This roughly correlates to a sensor that’s of high resolution, but a smaller dynamic range (many small pixels) or one with lower resolution, but a higher dynamic range (large, but fewer pixels). Although the equation isn’t nearly this simplistic, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, you can certainly see this play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-the-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, workflow is an issue. Do you shoot everything in 4K or just the items you know you’ll want to deal with? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. But it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot. The NLE scaled the 4K to HD first and then expanded the downscaled HD image. It didn’t crop into the actual 4K native resolution. So you have to be careful. And guess what – if the blow-up isn’t that extreme, it may not look much different than the crop.

One thing to remember is that a 4K image that is scaled to fit into an HD timeline gains the benefits of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot in an HD size. When you now crop into the native image, you are losing some of that oversampling effect. A 1:1 pixel relationship is the same effective image size as a 200% blow-up. Of course, it’s not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects in the image, like chromatic aberration in the lens, missed critical focus and sensor noise. Instead, if you shoot a wide and then an actual close-up, that result will usually look better.
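Here is that crop-versus-blow-up equivalence in plain numbers (illustrative arithmetic only):

```python
# Placing a UHD clip 1:1 in an HD timeline shows the same field of view as
# scaling the fitted-to-HD version of that clip up by 200%.
source_w   = 3840    # native UHD width
timeline_w = 1920    # HD timeline width

print(f"{source_w / timeline_w * 100:.0f}%")   # 200%

# A more modest punch-in that keeps 3200 of the 3840 source pixels is only the
# equivalent of a 120% blow-up of the HD-fitted image.
visible_w = 3200
print(f"{source_w / visible_w * 100:.0f}%")    # 120%
```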

On the other hand, if you blow up the 4K-to-HD or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up in the range of 120% of the original image size and in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, let’s take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with that size of a screen. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two exact images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference is obvious and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about comparing HD scaling, but the result really depends on the scaler that you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, then the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and, naturally, the better the result, the longer the render will be. Nevertheless, it offers outstanding results and in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three identical versions of the shot (not counting the raw files) – the converted 4K ProRes4444 file, a converted 1080 ProRes4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better on the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you superior results to the original (crisper hair), but a blow-up suffers when you are using a worse codec, like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember, all of this pertains to viewing the image in HD.

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels and not simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback. Sometimes the master for that DCP was HD, 2K or 4K. If you are in a Sony 4K projector-equipped theater, most of the time it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not if it’s a stereo 3D film. Yet, much of this looks pretty good, doesn’t it?

Everything I talked about, regarding blowing up HD by up to 120% or more, still applies to 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image with the ability to record 4K (3840 x 2160 at up to 60fps) on the CFast 2.0 cards. They can do this because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K sensor pixels) in what they call the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a 1.2 factor, which results in UltraHD 4K recording on the same hardware. It’s a pretty neat trick and, judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post, rather than to do it in-camera.
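The AMIRA’s in-camera upscale is easy to verify on paper – again, just illustrative arithmetic, not ARRI’s actual processing:

```python
# "Open gate" exposes roughly 3.4K photosites; a light crop to 3.2K, scaled by
# 1.2, lands exactly on the UHD width.
open_gate_w = 3414     # approximate open-gate photosite width (per ARRI's XT specs)
cropped_w   = 3200
scale       = 1.2

print(cropped_w * scale)      # 3840.0 -> UltraHD width
print(3840 / cropped_w)       # 1.2, i.e. only a 20% enlargement from real photosites
```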

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, then don’t sweat it. You won’t be left behind for a while and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.


©2014 Oliver Peters

HP Z1 G2 Workstation

Hewlett-Packard is known for developing workstations that set a reliability and performance standard, exemplified by the Z-series of workstation towers. HP has sought to extend what they call the “Z experience” to other designs, like mobile and all-in-one computers. The latest of these is the HP Z1 G2 Workstation – the second generation model of the Z1 series.

Most readers will associate the all-in-one concept with an Apple iMac. Like the iMac, the Z1 G2 is a self-contained unit housing all electronics and the display in one chassis. Whereas the top-end iMacs are targeted at advanced consumers and pros with less demanding computing needs, the HP Z1 G2 is strictly for the serious user who requires advanced horsepower. The iMac is a sealed unit, which cannot be upgraded by the user (except for RAM), and is largely configured with laptop-grade parts. In contrast, the HP Z1 G2 is a Rolls-Royce. The build is very solid and it exudes a sense of performance. The user has the option to configure their Z1 G2 from a wide range of components. The display lifts like a car hood for easy access to the “engine”, making user upgrades nearly as easy as on a tower.

Configuration options

The HP Z1 G2 offers processor choices that include Intel Core i3, Core i5 and three Xeon models. There are a variety of storage and graphics card choices and it supports up to 32GB of RAM. You may also choose between a Touch and non-Touch display. The Touch screen adds a glass overlay and offers finger or stylus interaction with the screen. Non-touch screens are a matte finish, while Touch screens are glossy. You have a choice of operating systems, including Windows 7, Windows 8 and Linux distributions.

I was able to specify the built-to-order configuration of the Z1 G2 for my review. This included a Xeon E3 (3.6GHz) quad-core, 16GB of RAM, an optical drive and the NVIDIA K4100M graphics card. For storage, I selected one 256GB mSATA boot drive (“flash” storage), plus two 512GB SSDs that were set up in a RAID-0 configuration. I also ordered the Touch option with 64-bit Windows 8.1 Pro. Z1 G2 models start at $1,999; however, as configured, this system would retail at over $6,100, including a 20% eCoupon promo discount.

An important new feature is support for Thunderbolt 2 via an optional module. HP is one of the first PC manufacturers to support Thunderbolt. I didn’t order that, but reps from AJA, Avid and Blackmagic Design all confirmed to me that their Thunderbolt units should work fine with this workstation, as long as you install their Windows device drivers. One of these units would be required for any external broadcast or grading monitor.

In addition to the custom options, the Z1 G2 includes wireless support, four USB 2.0 ports, two USB 3.0 ports, Gigabit Ethernet, a DisplayPort connector for a secondary computer monitor, S/PDIF, analog audio connectors, a webcam and a media card reader.

Arrival and set-up

The HP Z1 G2 ships as a single, 57-pound package, complete with a wireless mouse and keyboard. The display/electronics chassis is attached to an adjustable arm that connects to the base. This allows the system to be tilted at any angle, as well as laid completely flat for shipping and access to the electronics. It locks into place when it’s flat (as in shipping), so you have to push down lightly on the display in order to unlock the latch button.

The display features a 27” (diagonal) screen, but the chassis is actually 31” corner-to-corner. Because the stand has to support the unit and counter-balance the weight at various angles, it sticks out about 12” behind the back of the chassis. Some connectors (including the power cord) are at the bottom center of the back of the chassis. Others are along the sides. The adjustable arm allows any angle from vertical to horizontal, so it would be feasible to operate in a standing or high-chair position looking down at the monitor – a bit like a drafting table. I liked the fact that the arm lets you drop the display completely down to the desk surface, which puts the bottom of the screen lower than my stationary 20” Apple Cinemas.

First impressions

I picked the Touch option in order to test the concept, but quite frankly I decided it wasn’t for me. In order to control items by touch, you have to be a bit closer than the full length of your arm. As a glasses-wearer, this distance is uncomfortable for me, as I prefer to be a little farther away from a screen of this size. Although the touch precision is good, it’s not as precise as you’d get with a mouse or a pen and tablet – even when using an iPad stylus. Only menu and navigation operations, but no drawing tools, worked in Photoshop – an application that seems natural for Touch. While I didn’t find the Touch option that compelling, I did like the screen that comes with it. It’s glossy, which gives you nice density to your images, but not so reflective as to be annoying in a room with ambient lighting.

The second curiosity item for me was Windows 8.1. The Microsoft “metro” look has been maligned and many pros opt for Windows 7 instead. I actually found the operating system to function well, and its “flat” design philosophy is much like what Apple is doing with Mac OS X and iOS. The tiled Start screen that highlights this release can easily be avoided when you set up your preferences. If you prefer to pin application shortcuts to the Windows taskbar or the Desktop, that’s easily done. Once you are in an application like Premiere Pro or Media Composer, the OS differences tend to disappear anyway.

Since I had configured this unit with an mSATA boot/applications drive and RAID-0 SSDs for media, the launch and operation of any application was very fast. Naturally the difference from a cold start on the Z1 G2, as compared to my 2009 Mac Pro with standard 7200RPM drives, was night and day. With most actual operations, the differences in application responsiveness were less dramatic.

One area that I think needs improvement is screen calibration. The display is not a DreamColor display, but color accuracy seems quite good and it’s very crisp at 2560 x 1440 pixels. Unfortunately, both the HP and NVIDIA calibration applications were weak, using consumer-level nomenclature for settings. For instance, I found no way to accurately set a 6500K color temperature or a 2.2 gamma level, based on how the sliders were labeled. Some of the NVIDIA software controls didn’t appear to work at all.

Performance stress testing

I loaded up the Z1 G2 with a potpourri of media and applications, including Adobe CC 2014 (Photoshop, Premiere Pro, After Effects, SpeedGrade), Avid Media Composer 8, DaVinci Resolve 11 Lite (beta) and Sony Vegas Pro 13. Media included Sony XAVC 4K, Avid DNxHD175X, Apple ProRes 4444, REDCODE raw from an EPIC Dragon camera and more. This allowed me to make some direct comparisons with the same applications and media available on my 2009 eight-core Mac Pro. Its configuration included dual Xeon quad-core processors (2.26GHz), 28GB RAM, an ATI 5870 GPU card and a RAID-0 stripe of two internal 7200RPM spinning hard drives. No I/O devices were installed on either computer. While these two systems aren’t exactly “apples-to-apples,” the comparison does provide a logical benchmark for the type of machine a new Z1 G2 customer might be upgrading from.

In typical side-by-side testing with edited, single-layer timelines, most applications on both machines performed in a similar fashion, even with 4K media. It’s when I started layering sequences and comparing performance and render times that the differences became obvious.

My first test compared Premiere Pro CC 2014 with a 7-layer, 4K timeline. The V1 track was a full-screen, base layer of Sony XAVC. On top of that I layered six tracks of picture-in-picture (PIP) clips consisting of RED Dragon raw footage at various resolutions up to 5K. Some clips were recorded with in-camera slomo. I applied color correction, scaling/positioning and a drop shadow. The 24p timeline was one minute long and was exported as a 4K .mp4 file. The HP handled this task in just under 11 minutes, compared with almost two hours for the Mac Pro.

My second Premiere Pro test was a little more “real world” – a 48-second sequence of ARRI Alexa 1080p ProRes 4444 log-C clips. These were round-tripped through SpeedGrade to add a Rec 709 LUT, a primary grade and two vignettes to blur and darken the outer edge of the clips. This sequence was exported as a 720/24p .mp4 file. The Z1 G2 tackled this in about 14 minutes compared with 37 minutes for the Mac Pro.

Premiere Pro CC 2014 uses GPU acceleration, and the superior performance of the NVIDIA K4100M card in the HP versus the ATI 5870 in the Mac Pro is likely the reason for this drastic difference. The render times were closer in After Effects, which makes less use of the GPU for effects processing. My 6-layer After Effects stress test was an 8-second composition consisting of six layers of 1080p ProRes clips from the Blackmagic Cinema Camera. I applied various Cycore and color correction effects and then moved them in 3D space with motion blur enabled. These were rendered out using the QuickTime Animation codec. Times for the Z1 G2 and Mac Pro were 6.5 minutes versus 8.5 minutes, respectively.

My last test for the HP Z1 G2 involved Avid Media Composer. My 10-layer test sequence included nine PIP video tracks (using the 3D warp effect) over a full-screen background layer on V1. All media was Avid DNxHD175X (1080p, 10-bit, 23.976fps). No frames were dropped in the medium display quality, but in full quality frames started to drop at V6. When I added a drop shadow to the PIP clips, frames were dropped starting at V4 for full quality and V9 for medium quality.

Conclusion

The HP Z1 G2 is an outstanding workstation. As with any alternative form factor, you have to weigh the options of legacy support for older storage systems and PCIe cards. Thunderbolt addresses many of those concerns as an increasing number of adapters and expansion units hits the market. Those interested in shifting from Mac to Windows – and looking for the best in what the PC side has to offer – won’t go wrong with HP products. The company also maintains close ties to Avid and other software vendors, to make sure the engineering of their workstations matches the future needs of the software.

Whether an all-in-one is right for you comes down to individual needs and preferences. I was very happy with the overall ease of installation, operation and performance of the Z1 G2. By adding MacDrive, QuickTime and ProRes software and codecs, I could easily move files between the Z1 and my Mac. The screen is gorgeous, it’s very quiet and the heat output feels lower than that of my Mac tower. In these various tests, I never heard any fans kick into high gear. Whether you are upgrading from an older PC or switching platforms, the HP Z1 G2 is definitely worth considering.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached and their clients will start demanding 4K masters. 4K acquisition has been with us for a while and has generally proven useful for its creative options, like reframing during post. That was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K or higher is quite a lot different from working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspect ratios and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master, it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is theoretically only 2x better. That’s because resolution is measured along the vertical dimension and is a measure of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of the focus, contrast and so on. When you blow up a 35mm film frame and analyze high-detail areas within it, you often find them blurrier than you’d expect.
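To put some numbers on those two points – frame area versus linear resolution, and the reframing headroom you get from a 4096 x 2304 acquisition size – here is a quick arithmetic sketch (nothing here is specific to any camera or application):

```python
# Simple pixel math for the 4K sizes discussed above.

dci_4k  = (4096, 2160)   # cinema projection spec
uhd     = (3840, 2160)   # television/consumer spec
hd      = (1920, 1080)
acquire = (4096, 2304)   # a common 16:9 acquisition size

def area(size):
    width, height = size
    return width * height

# Area is 4x HD, but linear (vertical) resolution is only 2x.
print(area(uhd) / area(hd))   # 4.0
print(uhd[1] / hd[1])         # 2.0

# Reframing headroom when acquiring at 4096 x 2304:
print(acquire[0] - uhd[0], acquire[1] - uhd[1])        # 256 x 144 pixels of slack for a UHD master
print(acquire[0] - dci_4k[0], acquire[1] - dci_4k[1])  # 0 x 144 pixels of slack for a DCI master
```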

That brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs, like Adobe Premiere Pro CC and Apple Final Cut Pro X, make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras works with highly compressed codecs, and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding its own color interpretation. It also keeps quality at its highest by avoiding further decompression/recompression cycles, as well as variations among debayering methods.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. The file was a bit over 7GB in its raw form. Then I converted it to the following formats, with these results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus downsampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.
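Those per-frame figures are easy to sanity-check. The sketch below assumes DPX packs three 10-bit RGB channels into a 32-bit word per pixel and TIFF stores three 8-bit channels, ignoring headers – an approximation, but it lands close to the observed sizes:

```python
# Approximate per-frame storage at 4096 x 2304 (headers and padding ignored).

width, height = 4096, 2304

dpx_frame  = width * height * 4   # 10-bit RGB packed into 4 bytes/pixel -> ~37.7 MB
tiff_frame = width * height * 3   # 8-bit RGB, 3 bytes/pixel             -> ~28.3 MB

print(round(dpx_frame / 1e6, 1), "MB per DPX frame")
print(round(tiff_frame / 1e6, 1), "MB per TIFF frame")
```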

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so if we use the UHD ProRes HQ example above, 30x that 3-minute clip gives us the storage for a typical feature-length master. That figure comes out to roughly 480GB.

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in its camera-original raw form. If that same media were converted to ProRes 4444, never mind uncompressed, your storage requirements just increased by an additional 16-38TB. Mind you, this is all 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x the storage demand of 24p.
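Here is a rough sketch of that project-level math, using the conversion ratio from the 3-minute clip test above. Treat the numbers as ballpark figures; real data rates vary with content and debayer settings:

```python
# Ballpark project storage, extrapolated from the single-clip test above.

raw_tb = (5, 10)                   # camera-original RED raw for a typical indie feature, in TB
prores4444_ratio = 27 / 7          # ~3.9x growth seen on the 3-minute test clip

as_prores4444 = [round(tb * prores4444_ratio) for tb in raw_tb]
print(as_prores4444)               # roughly [19, 39] TB - the same ballpark as the range above

# Frame-rate scaling: the same running time at 60fps needs 2.5x the storage of 24p.
print(60 / 24)                     # 2.5
```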

The other element is system performance. Compressed codecs work when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near-ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.
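The data rates explain why. Derived from the clip sizes above (and assuming the test clip runs about 3 minutes 12 seconds, which is an estimate on my part), a single stream looks roughly like this:

```python
# Rough sustained read rates implied by the clip sizes above.

clip_seconds = 3 * 60 + 12                 # assumption: "a little over 3 minutes"

prores_hq_rate = 16e9 / clip_seconds       # UHD ProRes HQ: ~80-85 MB/s per stream
dpx_rate       = 38e6 * 24                 # 4K 10-bit DPX at 24fps: ~900 MB/s per stream

print(round(prores_hq_rate / 1e6), "MB/s ProRes HQ")
print(round(dpx_rate / 1e6), "MB/s DPX")
```

Multiply those figures by the number of simultaneous streams creative editorial needs and the case for an offline-online split makes itself.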

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be played natively by an NLE. They must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse them as you go. Transferring 1TB of data on set is typically not a fast process.
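That record-time figure is consistent with simple math. Assuming Canon 4K raw stores one 10-bit sample per photosite with no compression (an approximation that ignores container overhead), the numbers work out like this:

```python
# Approximate Canon 4K raw data rate and Odyssey 7Q record time.

frame_bytes   = 4096 * 2160 * 10 / 8       # ~11 MB per raw frame
rate_per_sec  = frame_bytes * 24           # ~265 MB/s at 24fps
gb_per_minute = rate_per_sec * 60 / 1e9    # ~16 GB of media per recorded minute

print(round(2 * 512 / gb_per_minute))      # ~64 minutes on two 512GB SSDs, near the quoted 62
```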

Naturally there are ways to make 4K post efficient and no more painful than it needs to be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, the way DV and even HD have been. That’s why you still see Autodesk Smokes, Quantel Pablo Rios and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

Offline to Online with Premiere Pro or Final Cut Pro X

Most NLE makers are pushing the ability to edit with native camera media, but there are still plenty of reasons to work in an offline-to-online editing workflow. Both Apple Final Cut Pro X and Adobe Premiere Pro CC make it very easy to do this.

Apple Final Cut Pro X

Apple built offline/online workflows right into the design of FCP X. The application can internally transcode optimized media (such as converting GoPro files to ProRes) and proxy media. Proxy media is usually a half-sized version using the ProRes Proxy codec. There’s a preference toggle to switch between original/optimized and proxy media, with FCP X taking care of making sure all transforms and effects are applied properly to whichever selection is active.

What most folks don’t know is that you can “cheat” this system. If you import media and choose to copy it into your Event folder, the source media is stored in the Original Media folder within the Event folder. If you create proxies, those files are stored in the Transcoded Media – Proxy Media folder within the Event folder. It is possible to create and place these folders via the Finder – you just have to be careful about exact names and locations. Once you do this, it is possible, via the Finder, to copy camera media and edit proxies directly into these folders. For example, your DIT might have created proxies for you on location, using Resolve.

Once you launch FCP X, it will automatically find these files. The main criterion is that file names, timecode and duration are identical between the two sets of files. If X properly recognizes the files, you can easily toggle between original/optimized and proxy with the application behaving correctly. If you are unsure about creating these folders in the first place, I suggest setting them up within FCP X by importing and transcoding a single bogus clip, like a slate or camera bars. Once the folders have been created by FCP X, delete this first clip. DO NOT mix the workflows by importing/transcoding some of the clips via FCP X and then later altering or replacing those clips via the Finder. That will completely confuse X. With these few caveats, it is possible to set up a multi-user offline-online workflow using externally-generated media, while still maintaining control via FCP X.
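As a minimal illustration of that Finder-level “cheat”, here is a sketch that copies DIT-generated proxies into an Event’s proxy folder only when a camera original with the same base name already exists. The paths are hypothetical, the folder names follow the layout described above, and the script does not verify timecode or duration – FCP X still requires those to match:

```python
# Sketch: place externally-generated proxies into an FCP X Event folder (pre-10.1 layout).

import shutil
from pathlib import Path

event       = Path("/Volumes/Media/Final Cut Events/MyEvent")   # hypothetical Event folder
originals   = event / "Original Media"
proxies     = event / "Transcoded Media" / "Proxy Media"
dit_proxies = Path("/Volumes/Shuttle/Proxies")                  # proxies made on location

proxies.mkdir(parents=True, exist_ok=True)

for proxy in dit_proxies.glob("*.mov"):
    # Only copy a proxy if a camera original with the same base name exists.
    if any(original.stem == proxy.stem for original in originals.iterdir()):
        shutil.copy2(proxy, proxies / proxy.name)
    else:
        print(f"No matching camera original for {proxy.name}; skipping")
```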

UPDATE: With the FCP X 10.1 update, you must generate proxies with FCP X. Externally-generated proxies do not link as they did up to 10.0.9.

Adobe Premiere Pro CC

A more customary solution is available to Adobe editors thanks to the new Link and Locate feature. A common scenario is that editors might cut a spot in an offline edit session using proxy edit media – such as low-res files with timecode “burn-ins”. Then the camera files are color corrected in an outside grading session and rendered as final, trimmed clips that match the timeline clip lengths, with a few seconds of “handles”. Now the editor has to conform the sequence by linking to the new high-res, graded files.

With Premiere Pro CC you’d start the process in the normal manner by ingesting and cutting with the proxy files. When the cut is locked, create a trimmed project for the sequence, using the same handle length as the colorist will use. This is created with the Project Manager, where you can select the option to make the clips offline. Next, send an EDL or XML file for your locked cut, plus the camera media, to the colorist.

Once you get the graded files back, open your trimmed Premiere Pro project. All media will be offline. Select the master clips and pick the Link Media option to open the Link Media dialog window. Using the Match File Properties settings, set the parameters so that Premiere Pro will properly link to the altered files. Sometimes file names will be different, so you will have to adjust the Link and Locate parameters accordingly by deselecting certain matching options. For example, you might want a match strictly by timecode, ignoring file names (see the sketch below).

Press Locate, navigate to the new location of the first missing file and relink it. Normally all other clips in the same relative path will relink automatically, as well. Now you’ve got your edited sequence back, populated with the final, high-quality media.
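Conceptually, matching by timecode instead of by name looks like the toy sketch below. The clip names and timecodes are hypothetical, and this is only an illustration of the matching logic you’re asking Premiere Pro to apply, not how the application implements it:

```python
# Toy illustration: pair offline clips with graded files by start timecode, ignoring names.

offline_clips = [
    {"name": "A001_C003_proxy.mov", "start_tc": "01:02:10:05"},
    {"name": "A001_C007_proxy.mov", "start_tc": "01:14:22:18"},
]
graded_files = [
    {"name": "scene1_graded_v2.mov", "start_tc": "01:02:10:05"},
    {"name": "scene4_graded_v2.mov", "start_tc": "01:14:22:18"},
]

# Index the graded files by start timecode rather than by file name.
by_timecode = {clip["start_tc"]: clip["name"] for clip in graded_files}

for clip in offline_clips:
    match = by_timecode.get(clip["start_tc"], "NO MATCH")
    print(clip["name"], "->", match)
```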

©2013 Oliver Peters