24p HD Restoration


There’s a lot of good film content that only lives on 4×3 SD 29.97 interlaced videotape masters. Certainly in many cases you can go back and retransfer the film to give it new life, but for many small filmmakers, the associated costs put that out of reach. In general, I’m referring to projects with $0 budgets. Is there a way to get an acceptable HD product from an old Digibeta master without breaking the bank? A recent project of mine would say, yes.

How we got here

I had a rather storied history with this film. It was originally shot on 35mm negative, framed for 1.85:1, with the intent to end up with a cut negative and release prints for theatrical distribution. It was being posted around 2001 at a facility where I worked and I was involved with some of the post production, although not the original edit. At the time, synced dailies were transferred to Beta-SP with burn-in data on the top and bottom of the frame for offline editing purposes. As was common practice back then, the 24fps film negative was transferred to the interlaced video standard of 29.97fps with added 2:3 pulldown – a process that repeats fields from the film frames, such that 24 film frames evenly fill 60 video fields in the NTSC world. That footage is then loaded into an Avid, where – depending on the system – the redundant fields are removed, or the list that goes to the negative cutter compensates for the adjustments back to a frame-accurate 24fps film cut.
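If the pulldown cadence is new to you, here’s a quick sketch (plain Python, with labels that are purely illustrative) of how four film frames become five interlaced video frames – and why two of them end up as split-field frames:

```python
# Minimal sketch of 2:3 pulldown: four film frames (A, B, C, D) become
# five interlaced video frames (ten fields). Labels are illustrative only;
# real telecine cadences can start on any phase.

FILM_FRAMES = ["A", "B", "C", "D"]

def pulldown_fields(film_frames):
    """Expand film frames to fields using a repeating 2-3-2-3 pattern."""
    pattern = [2, 3, 2, 3]  # fields contributed by each film frame
    fields = []
    for frame, count in zip(film_frames, pattern):
        fields.extend([frame] * count)
    return fields

def fields_to_video_frames(fields):
    """Pair consecutive fields into interlaced video frames."""
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]

fields = pulldown_fields(FILM_FRAMES)   # ['A','A','B','B','B','C','C','D','D','D']
video = fields_to_video_frames(fields)  # [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]

for n, (f1, f2) in enumerate(video, 1):
    tag = "whole" if f1 == f2 else "split-field"
    print(f"video frame {n}: fields {f1}{f2} ({tag})")

# 24 film frames x 2.5 fields each = 60 fields, which is why 24fps maps
# evenly onto the 60 fields per second of NTSC interlaced video.
```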

For the purpose of festival screenings, the project file was loaded into our Avid Symphony and I conformed the film at uncompressed SD resolution from the Beta-SP dailies and handled color correction. I applied a mask to hide the burn-in and ended up with a letter-boxed sequence, which was then output to Digibeta for previews and sales pitches to potential distributors. The negative went off to the negative cutter, but for a variety of reasons, that cut was never fully completed. In the two years before a distribution deal was secured, additional minor video changes were made throughout the film to end up with a revised cut, which no longer matched the negative cut.

Ultimately the distribution deal that was struck was only for international video release and nothing theatrical, which meant that rather than finishing/revising the negative cut, the most cost-effective process was to deliver a clean video master. Except that all video source material had burn-in and the distributor required a full-height 4×3 master, so letter-boxing was out. To meet the delivery requirements, the filmmaker would have to go back to the original negative and retransfer it in a 4×3 SD format and master that to Digital Betacam. Since the negative was only partially cut and additional shots were added or changed, I supervised the color-corrected transfer of all required 35mm film footage. Then I rebuilt the new edit timeline largely by eye-matching the new, clean footage to the old sequence. Once done and synced with the mix, a Digibeta master was created and off it went for distribution.

What goes around comes around

After a few years in distribution, the filmmaker retrieved his master and rights to the film, with the hope of breathing a little life into it through self-distribution – DVDs, Blu-rays, Internet, etc. With the masters back in-hand, it was now a question of how best to create a new product. One thought was simply to letter-box the film (to be in the director’s desired aspect) and call it a day. Of course, that still wouldn’t be in HD, which is where I stepped back in to create a restored master that would work for HD distribution.

Obviously, if there was any budget to retransfer the film negative to HD and repeat the same conforming operation that I’d done a few years ago – except now in HD – that would have been preferable. Naturally, if you have some budget, that path will give you better results, so shop around. Unfortunately, while desktop tools for editors and color correction have become dirt-cheap in the intervening years, film-to-tape transfer and film scanning services have not – and these retain a high price tag. So if I was to create a new HD master, it had to be from the existing 4×3 NTSC interlaced Digibeta master as the starting point.

In my experience, if you are going to blow up SD to HD frame sizes, it’s best to start with a progressive rather than an interlaced source. That’s even more true when working with software, rather than hardware up-converters like Teranex. Step one was to reconstruct a correct 23.98p SD master from the 29.97i source. To do this, I captured the Digibeta master as a ProResHQ file.

Avid Media Composer to the rescue


When you talk about software tools that are commonly available to most producers, there are a number of applications that can correctly apply a “reverse telecine” process. There are, of course, hardware solutions from Snell and Teranex (Blackmagic Design) that do an excellent job, but I’m focusing on a DIY solution in this post. That involves deconstructing the 2:3 pulldown (also called “3:2 pulldown”) cadence of whole and split-field frames back into only whole frames, without any interlaced tearing. After Effects and Cinema Tools offer this feature, but they really only work well when the entire source clip has a consistent and unbroken cadence. This film had been completed in NTSC 29.97 TV-land, so the cadence would frequently change at cuts. In addition, some digital noise reduction had been applied to the final master after the Avid output to tape, which further altered the cadence at some cuts. Therefore, to reconstruct the proper cadence, changes had to be made every few cuts and, in some scenes, at every shot change. This meant slicing the master file at every required point and applying a different setting to each clip. The only software I know of that can do this effectively is Avid Media Composer.
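Finding those cadence breaks is really just a matter of spotting split-field (combed) frames. As a rough illustration only – this is not part of the Media Composer workflow described here, and it assumes OpenCV and NumPy with an arbitrary threshold that would need tuning per source – here is one way you could flag candidate frames before you start slicing:

```python
# Rough sketch: flag frames that look like split-field (combed) frames so you
# know roughly where the 2:3 cadence breaks. OpenCV and NumPy assumed; the
# threshold is arbitrary and would need tuning for real footage.
import cv2
import numpy as np

def comb_score(gray_frame):
    """Return a crude 'combing' measure: how different the two fields are."""
    even = gray_frame[0::2, :].astype(np.float32)   # field 1 (even rows)
    odd = gray_frame[1::2, :].astype(np.float32)    # field 2 (odd rows)
    rows = min(even.shape[0], odd.shape[0])
    return float(np.mean(np.abs(even[:rows] - odd[:rows])))

def flag_split_field_frames(path, threshold=8.0):
    """Yield (frame_number, score) for frames that exceed the comb threshold."""
    cap = cv2.VideoCapture(path)
    n = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = comb_score(gray)
        if score > threshold:
            yield n, score
        n += 1
    cap.release()

if __name__ == "__main__":
    # Hypothetical file name; clusters of flagged frames usually mark a cadence change.
    for frame_no, score in flag_split_field_frames("film_master_2997i.mov"):
        print(f"possible split-field frame {frame_no}: comb score {score:.1f}")
```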

Start in Media Composer by creating a 29.97 NTSC 4×3 project for the original source. Import the film file there. Next, create a second 23.98 NTSC 4×3 project. Open the bin from the 29.97 project into the 23.98 project and edit the 29.97 film clip to a new 23.98 sequence. Media Composer will apply a default motion adapter to the clip (which is the entire film) in order to reconcile the 29.97 interlaced frame rate into a 23.98 progressive timeline.

Now comes the hard part. Open the Motion Effect Editor window and “promote” the effect to gain access to the advanced controls. Set the Type to “Both Fields”, Source to “Film with 2:3 Pulldown” and Output to “Progressive”. Although you can hit “Detect” and let Media Composer try to decide the right cadence, it will likely guess incorrectly on a complex file like this. Instead, under the 2:3 Pulldown tab, toggle through the cadence options until you only see whole frames when you step through the shot frame-by-frame. Move forward to the next shot(s) until you see the cadence change and you see split-field frames again. Split the video track (place an “add edit”) at that cut and step through the cadence choices again to find the right combination. Rinse and repeat for the whole film.

Due to the nature of the process, you might have a cut that itself occurs within a split-field frame. That’s usually because this was a cut in the negative and was transferred as a split-field video frame. In that situation, you will have to remove the entire frame across both audio and video. These tiny 1-frame adjustments throughout the film will slightly shorten the duration, but usually it’s not a big deal. However, the audio edit may or may not be noticeable. If it can’t simply be fixed by a short 2-frame dissolve, then usually it’s possible to shift the audio edit a little into a pause between words, where it will sound fine.

Once the entire film is done, export a new self-contained master file. Depending on codecs and options, this might require a mixdown within Avid, especially if AMA linking was used. That was the case for this project, because I started out in ProResHQ. After export, you’ll have a clean, reconstructed 23.98p 4×3 NTSC-sized (720×486) master file. Now for the blow-up to HD.

DaVinci Resolve

There are many applications and filters that can blow up SD footage to HD, but often the results end up soft. I’ve found DaVinci Resolve to offer some of the cleanest resizing, along with very fast rendering for the final output. Resolve offers three scaling algorithms, with “Sharper” providing the crispest blow-up. The second issue is that since I wanted to restore the wider aspect ratio – inherent in going from 4×3 to 16×9 – this meant blowing up more than normal: enough to fit the image width and crop the top and bottom of the frame. Since Resolve has the editing tools to split clips at cuts, you have the option to change the vertical position of a frame using the tilt control. Plus, you can do this creatively on a shot-by-shot basis if you want to. This way you can optimize each shot to best fit the 16×9 frame, rather than arbitrarily lopping off a preset amount from the top and bottom.
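To put some numbers on the blow-up, here is a quick back-of-the-envelope calculation (plain Python; a nominal NTSC pixel aspect ratio is assumed, and different applications define it slightly differently):

```python
# Back-of-the-envelope math for the 4x3 SD to 16x9 HD blow-up described above.
# Assumes a 720x486 source with a nominal 0.9091 pixel aspect ratio (NTSC 4:3);
# exact values vary slightly depending on how an application defines NTSC PAR.

SRC_W, SRC_H = 720, 486
PAR = 0.9091                      # non-square SD pixels
square_w = SRC_W * PAR            # ~654.6 square-pixel width

for target_w, target_h, label in [(1280, 720, "720p"), (1920, 1080, "1080p")]:
    scale = target_w / square_w                  # fill the HD width
    scaled_h = SRC_H * scale                     # resulting image height
    cropped = scaled_h - target_h                # lines lost top + bottom
    print(f"{label}: scale ~{scale:.2f}x, "
          f"scaled height ~{scaled_h:.0f}, crop ~{cropped:.0f} lines "
          f"({cropped / scaled_h * 100:.0f}% of the frame)")

# Roughly: 720p needs about a 2x enlargement and 1080p pushes the same source
# to about 3x, with roughly a quarter of the picture height cropped either way -
# which is why the softer shots gave out first.
```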

You actually have two options. The first is to blow up the film to a large 4×3 frame out of Resolve and then do the slicing and vertical reframing in yet another application, like FCP 7. That’s what I did originally with this project, because back then, the available version of Resolve did not offer what I felt were solid editing tools. Today, I would use the second option, which would be to do all of the reframing strictly within Resolve 11.

As always, there are some uncontrollable issues in this process. The original transfer of the film to Digibeta was done on a Rank Cintel Mark III, which is a telecine unit that used a CRT (literally an oscilloscope tube) as a light source. The images from these tubes get softer as they age and, therefore, they require periodic scheduled replacement. During the course of the transfer of the film, the lab replaced the tube, which resulted in a noticeable difference in crispness between shots done before and after the replacement. In the SD world, this didn’t appear to be a huge deal. Once I started blowing up that footage, however, it really made a difference. The crisper footage (after the tube replacement) held up to more of a blow-up than the earlier footage. In the end, I opted to only take the film to 720p (1280×720) rather than a full 1080p (1920×1080), just because I didn’t feel that the majority of the film held up well enough at 1080. Not just for the softness, but also in the level of film grain. Not ideal, but the best that can be expected under the circumstances. At 720p, it’s still quite good on Blu-ray, standard DVD or for HD over the web.

To finish the process, I dust-busted the film to fix places with obvious negative dirt (white specks in the frame) caused by the initial handling of the film negative. I used FCP X and CoreMelt’s SliceX to hide and cover negative dirt, but other options include the built-in functions within Avid Media Composer. While 35mm film still holds a certain intangible visual charm – even in such a “manipulated” state – the process certainly makes you appreciate modern digital cameras like the ARRI ALEXA!

As an aside, I’ve done two other complete films this way, but in those cases, I was fortunate to work from 1080i masters, so no blow-up was required. One was a film transferred in its entirety from a low-contrast print, broken into reels. The second was assembled digitally and output to intermediate HDCAM-SR 23.98 masters for each reel. These were then assembled to a 1080i composite master. Aside from being in HD to start with, cadence changes only occurred at the edits between reels. This meant that it only required 5 or 6 cadence corrections to fix the entire film.

©2014 Oliver Peters

The Zero Theorem

Few filmmakers are as gifted as Terry Gilliam when it comes to setting a story inside a dystopian future. The Monty Python alum, who brought us Brazil and Twelve Monkeys, to name just a few, is back with his newest, The Zero Theorem. It’s the story of Qohen Leth – played by Christoph Waltz (Django Unchained, Water for Elephants, Inglourious Basterds) – an eccentric computer programmer who has been tasked by his corporate employer to solve the Zero Theorem. This is a calculation that, if solved, might prove that the meaning of life is nothingness.

The story is set in a futuristic London, but carries many of Gilliam’s hallmarks, like a retro approach to the design of technology. Qohen works out of his home, which is much like a rundown church. Part of the story takes Qohen into worlds of virtual reality, where he frequently interacts with Bainsley (Melanie Thierry), a webcam stripper that he met at a party, but who may have been sent by his employer, Mancom, to distract him. The Zero Theorem is very reminiscent of Brazil, but in concept, also of The Prisoner, a 1960s-era television series. Gilliam explores themes of isolation versus loneliness, the pointlessness of mathematical modeling to derive meaning and privacy issues.

I recently had a Skype chat with Mick Audsley, who edited the film last year. Audsley is London-based, but is currently nearing completion of a director’s cut of the feature film Everest in Iceland. This was his third Gilliam film, having previously edited Twelve Monkeys and The Imaginarium of Doctor Parnassus. Audsley explained, “I knew Terry before Twelve Monkeys and have always had a lot of admiration for him. This is my third film with Terry, as well as a short, and he’s an extraordinarily interesting director to work with. He still thinks in a graphic way, since he is both literally and figuratively an artist. He can do all of our jobs better than we can, but really values the input from other collaborators. It’s a bit like playing in a band, where everyone feeds off of the input of the other band members.”

The long path to production

The film’s screenplay writer Pat Rushin teaches creative writing at the University of Central Florida in Orlando, Florida. He originally submitted the script for The Zero Theorem to the television series Project Greenlight, where it made the top 250. The script ended up with the Zanuck Company. It was offered to Gilliam in 2008, but initially other projects got in the way. It was revived in June 2012 with Gilliam at the helm. The script was very ambitious for a limited budget of under $10 million, so production took place in Romania over a 37-day period. In spite of the cost challenges, it was shot on 35mm film and includes 250 visual effects.

Audsley continued, “Nicola [Pecorini, director of photography] shot a number of tests with film, RED and ARRI ALEXA cameras. The decision was made to use film. It allowed him the latitude to place lights outside of the chapel set – Qohen’s home – and have light coming in through the windows to light up the interior. Kodak’s lab in Bucharest handled the processing and transfer and then sent Avid MXF files to London, where I was editing. Terry and the crew were able to view dailies in Romania and then we discussed these over the phone. Viewing dailies is a rarity these days with digitally-shot films and something I really miss. Seeing the dailies with the full company provides clarity, but I’m afraid it’s dying out as part of the filmmaking process.”

While editing in parallel to the production, Audsley didn’t upload any in-progress cuts for Gilliam to review. He said, “It’s hard for the director to concentrate on the edit, while he’s still in production. As long as the coverage is there, it’s fine. Certainly Terry and Nicola have a supreme understanding of film grammar, so that’s not a problem. Terry knows to get those extra little shots that will make the edit better. So, I was editing largely on my own and had a first cut within about ten days of the time that the production wrapped. When Terry arrived in London, we first went over the film in twenty-minute reels. That took us about two to three weeks. Then we went through the whole film as one piece to get a sense for how it worked as a film.”

Making a cinematic story

As with most films, the “final draft” of the script occurs in the cutting room. Audsley continued, “The film as a written screenplay was very fluid, but when we viewed it as a completed film, it felt too linear and needed to be more cinematic – more out of order. We thought that it might be best to move the sentences around in a more interesting way. We did that quite easily and quickly. Thus, we took the strength of the writing and realized it in cinematic language. That’s one of the big benefits of the modern digital editing tools. The real film is about the relationship between Bainsley and Qohen and less about the world they inhabit. The challenge as filmmakers in the cutting room is to find that truth.”

Working with visual effects presents its own editorial challenge. “As an editor, you have to evaluate the weight and importance of the plate – the base element for a visual effect – before committing to the effect. From the point-of-view of cost, you can’t keep undoing shots that have teams of artists working on them. You have to ensure that the timing is exactly right before turning over the elements for visual effects development. The biggest, single visual challenge is making Terry’s world, which is visually very rich. In the first reel, we see a futuristic London, with moving billboards. These shots were very complex and required a lot of temp effects that I layered up in the timeline. It’s one of the more complex sequences I’ve built in the Avid, with both visual and audio elements interacting. You have to decide how much you can digest and that’s an open conversation with the director and effects artists.”

The post schedule lasted about twenty weeks, ending with a mix in June 2013. Part of that time was tied up in waiting for the completion of visual effects. Since there was no budget for official audience screenings, the editorial team was not tasked with creating temp mixes and preview versions before finishing the film. Audsley said, “The first cut was not overly long. Terry is good in his planning. One big change that we made during the edit was to the film’s ending. As written, Qohen ends up in the real world for a nice, tidy ending. We opted to end the film earlier for a more ambiguous ending that would be better. In the final cut the film ends while he’s still in a virtual reality world. It provides a more cerebral versus practical ending for the viewer.”

Cutting style 

Audsley characterizes his cutting style as “old school”. He explained, “I come from a Moviola background, so I like to leave my cut as bare as possible, with few temp sound effects or music cues. I’ll only add what’s needed to help you understand the story. Since we weren’t obliged on this film to do temp mixes for screenings, I was able to keep the cut sparse. This lets you really focus on the cut and know if the film is working or not. If it does, then sound effects and music will only make it better. Often a rough cut will have temp music and people have trouble figuring out why a film isn’t working. The music may mask an issue or, in fact, it might simply be that the wrong temp music was used. On The Zero Theorem, George Fenton, our composer, gave us representative pieces late in the process that he’d written for scenes.” Andre Jacquemin was the sound designer who worked in parallel to Audsley’s cut and the two developed an interactive process. Audsley explained, “Sometimes sound would need to breathe more, so I’d open a scene up a bit. We had a nice back-and-forth in how we worked.”

Audsley edited the film using Avid Media Composer version 5 connected to an Avid Unity shared storage system. This linked him to another Avid workstation run by his first assistant editor, Pani Ahmadi-Moore. He’s since upgraded to version 7 software and Avid ISIS shared storage. Audsley said, “I work the Avid pretty much like I worked when I used the Moviola and cut on film. Footage is grouped into bins for each scene. As I edit, I cut the film into reels and then use version numbers as I duplicate sequences to make changes. I keep a daily handwritten log about what’s done each day. The trick is to be fastidious and organized. Pani handles the preparation and asset management so that I can concentrate on the edit.”

Audsley continued, “Terry’s films are very much a family type of business. It’s a family of people who know each other. Terry is supremely in control of his films, but he’s also secure in sharing with his filmmaking family. We are open to discuss all aspects of the film. The cutting room has to be a safe place for a director, but it’s the hub of all the post activity, so everyone has to feel free about voicing their opinions.”

Much of what the editor does proceeds in isolation. The Zero Theorem provided a certain ironic resonance for Audsley, who commented, “At the start, we see a guy sitting naked in front of a computer. His life is harnessed in manipulating something on screen, and that is something I can relate to as a film editor! I think it’s very much a document of our time, about the notion that in this world of communication, there’s a strong aspect of isolation. All the communication in the world does not necessarily connect you spiritually.” The Zero Theorem is scheduled to open for limited US distribution in September.

Originally written for DV magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Avid Everywhere

It’s interesting to see that in spite of a lot of press, the Avid Everywhere concept still results in confusion. Avid has certainly been enunciating it since last year, with a full roll-out at NAB this past April. For whatever reason, Avid Everywhere seems to be lumped together with Adobe Anywhere in the minds of many. Maybe it’s the similarity of names or it’s that they both have a cloud component, but they aren’t the same thing. Avid Everywhere is a corporate vision, while Adobe Anywhere is a specific product (more on that later).

Vision and strategy

Avid Technology is a company with a diverse range of hardware and software products, covering content creation (video, audio, graphics, news), asset management, audio/video hardware i/o, consoles and control surfaces, storage and servers. In an effort to consolidate and rebrand a wide-ranging set of offerings, Avid has repackaged these existing (and future) products under the banner of Avid Everywhere. This is a marketing strategy designed to convey the message that whatever your media needs might be, Avid has a product or service to satisfy that need. This is coupled to a community of users that can benefit from their common use of Avid products.

This vision positions Avid’s products as a “platform”, in the same way that Windows, Mac OS X, iOS, Android, Apple hardware and PC hardware are all platforms. Within this platform concept, the products become stratified into product tiers or “suites”. Bear in mind that “suite” really refers to a group of products and not specifically a collection of hardware or software that you purchase as a single unit. The base layer of this platform contains the various software hooks that tie the products together – for example, APIs required to use Media Composer software with Interplay asset management or in an ISIS SAN environment. This is called the Avid MediaCentral Platform.

On top of this sits the Storage Suite, which consists of the various Avid storage solutions, such as ISIS, along with news play-out servers. The next tier is the Media Suite, which encompasses the Interplay asset management and iNews newsroom products. In the transition to the Avid Everywhere strategy, you’ll see a lot of references on Avid’s website and in their marketing literature to “formerly Interplay ___”. That’s because Avid is in the process of rebranding these products into something with a “Media ___” name.

Most editing and audio professionals will mainly associate Avid with the Artist Suite tier. This is the layer of content creation tools, including Media Composer, Pro Tools, Sibelius and the control surfaces that came out of Digidesign and Euphonix, including the Artist panels. If you are a single user of Media Composer, Pro Tools or Sibelius and own no other Avid infrastructure, like ISIS or Interplay, then the entire Avid Everywhere media platform doesn’t touch you very much for now.

The top layer of the platform chart is MediaCentral | UX, which was formerly known as Interplay Central. This is a web front-end that allows you to browse, log and notate Interplay assets from a desktop computer, laptop or mobile device. Although the current iteration is targeted at news production, the concept is story-centric and could provide functionality in other arenas, such as drama and reality series production.

Surrounding the entire structure are support services (tech support and professional integration services) plus a private and public marketplace. Media Composer software has included a Marketplace menu item for a few versions. Until now, this has been a web portal to buy plug-ins and stock footage. The updated vision for this is more along the lines of services like SoundCloud, Adobe’s Behance service or the files section of Creative Cloud. For example, let’s say you are a composer that uses Pro Tools. You create licensable music tracks and post them to the Marketplace. Other users can browse the Marketplace and find your tracks, complete with licensing and payment arrangements. To make this work, the Avid MediaCentral Platform includes things like proper security to enable such transactions.

All clouds are not the same

I started this post with the comment that I feel many editors confuse Adobe Anywhere and Avid Everywhere. I believe that’s because they mistakenly interpret Avid Everywhere as the specific version of the Media Composer product that enables remote-access editing. As I’ve explained above, Everywhere is a concept and vision, not a product. That specific Media Composer product (formerly Interplay Sphere) is now branded as Media Composer | Cloud. As a product, it most closely approximates Adobe Anywhere, but there are key differences.

Adobe Anywhere is a system that requires a centralized server and storage. Any computer with Premiere Pro CC or CC 2014 can remotely access the assets on this system, which streams proxy media back to that computer. All the “heavy lifting” is done at the central site and the editor’s Premiere Pro is effectively working only as a local front-end. The operation does not allow hybrid editing with a combination of local and remote assets. All local assets have to be uploaded to the server and then streamed back to the editor. That’s because Anywhere manages the assets for multiple editors during collaborative workflows and handles project versioning. If you are working on an Anywhere production, you always have to be connected to the network.

In contrast, Media Composer | Cloud is primarily a plug-in that works with an otherwise standard version of the Media Composer software. In order for it to function, the “home base” facility must have an appropriate Interplay/ISIS infrastructure so that Media Composer | Cloud can talk to it. In Avid marketing parlance “you’ve got to get on the platform” for some of these things to work.

Media Composer | Cloud permits hybrid editing. For example, a news videographer in the field can be editing at the proverbial Starbucks using local assets. Maybe part of the story requires access to past b-roll footage that lives back at the station on its newsroom storage. Through Media Composer | Cloud and Interplay, the videographer can access those files as proxies and integrate them into the piece. Meanwhile, local assets can be uploaded back to the station. When the piece is cut, a “publish” command (an AAF of the sequence) goes back to the station for quick turnaround to air. Media Composer | Cloud, by its nature, doesn’t require continuous connection, so editing can continue during transit, such as in a vehicle.

While not everything about Avid Everywhere has been fully implemented yet, it certainly is an aggressive strategy. It is an attempt to move the company as a whole into areas beyond just editing software, while still allowing users and owners to leverage their Avid assets into other opportunities.

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUTs). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which, in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage to a straight Rec709 conversion or to a custom look. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow is from the log-C image, to the Rec709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab, using the CDL color correction tools. Therefore it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the material is sufficiently flat.

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both the QuickTime and DPX files that I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F-3 (S-log) and Canon 1DC (4K Canon-log). Remember that the software is designed to correct flat, log-C images, so you probably don’t want to use this with images that were already encoded with vibrant Rec709 colors.
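For reference, the slope/offset/power model comes from the ASC CDL, and the math is simple enough to sketch. The snippet below (NumPy assumed; the grade values are invented for illustration) applies the per-channel CDL transfer function plus a Rec709-weighted saturation, and also shows why floating-point math lets a value pushed past clipping be pulled back by a later adjustment:

```python
# Sketch of the ASC CDL math that slope/offset/power controls map onto.
# NumPy assumed; the example values are invented purely for illustration.
import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply slope/offset/power per channel, then a Rec709-weighted saturation."""
    rgb = np.asarray(rgb, dtype=np.float64)
    out = rgb * slope + offset                    # slope ~ gain, offset ~ lift
    out = np.sign(out) * np.abs(out) ** power     # power ~ gamma (guard negatives)
    luma = out @ np.array([0.2126, 0.7152, 0.0722])
    return luma[..., None] + saturation * (out - luma[..., None])

# A pixel pushed past 1.0 by a Rec709 conversion...
hot_pixel = np.array([[1.12, 0.95, 0.80]])
# ...can be pulled back under 1.0 by a later slope < 1.0, because the math is
# carried in floating point rather than being clipped at each stage.
graded = apply_cdl(hot_pixel, slope=0.85, offset=0.0, power=1.0)
print(graded)   # roughly [[0.952, 0.8075, 0.68]]
```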

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT in a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply the LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C and FCP X automatically corrects these to Rec709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
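If you’re curious what actually ends up in that .cube file, it’s just a text table. Here’s a hedged sketch of a minimal reader with a nearest-neighbor lookup (real grading applications interpolate between entries, and the file name is hypothetical):

```python
# Minimal .cube reader and nearest-neighbor lookup, for illustration only.
# Assumes the common layout: a LUT_3D_SIZE line, then size**3 "r g b" rows
# with the red index varying fastest. Real grading apps interpolate (trilinear
# or tetrahedral); this sketch just snaps to the nearest table entry.
import numpy as np

def load_cube(path):
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or line.startswith("TITLE"):
                continue
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] in "+-.":
                rows.append([float(v) for v in line.split()[:3]])
    table = np.array(rows).reshape(size, size, size, 3)   # indexed [b][g][r]
    return size, table

def apply_lut_nearest(rgb, size, table):
    r, g, b = (int(round(c * (size - 1))) for c in rgb)
    return table[b, g, r]

# Hypothetical file name from the Amira Color Tool export:
# size, table = load_cube("my_amira_look.cube")
# print(apply_lut_nearest((0.18, 0.18, 0.18), size, table))
```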

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.


©2014 Oliver Peters

Avid Media Composer | Software v8


At NAB, Avid presented its Avid Everywhere concept. While Everywhere is an over-arching marketing concept – involving “the cloud”, storage, asset management, a marketplace and more – for most independent editors, Avid is all about Media Composer and/or Pro Tools. Given that, there’s very little in the Everywhere concept that affects these users. The most salient part is a restructuring of licensing and software options.

Media Composer and the options

Avid’s flagship NLE is now known as Media Composer | Software and version numbers are only internal, rather than part of the product branding. Avid released Media Composer version 8.0 in May, but it is only known as Media Composer. Added to this are three options: Media Composer | Symphony, Media Composer | NewsCutter and Media Composer | Cloud. NewsCutter, which always was a variation of Media Composer, is now sold as an option, which adds news-centric features to the interface. Media Composer | Cloud (formerly known as Interplay Sphere) is essentially a plug-in to Media Composer that allows remote access to an Avid asset management and storage system. NewsCutter and Cloud require a larger facility infrastructure, so I’ll skip them in this discussion. They have little bearing on what most independent editors do.

Two other past options, PhraseFind and ScriptSync, are currently not available, as these are based on a phonetic search engine technology licensed from Nexidia. Avid and Nexidia are currently in discussions about a new licensing arrangement. While many editors rely on this technology, most do not. It is important to realize that Avid’s script integration and the internal Find tool are not completely tied to this technology and continue to work fine. The Nexidia options add a level of automation to the process through a phonetic match-up between waveforms and typed text.

Without ScriptSync, you can still create script-based bins, but the alignment of takes to script lines has to be done manually. Without PhraseFind, you can still search for text found in bin fields, but you cannot search by audio. Nexidia sells its own products and also licenses another application for editors, sold through BorisFX as Soundbite. This is a standalone application geared to Final Cut and Premiere, but is not compatible with Media Composer. Until this gets resolved, Avid has advised editors who are dependent on ScriptSync or PhraseFind not to upgrade past Media Composer version 7 software. Resellers still have these options available, in a version that is compatible with earlier versions of Media Composer.

Enter the new model

Media Composer version 8 is the first release of the application under the new licensing guidelines. You can now buy or rent Media Composer using three methods: perpetual license (own), subscription (rental) or floating license. The latter applies to larger facilities that are interested in purchasing “packs” of 20 or 50 perpetual licenses, which can be assigned to various machines as their production needs shift. The subscription license is based on an annual commitment ($49.99/mo-individual) or month-by-month ($74.99/mo-individual) rental and may be used by individuals or facilities. For example, facilities may have a number of perpetual licenses, but need to add a few seats of Media Composer for several months to accommodate an incoming, short-term production. They could choose to augment their “owned” licenses with additional subscription licenses to get through this immediate production crunch.

Most customers are likely to be interested in the changes in how you “own” the software, as the perpetual license model has changed from that of the past. When you now purchase Media Composer | Software, the cost is $1299, which covers the cost of the software plus one year of Avid support and any upgrades within the course of that year. (The actual support portion of that includes unlimited tech support over the web and one tech support phone call per month.) Customers still interested in a hardware license key (dongle) may purchase one for an additional $500. The Symphony option adds $749 to the bill. Current Media Composer owners (MC 6.5 or higher) can upgrade to MC 8 simply by purchasing a single year of support at $299 before the end of 2014. No matter how they got there (new purchase or renewal of an existing license) the software license is now on the current plan.

The important thing is that you have to renew again at the end of the first year of support. This is where the complaints have come in. As long as you renew your support contract each year at $299 (current price) then you will get any Avid updates to the software without having to purchase a separate software upgrade. (In the past, a Media Composer version upgrade has been more expensive than that year of support.) However, if you decide to let the support lapse for a year and then decide you want an upgrade, you will have to repurchase the product and any options anew.

Let’s say you bought Media Composer with the Symphony option – $1299 + $749. Hypothetically, by the end of the first year, Media Composer | Software has moved up to v8.5 and then you decide not to renew. From that point on, your version is “frozen” and cannot be upgraded. A year later, Media Composer | Software v10 comes out with enough compelling features to get you back on board. You cannot renew your v8.5 software license to upgrade, but instead have to purchase the current version Media Composer and Symphony again. Now you have two licenses: MC v8.5 and MC v10. Both work, but the older one is not upgradeable while the newer one is, as long as you renew its support contract after the end of the first year from the time of purchase.
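To put the two paths side by side, here’s a rough cost sketch using only the figures quoted above – it ignores options, dongles, taxes and any future price changes:

```python
# Rough cost comparison using the figures quoted above: $1299 purchase with the
# first year of support included, $299/year support renewals after that, versus
# a $49.99/month annual-commitment subscription. Ballpark only.

PERPETUAL_BUY_IN = 1299.0     # includes year one of support/upgrades
SUPPORT_RENEWAL = 299.0       # each additional year to stay current
SUBSCRIPTION_MONTHLY = 49.99  # annual-commitment rate

for years in (1, 2, 3, 5):
    perpetual = PERPETUAL_BUY_IN + SUPPORT_RENEWAL * (years - 1)
    subscription = SUBSCRIPTION_MONTHLY * 12 * years
    print(f"{years} yr: perpetual ${perpetual:,.0f} vs subscription ${subscription:,.0f}")

# 1 yr: perpetual $1,299 vs subscription $600
# 2 yr: perpetual $1,598 vs subscription $1,200
# 3 yr: perpetual $1,897 vs subscription $1,800
# 5 yr: perpetual $2,495 vs subscription $2,999
```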

Third-party bundles

In addition to the Nexidia issue, Avid now offers fewer third-party applications and effects as a bundle with the software. With the last few versions, you received Avid DVD, AvidFX, Sorenson Squeeze and BorisFX BCC filters (BCC only with the Symphony option). Avid DVD is no longer being developed. Variations of the others are now sold with a separate Production Pack third-party bundle. It gets a little confusing, because the options vary a bit between the perpetual and subscription models. If you buy the software, you now only get the NewBlueFX Titler Pro 1 and a starter set of their filters. “Lite” versions of Sorenson Squeeze and BCC (4 effects only) are offered with the subscription model. Since these are third-party products, you can still purchase them independently and existing versions that you already own will continue to work with Media Composer. BorisFX is offering upgrade deals to their products from past versions. Since AvidFX is simply an OEM version of Boris RED, one of their current deals is to upgrade from AvidFX to Boris RED 5.5 for $295. You can also upgrade to BCC 9 AVX for $599.

It’s a shame to lose the tools that were included in the past, but it really boils down to a consequence of the industry’s “race to the bottom”. At the prices that Avid currently sells Media Composer | Software, there simply is no margin left over to make third-party bundling deals. Developers aren’t going to accept a pittance just to be packaged with Media Composer. From the customer’s angle, you still have a decent set of audio and video filters included with Media Composer, including the NewBlueFX starter filters, Avid Illusion effects and the built-in Animatte effects tools. If you need more than that, you’ll simply have to purchase other plug-ins.

What to do

You own Media Composer version 7. What should you do now? The good news is that there’s no urgency to upgrade. MC v8 is essentially the same as v7.0.4, except with a new resident license tool (Application Manager). There are no new compelling features in MC v8 itself. Avid has promised one or more upgrades during the year and resolution independence has been mentioned as a technology that will come to Media Composer (although with no specific commitment to a timeframe). You have until the end of the year to spend your $299 for support and get onto version 8.x. The smart money says to wait a few months and see what the next update brings. If it’s compelling, then you can take advantage of the deal and purchase the annual support, which gives you access to the new software (if you are on a recent version of Media Composer). The advantage to this is that the one-year clock starts at that time, so the later in the year you do this, the longer you have before you need to renew again.

Changes like this always create a certain amount of tension. That’s been clear in the debates around Adobe’s shift to subscription with Creative Cloud. Users will inevitably compare the new costs to their old upgrade patterns and what the software used to cost them. I’m not sure that’s entirely fair, since financial pressures change and none of these companies have ever said that changes to their pricing wouldn’t happen, if it’s necessary. It seems to me that Avid has adopted the best blend of purchase and rental that I’ve seen among the NLE companies. There’s an incentive to stay current with the software, which is both to Avid’s and the customer’s advantage. If you were a loyal user who stayed current and always bought the upgrades when they came out, then the new deal is better for you financially. If you tended to sit on old versions and only sporadically upgraded, then you are likely to pay more this way. No right or wrong – just the way it is.

©2014 Oliver Peters

Comparing Color, Resolve, SpeedGrade and Symphony


It’s time to talk about color correctors. In this post, I’ll compare Color, Resolve, SpeedGrade and Symphony. These are the popular desktop color correction systems in use today. Certainly there are other options, like Filmlight’s Baselight Editions plug-in, as well as other NLEs with their own powerful color correction tools, including Autodesk Smoke and Quantel Rio. Some of these fall outside of the budget range of small shops or don’t really provide a correction workflow. For the sake of simplicity, in this post I’ll stick with the four I see the most.


Avid Technology Media Composer + Symphony

Although it started as a separate NLE product with dedicated hardware, today’s Symphony is really an add-on option to Media Composer. The main feature that differentiates Symphony from Media Composer in file-based workflows is an enhanced color correction toolset. Symphony used to be the “gold standard” for color correction within an NLE, combining controls “borrowed” from other software and hardware, like Photoshop, hardware proc amps and the hardware versions of the DaVinci correctors. It was the first to use the color wheel control model for balance/hue offsets. A subset of the Symphony tools has been migrated into Media Composer. Basic correction features in Symphony include channel mixing, hue offsets (color balance), levels, curves and more.

Many perceive Symphony correction as a single level or layer of correction, but that’s not exactly true. Color correction occurs on two levels – segment and program track. Most of your correction is on individual clips and Symphony offers a relational grading system. This means you can apply grades based on single clips or all instances of a master clip, tape ID, camera, etc. All clips used from a common source can be automatically graded once the first instance of that clip is graded on the timeline. The program track grade allows the colorist to apply an additional layer of grading to a clip, a section of the timeline or the entire timeline. So, when the client asks for everything to be darker, a global adjustment can be made using the program track.
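If the relational idea is hard to picture, think of it as grades keyed on source attributes rather than on individual timeline segments. A toy sketch follows (plain Python; the clip names and precedence order are invented for illustration and are not Avid’s actual behavior):

```python
# Toy illustration of relational grading: a correction stored against a source
# attribute (master clip, tape ID, camera) applies to every timeline segment
# that shares it, while a segment-level grade takes precedence when present.
# Field names and values here are invented for the example.

timeline = [
    {"segment": 1, "master_clip": "SC12_TK3", "tape": "REEL_04"},
    {"segment": 2, "master_clip": "SC12_TK3", "tape": "REEL_04"},
    {"segment": 3, "master_clip": "SC14_TK1", "tape": "REEL_04"},
]

grades_by_segment = {3: "warm key-light fix"}
grades_by_master_clip = {"SC12_TK3": "cool interview look"}
grades_by_tape = {"REEL_04": "base exposure lift"}

def resolve_grade(seg):
    """Most specific relationship wins: segment, then master clip, then tape."""
    return (grades_by_segment.get(seg["segment"])
            or grades_by_master_clip.get(seg["master_clip"])
            or grades_by_tape.get(seg["tape"])
            or "ungraded")

for seg in timeline:
    print(f"segment {seg['segment']}: {resolve_grade(seg)}")
# segment 1: cool interview look
# segment 2: cool interview look   <- inherited from the shared master clip
# segment 3: warm key-light fix
```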

Symphony also offers secondary grading based on isolating colors via an HSL key and adjusting that range. Although Symphony doesn’t offer nodes or correction layers like other software, you can use Avid’s video track timeline hierarchy to add additional correction to blank tracks above those tracks containing the video clips. In this way you are using the tracks as de facto adjustment layers. The biggest weakness is the lack of built-in masking tools to create what is commonly referred to as “power windows” (a term originated by DaVinci). The workaround is to use Avid’s built-in Intraframe/Animatte effects tools to create masks. Then you can apply additional spot correction within the mask area. It takes a bit more work than other tools, but it’s definitely possible. Finally, many plug-in packages, like GenArts Sapphire, Boris Continuum Complete and Magic Bullet Looks include vignette filters that will work with Symphony.

The bottom line is that Symphony started it all, though by today’s standards is “long-in-the-tooth”. Nevertheless, the relational grading model – and the fact that you are working within the NLE and can freely move between color correction and editing/trimming – makes Symphony a fast unit to operate, especially in time-sensitive, long-form productions, like TV shows.


Adobe SpeedGrade CC

If you are a current Creative Cloud subscriber, then you have access to the most recent versions of Adobe Premiere Pro CC and SpeedGrade CC. With the updates introduced late last year, Adobe added Direct Link interaction between Premiere Pro and SpeedGrade. When you use Direct Link to send your Premiere Pro timeline to SpeedGrade, the actual Premiere Pro sequence becomes the SpeedGrade sequence. This means codec decoding, transitions and Premiere Pro effects are handled by Premiere Pro’s effects engine, even though you are working inside SpeedGrade. As such, a project created via Direct Link supports features and codecs that would not be possible within a standalone SpeedGrade project.

Another unique aspect is that native and third-party transitions and effects used in Premiere Pro are visible (though not adjustable) when you are working inside SpeedGrade. This is an important distinction, because other correction workflows that rely on roundtrips don’t include NLE-based filters. You can’t see how the correction will be affected by a filter used in the NLE timeline. Naturally, in the case of SpeedGrade, this only works if you are working on a machine with the same third-party filters installed. When you return to Premiere Pro from SpeedGrade, the color corrections on clips are collapsed into a Lumetri filter effect that is applied to the clip or adjustment layer within the Premiere Pro sequence. Essentially this Lumetri effect is similar to a LUT that encapsulates all of the grading layers applied in SpeedGrade into a single effect in Premiere Pro. This is possible because the two applications share the same color science. The result is a render-free workflow with the easy ability to go back-and-forth between Premiere Pro and SpeedGrade for changes and adjustments. Unlike a standard LUT, Lumetri filters can carry masks and keyframes and are 100% precise.

As a color corrector, SpeedGrade is designed with a layer-based interface, much like Photoshop. Layers can be primary (fullscreen), secondary (keys and masks) or filters. A healthy selection of effects filters and LUTs are included. The correction model splits the signal into what amounts to a 12-way color wheel arrangement. There are lift/gamma/gain controls for the overall image, as well as for each of the shadow, middle and highlight ranges. Controls can be configured as wheels or sliders, with additional sliders for contrast, pivot, temperature (red vs. blue bias), magenta (red/blue vs. green bias) and saturation. There are no curves controls.
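For anyone new to grading math, lift/gamma/gain can be sketched in a few lines. This is one common textbook formulation – not Adobe’s actual implementation – with invented values:

```python
# One common formulation of lift/gamma/gain, applied to normalized 0-1 values.
# This is an illustrative sketch, not SpeedGrade's actual color math.
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """Lift raises the blacks, gain scales the whites, gamma bends the mids."""
    x = np.clip(np.asarray(x, dtype=np.float64), 0.0, 1.0)
    x = x * (gain - lift) + lift        # remap black point (lift) and white point (gain)
    return x ** (1.0 / gamma)           # gamma > 1 brightens midtones

ramp = np.linspace(0.0, 1.0, 5)         # [0.0, 0.25, 0.5, 0.75, 1.0]
print(lift_gamma_gain(ramp, lift=0.05, gamma=1.2, gain=0.95))
# -> roughly [0.08, 0.34, 0.56, 0.77, 0.96]: blacks come up, whites pull down,
#    and the midtones lift slightly from the gamma adjustment
```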

Overall, I like the looks I get with SpeedGrade, but I find it lacking in some ways. There are definite pluses and minuses. I miss the curves. It currently does not work with Blackmagic Design hardware. Matrox, Bluefish and AJA are OK. It’s got a tracker, but I find both tracking and masking to be mediocre. The biggest workflow shortcoming is the lack of a temporary memory register feature. You can save a whole grade, which saves the entire stack of grading layers applied to a clip as a Lumetri filter. You can apply grades from earlier timeline clips quite simply and SpeedGrade lets you open multiple playheads for comparison/correction between multiple shots on the timeline. You can access the nine grades before and the nine grades after the current playhead position. You can also copy the grade from the clip below the mouse position to the clip under the playhead by pressing the C key. What you cannot do is store a random set of grades or just a single layer in a temporary buffer and then apply it from that buffer somewhere else in the timeline. Adding these two items would greatly speed up the SpeedGrade workflow.


Blackmagic Design DaVinci Resolve

The DaVinci name is legendary among color correction products, but that reputation was earned with its hardware products, like the DaVinci 2K. Resolve was the software-based product built around a Linux cluster. When Blackmagic bought the assets and technology of DaVinci, all of the legacy hardware products were dropped, in favor of concentrating on Resolve as the software that had the most life for the future. There are now four versions, including Resolve Lite (free), Resolve (paid – software only), Resolve with a Blackmagic control surface and Resolve for Linux. The first three work on Mac and PC. You may download the free Lite version from the Blackmagic website or Apple’s Mac App Store. The Lite version has nearly all of the power of the paid software, but with these limitations: noise reduction, stereoscopic tools and output at resolutions above UltraHD require a paid version.

I’m writing this based on Resolve 10, which has rudimentary editing features. It is designed as a standalone color corrector that can be used for some editing. Blackmagic Design doubled down on the editing side with Resolve 11 (shown at NAB 2014). When that’s finally released this summer, you’ll have a powerful NLE built into the application. The demos at NAB were certainly impressive. If that turns out to be the case, Resolve 11 would function as an Avid Symphony or Quantel Rio type of system. That means you could freely move between creative editing and color correction, simply by changing tabs in the interface. For now, Resolve 10 is mainly a color corrector, with some very good roundtrip and conforming support for other NLEs. Specifically, there is very good support for Avid and FCP X workflows.

As a color corrector, Resolve offers the widest set of correction tools of any of these systems. In the work I’ve done, Resolve allows for more extreme grading and is more precise when trying to correct problem shots. I’ve done corrections with it that would have been impossible with any other tool. The correction controls include curves, wheels, primary sliders, channel mixers and more. Corrections are node-based and can be applied to clips or an entire track. Nodes can be applied in a serial or parallel fashion, with special splitter/combiner and layer mixing nodes. The latter includes Photoshop-style blend modes. Unlike SpeedGrade, you can store the value of a single node in a buffer (using the keyboard copy function) and then paste the value of just that node somewhere else. This makes it pretty fast when working up and down a timeline. Finally, the tracker is amazing.

A few things bother me about Resolve, in spite of its powerful toolset. The interface almost presents too many tools and it becomes very easy to lose track of what was done and where. There is no large viewer or fullscreen mode that doesn’t hide the node tree. This forces a lot of toggling between workspace configurations. If you have two displays, you cannot use the second display for anything other than the scopes and audio mixer. (This will change with Resolve 11.) Finally, you can only use Blackmagic Design hardware to view the video output on a grading monitor.


Apple Color

Some of you are saying, “Why talk about that? It was killed off a few years ago! Who uses that anymore?” Yes, I know. What people so quickly forget is that when the software was FinalTouch (before Apple’s purchase), it was very expensive and considered to be very innovative. Apple bought it, added some features and cleaned up some of the workflow. As part of Final Cut Studio, it set the standard for round-tripping with an NLE. Unfortunately for many Mac users, it retained its less glossy, “Unixy” interface and thus didn’t really catch on with many editors. However, it still works just fine on the newest machines and OS versions and remains a fast, high-quality color corrector.

Nearly all of the long-form jobs I’ve done – including feature films and TV shows up to even a few months ago – have been done with Color. There are two reasons that I prefer it. The first is that most of these jobs were cut using FCP 7, so it’s still the most integrated software for these projects. More importantly, there are several key features that make it faster than SpeedGrade and Resolve for projects that fall within a standard range of grading. In other words, the in-camera look was good and there were no huge problem areas, plus the desired grade didn’t swing into extreme looks.

Color is designed with 10 levels of grading per clip – primary in, eight secondaries and primary out. Since secondaries can be fullscreen or a portion of the image qualified by an HSL key or mask, each secondary layer can actually have two corrections – inside and outside of the mask. In addition to these, there’s a ColorFX layer for node-based filter effects, which can also include color adjustments. In reality, the maximum number of corrections to a single clip could be up to 19. The primary corrections can include value changes for RGB lift/gamma/gain and saturation levels, as well as printer lights. On top of this are lift/gamma/gain color wheels and luma controls. Lastly, there are curves. The secondaries include custom mask shapes and hue/sat/luma curves. There’s a tracker, too, but it’s not that great.

Where Color still shines for me is in workflow. Each layer is represented by a labelled bar on the timeline under the clip. This makes it easy to apply only a single secondary adjustment to other clips on the timeline simply by sliding the corresponding secondary bar from one timeline clip to one or more of the others. For example, I used Secondary 3 to qualify a person’s face and brighten it. I could then simply drag the bar for S3 that appears under the first clip on the timeline over to every other clip with the same person and similar set-up. All without selecting each of these clips prior to applying the adjustment.

Color works with all cards that work with Final Cut Pro, so there’s no AJA versus Blackmagic issue as mentioned above. Dual monitors work well. You can have scopes and the viewer (or a fullscreen viewer) on one display and the full control interface on the other. Realistically, Color works best with up to 2K video and one of the standard Apple codecs (uncompressed or ProRes work best). A lot of the footage I’ve graded with it was ProResHQ or ProRes 4444 that came native from an ARRI Alexa or transcoded from a C300, RED or a Canon 5D/7D. But I’ve also done a film that was all native EX rewrapped as .mov from a Sony camera and Color had no issues. Log-profile footage grades very nicely in Color, so Alexa ProRes 4444 encoded as Log-C forms a real sweet spot for Apple Color.

©2014 Oliver Peters

NAB 2014 Thoughts

Whodathunkit? More NLEs, new cameras from new vendors and even a new film scanner! I’ve been back from NAB for a little over a week and needed to get caught up on work while decompressing. The following are some thoughts in broad strokes.

Avid Connect. My trip started early with the Avid Connect customer event. This was a corporate gathering with over 1,000 paid attendees. Avid execs and managers outlined the corporate vision of Avid Everywhere in presentations that were head-and-shoulders better than any executive presentations Avid has given in years. For many who attended, it was to see if there was still life in Avid. I think the general response was receptive and positive. Avid Everywhere is basically a realignment of existing and future products around a platform concept. That has more impact if you own Avid storage or asset management software. Less so if you only own a seat of Media Composer or ProTools. No new software features were announced, but new pricing models were announced with options to purchase or rent individual seats of the software – or to rent floating licenses in larger quantities.

4K. As predicted, 4K was all over the show. However, when you talked to vendors and users, there was little clear direction about actual mastering in 4K. It is starting to be a requirement in some circles, like delivering to Netflix, for example; but for most users 4K stops at acquisition. There is interest for archival reasons, as well as for reframing shots when the master is HD or 2K.

Cameras. New cameras from Blackmagic Design. Not much of a surprise there. One is the bigger, ENG-style URSA, which is Blackmagic’s solution to all of the add-ons people use with smaller HDSLR-sized cameras. The biggest feature is a 10” flip-out LCD monitor. AJA was the real surprise with its own 4K Cion camera. Think KiPro Quad with a camera built around it. Several DPs I spoke with weren’t that thrilled about either camera, because of size or balance. A camera that did get everyone jazzed was Sony’s A7s, one of their new Alpha series HDSLRs. It’s 4K-capable when recorded via HDMI to an external device. The images were outstanding. Of course, 4K wasn’t everywhere. Notably not at ARRI. The news there is the Amira, a sibling to the Alexa. Both share the same sensor design, with the Amira designed as a documentary camera. I’m sure it will be a hit, in spite of being a 2K camera.

Mac Pro. The new Mac Pro was all over the show in numerous booths. Various companies showed housings and add-ons to mount the Mac Pro for various applications. Lots of Thunderbolt products on display to address expandability for this unit, as well as Apple laptops and eventually PCs that will use Thunderbolt technology. The folks at FCPworks showed a nice DIT table/cart designed to hold a Mac Pro, keyboard, monitoring and other on-set essentials.

FCP X. Speaking of FCP X, the best place to check it out was at the off-site demo suite that FCPworks was running during the show. The suite demonstrated a number of FCP X-based workflows using third-party utilities, shared storage from Quantum and more. FCP X was in various booths on the NAB show floor, but to me it seemed limited to partner companies, like AJA. I thought the occurrences of FCP X in other booths were overshadowed by Premiere Pro CC sightings. No new FCP X feature announcements or even hints were made by Apple in any private meetings.

NLEs. The state of nonlinear editing is in more flux than ever. FCP X seems to be picking up a little steam, as is Premiere Pro. Yet, still no clear market leader across all sectors. Autodesk announced Smoke 2015, which will be the last version you can buy. Following Adobe’s lead, this year they shift to a rental model for their products. Smoke 2015 diverges more from the Flame UI model with more timeline-based effects than Smoke 2013. Lightworks for the Mac was demoed at the EditShare booth, which will make it another new option for Mac editors. Nothing new yet out of Avid, except some rebranding – Media Composer is now Media Composer | Software and Sphere is now Media Composer | Cloud. Expect new features to be rolled in by the end of this year. The biggest new player is Blackmagic Design, who has expanded the DaVinci Resolve software into a full-fledged NLE. With a cosmetic resemblance to FCP X, it caused many to dub it “the NLE that Final Cut Pro 8 should have been”. Whether that’s on the mark or just irrational exuberance has yet to be determined. Suffice it to say that Blackmagic is serious about making it a powerful editor, which for now is targeted at finishing.

Death of i/o cards. I’ve seen little mention of this, but it seems to me that dedicated PCIe video capture cards are a thing of the past. KONA and Decklink cards are really just there to support legacy products. They have less relevance in the file-based world. Most of the focus these days is on monitoring, which can be easily (and more cheaply) handled by HDMI or small Thunderbolt devices. If you looked at AJA and Matrox, for example, most of the target for PCIe cards is now to supply the OEM market. AJA supplies Quantel with their 4K i/o cards. The emphasis for direct customers is on smaller output-only products, mini-converters or self-contained format converters.

Film. If you were making a custom 35mm film scanner – get out of the business, because you are now competing against Blackmagic Design! Their new film scanner is based on technology acquired through the purchase of Cintel a few months ago. Now Blackmagic has introduced a sleek 35mm scanner capable of up to 30fps with UltraHD images. It’s $30K and connects to a Mac Pro via Thunderbolt2. Simple operation and easy software (plus Resolve) will likely rekindle the interest at a number of facilities for the film transfer business. That will be especially true at sites with a large archive of film.

Social. Naturally NAB wouldn’t be the fun it is without the opportunity to meet up with friends from all over the world. That’s part of what I get out of it. For others it’s the extra training through classes at Post Production World. The SuperMeet is a must for many editors. The Avid Connect gala featured entertainment by the legendary Nile Rodgers and his band Chic. Nearly two hours of non-stop funk/dance/disco. Quite enjoyable regardless of your musical taste. So, another year in Vegas – and not quite the ho-hum event that many had thought it would be!

Click here for more analysis at Digital Video’s website.

©2014 Oliver Peters