Steve Jobs

It’s challenging to condense the life of a complex individual into a two-hour film. So it’s no wonder that the filmmakers of Steve Jobs have earned both praise and criticism for their portrayal of the Apple co-founder. The real Steve Jobs evoked differing emotions from those who knew him and from those who viewed his life from the outside. To tackle that dilemma, screenwriter Aaron Sorkin (Moneyball, The Social Network, Charlie Wilson’s War) and director Danny Boyle (127 Hours, Slumdog Millionaire, 28 Days Later) set out to create a “painting instead of a photograph.”

Steve Jobs, with Michael Fassbender in the central role, uses a classic Shakespearean three-act structure, focusing on three key product launches. Act 1 depicts the unveiling of the first Macintosh computer (1984); Act 2 is the introduction of the NeXT computer (1988); Act 3 is the reveal of the original iMac (1998). These three acts cover the narrative arc of Jobs’ rise, his humiliation and revenge, and his ultimate return to prominence at Apple. All of the action takes place backstage at these launch events, but it is intercut with flashbacks. The emotional thread that ties the three acts together is Jobs’ relationship with his daughter, Lisa Brennan-Jobs.

An action film of words

Aaron Sorkin’s scripts are known for their rapid-fire dialogue and Steve Jobs is no exception. With the script clocking in at close to 190 pages, the task of whittling that down to a two-hour movie fell to editor Elliot Graham (Milk, 21, Superman Returns). I recently spoke with Graham about how he connected with this project and some of the challenges the team faced. He explains, “I’ve been a fan of Danny’s and his regular editor wasn’t available to cut this film. So I reached out and met with them and I joined the team.”

“When I read the script, I characterized it as an ‘action film of words.’ Early on we talked about the dialogue and the need to get to two hours. I’ve never talked about the film’s final length with a director at the start of the project, but we knew the information would come fast and we didn’t want the audience to feel pummeled. We needed to create a tide of energy from beginning to end that takes the viewer through this dialogue as these characters travel from room to room. It’s our responsibility to keep each entrance into a different room or hallway revelatory in some fashion – so that the viewer stays with the ideas and the language. Thank goodness we had sound recordist Lisa Pinero on hand – she really helped the cast stay true to the musicality of the writing. The script is full of intentional overlaps, and Danny didn’t want to stop them from happening. Lisa captured it so that I could edit it. We knew we wanted very little ADR in this film, so we let the actors play out the scene. That was pivotal in capturing Aaron’s language.”

“Each act is a little different, both in production design and in the format. [Director of photography] Alwin Küchler (Divergent, R.I.P.D., Hanna) filmed Act 1 on 16mm, Act 2 on 35mm, and Act 3 digitally with the ARRI Alexa. We also added visuals in the form of flashbacks and other intercutting to make it more cinematic. Danny would keep rolling past the normal end of a take and would get some great emotions from the actors that I could use elsewhere. Also when the audience arrives to take their seats at these launch events, Danny would record that, which gave us additional material to work with. In one scene with Jobs and Joanna Hoffman (Kate Winslet), Danny kept rolling on Kate after Michael left the room. In that moment we got an exquisite emotional performance from her that was never in the script. In another example, he got this great abstract close-up of Michael that we were able to use to intercut with the boardroom scene later. This really puts the audience into Steve’s head and is a pay-off for the revenge concept.”

Building structure

Elliot Graham likes to make his initial cut tight and have a first presentation that’s reasonably finished. His first cut was approximately 147 minutes long compared with a final length of 117 minutes plus credits. He continues, “In the case of this film, cutting tight was beneficial, because we needed to know whether or not the pace would work. The good news is that this leaves you more time to experiment, because less time is spent in cutting it down for time. We needed to make sure the viewer would stay engaged, because the film is really three separate stories. To avoid the ‘stage play’ feeling and move from one act into the next, we added some interstitial visual elements to move between acts. In our experimenting and trimming, we opted to cut out part of the start of Act 2 and Act 3 and join the walking-talking dialogue ‘in progress.’ This becomes a bit of a montage, but it serves the purpose of quickly bringing the viewer along even though they might have to mentally fill in some of the gaps. That way it didn’t feel like Act 2 and Act 3 were the start of new films and kept the single narrative intact.”

“At the start, the only way to really ascertain the success of our efforts was to see Act 1, as close to screen-ready as we could come. So I put together an assemblage and Danny, the producers, and I viewed it. Not only did we want to see how it all worked together before moving on, we wanted to see that we had achieved the tone and quality we were after, because each act needed to feel completely different. And since Danny was shooting each piece a bit differently, I was cutting each one differently. For example, there’s a lot of energy, almost frenetic, to the camera movements in Act 1, plus it was shot on 16mm, so it gives it this cinema verité feel and harkens back to a less technically-savvy time. Act 2 has a more classical technique to it, so the cutting becomes a little slower in pacing. By getting a sense of what was working and maybe what wasn’t, it helped define how we were going to shoot the subsequent two acts and ensure we were creating an evolution for the character and the story. We would not have been able to do this if we had shot this film out of order, the way most features are.”

It’s common for a film’s scene structure to be re-arranged during the edit, but that’s harder to do with a film like Steve Jobs. There’s walking-talking dialogue that moves from one room to the next, which means the written script forces a certain linear progression. It’s a bit like the challenge faced in Birdman or (The Unexpected Virtue of Ignorance), except without the need to present the story as a continuous, single take. Graham says, “We did drop some scenes, but it was tricky, because you have to bridge the gap without people noticing. One of the scenes that was altered a lot from how it was written was the fight between John Sculley (Jeff Daniels) and Steve Jobs (Michael Fassbender). This scene runs about eleven minutes and Danny and I felt it lost momentum. So we spent about 48 hours recutting the scene. Instead of following the script literally, we followed the change in emotion of the actors’ performances. This led to a better emotional climax, which made the scene work.”

From San Francisco to London

Steve Jobs was shot in San Francisco from January to April of this year and then post shifted to London from April until October. The editorial team worked with two Avid Media Composers connected to Avid ISIS shared storage. The film elements were scanned and then all media transcoded to Avid DNxHD for the editing team. Graham explains, “From the standpoint of the edit, it didn’t matter whether it was shot on film or digitally – the different formats didn’t change our workflow. But it was still exciting to have part of this on film, because that’s so rare these days. Danny likes a very collaborative process, so Aaron and the producers were all involved in reviewing the cuts and providing their creative input. As a director, Danny is very involved with the edit. He’d go home and review all the dailies again on DVD just to make sure we weren’t missing anything. This wasn’t an effects-heavy film like a superhero film, yet there were still several hundred visual effects. These were mostly clean-ups, like make-up fixes and boom removals, but also composites, like wall projections.”

Various film editors have differing attitudes about how much sound they include in their cut. For Elliot Graham it’s an essential part of the process. He says, “I love working with sound and temp music, because it changes your perception and affects how you approach the cut. For Steve Jobs, music was a huge part of the process from the beginning. Unlike other films, we received a lot of pieces of music from Daniel Pemberton (composer, The Man from U.N.C.L.E., Cuban Fury, The Counselor) right at the start. He had composed a number of options based on his reading of the script. We tried different test pieces even before the shoot. Once some selections were made, Daniel gave us stems so that I could really tailor the music to the scene. This helped to define the flashbacks musically. The process was much more collaborative between the director and composer than on other films and it was a really unique way to work.”

Getting the emotion right

Elliot Graham joined the project after Michael Fassbender was signed to play Steve Jobs. Graham comments, “I’ve always thought Michael was a brilliant actor and I’d much rather have that to work with than someone who just looks like Jobs. Steve Wozniak (who is played by actor Seth Rogen in the film) watched the film several times and he commented that although the actual events were slightly different, the feeling behind what’s in the film was right. He’s said that to him, it was like seeing the real Steve. So Michael was in some way capturing the essence of this guy. I’m biased, of course, but Danny’s aim was to get the emotional approach right and I think he succeeded.”

“I’m a big Apple fan, so the whole process felt a bit strange – like I was in some sort of wonderful Charlie Kaufman wormhole. Here I was working on a Mac and using an iPhone to communicate while cutting a film about the first Mac and the person who so impacted the world through these innovations. I felt that by working on this film, I could understand Jobs just a little bit better. You get a sense of Jobs through his coming into contact with all of these people and his playing out whatever conflicts existed. I think it’s more of a ‘why’ and ‘who’ story – rather than a point-for-point biography – why this person, whose impact on our lives is immeasurable, was the way he was. It’s my feeling that we were trying to look at his soul much more than track his life story.”

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2016 Oliver Peters

The On Camera Interview


Many projects are based on first-person accounts using the technique of the on-camera interview. This approach is used in documentaries, news specials, corporate image presentations, training, commercials, and more. I’ve edited a lot of these, especially for commercials, where a satisfied customer might give a testimonial that gets cut into a five-ish minute piece for the web or a DVD and then various commercial lengths (:10, :15, :30, :60, 2 min.). The production approach and editing techniques are no different in this application than if you are working on a documentary.

The interviewer

The interview is going to be no better than the quality of the interviewer asking the off-camera (and unheard) questions. Asking good questions in the right manner will yield successful results. Obviously, the interviewer needs to be friendly enough to establish a rapport with the subject. People get nervous on camera, so the interviewer needs to get them relaxed. Then they can comfortably answer the questions and tell the story in their own words. The interviewer should structure the questions in a way that the totality of the responses tells a story. Think in terms of story arc and strive to elicit good beginning and ending statements.

Some key points to remember. First, make sure you get the person to rephrase the question as part of their answer, since the audience won’t hear the interviewer. This makes their answer a self-contained statement. Second, let them talk. Don’t interject or jump on the end of the answer, since this will make editing more difficult.

Sometimes in a commercial situation, you have a client or consultant on set, who wants to make sure the interviewee hits all the marketing copy points. Before you get started, you’ll need to have an understanding with the client that the interviewee’s answers will often have to be less than perfect. The interviewees aren’t experienced spokespersons. The more you press them to phrase the answer in the exact way that fits the marketing points or to correctly name a complex product or service in every response, the more stilted their speaking style will become. Remember, you are going for naturalness, honesty and emotion.

The basics

As you design the interview set, think of it as staging a portrait. Be mindful of the background, the lighting, and the framing. Depending on the subject matter, you may want a matching background. For example, a doctor’s interview might look best in a lab or in the medical office with complex surgical gear in the background. An interview with an average person is going to look more natural in a neutral environment, like their living room.

You will want to separate the interview subject from the background and this can be achieved through lighting, lens selection, and color scheme. For example, a blonde woman in a peach-colored dress will stand out quite nicely against a darker blue-green background. A lot of folks like the shallow depth-of-field and bokeh effect achieved by a full-frame Canon 5D camera with the right lens. This is a great look, but you can achieve it with most other cameras and lenses, too. In most cases, your video will be seen in the 16:9 HD format, so an off-center framing is desirable. If the person is looking camera left, then they should be on the right side of the frame. If they are looking camera right, they should be on the left side.

Don’t forget something as basic as the type of chair they are sitting in. You don’t want a chair that rocks, rolls, leans back, or swivels. Some interviews take a long time, and a subject who tends to move around in the chair becomes very distracting – not to mention noisy – if the chair moves with them. And of course, make sure the chair itself doesn’t creak.

Camera position

The most common interview design you see is one where the subject is looking slightly off camera, interacting with the interviewer who is sitting to the left or the right of the camera. You do not want to instruct them to look into the camera lens while you are sitting next to the camera, because most people will dart between the interviewer and the camera when they attempt this. It’s unnatural.

The one caveat is that if the camera and interviewer are far enough away from the interview subject – and the interviewer is also the camera operator – then it will appear as if the interviewee is looking into the lens. Because the interviewer and the camera are so close to each other, the subject seems to address the lens when in fact he or she is simply looking at the interviewer.

If you want them looking straight into the lens, then one solution is to set up a system whereby the subject can naturally interact with the lens. This is the style documentarian Errol Morris has used in a rig that he dubbed the Interrotron. Essentially it’s a system of two teleprompters. The interviewer and subject can be in the same studio, although separated in distance – or even in other rooms. The two-way mirror of each teleprompter projects one person’s image to the other. While looking directly at the interviewer in the teleprompter’s mirror, the interviewee is actually looking directly into the lens. This feels natural, because they are still looking right at the person.

Most producers won’t go to that length, and in fact the emotion of speaking directly to the audience may or may not be appropriate for your piece. Whether you use Morris’ solution or not, the single-camera approach makes it harder to avoid jump cuts. Morris actually embraces and uses these; however, most producers and editors prefer to cover them in some way. Covering the edit with a b-roll shot is a common solution, but another is to “punch in” on the frame by blowing up the shot digitally by 15-30% at the cut. Now the cut looks like you used a tighter lens. This is where 4K-resolution cameras come in handy if you are finishing in 2K or HD.
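As a side note, the arithmetic behind that 4K advantage is simple. Here is a quick back-of-the-envelope sketch (the resolutions and punch-in amount are illustrative numbers, not figures from any particular camera):

```python
# Back-of-the-envelope punch-in math (illustrative numbers only).
source_width = 3840    # UHD acquisition (assumed)
deliver_width = 1920   # HD finish

# Cropping a UHD frame down to HD is already a 2x "punch in"
# with no upscaling at all.
print(source_width / deliver_width)    # 2.0x of headroom

# A typical 15-30% punch-in at the cut still fits with room to spare:
punch = 1.30
print(deliver_width * punch)           # 2496 pixels needed, 3840 available
```

In other words, a UHD source gives you the usual 15-30% blow-up, and a good deal more, before the image is stretched past the delivery resolution.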

With the advent of lower-cost cameras, like the various DSLR models, it’s quite common to produce these interviews as two-camera shoots. Both cameras may be positioned together on one side of the interviewer, or one on each side. There really is no right or wrong approach. I’ve done a few where the A-camera is right next to the interviewer, but the B-camera is almost 90 degrees to the side. I’ve even seen it where the B-camera exposes the whole set, including the crew, the other camera, and the lights. This gives the other angle an almost voyeuristic quality. When two cameras are used, each should have a different framing, so a cut between the cameras doesn’t look like a jump cut. The A-camera might have a medium framing including most of the person’s torso and head, while the B-camera’s framing might be a tight close-up of their face.

While it’s nice to have two matched cameras and lens sets, this is not essential. For example, if you end up with two totally mismatched cameras out of necessity – like an Alexa and a GoPro or a C300 and an iPhone – make the best of it. Do something radical with the B-camera to give your piece a mixed media feel. For example, your A-camera could have a nice grade to it, but the B-camera could be black-and-white with pronounced film grain. Sometimes you just have to embrace these differences and call it a style!

Coverage

When you are there to get an interview, be mindful to also get additional b-roll footage for cutaway shots that the editor can use: tools of the trade, the environment, the interview subject at work, etc. Some interviews are conducted in a manner other than sitting down. For example, a cheesemaker might take you through the storage room and show off different rounds of cheese. Such walking-talking interviews might make up the complete interview or they might be simple pieces used to punctuate a sit-down interview. Remember, if you have the time, get as much coverage as you can!

Audio and sync

It’s best to use two microphones on all interviews – a lavaliere on the person and a shotgun mic just out of the camera frame. I usually prefer the sound of the shotgun, because it’s more open; but depending on how noisy the environment is, the lav may be the better channel to use. Recording both is good protection. Not all cameras have great sound systems, so you might consider using an external audio recorder. Make sure you patch each mic into separate channels of the camera and/or external recorder, so that they are NOT summed.

Wherever you record, make sure all sources receive audio. It would be ideal to feed the same mics to all cameras and recorders, but that’s not always possible. In that case, make sure that each camera is at least using an onboard camera mic. The reason to do this is for sync. The two best ways to establish sync are common timecode and a slate with a clapstick. Ideally both. Absent either of those, some editing applications (as well as a tool like PluralEyes) can analyze the audio waveforms and automatically sync clips based on matching sound. Worst case, the editor can manually sync clips by marking common aural or visual cues.
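For anyone curious how waveform syncing works under the hood, the core idea is cross-correlation: slide one waveform against the other and find the offset where they match best. Below is a minimal sketch of that technique in Python (not PluralEyes’ actual code), assuming two mono WAV files at the same sample rate; the file names are hypothetical.

```python
# Minimal waveform-sync sketch using cross-correlation.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

sr_a, cam = wavfile.read("camera_scratch_mic.wav")   # hypothetical files
sr_b, rec = wavfile.read("external_recorder.wav")
assert sr_a == sr_b, "this sketch expects matching sample rates"

cam = cam.astype(np.float64)
rec = rec.astype(np.float64)

# The peak of the cross-correlation tells us how far to slide one
# clip against the other so the waveforms line up.
corr = correlate(cam, rec, mode="full")
lag = np.argmax(corr) - (len(rec) - 1)               # offset in samples
print(f"Slide the recorder audio by {lag / sr_a:+.3f} seconds")
```

This is also why every camera needs at least a scratch mic: with no audio on a source, there is nothing to correlate.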

Depending on the camera model, you may have cameras that don’t span clips across files and will automatically start a new clip every 4GB (about every 12 minutes with some formats). The interviewer should be mindful of these limits. If possible, all cameras should be started together and re-slated at the beginning of each new clip.
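The figure of roughly twelve minutes is just the file limit divided by the recording bitrate. A quick sanity check, where the implied codec bitrate is an assumption for illustration:

```python
# Why a 4GB clip limit works out to roughly 12 minutes.
file_limit_bits = 4 * 1024**3 * 8        # 4GB file limit, in bits
minutes = 12
bitrate_mbps = file_limit_bits / (minutes * 60) / 1e6
print(f"~{bitrate_mbps:.0f} Mbps")        # ~48 Mbps, i.e. a ~50 Mbps codec
```

A camera recording at a lower bitrate will run correspondingly longer before it splits the clip.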

Editing workflow

Most popular nonlinear editing applications (NLEs) include great features that make editing on-camera interviews reasonably easy. To end up with a solid five-minute piece, you’ll probably need about an hour of recorded interview material (per camera angle). When you cut out the interviewer’s questions, the little bit of chit-chat at the beginning, and the repeats or false starts that an interviewee may have, you are generally left with about thirty minutes of usable responses. That’s a 6:1 ratio.

The goal as an editor is to be a storyteller through the soundbites you select and the order in which you arrange them. The aim is to have the subject seamlessly tell their story without the aid of an on-camera host or voice-over narrator. To aid the editing process, use NLE tools like favorites, markers, and notes, along with human tools like written transcripts and your own notes to keep the project organized.

This is the standard order of things for me:

Sync sources and create multi-cam sequences or multi-cam clips depending on the NLE.

Pass 1 – create a sequence with all clips synced up and organized into a single timeline.

Pass 2 – clean up the interview and remove all interviewer questions.

Pass 3 – whittle down the responses into a sequence of selected answers.

Pass 4 – rearrange the soundbites to best tell the story.

Pass 5 – cut between cameras if this is a multi-camera production.

Pass 6 – clean up the final order by editing out extra words, pauses, and verbal gaffes.

Pass 7 – color correct clips, mix audio, add b-roll shots.

As I go through this process, I am focused on creating a good “radio cut” first. In other words, how does the story sound if you aren’t watching the picture? Once I’m happy with this, I can worry about color correction, b-roll, etc. When building a piece that includes multiple interviewees, you’ll need to pay attention to several other factors. These include getting a good mix of diversity – ethnicity, gender, job classification. You might want to check with the client first as to whether each and every person interviewed needs to be used in the video. Clearly some people are going to be duds, so it’s best to know up front whether you’ll need to go through the effort of finding a passable soundbite in those cases.

There are other concerns when re-ordering clips among multiple people. Arranging the order of clips so that you can cut between alternating left- and right-framed shots makes the cutting flow better. Some interviewees come across better than others; however, make sure not to lean entirely on their responses. When you get multiple, similar responses, pick the best one, but if possible spread around who you pick in order to get the widest mix of respondents. As you tell the story, pay attention to how one soundbite might naturally lead into another – or how one person’s statement can complete another’s thoughts. It’s those serendipitous moments that you are looking for in Pass 4, and finding them should take the most creative time in your edit.

Philosophy of the cut

In any interview, the editor is making editorial selections that alter reality. Some broadcasters have guidelines as to what is and isn’t permissible, due to ethical concerns. The most common editorial technique in play is the “Frankenbite”. That’s where an edit is made to truncate a statement or combine two statements into one. Usually this is done because the answer went off on a tangent and that portion isn’t relevant. By removing the extraneous material and creating the “Frankenbite”, you are actually staying true to the intent of the answer. For me, that’s the key. As long as your edit is honest and doesn’t twist the intent of what was said, I personally don’t have a problem with doing it. That’s part of the art in all of this.

It’s for these reasons, though, that directors like Morris leave the jump cuts in. This lets the audience know an edit was made. Personally, I’d rather see a smooth piece without jump cuts and that’s where a two camera shoot is helpful. Cutting between two camera angles can make the edit feel seamless, even though the person’s expression or body position might not truly match on both sides of the cut. As long as the inflection is right, the audience will accept it. Occasionally I’ll use a dissolve, white flash or blur dissolve between sections, but most of the time I stick with cuts. The transitions seem like a crutch to me, so I use them only when there is a complete change of thought that I can’t bridge with an appropriate soundbite or b-roll shot.

The toughest interview edit tends to be when you want to clean things up, like a repeated word, a stutter, or the inevitable “ums” and “ahs”. Fixing these by cutting between cameras normally results in a short camera cut back and forth. At this point, the editing becomes a distraction. Sometimes you can cheat these jump cuts by staying on the same camera angle and using a short dissolve or one of the morphing transitions offered by Avid, Adobe, or MotionVFX (for FCPX). These vary in their success depending on how much a person has moved their body and head or changed expressions at the edit point. If their position is largely unchanged, the morph can look flawless. The more the change, the more awkward the resulting transition can be. The alternative is to cover the edit with a cutaway b-roll shot, but that’s often not desirable if this happens the first time we see the person. Sometimes you just have to live with it and leave these imperfections alone.

Telling the story through sight and sound is what an editor does. Working with on-camera interviews is often the closest an editor comes to being the writer, as well. But remember that mixing and matching soundbites can present nearly infinite possibilities. Don’t get caught in the trap, as so many do, of never finishing. Bring it to a point where the story is well-told and then move on. If the entire production is approached with some of these thoughts in mind, the end result can indeed be memorable.

©2015 Oliver Peters

PDFviewer for Premiere Pro


Small developers often create the coolest tools for editing. Such is the case with Primal Cuts and their PDFviewer extension for Premiere Pro CC. Ever find yourself shuffling between paper scripts and storyboards, while trying to edit? Or juggling between different apps on-screen to view electronic versions, while going back-and-forth to your NLE? That’s what PDFviewer solves for you.

Adobe has created a feature called extensions, which allows a developer to create a custom, dockable panel that performs certain functions right inside the application’s interface. TypeMonkey is one example of this for After Effects. The same interface feature is also available in Premiere Pro. Extensions developed for Adobe applications also have the benefit of being cross-platform compatible.

PDFviewer is an extension designed for Adobe Premiere Pro CC. Once installed, it’s accessible from the extensions pulldown menu. When you select it, PDFviewer opens as a floating interface panel that can then be docked anywhere in the interface. If you dock it, make sure to do so in all of your workspaces and save those configurations. That way, if you have a file open, it will stay open as you jump between different layouts.

Any PDF file can be opened in PDFviewer, including scripts, storyboards, and other documents. If you work in scripted long-form productions, then check if the script supervisor is using ScriptE Systems products. These are ideal for generating numerous electronic versions of common filming documents, including shot logs and lined scripts. However, any PDF works, including manually scanned PDFs of handwritten reports and lined scripts. Simply open up the lined script in the PDFviewer panel and now you have it right there within Premiere Pro. It’s not exactly the same as Avid’s Script Integration tools in Media Composer, but it’s the next best thing to it.

PDFviewer lets you open multiple PDFs by clicking the “+” icon and adding another file. Multiple PDFs are accessible as tabs across the top of the PDFviewer window. It also includes a “hand” tool to easily scroll and pan within larger documents. Search is another great feature, which is perfect for working with transcripts. Search terms will be highlighted throughout the document. You can also copy-and-paste text from within PDFviewer to any metadata field in Premiere Pro.

Primal Cuts’ PDFviewer is a straightforward tool that every Premiere Pro editor will find to be a handy addition to their toolkit. At $10, the price is hard to pass up, simply based on the convenience of not shuffling more paper on your desk.

©2015 Oliver Peters

Fear the Walking Dead


When the AMC cable network decided to amp up the zombie genre with The Walking Dead series, it resulted in a huge hit. Building upon that success, they’ve created a new series that could be viewed as a companion story, albeit without any overlapping characters. Fear the Walking Dead is a new, six-episode series that starts season one on August 23. The story takes place across the country, in Los Angeles, and chronologically just before the outbreak in the original series. The Walking Dead was based on Robert Kirkman’s graphic novels of the same name and he has been involved in both versions as executive producer.

Unlike the original series, which was shot on 16mm film, Fear the Walking Dead is being shot digitally with ARRI ALEXA cameras and anamorphic lenses. That’s in an effort to separate the two visual styles, while maintaining a cinematic quality to the new series. I recently spoke with Tad Dennis, the editor of two of the six episodes in season one, about the production.

Tad Dennis started his editing career as an assistant editor on reality TV shows. He says, “I started in reality TV and then got the bump-up to full-time editing (Extreme Makeover: Home Edition, America’s Next Top Model, The Voice). However, I realized my passion was elsewhere and made the shift to scripted television. I started there again as an assistant and then was bumped back up to editing (Fairly Legal, Manhattan, Parenthood). Both types of shows really do have a different workflow, so when I shifted to scripted TV, it was good to start back as an assistant. That let me be very grounded in the process.”

Creating a new show with a shared concept

Dennis started with these thoughts on the new show, “We think of this series as more of a companion show to the other and not necessarily a spin-off or prequel. The producers went with different cameras and lenses for a singular visual aesthetic, which affects the style. In trying to make it more ‘cinematic’, I tend to linger on wider shots and make more selective use of tight facial close-ups. However, the material really has to dictate the cut.”

Three editors and three assistant editors work on the Fear the Walking Dead series, with each editor/assistant team cutting two of the six shows of season one. They are all working on Avid Media Composer systems connected to an Avid ISIS shared storage solution. Scenes were shot in both Vancouver and Los Angeles, but the editing teams were based in Los Angeles. ALEXA camera media was sent to Encore Vancouver and Encore Hollywood, depending on the shooting location. Encore staff synced sound and provided the editors with Avid DNxHD editorial media. The final color correction, conform, and finishing was also handled at Encore Hollywood.

Dennis described how post on this show differed from other network shows he’s worked on in the past. He says, “With this series, everything was shot and locked for the whole season by the first airdate. On other series, the first few shows will be locked, but then for the rest of the season, it’s a regular schedule of locking a new show each week until the end of the season. This first season was shot in two chunks for all six episodes – the Vancouver settings and then the Los Angeles scenes. We posted everything for the Vancouver scenes and left holes for the LA parts. The shows went all the way through director cuts, producer cuts, and network notes with these missing sections. Then when the LA portions came in, those scenes were edited and incorporated. This process was driven by the schedule. Although we didn’t have the pressure of a weekly airdate, the schedule was definitely tight.” Each of the editors had approximately three to four days to complete their cut of an episode after receiving the last footage. Then the directors got another four days for a director’s cut.

Often films and television shows go through adjustments as they move from script to actual production and ultimately the edit. Dennis feels this is more true of the first few shows in a new series than with an established series. He explains, “With a new series, you are still trying to establish the style. Often you’ll rethink things in the edit. As I went through the scenes, performances that were coming across as too ‘light’ had to be given more ‘weight’. In our story, the world is falling apart and we wanted every character to feel that all the way throughout the show. If a performance didn’t convey a sense of that, then I’d make changes in the takes used or mix takes, where picture might be better on one and audio better on the other.”

Structure and polish in post

In spite of the tight schedule, the editors still had to deal with a wealth of footage. Typical of most hour-long dramas, Fear the Walking Dead is shot with two or three cameras. For very specific moments, the director would have some of the footage shot at 48fps. In those cases, where cameras ran at different speeds, Dennis would treat these as separate clips. When cameras ran at the same speed (for example, at 24fps for sync sound), such as in dialogue scenes, Susan Vinci (assistant editor) would group the clips as multicam clips. He explains, “The director really determines the quality of the coverage. I’d often get really necessary options on both cameras that weren’t duplicated otherwise. So for these shows, it helped. Typically this meant three to four hours of raw footage each day. My routine is to first review the multicam clips in a split view. This gives me a sense of what the coverage is that I have for the scene. Then I’ll go back and review each take separately to judge performance.”

Dennis feels that sound is critical to his creative editing process. He continues, “Sound is very important to the world of Fear the Walking Dead. Certain characters have a soundscape that’s always associated with them and these decisions are all driven by editorial. The producers want to hear a rough cut that’s as close to airable as possible, so I spend a lot of time with sound design. Given the tight schedule on this show, I would hand off a lot of this to my long-time assistant, Susan. The sound design that we do in the edit becomes a template for our sound designer. He takes that, plus our spotting notes, and replaces, improves, and enhances the work we’ve done. The show’s music composer also supplied us with a temp library of past music he’d composed for other productions. We were able to use these as part of our template. Of course, he would provide the final score customized to the episode. This score would be based on our template, the feelings of the director, and of course the composer’s own input for what best suited each show.”

Dennis is an unabashed Avid Media Composer proponent. He says, “Over the past few years, the manufacturers have pushed to consolidate many tools from different applications. Avid has added a number of Pro Tools features into Media Composer and that’s been really good for editors. There are many tools I rely on, such as those audio tools. I use the AudioSuite and RTAS filters in all of my editing. I like dialogue to sound as it would in a live environment, so I’ll use the reverb filters. In some cases, I’ll pitch-shift audio a bit lower. Other tools I’ll use include speed-ramping and invisible split-screens, but the trim tool is what defines the system for me. When I’m refining a cut, the trim tool is like playing a precise instrument, not just using a piece of software.”

Dennis offered these parting suggestions for young editors starting out. “If you want to work in film and television editing, learn Media Composer inside and out. The dominant tool might be Final Cut or Premiere Pro in some markets, but here in Hollywood, it’s largely Avid. Spend as much time as possible learning the system, because it’s the most in-demand tool for our craft.”

Originally written for Digital Video magazine / CreativePlanetNetwork

©2015 Oliver Peters

Automatic Duck Redux


Automatic Duck invented timeline translations between applications. Necessity is the mother of invention, leading Wes Plate, an Avid Media Composer editor who tackled compositing in Adobe After Effects, to team with his programmer father, Harry. The goal was to design a tool to get Avid timelines into After Effects compositions. Automatic Duck grew from this beginning to create a series of translation products that let editors seamlessly move timelines between a number of different hosts, including Media Composer, Pro Tools, After Effects, and Apple Final Cut Pro “classic”.

Four years ago Adobe licensed the IP for the original Automatic Duck Pro Import products and brought the father/son team on board to develop tools for Adobe. Now they are back on their own and have decided to reboot Automatic Duck, which has been mothballed for the past four years. Seeing an opportunity in Apple’s Final Cut Pro X, the company has developed Ximport AE, a timeline translation tool to bring Final Cut Pro X projects (edited sequences) into After Effects. The team is no stranger to Final Cut Pro X’s FCPXML format, since it was the first developer to create a companion utility that translated Final Cut Pro X 10.0 projects into Pro Tools sessions.

Knowing the market

First, let’s define the market. Who is Automatic Duck Ximport AE for? Editors who do most of their heavy lifting in Media Composer, Final Cut, or Premiere Pro might not see the attraction. On the flip side, though, there are quite a few editors for whom After Effects is the tool of choice for all effects and even finishing. For this group, the NLE is where they spend the least amount of time. They use an editing application for shot selection and assembly and then go straight to After Effects for everything else.

If you are a motion graphics designer who relies on After Effects, then your occasional need for an NLE might be best served by FCP X. The interface is fast and easy to master, compared with more traditional track-based edit software. Finally, if you are a dedicated FCP X editor, you no longer have a “send to Motion” function as in the old Final Cut Studio. This means you can’t send more than a single shot to Motion for treatment. Besides, After Effects may still be your preferred motion graphics application. Take all of these points into consideration and you’ll see that there’s a clear need to get a project from FCP X into After Effects – the industry’s dominant motion graphics application.

How it works

Automatic Duck Ximport AE is designed as a plug-in that’s installed into After Effects, including CS6 up through the current CC2015 version (and beyond). There are several other competing translation tools on the market, which convert between flavors of XML or from FCPXML into AE Scripts. Automatic Duck is the only one that integrates directly into the After Effects import menu. Ximport AE cuts out one middle step in the process and should provide for a more complete translation from FCP X into After Effects.

I’ve been beta testing the product for a few months and it certainly hits the mark for serious users. The steps are simple. Just cut your sequence in Final Cut Pro X and then export an FCPXML for that project (sequence). When you open After Effects, select File > Import > Automatic Duck Ximport AE. This opens a dialogue box with a few settings and it’s where you navigate to the correct FCPXML file. Settings include whether to let your clips cascade up or down in the After Effects timeline, as well as an option to create pre-comps from Final Cut’s secondary storylines. The question mark icon also launches the user guide.
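If you are curious what Ximport AE is actually reading, an FCPXML project is plain XML and easy to inspect. Here is a small, illustrative Python sketch (not part of any shipping product) that lists the clips on a sequence’s primary storyline. The element and attribute names reflect a typical FCPXML, and the file name is hypothetical.

```python
# Peek inside an FCPXML export: list clips on the primary storyline (spine).
# Element names vary between FCPXML versions, so treat this as a sketch.
import xml.etree.ElementTree as ET
from fractions import Fraction

def rational_seconds(value):
    # FCPXML stores times as rational strings such as "3003/2400s" or "5s".
    return float(Fraction(value.rstrip("s")))

tree = ET.parse("my_sequence.fcpxml")     # hypothetical file name
spine = tree.getroot().find(".//spine")   # the primary storyline

for clip in spine:
    offset, duration = clip.get("offset"), clip.get("duration")
    if offset and duration:
        print(f'{clip.get("name", "untitled")}: '
              f"at {rational_seconds(offset):.2f}s "
              f"for {rational_seconds(duration):.2f}s")
```

Translation tools read this same data and then rebuild the clips, layers, and keyframes inside the target application.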

In the timelines I’ve tested, the translation is quite good. Compound clips are packaged as pre-comps. The active angle of multicam clips and the selected pick of Audition clips are translated. Alternate angles aren’t. Generally transform, crop, opacity, and blend functions are supported, as are audio and video keyframes. A number of third-party filters are accurately translated between applications, assuming that the same filter is installed into each host. At launch, these include selected plug-ins from Boris FX, Digital Anarchy, Noise Industries/FxFactory, PHYX, Red Giant, and Yanobox. Check the user guide for a detailed list with specific filters.

Some caveats

It’s worth noting, however, that just about all of the built-in FCP X filters are not translated into an equivalent filter in After Effects. For example, the color board metadata is included in the FCPXML, but there’s no way to read that info on the After Effects side. This is true even when there are filters that appear to be the same. For example, both hosts include a native Gaussian blur filter, yet that doesn’t get translated. On the other hand, if you apply a Flipped filter in FCP X, it will be correctly translated into the -100 transform scale value in After Effects. So again, read the user guide and do a little experimentation to see what works and what doesn’t in your projects. Whenever an effect is not supported, a note is made in the companion HTML file created at import. A marker is also placed on that clip in the After Effects timeline, naming the missing plug-in.

I tested a number of supported third-party products, staying mainly within the Red Giant family. Translation was good between the Magic Bullet tools, but not without issue. For example, Universe ToonIt Expressionist Noise was available in both hosts, yet the effect was not applied in the After Effects composition. That’s because, at the time I tested this with a beta build, that specific Universe filter had not been included. This has since been corrected. Other effects, like Looks, Colorista III, Mojo, Universe Glow, and others worked flawlessly. According to Wes Plate, the plug-in has been architected in a way to easily add support for new effects plug-ins. The bottom line is that if you stay within the supported features, you will get the richest translation experience from FCP X into After Effects that’s currently available in the market.

Automatic Duck Media Copy 4.0

Along with Ximport AE, the company will also introduce Automatic Duck Media Copy 4.0. The original Media Copy grew out of the need to collect, copy, and move sequences and their associated media. The original version worked for Avid Media Composer and Apple Final Cut Pro “classic” sequences. It would read either the AAF or XML file and copy all associated media, plus the timeline edit info. This new folder could then be moved to another system for more editing or used as a back-up archive. Media Copy 4.0 has been updated to add FCPXML support. As before, it collects media and timeline files for use elsewhere. It does not trim or transcode the media, but you have the choice to copy media all into a single folder or to maintain a folder hierarchy matching the original paths within the newly created location. Media Copy works well as a standalone application or as a companion to Ximport AE. It supports Avid Media Composer, Final Cut Pro X, and Final Cut Pro 6/7.
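To make those two copy modes concrete, here is a tiny Python sketch of the flat-folder versus preserved-hierarchy idea. It is an illustration of the concept, not Automatic Duck’s implementation, and the paths are hypothetical.

```python
# Copy a timeline's media flat into one folder, or mirror the original paths.
import os
import shutil

def collect_media(file_paths, dest_root, keep_hierarchy=False):
    for src in file_paths:
        if keep_hierarchy:
            # Mirror the source path under the new location so
            # relinking can match the original structure.
            dst = os.path.join(dest_root, src.lstrip(os.sep))
        else:
            # Flatten: every file lands in the same folder.
            dst = os.path.join(dest_root, os.path.basename(src))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)   # copy2 preserves timestamps

collect_media(["/Volumes/RAID/ProjectA/A001_C002.mov"],   # hypothetical paths
              "/Volumes/Backup/ProjectA_archive",
              keep_hierarchy=True)
```

Flattening is simpler to browse, while mirroring the hierarchy keeps relative paths intact, which is why having both options is useful.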

With the reboot of Automatic Duck, they’ve decided to partner with Red Giant Software to provide marketing, sales, and customer support. Red Giant will offer Automatic Duck Ximport AE for $199 and Media Copy 4.0 for $99. If you still have need for Automatic Duck’s legacy products, the company is posting them again on their own website for free, with an optional “donate” button. These include Pro Import FCP, Pro Export FCP (for FCP 7 users), and Pro Import AE (for importing AAF and XML into AE CS 5.5 or earlier).

Regardless of which NLE you use, I’ve found Media Copy to be an essential tool, whether or not you work with effects or motion graphics. It’s great to see Automatic Duck update it, as well as launch their next great product, Ximport AE. Adobe After Effects will continue to be the ubiquitous compositing and motion graphics choice for most editors, so this marriage between Final Cut Pro X and After Effects makes great sense.

For more, here’s a good interview with Wes Plate at Red Shark News.

©2015 Oliver Peters

Blackmagic Design DaVinci Resolve 12

The industry has been eager to check out Blackmagic Design’s DaVinci Resolve 12. This “first look” is based on the initial build of the Resolve 12 public beta. A number of functions have not yet been enabled, so expect to see some changes in the product by the time you read this.

As with any public beta, the point is to get feedback and reap the benefit of crowdsourced quality testing, so be careful about using it on real jobs. That being said, so far I’ve found the public beta builds to be reasonably stable. I’ve had a chance to test the application on several different machines, including two 2009-2010 Mac Pro towers and a new 15” Retina MacBook Pro. Testing included a Sapphire 7950 and an Nvidia Quadro 4000 GPU, as well as the built-in Nvidia card on the laptop.

Blackmagic is no longer using the “Lite” name to identify the free version. The branding is now DaVinci Resolve 12 (free) and DaVinci Resolve 12 Studio ($995). The free version includes the majority of features and is limited to an output no larger than UltraHD 4K. The paid version adds advanced features, including stereoscopic functions, networked collaboration between users, multiple GPU support, and the ability to output at larger than UltraHD 4K frame sizes.

Blackmagic Design hardware products are required to output an analog or digital signal to an external video monitor or tape deck. If you are comfortable making color judgments based on the viewer image, then no hardware is required for operation and rendering. You can also hot rod your system with the DaVinci Control Surface ($29,995) or a number of supported third-party surfaces that are less costly.

Refreshing the user interface

DaVinci Resolve 12 ushers in a fresh user interface. Previous versions mimicked the style of Apple Final Cut Pro X, but the new UI is flatter with thinner fonts. It takes on the trendy design aesthetic employed in Windows 8/10 and Mac OS X/iOS. The background colors are a lighter grey with a faint blue cast to them. Although pleasing, I find that last part strange for a color correction application, where a true grey is considered the norm.

The interface has been optimized for single and dual-monitor systems, as well as higher-density displays, like Apple’s Retina. Resolve 12 is divided into four modes or pages: Media, Edit, Color, and Deliver. Software control panels can be opened or closed as needed, including videoscopes, media storage locations, mixers, audio meters, inspector, effects, and more. There are some interesting options to control whether or not a panel or window runs the full horizontal or vertical length of your display. However, there is no way to create a custom workspace by docking panels in different places and then saving that as your personal layout. Interface colors also can’t be personalized.

As before, timelines support sources with mixed formats and frame rates; however, the base timeline setting must match that of the project. This means you cannot have a 720p/59.94 and a 1080i/29.97 timeline within the same project. You can’t have multiple timelines open, but it’s easy to access different timelines in the same project quickly. You can also cut one timeline into another as a nested sequence. Such nests (as well as compound clips) can be decomposed in the timeline, leaving the original source clips to work with.

Resolve 12 no longer includes a separate section in the UI for timelines, as these are placed together with the source media in the Media Pool. One simple solution is to create a Bin for your edits and manually drag the timelines you’ve created into that Bin. Another option is to filter timelines into a Smart Bin by including some common element in the name. For example, you could append “seq” (for sequence) to the end of the name of each timeline. Set your filtering criteria to names that contain “seq” and then timelines will automatically show up in the Smart Bin that you’ve created for timelines.

Editing with Resolve 12

As a nonlinear editing application (NLE), Resolve 12 is an interesting mash-up of several other NLEs, including Premiere Pro, FCP 7, and FCP X. There are new features clearly intended for editors, including multi-camera editing. You can now organize clips and timelines into custom bins, add metadata, assign sortable color flags and other metadata values, and automatically filter clips into Smart Bins. You can sync grouped clips (double-system sound) and multi-camera clips using in-points, timecode, or audio waveforms. The multi-cam editing routine is similar to other NLEs, where you drop a multi-cam clip onto your main timeline and then cut between camera angles.

Blackmagic placed a lot of attention on timeline trim functions. It’s now possible to do some very elaborate asymmetrical trims of multiple clips. Slip/slide trimming and split audio is all very easy and fluid. There is no trim window, so on-the-fly JKL trimming – a la Media Composer – isn’t possible. When you trim via the mouse or keyboard, you get a 2-up preview in the viewer and a 4-up display when slipping and sliding clips. You can access a curve editor in the timeline for transitions, which lets you control the transition acceleration. When you select source clips in the list view mode of the browser, you get a skimmable filmstrip of the selected clip, much like in FCP X.

Video effects are still based on OpenFX, so any third-party filters and transitions that offer OFX host support (FilmConvert, BorisFX, NewBlueFX, etc.) will show up in either the Edit page effects palette or the Color page, depending on whether the filter is something that requires a color correction node in order to be applied. Blackmagic also includes its own toolbox of effects and transitions, including the new Smoothcut transition. This is a morphing dissolve designed to smooth jump cuts between edited soundbites from on-camera interviews. It is similar to Adobe’s Morph Cut or Avid’s FluidMorph, but seems to rely more heavily on GPU processing. Therefore, you don’t have to wait until a lengthy analysis pass is completed before you can review the results. As with all of these effects, real-world results vary with how close the alignment is on both sides of the cut. It tends to work best with a duration of two to four frames.

Audio went through big changes in Resolve 12 to improve performance and to add features. VST and AU plug-ins are supported. Any that are installed on your system will show up in the audio effects palette. Effects can be applied to clips or tracks and there’s automation-style track mixing. The way audio tracks are implemented seems confusing to me – especially audio track patching. Tracks can be mono, stereo, 5.1, or adaptive, but there’s no indication in the timeline window as to what type of track it is. When you edit a multi-cam clip to the timeline and the source audio contains several channels, it is no longer possible to break those clips apart or access individual channels from the timeline. Both Adobe and Apple use similar methods, but each has a better approach in its implementation. As in Premiere Pro, it is best to start out by properly setting the source audio channel configuration in the clip properties menu for each clip. You can access this in the Media page.

Other improvements

DaVinci Resolve 12 is not only about editing. Since Resolve is used a lot as a DIT tool to generate dailies, there’s a new capability in the Media page to apply color space changes and camera LUTs to a group of clips. If you shot log-encoded footage and apply a Rec709 LUT on the Media page, you’ll now see the corrected color throughout. The downside is that such LUTs are not visible on the Color page and can’t be removed in any of the color adjustment nodes.

The new blue and greenscreen 3D keyer is accessible on the Color page. It yields high-quality results and is aided by new matte finesse controls, plus Resolve’s great masking and tracking capabilities. There’s also improved ACES support, better shot-matching between clips, and more.

Resolve 12 uses a central database to house all project files. This makes it harder to move files between users than with other NLEs. Previous versions let you export Resolve projects to move them to other systems, but now Resolve 12 adds copy, move, transcode, relink, and consolidate functions. Support for FCPXML (for projects offline-edited using FCP X) has been updated to the newest version of this format.

There had been a bug in how Resolve wrote FCPXML files, so going back into FCP X from Resolve exhibited relinking issues. This only occurred when importing on a different machine than where the files were generated. This bug appears to have been fixed in version 3 of the public beta build.

To include another tool for editors, Blackmagic added an AAF export to Pro Tools feature. I don’t have Pro Tools, so I wasn’t able to test the export properly. All audio clips are exported in .MXF format, which means many applications can’t play the audio. For example, when I imported the AAF into Apple Logic Pro X, the track sheet was blank. I have been able to send audio from Final Cut Pro X into Logic Pro X using X2Pro Audio Convert to create an AAF.

Performance

Real-time media performance is critical to a good editing experience. Resolve 12 is optimized for hardware using the PCIe 3.0 bus, which supports greater bandwidth. Older Mac Pro towers or Windows computers that use PCIe 2.0 are going to be challenged when loaded with PCIe cards. You see this mainly in the Edit page, because more things are going on in the interface on that page. Windows users with the newest hardware and Mac users who own new “trash can” Mac Pros will most likely have a better editing experience than owners of legacy machines.

I experienced choppy video in the viewer of the Edit page, even though output through the DeckLink was fine. Ironically, the viewer and video output were smooth on the other pages. After I consulted with Blackmagic, the following recommendations gave me the performance I would expect out of an NLE: run in the single-screen layout, close the audio mixer panel, close the audio meters, and/or switch the video monitoring setting to 8-bit. Of these, the mixer suggestion made the biggest difference. The ability to create on-the-fly, low-resolution proxies for editing wasn’t enabled with the first few builds of the public beta. It was turned on in build three. This gives you similar results to that of other NLEs running in a half-resolution, quarter-resolution or “dynamic real-time” mode.

One common mistake that I see users make, when I read some of the internet forum posts, is that they load up the timeline clips with color correction nodes and still expect real-time editing performance. Physics hasn’t changed. Adding effects and color correction to clips is going to negatively impact playback. As a general rule, get all of your editing done first and then save your color correction until last. You’ll be a lot happier.

Final thoughts

Once the official Resolve 12 release rolls out, we’ll see where it finds a place as an editor. This release won’t sway editors who are currently happy with one of the other popular NLEs to switch to Resolve 12 as their main axe. However, I suspect it will increasingly become the finishing tool of choice – probably edging out Autodesk Smoke over time. Now that the editing tools and performance are there, it becomes the ideal application for final edit revisions, grading, and mastering. It can already combine lists and media from a range of creative editing systems.

The other element in this equation is Fusion, the node-based compositing application that Blackmagic picked up from EyeOn. There’s already a connecting plug-in between Fusion and NLE timelines, which Avid editors have enjoyed. With a bit more development time, I could clearly see some integration between Resolve and Fusion. That might be why “Studio” is now part of the name change. Hmmm…

When Resolve 11 came out, it, too, was touted as an editor. My critical assessment was that it was a grading tool that could be used as an editor, but you wouldn’t want to. With Resolve 12, Blackmagic has produced an application that is both a grading tool and an editor. I could easily see myself using it as my secondary NLE. There is certainly great synergy between Final Cut Pro X and Resolve. Why not have both in your arsenal?

For many new users, the enticement of a free editing application is hard to resist. Not to mention that it is cross-platform and unfettered by a software subscription business model. Blackmagic Design’s development pace since acquiring the product has clearly been impressive. This makes me believe that Resolve will find a new audience willing to use it as their primary creative tool for start-to-finish post production.

Click here for a look back at Resolve 11, which will give you an additional insight into some of Resolve’s feature set.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Greater LUT Control with Koji Advance

For folks who like to use film emulsion LUTs (look up tables), Koji Color has recently updated its product line with Koji Advance. Koji Color is a collaboration between Dale Grahn – the highly regarded film lab timer (the film equivalent of a colorist) behind many blockbusters – and plug-in developer Crumplepop. Other products have included an iPad application and an earlier version of the Koji Color plug-in. (Click on any image in this post for an expanded view.)

Typically LUT packages require camera patch LUT files (to correct for each manufacturer’s log encoding scheme) and the “look” file. Some LUT developers split these into two sets of LUTs, while others combine both into a single 3D LUT file. Koji combines their LUTs, so each file is specific to a camera manufacturer and film stock type. The original version of the Koji Color plug-in was designed for Apple Final Cut Pro X and came in the form of two products – Koji DSLR and Koji Log. The lower cost DSLR package used emulation presets designed for Rec709 video signals. The Log package cost a bit more and added files and presets to be used with log gamma encoding, like ARRI Log-C. The FCP X plug-in itself also allowed for control over shadow, mid, and highlight exposure, plus saturation and a Film Stock Mix slider. The Mix slider controlled the amount of the LUT plug-in that was mixed into the image.
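
To make the “combined LUT” idea concrete, here is a minimal Python sketch of what any host application does with one of these .cube files: parse the 3D table and look up each pixel. The parser is simplified (it ignores DOMAIN_MIN/MAX and uses nearest-neighbor rather than trilinear interpolation), and the file name is hypothetical.

```python
import numpy as np

def load_cube(path):
    """Parse a minimal .cube 3D LUT. Returns (size, table) where
    table[b, g, r] holds an output RGB triple; in the .cube format
    the red index varies fastest."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line and (line[0].isdigit() or line[0] == "-"):
                rows.append([float(v) for v in line.split()])
    return size, np.array(rows).reshape(size, size, size, 3)

def apply_lut(img, size, table):
    """Nearest-neighbor lookup. img: float RGB array in 0..1, shape (..., 3)."""
    idx = np.clip(np.round(img * (size - 1)).astype(int), 0, size - 1)
    return table[idx[..., 2], idx[..., 1], idx[..., 0]]

# Hypothetical file name -- any .cube from the installed Koji folder would work:
# size, table = load_cube("Koji_LogC_2393.cube")
# graded = apply_lut(footage, size, table)
```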

Koji Advance has replaced both the DSLR and Log plug-ins and added more controls and film grain. It is now also compatible with Motion, Premiere Pro CC, and After Effects CC, along with Final Cut Pro X. Koji Color also sells Koji Studio, which is a package of technical versions of these same LUTs intended for facilities outputting to DCI-P3 colorspace. It includes all of the Advance features as part of the package.

All packages include presets built around one black-and-white and five color print film stocks. These presets were based on research intended to faithfully reproduce the look of specific Fuji and Kodak print stocks as a medium. 2302 is a black-and-white stock. 2393 is considered by Grahn to be the best print film made. 2383 is similar, but warmer. The other three options are on the cooler side. S versions are more saturated, N versions are more neutral, and LC is low contrast. There is also a 2302 HC (high contrast) black-and-white stock.

At this point, it’s important to understand that these LUTs are not designed as creative looks like you’ll find in many other LUT products on the market. The application of any of the LUTs adds the color character of that medium and forms a starting point for your color grade.

When you install Koji Advance, you can opt to install it into any or all of the available host applications. A folder of the Koji 3D LUT files in the .cube format is also installed to your desktop. These are available to be used with other applications that allow LUT files to be imported, like DaVinci Resolve, Avid Media Composer, or Autodesk Smoke. You can move or copy this folder to any location you like.

Fine-tuning the look

To use Koji Advance, drop the plug-in effect onto your clip. The first two choices you need to make are the camera preset and film stock. Pick these from the pulldown menus at the top of the control panel. If your footage is from a specific camera encoding scheme, such as ARRI Log-C or RedLogFilm, select the matching choice. If it is already a Rec709 color profile, then select the generic Rec709 choice. Various DSLR camera types also have available options. The film stock selector lets you choose from a number of presets based on the six film stocks and their variants. The LC preset is a brighter version that is more conducive to downstream color correction, which may be added on top of the LUT filter. As before, there’s a Film Stock Mix slider to control how much of this look is being applied to the image.

The next series of sliders in the panel turns Koji Advance into a full-on color correction plug-in. You can opt for automatic white balance or manual control. If you pick auto, the controls still let you adjust the image further. There’s a Kelvin-based color temperature slider to warm up or cool off the image. Next are the three lift/gamma/gain controls, which are similar to the exposure sliders in the previous version. These act much like level controls in other applications and plug-ins: lift adjusts shadow/black levels, gamma the midrange, and gain the highlights.
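
The exact math is Koji’s internal business, but one common lift/gamma/gain formulation, offered here purely as an illustrative sketch, looks like this:

```python
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """One common LGG formulation (not necessarily Koji's exact math).
    x: float image in 0..1. Lift pivots at white (moves the blacks),
    gain pivots at black (scales the highlights), gamma bends the midtones."""
    x = x * gain + lift * (1.0 - x)
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

ramp = np.linspace(0.0, 1.0, 5)
print(lift_gamma_gain(ramp, lift=0.05, gamma=1.2, gain=0.95))
```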

Density is a film-style control that’s probably unfamiliar to most video operators. It effectively works like an offset control that moves the whole signal higher or lower as you watch the videoscope. Using the density control doesn’t affect saturation in the same way as changing a lift/shadow control. Something to keep in mind is that Lift and Gain will clip the image at 0% and 100% on the scope. Density can move the image into the overshoot and undershoot areas below 0 and above 100. This is actually a good thing, because it preserves the full dynamic range of the image during the processing pipeline; however, it needs to be corrected before any broadcast output. Therefore, when you make extreme adjustments, it’s a good idea to use a broadcast safe filter on the final output.
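
The distinction is easy to see in a toy example. Assuming the offset behaves as a straight add (my reading of the control, not Koji’s documented math), density preserves undershoot for later stages, while a lift-style change is clipped in place:

```python
import numpy as np

pixels = np.array([0.02, 0.10, 0.50, 0.95])

# Density-style offset: the whole signal slides; undershoot survives.
density = pixels - 0.05          # [-0.03  0.05  0.45  0.9 ]

# Lift-style change: pivots at white and is clipped at 0%.
lift = np.clip(pixels - 0.05 * (1.0 - pixels), 0.0, 1.0)

# Before broadcast delivery, legalize the density result:
legal = np.clip(density, 0.0, 1.0)
```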

Saturation controls the chroma level, but only for the color of the image itself, not the coloration added by the LUT file. In other words, if a film stock preset is designed to increase blue in the image and is thus a cooler tone, cranking the saturation all the way down will not result in a true black-and-white image. It will still have a slight blue cast. Only the black-and-white presets are truly black-and-white.
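
This follows from the processing order: the saturation control appears to act upstream of the stock’s coloration, so the LUT re-tints even a fully desaturated frame. A toy sketch of that ordering, with the LUT reduced to a simple blue-leaning offset (purely hypothetical numbers):

```python
import numpy as np

def desaturate(rgb, amount=1.0):
    """Pull chroma toward Rec. 709 luma; amount=1.0 is fully gray."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    return rgb + amount * (luma[..., None] - rgb)

def toy_cool_stock(rgb):
    """Stand-in for a cooler film stock: nudges blue up, red down."""
    return rgb + np.array([-0.02, 0.0, 0.04])

pixel = np.array([[0.6, 0.4, 0.3]])
print(toy_cool_stock(desaturate(pixel)))  # gray in, blue-tinted out
```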

The last three color controls are printer point sliders. Again, this is for film-style “color timing” (color correction). The controls work globally for the whole image, so there are no separate color controls for shadows, midtones, and highlights. It works a lot like a single-wheel color corrector. Colors are grouped according to their opposites with sliders for red/cyan, green/magenta, and blue/yellow. To use these controls effectively, it’s best to understand how they work by viewing a vectorscope. If you slide the red/cyan slider all the way to red, it doesn’t increase the intensity of only reds within the image. It shifts the balance of the whole image towards red. Look at the vectorscope and you’ll see the entire chroma signal slide towards the red vector. Same for the other colors.

I’ve seen a few online comments questioning why there isn’t a color wheel here instead of sliders. Apart from the UI issue (especially with design limitations in FCP X), it’s effectively the same thing. Let’s say on a system with color wheels you want to shift the balance towards orange. That’s halfway between red and yellow on the vectorscope. In Koji Advance, you would simply adjust the sliders for more red and more yellow, which results in a combined orange look. These are two different methods to achieve the same goal, but sliders offer the advantage of a numerical value, which is easier to repeat for consistent results.
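
The slider math is just per-channel arithmetic, which is why the combination works. A minimal sketch of printer-point-style global offsets (illustrative math, not Koji’s internals):

```python
import numpy as np

def printer_points(rgb, red_cyan=0.0, green_magenta=0.0, blue_yellow=0.0):
    """Global balance shifts, one slider per opposing color pair.
    Positive values push toward red, green, or blue respectively."""
    return rgb + np.array([red_cyan, green_magenta, blue_yellow])

gray = np.array([[0.5, 0.5, 0.5]])
# "Orange" = more red plus more yellow (yellow = less blue):
print(printer_points(gray, red_cyan=0.05, blue_yellow=-0.05))
# -> [[0.55 0.5  0.45]] -- the whole image shifts toward orange
```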

The last section, which is new for Koji Advance, is film grain. They’ve picked five stock choices ranging from finer to coarser grain. Since adding grain contaminates the image, the grain section includes three adjustment controls – Film Grain Contrast, Film Grain Saturation, and Film Grain Mix. These let you dial in how subtle the presence of grain is within your shot.
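
As a rough sketch of how three knobs like these typically interact (hypothetical math, just to illustrate the roles of contrast, saturation, and mix):

```python
import numpy as np

def add_grain(rgb, contrast=0.5, saturation=0.5, mix=0.5, seed=0):
    """Synthesize noise, then scale its strength (contrast), how
    colored it is (saturation), and how much blends in (mix)."""
    rng = np.random.default_rng(seed)
    color_noise = rng.normal(0.0, 0.05, rgb.shape)
    mono = color_noise.mean(axis=-1, keepdims=True)
    grain = mono + saturation * (color_noise - mono)  # 0 = monochrome grain
    return np.clip(rgb + mix * contrast * grain, 0.0, 1.0)

frame = np.full((4, 4, 3), 0.5)
grainy = add_grain(frame, contrast=0.6, saturation=0.2, mix=0.8)
```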

In use

I’ve used the Koji film emulsion looks on previous jobs and they add a nice touch when it’s appropriate. Think of it as the equivalent of audio engineers working with digital and analog recording systems. Analog tape is said to sound warmer, but that’s because the medium adds its own sonic character to the recording. Many engineers will record digitally, but then use analog somewhere in the final stages. Or they’ll use a plug-in that emulates the attributes and coloration that analog tape recording gives to the sound. Using a film stock emulation to add character is exactly the same thing. The Koji LUTs are subtle enough that you’ll use them more frequently than some of the other choices. The controls offered by the plug-in enable you to do all the work within Koji’s panel, if you choose.

That being said, LUTs should be used as part of the grading process, not as the entire process by themselves. Typically I use the Koji plug-in together with other color correction tools, so it’s important to see how it affects the signal when it’s part of a stack of several plug-ins. In FCP X, Hawaiki Color is still one of my favorite color correction tools. I like the on-screen controls, the tools are comprehensive, and the results are very pleasing. As a test, I stacked the two filters – Hawaiki Color, then Koji Advance. This let me grade upstream of Koji and use the two filters interactively.

An issue I ran into was signal clipping. Hawaiki Color also permits overshoot and undershoot, meaning that dark areas can be pushed below zero on the scope, and video can be crushed for extreme contrast. This caused some sparkling color artifacts once those extreme levels hit the Koji plug-in. However, it was easily solved by selecting Legalize in the Hawaiki Color controls. If you do this with another filter that doesn’t have a “clip” or “legalize” option and you encounter the same issue, use any filter that does level clipping, such as the Broadcast Safe filter. Place it between your color correction filter and the Koji Advance plug-in and the artifacts will disappear.
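
In pipeline terms, the fix is simply a clamp between the two stages. A sketch of the idea, with stand-in functions for the two plug-ins:

```python
import numpy as np

def upstream_grade(rgb):
    """Stand-in for a corrector that crushes blacks below zero."""
    return rgb * 1.2 - 0.1

def legalize(rgb):
    """The 'Legalize' / Broadcast Safe step: clamp to the legal range."""
    return np.clip(rgb, 0.0, 1.0)

def lut_stage(rgb):
    """Placeholder for Koji Advance; out-of-range input is what
    produced the sparkling artifacts."""
    return rgb

frame = np.full((2, 2, 3), 0.05)
safe = lut_stage(legalize(upstream_grade(frame)))   # artifact-free order
risky = lut_stage(upstream_grade(frame))            # feeds negatives downstream
```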

Koji Advance’s performance seems fine on the systems and shots I’ve tested. It’s an easy plug-in to understand and use, and it will quickly become a tool you use on every production.

©2015 Oliver Peters