Color Concepts and Terminology


It’s time to dive into some of the terms and concepts that brought you modern color correction software. First of all – color grading versus color correction. Many use these terms to identify different processes, such as technical shot matching versus giving a shot a subjective “look”. I do this too, but the truth of the matter is that the two terms are interchangeable. Grading tends to be the more European name for the process, but it is the same as color correction.

All of our concepts stem from the film lab process known as color timing. Originally this described the person who knew how long to leave the negative in the chemical bath to achieve the desired result (the “timer”). Once the industry figured out how to manipulate color in the negative-to-positive printing process, the “color timer” was the person who controlled the color analyzer and who dialed in degrees of density and red/blue/green coloration. The Dale Grahn Color iPad application will give you a good understanding of this process. Alexis Van Hurkman also covers it in his “Color Correction Handbook”.

Electronic video color correction started with early color cameras and telecine (film-to-tape transfer or “film chain”) devices. These were based on red/blue/green color systems, where the video engineer (or “video shader”) would balance out the three components, along with exposure and black level (shadows). He or she would adjust the signal of the pick-up systems, including tubes, CCDs and photoelectric cells.

RCA added circuitry onto their cameras called a chroma proc, which divided the color spectrum according to the six divisions of the vectorscope – red, blue, green, cyan, magenta and yellow. The chroma proc let the operator shift the saturation and/or hue of each one of these six slices. For instance, you could desaturate the reds within the image. Early color correction modules for film-to-tape transfer systems adopted this same circuitry. The “primary” controls manipulated the actual pick-up devices, while the “secondary” controls were downstream in the signal chain and let you further fine tune the color according to this simple, six-vector division.


Early color correction systems were built to transfer color film to air or to videotape. They were part machine control and part color corrector. Modern color correction for post production came to be because of three key advances: memory storage, scene detection and signal decoding.

Memory storage. Once you could store and recall color correction settings, it became easy to go back and forth between camera angles or shots and apply a different setting to each. Or you could create several looks and preview those for the client. The addition of this technology was the basis for a seminal patent lawsuit, known as the Rainbow patent suit, as the battle raged over who first developed this technology.

Scene detection. Film transfer systems had to play in real-time to be recorded to videotape, which meant that shot changes had to trigger the change from one color correction setting to the next. Early systems did this via the operator manually marking an edit point (called “notching”), via an EDL (edit decision list) or through automatic scene detection circuitry. This was important for the real-time transfer of edited content, including film prints, cut negative and eventually videotape programs.

Signal decoding. The ability of color correction systems to decode a composite or component analog (and later digital) signal through added hardware shifted color correction from camera shading and film transfer to being another general post production tool at a post facility. The addition of a signal decoder board in a DaVinci unit split the input signal into RGB parts and enabled the colorist to enhance the correction of an already-edited master using the “secondary” signal electronics of the system. This enabled “tape-to-tape” color correction of edited masters. Thanks to scene detection or an EDL, corrections could be applied shot-to-shot and frame-accurately in real-time playback, with the corrected output re-encoded to a second videotape master.

Eventually the tools used in hardware-based, tape-to-tape color correction systems became standard. Quantel and Avid led the way by being first to incorporate these features into their nonlinear editing software.


Color correction software tends to break up its controls into primary and secondary functions. As you can see from the earlier explanations, there’s really no reason to do that, since we are no longer controlling the pick-up devices within a camera or telecine. Nevertheless, it’s terminology we seem to be comfortable with. Often secondary controls enable masking and keys to isolate color – not because it has to be that way, but because DaVinci added these features into their secondary control set. In modern correction tools, any function can happen on any layer, node, room, etc.

The core language for color manipulation still boils down to the simple controls exemplified by the Dale Grahn app. A signal can be made brighter, darker, more or less “dense” (contrast) and have its colorimetry shifted by adding or subtracting red, blue or green for the overall image or in the highlight, midrange or shadow portions of the image. This basic approach can be controlled through sliders, knobs, color wheels and other user interfaces. Different software applications and plug-ins get to the same point through different means, so I’ll cover a few approaches here. Bear in mind that, since some of these represent somewhat different color science and math, the examples I present might not yield exactly the same results. Many controls are equivalent in their effect, though not necessarily identical in how they affect the image.

A common misconception is that shadow/mid/highlight controls on a 3-way color corrector will evenly divide the waveform into three discrete ranges. In fact, these are very large, overlapping ranges that interact with each other. If you shift a shadow luminance control up, it doesn’t typically just expand or compress the lower third of the waveform. Although some correctors act this way, most tend to shift the whole waveform up or down. If you change the color balance of the midrange, this color change will also affect shadows and highlights. The following is a quick explanation of some of the popular color control models.


Contrast and temperature controls have recently become more popular and are considered a more photographic approach to correction. When you adjust contrast, the image levels expand or stretch as viewed on a waveform. Highlights get brighter and shadows deepen. This contrast expansion centers on a pivot point, which by default is at the center of the signal. If you change the pivot slider you are shifting the center point of this contrast expansion. In one direction, this means the contrast control will stretch the range below the pivot point more than above it. Shift the pivot slider in the other direction for the opposite effect.
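As a rough sketch, the contrast/pivot relationship boils down to a linear stretch around the pivot. This Python example (with levels normalized to a 0.0–1.0 range) is my own illustration, not any particular application’s implementation:

```python
def apply_contrast(value, contrast, pivot=0.5):
    """Expand or compress a normalized (0.0-1.0) level around a pivot.

    contrast > 1.0 stretches levels away from the pivot (deeper shadows,
    brighter highlights); contrast < 1.0 compresses them toward it.
    """
    return (value - pivot) * contrast + pivot

# With the default center pivot, a contrast of 2.0 pushes a bright
# value brighter and a dark value darker:
print(apply_contrast(0.75, 2.0))  # 1.0
print(apply_contrast(0.25, 2.0))  # 0.0
# Lowering the pivot stretches the range above it more than below:
print(apply_contrast(0.75, 2.0, pivot=0.25))  # 1.25 (before any clipping)
```

Real correctors clip or soft-roll the results back into legal range; the bare math is shown here to make the pivot’s role visible.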

Color temperature and tint (also called magenta) controls balance the red/blue/green signal channels in relationship to each other. If you slide a color temperature control while watching an RGB parade display on a waveform, you’ll note that adjustments shift the red and blue channels up or down in the opposite direction to each other, while leaving green unaffected. When you adjust the tint (or magenta) slider, you are adjusting the green channel. As you raise or lower the green, both the red and blue channels move together in a compensating direction.
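A minimal sketch of that channel behavior in Python (the function names and the equal-weight offsets are illustrative assumptions; real implementations weight the channels differently):

```python
def adjust_temperature(r, g, b, temp):
    """Warmer (temp > 0) raises red and lowers blue by the same amount,
    leaving green untouched; cooler (temp < 0) does the reverse."""
    return r + temp, g, b - temp

def adjust_tint(r, g, b, tint):
    """Tint (magenta/green) raises or lowers green, while red and blue
    move together in the compensating direction."""
    return r - tint, g + tint, b - tint

# Warming a neutral gray pushes red up and blue down on the parade:
print(adjust_temperature(0.5, 0.5, 0.5, 0.1))
```

Watching an RGB parade while running values through functions like these mirrors what the description above says you would see on a scope.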


The SOP model is used for CDL (color decision list) values. It breaks down the signal into luma (master), red, green and blue components, with adjustments expressed as plus or minus values for slope, offset and power. Scratch Play’s color adjustments are a good example of the SOP model in action. Slope is equivalent to gain. Picture the waveform as a diagonal line from dark to light. As you rotate this imaginary line, the upper portion becomes taller, representing brighter values. Think of the slope concept as this rotating line. As such, its results are comparable to a contrast control.

The offset control shifts the entire signal up or down, similar to other shadow or lift controls. The power control alters gamma. As you adjust power, the gamma signal is curved in a positive or negative direction, effectively making the midrange tones lighter or darker.


The LGG model is the common method used for most 3-way color wheel-style correctors. It effectively works in a similar manner to contrast and SOP, except that the placement of controls makes more sense to most casual users. Gain, as the name implies, increases the signal, effectively expanding the overall values and making highlights brighter. Lift shifts the entire signal higher or lower. Changing a lift control to darken shadows will also have some effect on the overall image. Gamma bends the curve and effectively makes the midrange values lighter or darker.
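Lift/gamma/gain conventions vary between applications, but one common formulation looks like this in Python (an illustrative sketch, not any specific vendor’s math):

```python
def lift_gamma_gain(value, lift=0.0, gamma=1.0, gain=1.0):
    """Gain scales the signal (stretching values toward brighter
    highlights), lift offsets the entire signal, and gamma bends the
    midtones. In this convention a gamma above 1.0 brightens the mids,
    because the exponent 1/gamma drops below 1.0."""
    v = value * gain + lift
    return max(v, 0.0) ** (1.0 / gamma)

print(lift_gamma_gain(0.25, gain=2.0))   # 0.5 - gain stretches upward
print(lift_gamma_gain(0.25, gamma=2.0))  # 0.5 - gamma lifts the mids
```

Note how a lift change moves every value, which matches the observation above that darkening shadows with lift also affects the overall image.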

Luma ranges

The portions of the signal altered by highlight/shadow/midrange controls (whether SOP, LGG or others) overlap. If you change the color balance for the midrange tones, you will also contaminate shadows and highlights with this color shift. The extent of the portion that is affected is governed by a luma range control. Many color correction applications do not let you shift the crossover points of these luma ranges. Some that do include Avid Symphony, Synthetic Aperture Color Finesse and Adobe SpeedGrade. Each offers curves or sliders to reduce or expand the area controlled by each luma range, effectively tightening or widening the overlap or crossover between the ranges.
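To make the idea of overlapping ranges concrete, here is a toy Python sketch of shadow/mid/highlight weighting. The smoothstep curve, the default crossover points and the softness value are all arbitrary choices for demonstration; real grading applications use their own, often proprietary, curves:

```python
def luma_weights(luma, low=0.33, high=0.66, softness=0.2):
    """Toy overlapping shadow/mid/highlight weights for a 0.0-1.0 luma.
    low/high set the crossover points; softness widens the overlap."""
    def smooth(edge, x):
        # Soft 0-to-1 ramp of width 'softness', centered on 'edge'.
        t = min(max((x - edge + softness / 2) / softness, 0.0), 1.0)
        return t * t * (3 - 2 * t)  # smoothstep
    highlights = smooth(high, luma)
    shadows = 1.0 - smooth(low, luma)
    mids = 1.0 - shadows - highlights  # the three always sum to 1.0
    return shadows, mids, highlights

# Pure black is all shadow; a mid-gray value is all midrange:
print(luma_weights(0.0))  # (1.0, 0.0, 0.0)
print(luma_weights(0.5))  # (0.0, 1.0, 0.0)
```

Any luma value near a crossover point gets a partial weight from two ranges at once, which is exactly why a midrange balance change bleeds into shadows and highlights.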

DaVinci Resolve includes a similar function within its log-style color wheels panel. It uses range adjustments that can limit the area affected by the balance and saturation controls. Similar results may be achieved by using HSL keyers or qualifiers that include softening controls.

Channels or printer lights

Video signals are made up of red, blue and green channel information. It is not uncommon for properly-balanced digital cameras to still maintain a green color cast to the overall image, especially if log-profile recording was used. Here, it’s best to simply balance the overall channels first to neutralize the image, rather than attempt to do this through color wheel adjustments. Some software uses actual channel controls, so it’s easy to make a base-level adjustment to the output or mix of a channel. If your software uses printer lights, you can achieve the same results. Printer lights hark back to lab color timing, using point values that equate to color analysis values. Regardless, dialing in a plus or minus red/blue/green printer light value effectively gives you the same results as altering the output value of a specific color channel.
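Whether the interface exposes channel outputs or printer-light points, the net operation is an offset per channel. A hypothetical Python sketch (the 0.025-per-point step size is an assumed value purely for illustration, not an actual lab calibration):

```python
def apply_printer_lights(r, g, b, red_pts=0, green_pts=0, blue_pts=0,
                         step=0.025):
    """Illustrative printer-light adjustment: each point adds or subtracts
    a fixed offset on one channel. The step size is an assumption for
    demonstration; real systems define their own increments."""
    return (r + red_pts * step,
            g + green_pts * step,
            b + blue_pts * step)

# Neutralize a slight green cast by pulling green down two points:
r, g, b = apply_printer_lights(0.40, 0.45, 0.40, green_pts=-2)
print(r, g, b)
```

Balancing the channels this way first, before reaching for the color wheels, matches the workflow suggested above for log footage with a green cast.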

This is just a short post to go over some of the more confusing terminology found in modern color correction software. Many applications tend to blend the color science models, so as you apply the points mentioned to your favorite tool, you may see somewhat different results. Hopefully I’ve gotten you in the ballpark, in order to understand what happens when you twirl the knob the next time.

©2014 Oliver Peters

The Wolf of Wall Street


Few directors have Martin Scorsese’s talent to tell entertaining stories about the seamier side of life. He has a unique ability to get us to understand and often be seduced by the people who live outside of the accepted norms. That’s an approach he’s used with great success in films like Taxi Driver, Goodfellas, Gangs of New York and others. Following this path is Scorsese’s newest, The Wolf of Wall Street, based on the memoir of stock broker Jordan Belfort.

Belfort founded the brokerage firm Stratton Oakmont in the 1990s, which eventually devolved into an operation based on swindling investors. The memoir chronicles Belfort’s excursions into excesses and debauchery that eventually led to his downfall and federal prosecution for securities fraud and money laundering. He served three years in federal prison and was sentenced to pay $110 million in restitution after cooperating with the FBI. The film adaptation was written by Terence Winter (Boardwalk Empire, The Sopranos), who himself spent some time working in a tamer environment at Merrill Lynch during law school. Leo DiCaprio stars as Belfort, along with Jonah Hill and Matthew McConaughey as fellow brokers. (Note: Due to the damage caused by the real Belfort and Stratton Oakmont to its investors, the release of the film is not without its critics.)

I recently spoke with Thelma Schoonmaker, film editor for The Wolf of Wall Street. Schoonmaker has been a long-time collaborator with Martin Scorsese, most recently having edited Hugo. I asked her how it was to go from such an artistic and technically complex film, like Hugo, to something as over-the-top as The Wolf of Wall Street. She explained, “When I encounter people outside of this industry and they learn I had some connection with Hugo, they make a point of telling me how much they loved that film. It really touched them. The Wolf of Wall Street is a completely different type of film, of course.”

“I enjoyed working on it, because of its unique humor, which no one but Scorsese expected. It’s highly entertaining. Every day I’d get these fantastically funny scenes in dailies. It’s more of an improvisational film like Raging Bull, Casino or Goodfellas. We haven’t done one of those in a while and I enjoyed getting back to that form. I suppose I like the challenge, because of the documentary background that Marty and I have from our early careers. Continuity doesn’t always match from take to take, which is what makes the editing great fun, but also hard. You have to find a dramatic shape for the improvised scenes, just as you do in a documentary.”

Schoonmaker continued, “The scenes and dialogue are certainly scripted and Scorsese tells the actors that they need to start ‘here’ and end up ‘there’. But then, ‘have fun with the part in the middle’. As an editor, you have to make it work, because sometimes the actors go off on wonderful tangents that weren’t in the script. The cast surrounding Belfort and his business partner, Donnie Azoff (played by Jonah Hill), very quickly got into creating the group of brokers who bought into the method Belfort used to snag investors into questionable stock sales. They are portrayed as not necessarily the smartest folks and Belfort used that to manipulate them and become their leader.  This is fertile ground for comedy and everyone dove into their parts with incredible gusto – willing to do anything to create the excess that pervaded Belfort’s company.  They also worked together perfectly as an ensemble – creating jealousies between themselves for the film.”

The Wolf of Wall Street is in many ways a black comedy. Schoonmaker addressed the challenges of working with material that portrayed some pretty despicable behavior. “Improvisation changed the nature of this film. You could watch the actors say the most despicable things in a take and then they’d crack up afterwards. I asked Leo at one point how he could even say some of the lines with a straight face! Some of it is pretty bizarre, like talking about how to create a dwarf-tossing contest, which Belfort organized as morale boosting for his office parties. Or offering a woman $10,000 to shave her head. And this was actually done in dead seriousness, just for sport.”

In order to get the audience to follow the story, you can’t avoid explaining the technical intricacies of the stock market. Schoonmaker explained, “Belfort started out selling penny stocks. Typically these have a fifty percent profit margin, compared with blue chip stocks that might only have a one percent profit margin. Normally poorer investors buy penny stocks, but Belfort got his brokers to transfer those sales techniques to richer clients, who were first sold a mix of blue chip and penny stocks. From there, he started to manipulate the penny stocks for his own gain, ultimately leading to his downfall. We had to get some of that information across, without getting too technical. Just enough – so the audience could follow the story. Not everything is explained and there are interesting jumps forward. Leo fills in a lot of this information with his voice-overs. The writing of these voice-overs gave Leo’s character additional flavor, reinforcing his greed and callousness. A few times Scorsese would have Leo break the fourth wall by talking directly to the audience to explain a concept.”

The Wolf of Wall Street started production in 2012 for a six-month-long shoot and completed post in November 2013. It was shot primarily on 35mm film, with additional visual effects and low-light material recorded on an ARRI ALEXA. The negative was scanned and delivered as digital files for editing on a Lightworks system.

Schoonmaker discussed the technical aspects. “[Director of Photography] Rodrigo Prieto did extensive testing of both film and digital cameras before the production. Scorsese had shot Hugo with the ALEXA, and was prepared to shoot digitally, but he kept finding he liked the look of the film tests best. Rob Legato was our visual effects supervisor and second unit director again. This isn’t an effects film, of course, but there are a lot of window composites and set extensions. There were also a lot of effects needed for the helicopter shots and the scenes on the yacht. Rob was a great collaborator, as always.

Scott Brock, my associate editor, helped me with the temp sound mixes on the Lightworks and Red Charyszyn was my assistant handling the complex visual effects communication with Rob. They both did a great job.” Scott Brock added some clarification on their set-up. According to Brock, “The lab delivered the usual Avid MXF media to us on shuttle drives, which we copied to our EditShare Xstream server.  We used two Avids and three Lightworks for Wolf, all of which were networked to the Xstream server.  We would use one of the Avids to put the media into Avid-style folders, then our three Lightworks could link to that media for editing.”

Schoonmaker continued, “I started cutting right at the beginning of production. As usual, screening dailies with Scorsese was critical, for he talks to me constantly about what he has shot. From that and my own feelings, I start to edit. This was a big shoot with a very large cast of extras playing the brokers in the brokerage bullpens. These extras were very well-trained and very believable, I think. You really feel immersed in the world of high-pressure selling. The first cut of the film came in long, but still played well and was very entertaining. Ultimately we cut about an hour out to get to the final length of just under three hours with titles.”

“The main ‘rewriting of the scenes’ that we did in the edit was because of the improvisations and the occasional need for different transitions in some cases. We had to get the balance right between the injected humor and the scripted scenes. The center of the film is the big turning point. Belfort turns a potentially damaging blow to an IPO that the company is offering into a triumph, as he whips up his brokers to a fever pitch. We knew we had to get to that earlier than in the first cut. Scorsese didn’t want to simply do a ‘rise and fall’ film. It’s about the characters and the excesses that they found themselves caught up in and how that became so intoxicating.”

An unusual aspect of The Wolf of Wall Street is the lack of a traditional score. Schoonmaker said, “Marty has a great gift for putting music to film. He chose unexpected pre-recorded pieces to reflect the intensity and craziness of Belfort’s world. Robbie Robertson wrote an original song for the end titles, but the rest of the film relies completely on existing songs, rather than score. It’s not intended to be period-accurate, but rather music that Scorsese feels is right for the scene. He listens to [SiriusXM] The Loft while he’s shaving in the morning and often a song he hears will just strike him as perfect. That’s where he got a lot of his musical inspiration for Wolf.”

Originally written for DV magazine / CreativePlanetNetwork.

©2014 Oliver Peters

LUTs and FCP X


LUTs or color look-up tables are a method of converting images from one color space or gamma profile into another. LUTs are usually a mathematically correct transform of one set of color and level values into another. For most editors and colorists, LUTs are commonly associated with log profiles that are increasingly used with various digital cameras, like an ARRI ALEXA, RED One, RED Epic or Blackmagic Design Cinema Camera.

The concept gets confusing, because there are various types of LUTs and they can be inserted into different stages of the pipeline. There are display LUTs, used to convert the viewing color space, such as from Rec. 709 (video) into P3 (digital cinema projection). These can be installed into hardware conversion boxes, monitors and within software grading applications. There are camera LUTs, which are used to convert gamma profiles, such as from log-C to Rec. 709. And finally, there are creative LUTs used for aesthetic purposes, like film stock emulation.
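Conceptually, a 1D LUT is just a table of output levels sampled at evenly spaced input levels, with interpolation in between. A minimal Python sketch (the tiny three-entry table is purely illustrative; real LUTs carry hundreds or thousands of entries, and 3D LUTs interpolate across all three channels at once):

```python
def apply_lut_1d(value, lut):
    """Apply a 1D LUT (a list of output levels for evenly spaced inputs
    between 0.0 and 1.0) with linear interpolation between entries."""
    v = min(max(value, 0.0), 1.0)          # clamp input into the LUT's domain
    pos = v * (len(lut) - 1)               # fractional index into the table
    i = int(pos)
    if i >= len(lut) - 1:
        return lut[-1]
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

# A tiny 3-entry LUT that lifts the midpoint (a crude midtone brighten):
lut = [0.0, 0.6, 1.0]
print(apply_lut_1d(0.5, lut))   # 0.6
print(apply_lut_1d(0.25, lut))  # halfway between 0.0 and 0.6
```

Whether a LUT is used for display conversion, gamma conversion or a creative look, the mechanism is the same; only where it sits in the pipeline differs.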

One of the really sweet parts of Apple Final Cut Pro X is that it offers a vastly improved color pipeline that ties in closely to the underpinnings of the OS, such as ColorSync. This offers developers opportunities over FCP “legacy” and, quite frankly, over many other competitors. Built into the code is the ability to recognize certain camera metadata, if the camera manufacturer chooses to take advantage of Apple’s SDK. ARRI, Sony and RED are among those that have done so. For example, when you import ARRI ALEXA footage that was recorded with a log-C gamma profile, a metadata flag in the file toggles on log processing automatically within FCP X. Instead of seeing the flat log-C image, you see one that has already been converted on-the-fly into Rec. 709 color space.

This built-in log processing comes with some caveats, though. The capability is only enabled with files recorded on ALEXA cameras with more recent firmware. It cannot be manually applied to older log-C footage, nor to any other log-encoded video file. It can only be toggled on or off without any adjustments. Finally, because this is done via under-the-hood ColorSync profile changes, it happens prior to the point any filters or color correction can be applied within FCP X itself.

A different approach has been developed by colorist Denver Riddle, known for his Color Grading Central website, products and tutorials. His new product, LUT Utility, is designed to provide FCP X editors with a better way of using LUTs for both corrective and creative color transforms. The plug-in installs into both Final Cut Pro X and Motion 5 and comes with a number of built-in LUTs for various cameras, such as the ALEXA, Blackmagic and even the Cinestyle profiles used with the Canon HDSLRs. Simply drop the filter onto a clip and select the LUT from the pulldown menu in the FCP X inspector pane. As a filter, you can freely apply any LUT selection, regardless of camera – plus, you can adjust the strength of the LUT via a slider. It can work within a series of filters applied to the same clip and can be placed upstream or downstream of any other filters, as well as within an adjustment layer (blank title effect). You can also stack multiple instances of the LUT with different settings on the same clip for creative effect.

The best part of LUT Utility is that you aren’t limited to the built-in LUTs. When you install the plug-in, a LUT Utility pane is added to System Preferences. In that pane, you can add additional LUTs sold by Color Grading Central or that you have created yourself. (External LUT files can be directly accessed within the filter when working in Motion 5.) One such package is the set of Osiris Digital Film Emulation LUTs developed jointly by Riddle and visionCOLOR. These are a set of nine film LUTs designed to mimic the looks of various film stocks. Each has two settings designed for either log or Rec. 709 video. For example, you can take an ALEXA log-C file and apply two instances of LUT Utility. Set the first filter to use the log-C-to-Rec.709 LUT. Then in the second filter, pick one of the film LUTs, but use the Rec. 709 version of it. Or, you could apply one instance of the LUT Utility filter and simply pick the same film LUT, but instead, select its log version. Both work, but will give you slightly different looks. Using the filter’s amount slider, it’s easy to fine tune the intensity of the effect.

LUT Utility is applied as a filter, which means you can still add other color correction filters before or after it. Applying a filter, like Hawaiki Color, prior to a log conversion LUT, means that you would be adjusting color values of the log image, before converting it into Rec. 709. If you add such a filter after the LUT, then you are grading the already-converted image. Each position will give you different results, but most of this is handled gracefully, thanks to FCP X’s floating-point processing. Finally, you can also apply the LUT as a filter and then do additional corrections downstream of the filter by using the built-in Color Board tools.

I found these LUTs easy to install and use. They appear to be pretty lightweight in how they affect FCP X playback performance. I’m running a 2009 Mac Pro with a new Mavericks installation. I can apply one or more instances of the LUT Utility filter and my unrendered ProRes media plays in real-time. With the widespread use of log and log-style gamma profiles, this is one of the handiest filter sets to have if you are a heavy FCP X user. Not only are most of the common cameras covered, but the Osiris LUTs add a nice creative edge that you won’t find at this price point in competitive products. If you use FCP X for color correction and finishing, then it’s really an essential tool.

©2014 Oliver Peters

American Hustle


Hot off of his success with Silver Linings Playbook, writer/director David O. Russell is back with the year-end release of American Hustle. The film (co-written with Eric Singer) was inspired by the true-life FBI ABSCAM sting operation of the late 1970s. It tells the story of how FBI agent Richie DiMaso (Bradley Cooper) recruits con man Irving Rosenfeld (Christian Bale) and his partner Sydney Prosser (Amy Adams) to pull off the sting. While the film builds on many of the facts of the actual events, Russell chose to make this a work of fiction to allow himself to infuse the characters with his usual unconstrained depth and richness. It’s not as much about politicians who take bribes, but rather about the personalities who develop the con that’s at the heart of the sting operation.

I interviewed Jay Cassidy, Crispin Struthers and Alan Baumgarten, American Hustle’s editing trio, at the beginning of November – just a few days after the cut was locked. Jay Cassidy pointed out the compressed time frame they were under. He explained, “American Hustle had a longer shoot and shorter post schedule than David’s usual films. This project was always intended as coming straight on the heels of Silver Linings Playbook, which Crispin and I had both worked on. In fact, we read through the first draft of the script while still cutting Silver Linings. Thanks to the awards season and the success of that film, the transition into this film became more compressed with less prep. However, the actors that David wanted for American Hustle were scheduled, so if the film was to be made this year with this cast, then the production company had to move forward. They started shooting in mid-March and wrapped in mid-May after a 42-day shoot schedule. We’ve been in post since then. I started at the beginning of principal photography, Crispin came on board four weeks later and Alan six weeks later.”

This accelerated schedule with a December release target was facilitated by the post production sound team getting an early jump on things. Headed up by sound editor/re-recording mixer John Ross, the team had been working on dialogue clean-up, sound design and music editing throughout the period from May until November. Therefore, it wasn’t a matter of starting final sound editing and mixing from scratch once the cut was finally locked in November.

Bucking the digital trend, Russell shot American Hustle on film. Cassidy explained, “David likes to shoot 2-perf 35mm. Film was the right look for this drama and 2-perf gives him 22-minute runs on the camera. This means he can keep rolling with fewer stops, so he gets longer production time before the magazine needs to be reloaded. Film’s days are definitely numbered, though. We used Fuji stock and were informed by Fuji during filming that they were discontinuing film manufacturing. Of course, they did reassure us that there would be enough negative stock available for us to complete the film without any worries!” Deluxe Labs in New York handled film processing. Company 3 in New York transferred the film for dailies and then delivered digital files on hard drives to the editing team.

Struthers continued, “David likes to dive right into post after production. We don’t watch a first assembly of the full movie as with many other directors. We tend to cut individual scenes and then David reviews those and works with us to build the scenes moment by moment. David is very confident about the editing process, so he’s covered himself in order to have options. He likes to shoot the performances with different ‘calibrations’ to the actors’ emotions to give himself choices in the cutting room.”

Cassidy added, “With David, we’ve all learned that you can’t presume to know which is the best version of an actor’s performance, because of the context around it. It’s usually better to take a scene with three variations to the performance and cut three versions of the scene. This gives David a good starting point with the dialogue and lets him see how the options work.”

Editors often face creative challenges from a film’s length or structure and American Hustle was no exception. Baumgarten explained, “We used a pattern of parallel and overlapping action to condense the film. Rather than drop whole scenes, we found that many of the important story points from those scenes could be preserved by inserting pieces of them into other scenes. This let us tell a more succinct and better story, plus frame the information into a context that makes sense for the audience. Once we did that on a few scenes and saw that it worked well for this film, we decided to find other sections where we could use the same pattern.”

Visual effects were handled in a unique fashion. Cassidy said, “This film has a surprising number of effects, including green screen composites and period fixes. Also the characters wear sunglasses. Many of those shots ended up needing some touch-up to remove unwanted reflections. The production company set up an in-house team and hired the compositors to do most of the effects. They were divided up into two groups, running [Adobe] After Effects and [The Foundry’s] Nuke software. This proved to be very cost-effective, because they handled both temp effects for screenings, as well as final effects. Towards the end, some of the more time-consuming or complex shots were sent to outside vendors, but the bulk of the work was done in-house. We had quicker turnaround for effects, because the compositors were right next door. It was a very interactive process. You could ask for an effect in the morning and have it by the end of the day.”

American Hustle was edited using Avid Media Composer systems connected to an Avid ISIS shared storage network. There were seven Avid systems in the cutting rooms for editors and assistants, one for visual effects, and one in the mix stage. John Ross also used Avid Pro Tools connected with the video satellite system. Cassidy offered his take on the technology side, “It was great working with the ISIS system. It’s Ethernet-based, so this makes it easy to add on more machines, as needed – like my laptop for editing. We were cutting with version 6.5.3 of the Media Composer software and I really like the improvements Avid made. For example, the ability to copy-and-paste audio keyframes and the new ‘select-to-the-right’ function without also grabbing timeline filler media.”

Jay Cassidy concluded, “I’m a big proponent of [Avid] Script Sync. Our first assistant, Mike Azevedo, did a great job getting this all loaded and organized. Script Sync was a real time-saver on this film. Sometimes the media was organized by the script and sometimes we had to use transcriptions. There was a lot of coverage on the film mags and this was often not in scene order, but rather all over the reel. Using Script Sync made it possible to have all of the performances grouped together by the dialogue lines of the scene. In the past, David had been used to the little bit of time it might take to find some of the coverage when he’d ask for alternates. With Script Sync it was all right there, so David could be assured that he had truly seen all of the available coverage for a scene.”

Originally written for DV magazine / CreativePlanetNetwork.

©2014 Oliver Peters