HDR and RAW Demystified, Part 1

Two buzzwords have been the highlights of many tech shows over the past year – HDR and RAW. In this first part, I will attempt to clarify some of the concepts surrounding video signals, including High Dynamic Range (HDR). In part 2, I’ll cover more about camera raw recordings.

Color space

Four things define the modern video signal: color space (aka color gamut), white point, gamma curve, and dynamic range. The easiest way to explain color space is with the standard triangular plot of the color spectrum, known as a chromaticity diagram. This chart maps the full range of colors visible to most humans onto an x,y grid. Within it are smaller regions that define the subset of colors covered by various standards. These represent the technical color spaces that cameras and display systems can achieve. On most charts, the most restrictive ranges are sRGB and Rec. 709. The former is what many computer displays used until recently, while Rec. 709 is the color space standard for high definition TV. (These recommendations were developed by the International Telecommunication Union, so Rec. 709 is simply shorthand for ITU-R Recommendation BT.709.)

Next out is P3, a standard adopted for digital cinema projection and, more recently, for new computer displays, like those on the Apple iMac Pro. While P3 doesn’t display substantially more color than Rec. 709, colors at the extremes of the range do appear different. For example, the P3 color space will render more vibrant reds with a more accurate hue than Rec. 709 or sRGB. With UHD/4K becoming mainstream, there’s also a push for “better pixels”, which has brought about the Rec. 2020 standard for 4K video. This standard covers about 75% of the visible spectrum, although it’s perfectly acceptable to deliver 4K content that was graded in a Rec. 709 color space. That’s because most current displays that are Rec. 2020 compatible can’t yet display 100% of the colors defined in the standard.
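
To put rough numbers on those gamut differences, here is a minimal Python sketch that compares the area of each standard’s triangle on the CIE 1931 x,y chart. The primary coordinates are the published values from each spec; keep in mind that triangle area is only a crude proxy for color coverage, since the visible spectrum itself isn’t a triangle.

```python
# Compare the relative size of common gamut triangles on the CIE 1931 x,y chart.
PRIMARIES = {
    "Rec. 709 / sRGB": [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "DCI-P3":          [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
    "Rec. 2020":       [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
}

def triangle_area(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    # shoelace formula for the area of a triangle from its three corners
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

reference = triangle_area(PRIMARIES["Rec. 709 / sRGB"])
for name, pts in PRIMARIES.items():
    print(f"{name}: {triangle_area(pts) / reference:.2f}x the Rec. 709 area")
```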

The center point of the chromaticity diagram is white. However, different systems consider a slightly different color temperature to be white. Color temperature is measured in degrees Kelvin. Displays are a direct illumination source, and for those, 6,500 degrees (more accurately 6,504) is considered pure white. This is commonly referred to as D-65. Digital cinema, which is a projected image, uses 6,300 degrees as its white point. Therefore, when delivering something intended for P3, it is important to specify whether that is P3 D-65 or P3 DCI (digital cinema).

Dynamic range

Color space doesn’t live on its own, because the brightness of the image also defines what we see. Brightness (and contrast) are expressed as dynamic range. Up until the advent of UHD/4K, we had been viewing displays in SDR (standard dynamic range). If you think of the chromaticity diagram as lying flat, with dynamic range as a column extending upward from the chart on the z-axis, then the result is a volume – the combination of color space and dynamic range. With SDR, that “column” goes from 0 IRE up to 100 IRE (also expressed as 0-100 percent).

Gamma is the function that changes linear brightness values into the weighted values that are sent to our screens. It maps a numerical pixel value to its actual displayed brightness. By increasing or decreasing gamma values, you are, in effect, bending the straight line between the darkest and lightest values into a curve. This changes the midtones of the displayed image, making it appear darker or lighter. Gamma values are applied to both the original image and to the display system. When they don’t match, you run into situations where the image looks vastly different when viewed on one system versus another.
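
As a concrete illustration, here is a minimal Python sketch using a plain power function (real transfer curves, like Rec. 709’s, add a linear toe near black, but the midtone behavior is the same idea). Changing the gamma value bends the midtones while black and white stay put.

```python
import numpy as np

values = np.linspace(0.0, 1.0, 5)   # normalized pixel values from black to white

def encode_gamma(v, gamma):
    # simple power-function encode: linear light in, display-weighted value out
    return v ** (1.0 / gamma)

print(encode_gamma(values, 1.0))   # [0.   0.25 0.5  0.75 1.  ] -- the straight line
print(encode_gamma(values, 2.4))   # midtones rise (0.5 -> ~0.75); 0.0 and 1.0 are unchanged
```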

With the advent of UHD/4K, users have also been introduced to HDR (high dynamic range), which allows us to display brighter images and recover the overshoot elements in a frame, like bright lights and reflections. It is important to understand that HDR video is not the same as HDR photography. HDR photos are created by capturing several bracketed exposures of the same image and then blending those into a composite – either in-camera or via software, like Photoshop or Lightroom. HDR photos often yield hyper-real results, such as when high-contrast sky and landscape elements are combined.

HDR video is quite different. HDR photography is designed to work with existing technology, whereas HDR video actually takes advantage of the extended brightness range made possible in new displays. It is also only visible with the newest breed of UHD/4K TV sets that are HDR-capable. Display illumination is measured in nits. One nit equals one candela per square meter – in other words, the light of a single candle spread over a square meter. SDR displays have been capable of up to 100 nits. Modern computer displays, monitors, and consumer television sets can now display brightness in the range of 500 to 1,000 nits and even brighter. Anything over 1,000 nits is considered HDR. But that’s not the end of the story, as there are currently four competing standards: Dolby Vision, HDR10, HDR10+, and HLG. I won’t get into the weeds about the specifics of each, but they all apply different peak brightness levels and methods. Their nit levels range from 1,000 up to Dolby Vision’s theoretical limit of 10,000 nits.

Just because you own a high-nits display doesn’t mean you are seeing HDR. It isn’t simply turning up the brightness “to 11”, but rather providing the headroom to extend the parts of the image that exceed the normal range. These peaks can now be displayed with detail, rather than being compressed or clipped as they are in SDR. When an HDR master is created, metadata is stored with the file that tells the display device that the signal is an HDR signal and to turn on the necessary circuitry. That metadata is carried over HDMI, so every device in the playback chain must be HDR-capable.

HDR also means more hardware in order to work with it accurately. Even if you have grading software that accommodates HDR – and a 500-nit display, like the one in an iMac Pro – you can’t effectively see HDR in order to properly grade it. That still requires proper capture/playback hardware from Blackmagic Design or AJA, along with a studio-grade, external HDR monitor.

Unfortunately, there’s one dirty little secret with HDR. Monitors and TV sets cannot display a full screen image at maximum brightness. You can’t display a total white background at 1,000 nits on a 1,000 nits display. These displays employ gain circuitry to darken the image in those cases. The responsiveness of any given display model will vary widely depending on how much of the screen is at full brightness and for how long. No two models will be at exactly the same brightness for any given percentage at peak level.

Today HDR is still the “wild west” and standards will evolve as the market settles in on a preference. The good news is that cameras have been delivering content that is “HDR-ready” for several years. This brings us to camera raw and log encoding, which will be covered in Part 2.

Originally written for RedShark News.

©2018 Oliver Peters

Wild Wild Country

Sometimes real life is far stranger than fiction. Such is the tale of the Rajneeshees – disciples of the Indian guru Bhagwan Shree Rajneesh – who moved to Wasco County, Oregon in the 1980s. Their goal was to establish a self-contained, sustainable, utopian community of spiritual followers, but the story quickly took a dark turn. Conflicts with the local Oregon community escalated, culminating in the first and largest bioterror attack in United States history, when a group of followers poisoned 751 guests at ten local restaurants through intentional salmonella contamination.

Additional criminal activities included attempted murder, conspiracy to assassinate the U.S. Attorney for the District of Oregon, arson, and wiretapping. The community was largely controlled by Bhagwan Shree Rajneesh’s personal secretary, Sheela Silverman (Ma Anand Sheela), who served 29 months in federal prison on related charges. She moved to Switzerland upon her release. Although the Rajneeshpuram community is no more and its namesake is now deceased, the community of followers lives on as the Osho International Foundation. This slice of history has now been chronicled in the six-part Netflix documentary Wild Wild Country, directed by Chapman and Maclain Way.

Documentaries are truly an editor’s medium. More so than any other cinematic genre, the final draft of the script is written in the cutting room. I recently interviewed Wild Wild Country’s editor, Neil Meiklejohn, about putting this fascinating tale together.

Treasure in the archives

Neil Meiklejohn explains, “I had worked with the directors before to help them get The Battered Bastards of Baseball ready for Sundance. That is also an Oregon story. While they were doing their research at the Oregon Historical Society, the archivist turned them on to this story and the footage available. The 1980s were an interesting time in local broadcast news, because that was the period of transition from film to video. Often stories were shot on film and then transferred to videotape for editing and airing. Many times stations would simply erase the tape after broadcast and reuse the stock. The film would be destroyed. But in this case, the local stations realized that they had something of value and held onto the footage. Eventually it was donated to the historical society.”

“The Rajneeshees on the ranch were also very proud of what they were doing – farming and building a utopian city – so, they would constantly invite visitors and media organizations onto the ranch. They also had their own film crews documenting this, although we didn’t have as much access to that material. Ultimately, we accumulated approximately 300 hours of archival media in all manner of formats, including Beta-SP videotape, ripped DVDs, and the internet. It also came in different frame rates, since some of the sources were international. On top of the archival footage, the Ways also recorded another 100 hours of new interviews with many of the principals involved on both sides of this story. That was RED Dragon 6K footage, shot in two-camera, multi-cam set-ups. So, pretty much every combination you can think of went into this series. We just embraced the aesthetic defects and differences – creating an interesting visual texture.”

Balancing both sides of the story

“Documentaries are an editor’s time to shine,” continues Meiklejohn. “We started by wanting to tell the story of the battle between the cult and the local community without picking sides. This really meant that each scene had to be edited twice. Once from each perspective. Then those two would be combined to show both sides as point-counterpoint. Originally we thought about jumping around in time. But, it quickly became apparent that the best way to tell the story was as a linear progression, so that viewers could see why people did what they did. We avoided getting tricky.”

“In order to determine a structure to our episodes, we first decided the ‘ins’ and ‘outs’ for each and then the story points to hit within. Once that was established, we could look for ‘extra gold’ that might be added to an episode. We would share edits with our executive producers and Netflix. On a large research-based project like this, their input was crucial to making sure that the story had clarity.”

Managing the post production

Meiklejohn normally works as an editor at LA post facility Rock Paper Scissors. For Wild Wild Country, he spent ten months in 2017 at an ad hoc cutting room located at the offices of the film’s executive producers, Jay and Mark Duplass. His set-up included Apple iMacs running Adobe Creative Cloud software, connected to an Avid ISIS shared storage network. Premiere Pro was the editing tool of choice.

Meiklejohn says, “The crew was largely the directors and myself. Assistant editors helped at the front end to get all of the media organized and loaded, and then again when it came time to export files for final mastering. They also helped to take my temp motion graphics – done in Premiere – and then polish them in After Effects. These were then linked back into the timeline using Dynamic Link between Premiere and After Effects. Chapman and Maclain [Way] were very hands-on throughout, including scanning in stills and prepping them in Photoshop for the edit. We would discuss each new segment to sort out the best direction the story was taking and to help set the tone for each scene.”

“Premiere Pro was the ideal tool for this project, because we had so many different formats to deal with. It dealt well with the mess. All of the archival footage was imported and used natively – no transcoding. The 6K RED interview footage was transcoded to ProRes for the ‘offline’ editing phase. A lot of temp mixing and color correction was done within Premiere, because we always wanted the rough cuts to look smooth with all of the different archival footage. Nothing should be jarring. For the ‘online’ edit, the assistants would relink to the full-resolution RED raw files. The archival footage was already linked at its native resolution, because I had been cutting with that all along. Then the Premiere sequences were exported as DPX image sequences with notched EDLs and sent to E-Film, where color correction was handled by Mitch Paulson. Unbridled Sound handled the sound design and mix – and then Encore handled mastering and 1080p deliverables.”

Working with 400 hours of material and six hour-long episodes in Premiere might be a concern for some, but it was flawless for Meiklejohn. He continues, “We worked the whole series as one large project, so that at any given time, we could go back to scenes from an earlier episode and review and compare. The archival material was organized by topic and story order, with corresponding ‘selects’ sequences. As the project became bigger, I would pare it down by deleting unnecessary sequences and saving a newer, updated version. So, no real issue by keeping everything in a single project.”

As with any real-life event, where many of the people involved are still alive, opinions will vary as to how balanced the storytelling is. Former Rajneeshees have both praised and criticized the focus of the story. Meiklejohn says, “Sheela is one of our main interview subjects and in many ways, she is both the hero and the villain of this story. So, it was interesting to see how well she has been received on social media and in the public screenings we’ve done.”

Wild Wild Country shares a pointed look into one of the most bizarre clashes in the past few decades. Meiklejohn says, “Our creative process was really focused on the standoff between these two groups and the big inflection points. I tried to let the raw emotions that you see in these interviews come through and linger a bit on-screen to help inform the events that were unfolding. The story is sensational in and of itself, and I didn’t want to distract from that.”

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters

Editing the FX Series Atlanta

Atlanta just wrapped its second season on the FX Network. The brainchild of actor/writer/producer/director Donald Glover, Atlanta is the story of Earn Marks, a Princeton drop-out who returns home to Atlanta, where he decides to manage his cousin’s rap career. The show is very textural and plot is secondary. It loosely follows Earn and the people in his life – specifically his cousin Paper Boi, an up-and-coming rapper, and his friend and posse-mate, Darius.

The visual architect of the show is director Hiro Murai, who has directed the majority of the episodes. He has set an absurdist tone for much of the story. Any given episode can be wildly different from the ones that come on either side of it, but taken as a whole, the episodes make up what the series is about.

I recently had a chance to interview the show’s editors, Kyle Reiter and Isaac Hagy, about working on Atlanta and their use of Adobe Premiere Pro CC to edit the series.

Isaac Hagy: “I have been collaborating with Hiro for years. We went to college together and ever since then, we’ve been making short films and music videos. I started out doing no-budget music videos, then eventually moved into documentaries and commercials, and now television. A few years ago, we made a short film called Clapping for the Wrong Reasons, starring Donald. That became kind of an aesthetic precursor that we used in pitching this show. It served as a template for the tone of Atlanta.”

“I’ve used pretty much every editing software under the sun – cutting short films in high school on iMovie, then Avid in college when I went to film school at USC. Once I started doing short film projects, I found Final Cut Pro to be more conducive to quick turnarounds than Avid. I used that for five or six years, but then they stopped updating it, so I needed to switch over to a more professional alternative. Premiere Pro was the easiest transition from Final Cut Pro and, at that time, Premiere was starting to be accepted as a professional platform. A lot of people on the show come from a very DIY background, where we do everything ourselves. Like with the early music videos – I would color and Hiro would do effects in After Effects. So, Premiere was a much more natural fit. I am on a show using [Avid] Media Composer right now and it feels like a step backwards.”

With a nod to their DIY ethos, post-production for Atlanta also follows a small, collective approach. 

Kyle Reiter: “We rent a post facility that is just a single-story house. We have a DIY server called a NAS that one of our assistants built and all the media is stored there. It’s just a tower. We brought in our own desktop iMacs with dual monitors that we connect to the server over Ethernet. The show is shot with ARRI Amira cameras in a cinema 2K format. Then that is transcoded to proxy media for editing, which makes it easy to manage. The color correction is done in Resolve. Our assistant editors online it for the colorist, so there’s no grading in-house.” Atlanta airs on the FX Network in the 720p format.

The structure and schedule of this production make it possible to use a simple team approach. Projects aren’t typically shared among multiple editors and assistants, so a more elaborate infrastructure isn’t required to get the job done. 

Isaac Hagy: “It’s a pretty small team. There’s Kyle and myself. We each have an assistant editor. We just split the episodes, so I took half of the season and Kyle the other half. We were pretty self-contained, but because there were an odd number of episodes, we ended up sharing the load on one of them. I did the first cut of that episode and Kyle took it through the director’s cut. But other than that, we each had our individual episodes.”

Kyle Reiter: “They’re in Atlanta for several months shooting. We’ll spend five to seven days doing our cut and then typically move on to the next thing, before we’re finished. That’s just because they’re out of town for several months shooting and then they’ll come back and continue to work. So, it’s actually quite a bit of time calendar-wise, but not a lot of time in actual work hours. We’ll start by pulling selects and marking takes. I do a lot of logging within Premiere. A lot of comments and a lot of markers about stuff that will make it easy to find later. It’s just breaking it down to manageable pieces. Then from there, going scene-by-scene, and putting it all together.”

Many scripted television series that are edited on Avid Media Composer rely on Avid’s script integration features. This led me to wonder whether Reiter and Hagy missed such tools in Premiere Pro.

Isaac Hagy: “We’re lucky that the way in which the DP [Christian Sprenger] and the director shoot the series is very controlled. The projects are never terribly unwieldy, so really simple organizing usually does the trick.”

Kyle Reiter: “They’re never doing more than a handful of takes and there aren’t more than a handful of set-ups, so it’s really easy to keep track of everything. I’ve worked with editors that used markers and just marked every line and then designated a line number; but we don’t on this show. These episodes are very economical in how they are written and shot, so that sort of thing is not needed. It would be nice to have an Avid ScriptSync type of thing within Premiere Pro. However, we don’t get an unwieldy amount of footage, so frankly it’s almost not necessary. If I were on a different sort of show, where I needed that, then absolutely I would do it. But this is the sort of show where I can get away with not doing it.”

Kyle Reiter: “I’m on a show right now being cut on Media Composer, where there are 20 to 25 takes of everything. Having ScriptSync is a real lifesaver on that one.”

Both editors are fans of Premiere Pro’s advanced features, including the ability to use it with After Effects, along with the new sound tools added in recent versions.

Isaac Hagy: “In the offline, we create some temp visual effects to set the concepts. Some of the simpler effects do make it into the show. We’ll mock it up in Premiere and then the AE’s will bring it into After Effects and polish the effect. Then it will be Dynamic Link-ed back into the Premiere timeline.”

“We probably go deeper on the sound than any other technical aspect of the show. In fact, a lot of the sound that we temp for the editor’s cut will make it to the final mix stage. We not only try to source sounds that are appropriate for a scene, but we also try to do light mixing ourselves – whether it’s adding reverb or putting the sound within the space – just giving it some realism. We definitely use the sound tools in Premiere quite a bit. Personally, I’ve had scenes where I was using 30 tracks just for sound effects.”

“I definitely feel more comfortable working in sound in Premiere than in Media Composer – and even than I felt in Final Cut. It’s way easier working with filters, mixing, panning, and controlling multiple tracks at once. This season we experimented with the Essential Sound Panel quite a bit. It was actually very good at putting a song into the background or putting sound effects outside of a room – just creating spaces.”

When a television series or film is about the music industry, the music in the series plays a principal role. Sometimes that is achieved with a composed score and on other shows, the soundtrack is built from popular music.

Kyle Reiter: “There’s no score on the show that’s not diegetic music, so we don’t have a composer. We had one episode this year where we did have score. Flying Lotus and Thundercat are two music friends of Donald’s that scored the episode. But other than that, everything else is just pop songs that we put into the show.”

Isaac Hagy: “The decision of which music to use is very collaborative. Some of the songs are written in the script. A lot are choices that Kyle and I make. Hiro will add some. Donald will add some. We also have two great music supervisors. We’re really lucky that we get nearly 90% of the music that we fall in love with cleared. But when we don’t, our music supervisors recommend some great alternatives. We’re looking for an authenticity to the world, so we try to rely on tracks that exist in the real world.”

Atlanta provides an interesting look at the fringes of the city’s hip-hop culture. A series that has included an alligator and Donald Glover in weird prosthetic make-up – and where Hiro Murai takes inspiration from The Shining – certainly isn’t your run-of-the-mill television series. It definitely leaves fans wanting more, but to date, a third season has not yet been announced.

This interview was recorded using the Apogee MetaRecorder for iOS application and transcribed thanks to Digital Heaven’s SpeedScriber.

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters

Premiere Pro Multicam Editing

Over the years, a lot of the projects that I’ve edited have been based on real-person interviews. This includes documentaries, commercials, and corporate video. As the cost of camera gear has come down and DSLRs have become capable of delivering quality video, interview-based production now almost always utilizes multiple cameras. Directors will typically record these sections with two or more cameras at various angles to the subject, which makes it easy to edit for content without visible jump-cuts (hopefully). In addition, if they also shoot in 4K for an HD delivery, then you have the added ability to cleanly punch in for even more framing options.

While having a specific multicam feature in your NLE isn’t required for cutting these types of productions, it sure speeds up the process. Under the best of circumstances, you can play the sequence in real-time and cut between camera angles in the multicam viewer, much like a director calls camera switches in a live telecast. Since you are working within an NLE, you can also make these camera angle cuts at a slower or faster pace and, of course, trim the cuts for greater timing precision. Premiere Pro is my primary NLE these days and its multi-camera editing routines are a joy to use.

Prepping for multi-camera

Synchronization is the main requirement for productive multicam. That starts at the time of the original recording. You can either sync by common timecode, common audio, or a marked in-point.

Ideally, your production crew should use a Lockit Sync Box to generate timecode and sync to all cameras and any external sound recorder. That will only work with professional products, not DSLRs. Lacking that, the next best thing is old school – a common slate with a clap-stick or even just your subject clapping hands at the start, while in view on all cameras. This will allow the editor to mark a common in-point.

The last sync method is to match the common audio across all sources. Of course, that only works if the production crew has supplied quality audio to all cameras and external recorders. It has to be at least good enough that the human editor and/or the software’s audio analysis can discern a match. Sometimes this method will suffer from a minor amount of delay – either because of the inherent offset of the audio recording circuitry within the camera electronics, or because an onboard camera mic was used and the distance to the subject results in a slight delay compared to a lav mic on the subject.
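
Under the hood, audio-based syncing boils down to cross-correlation: slide one waveform against the other and find the offset where they line up best. Here is a minimal Python/SciPy sketch of that idea – not Premiere’s actual algorithm – which assumes the clips have already been loaded as mono float arrays at a common sample rate.

```python
import numpy as np
from scipy.signal import correlate

def estimate_offset_seconds(ref, other, sample_rate):
    """Return how many seconds to slip 'other' (positive = delay it)
    so that it lines up with 'ref'. Both are mono float arrays."""
    corr = correlate(ref, other, mode="full", method="fft")
    lag = np.argmax(corr) - (len(other) - 1)   # correlation peak -> sample offset
    return lag / sample_rate

# e.g. offset = estimate_offset_seconds(a_cam_audio, b_cam_audio, 48000)
```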

In addition to synchronization, you obviously need to record high-quality audio. This can be a mixer feed or direct mic input to one or all of the camera tracks, or to a separate external audio recorder. A typical set-up is to feed a lav and a boom mic signal to audio input channels 1 and 2 of the camera. When a mixer and an external recorder are used, the sound recordist will often also record a mix. Another option, though not as desirable, is to record individual microphone signals onto different cameras. The reason this isn’t preferred is that when these two sources are mixed in post (rather than only one source used at a time), audio phasing can occur.

Synching in Premiere Pro

To synchronize multicam clips in Premiere Pro, simply select the matching sources in the browser/bin, right-click, and choose “Create New Multi-Camera Source Sequence”. You will be presented with several options for sync, based on timecode, audio, or marked points. You may also opt to have the clips moved to a “Processed Clips” bin. If synchronization is successful, you’ll then end up with a multicam source clip that you can now cut to a standard sequence.

A multicam source clip is actually a modified, nested sequence. You can open the clip – same as a nested sequence – and make adjustments or apply filters to the clips within.

You can also create multicam clips without going through the aforementioned process. For example, let’s say that none of the three sync methods exist. You have a freewheeling interview with two or more cameras, but only one has any audio. There’s no clap and no common timecode. In fact, if all the cameras were DSLRs, then every clip arbitrarily starts at 00:00:00:00. The way to tackle this is to edit these cameras to separate video tracks of a new sequence. Sync the video by slipping the clips’ positions on the tracks. Select those clips on the timeline and create a nest. Once the nest is created, this can then be turned into a multicam source clip, which enables you to work with the multicam viewer.

One step I follow is to place the multicam source clip onto a sequence and replace the audio with the best original source. The standard multicam routine means that audio is also nested, which is something I dislike. I don’t want all of the camera audio tracks there, even if they are muted. So I will typically match-frame the source until I get back to the original audio that I intend to use, and then overwrite the multicam clip’s audio with the original on this working timeline. On the other hand, if the manual multicam creation method is used, then I would only nest the video tracks, which automatically leaves me with the clean audio that I desire.

Autosequence

One simple approach is to use an additional utility to create multicam sequences, such as Autosequence from software developer VideoToolShed. To use Autosequence, your clips must have matching timecode. First separate all of your clips into separate folders on your media hard drive – A-CAM, B-CAM, SOUND, and so on. Launch Autosequence and set the matching frame rate for your media. Then import each folder of clips separately. If you are using double-system sound you can choose whether or not to include the camera sound. Then generate an XML file.

Now, import the XML file into Premiere Pro. This will import the source media into bins, along with a sequence of clips where each camera is on a separate track. If your clips are broken into consecutive recordings with stops and starts in-between, then each recorded set will appear further down on the same timeline. To turn this sequence into one with multicam clips, just follow my explanation for working with a manual process, described above.

Multicam cutting

At this point, I dupe the sequence(s) and start a reductive process of shaping the interview. I usually don’t worry too much about changing camera angles, until I have the story fleshed out. When you are ready for that, right-click into the viewer, and change the display mode to multicam.

As you play, cut between cameras in the viewer by clicking on the corresponding section of the viewer. The timeline will update to show these on-the-fly edits when you stop playback. Or you can simply “blade” the clip and then right-click that portion of the clip to select the camera to be shown. Remember that any effects or color corrections you apply in the timeline are attached to that timeline clip, not to the camera angle, so they do not follow the angle. If you change your mind and switch a clip to a different angle, the effects and corrections stay behind and will need to be adjusted for the new camera angle.

Once I’m happy with the cutting, I will then go through and make a color correction pass. If the lighting has stayed consistent, I can usually grade each angle for one clip only and then copy that correction and paste it to each instance of that same angle on the timeline. Then repeat the procedure for the other camera angles.

When I’m ready to deliver the final product, I will dupe the sequence and clean it up. This means flattening all multicam clips, cleaning up unused clips on my timeline, deleting empty tracks, and usually, collapsing the clips down to the fewest number of tracks.

©2018 Oliver Peters

Audio Mixing with Premiere Pro

When budgets permit and project needs dictate, I will send my mixes out-of-house to one of a few regular mixers. Typically that means sending them an OMF or AAF to mix in Pro Tools. Then I get the mix and split-tracks back, drop them into my Premiere Pro timeline, and generate master files.

On the other hand, a lot of my work is cutting simple commercials and corporate presentations for in-house use or the web, and these are often less demanding – 2 to 8 tracks of dialogue, limited sound effects, and music. It’s easy to do the mix inside of the NLE. Bear in mind that I can do – and often have done – such a mix in Apple Logic Pro X or Adobe Audition, but the tools inside Premiere Pro are solid enough that I often just keep everything – mix included – inside my editing application. Let’s walk through that process.

Dealing with multiple channels on source clips

Start with your camera files or double-system audio recordings. Depending on the camera model, Premiere Pro will see these source clips as having either stereo (e.g. a Canon C100) or multi-channel mono (e.g. ARRI Alexa) channels. If you recorded a boom mic on channel 1 and a lavaliere mic on channel 2, then these will drop onto your stereo timeline either as two separate mono tracks (Alexa) – or as a single stereo track (C100), with the boom coming out of the left speaker and the lav out of the right. Which one it is will strictly depend on the device used to generate the original recordings.

First, when dual-mic recordings appear as stereo, you have to understand how Premiere Pro deals with stereo sources. Panning in Premiere Pro doesn’t “shift” the audio left, right, or center. Instead, it increases or decreases the relative volume of the left or right half of this stereo field. In our dual-mic scenario, panning the clip or track full left means that we only hear the boom coming out of the left speaker, but nothing out of the right. There are two ways to fix this – either by changing the channel configuration of the source in the browser – or by changing it after the fact in the timeline. Browser changes will not alter the configuration of clips already edited to the timeline. You can change one or more source clips from stereo to dual-mono in the browser, but you can’t make that same type of change to a clip already in your sequence.
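
To make that distinction concrete, here is a small numpy sketch contrasting the balance-style behavior described above with a true pan of a mono source. It is purely illustrative – not Adobe’s implementation – and assumes float audio arrays.

```python
import numpy as np

def balance(stereo, pos):
    """Balance-style 'pan' of a stereo clip: pos in -1..+1 only attenuates
    the opposite channel; nothing moves between channels."""
    left, right = stereo[:, 0].copy(), stereo[:, 1].copy()
    if pos < 0:        # toward the left: turn the right channel down
        right *= 1.0 + pos
    elif pos > 0:      # toward the right: turn the left channel down
        left *= 1.0 - pos
    return np.stack([left, right], axis=1)

def pan_mono(mono, pos):
    """A true pan: one mono source (e.g. the boom mic) placed in the stereo field."""
    theta = (pos + 1.0) * np.pi / 4.0          # simple equal-power pan law
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)
```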

Let’s assume that you aren’t going to make any browser changes and instead just want to work in your sequence. If your source clip is treated as dual-mono, then the boom and lav will cut over to tracks 1 and 2 of your sequence – and the sound will be summed in mono on the output to your speakers. However, if the clip is treated as stereo, then it will only cut over to track 1 of your sequence – and the sound will stay left and right on the output to your speakers. When it’s dual-mono, you can listen to one track versus the other, determine which mic sounds the best, and disable the clip with the other mic. Or you can blend the two using clip volume levels.

If the source clip ends up in the sequence as a stereo clip, then you will want to determine which one of the two mics you want to use for the best sound. To pick only one mic, you will need to change the clip’s audio configuration. When you do that, it’s still a stereo clip, however, both “sides” can be supplied by either one of the two source channels. So, both left and right output will either be the boom or the lav, but not both. If you want to blend both mics together, then you will need to duplicate (option-drag) the audio clip onto an adjacent timeline track, and change the audio channel configuration for both clips. One would be set to the boom for both channels and the other set to only the lav for its two channels. Then adjust clip volume for the two timeline clips.

Configuring your timeline

Like most editors, while I’m working through the stages of rough cutting on the way to an approved final copy, I will have a somewhat messy timeline. I may have multiple music cues on several tracks with only one enabled – just so I can preview alternates for the client. I will have multiple dialogue clips on a few tracks with some disabled, depending on microphone or take options. But when I’m ready to move to the finishing stage, I will duplicate that sequence to create a “final version” and clean that one up. This means getting rid of any disabled clips, collapsing my audio and video clips to the fewest number of tracks, and using Premiere’s track creation/deletion feature to delete all empty tracks – all so I can have the least amount of visual clutter. 

In other blog posts, I’ve discussed working with additional submix buses to create split-track exports; but for most of these smaller jobs, I will only add one submix bus. (I will explain its purpose in a moment.) Once it’s created, you will need to open the track mixer panel and reroute the output of the timeline tracks from the master to the submix bus, and then route the output of the submix bus back to the master.

Plug-ins

Premiere Pro CC comes with a nice set of audio plug-ins, which can be augmented with plenty of third-party audio effects filters. I am partial to Waves and iZotope, but these aren’t essential. However, there are several that I do use quite frequently. These three third-party filters will help improve any vocal-heavy piece.

The first two are Vocal Rider and MV2 from Waves and are designed specifically for vocal performances, like voice-overs and interviews. These can be pricey, but Waves has frequent sales, so I was able to pick these up for a fraction of their retail price. Vocal Rider is a real-time, automatic volume adjustment tool. Set the bottom and top parameters and let Vocal Rider do the rest, by automatically pushing the volume up or down on-the-fly. MV2 is similar, but it achieves this through compression on the top and bottom ends of the range. While they operate in a similar fashion, they do produce a different sound. I tend to pick MV2 for voice-overs and Vocal Rider for interviews.

We all know location audio isn’t perfect, which is where my third filter comes in. FxFactory is known primarily for video plug-ins, but their partnership with Crumplepop has added a nice set of audio filters to their catalog. I find AudioDenoise to be quite helpful and fast in fixing annoying location sounds, like background air conditioning noise. It’s real-time and good-sounding, but like all audio noise reduction, you have to be careful not to overdo it, or everything will sound like it’s underwater.

For my other mix needs, I’ll stick to Premiere’s built-in effects, like EQ, compressors, etc. One that’s useful for music is the stereo imager. If you have a music cue that sounds too monaural, this will let you “expand” the track’s stereo signal so that it is spread more left and right. This often helps when you want the voice-over to cut through the mix a bit better. 

My last plug-in is a broadcast limiter that is placed onto the master bus. I will set this tight with a hard limit for broadcast delivery, but much higher (louder allowed) for web files. Be aware that Premiere’s plug-in architecture allows you to have the filter take effect either pre- or post-fader. In the case of the master bus, this will also affect the VU display. In other words, if you place a limiter post-fader, then the result will be heard, but not visible through the levels displayed on the VU meters.

Mixing

I have used different mixing strategies over the years with Premiere Pro. I like using the write function of the track mixer to record fader automation. However, I have lately stopped using it – instead going back to manual keyframes within the clips. The reason is that my projects tend to get revised often in ways that change timing. Since track automation is based on absolute timeline position, those keyframes don’t move when a clip is shifted, as clip-based volume keyframes would.

Likewise, Adobe has recently added Audition’s music ducking to Premiere Pro, using Adobe’s Sensei artificial intelligence. Unfortunately, I don’t find it to be “intelligent” enough, although it can sometimes provide a starting point. For me, it’s simply too coarse and doesn’t intelligently adjust for areas within a music clip that swell or change volume internally. Therefore, I stick with minor manual adjustments to compensate for music changes and to make the vocal parts easy to understand in the mix. Then I will use the track mixer to set overall levels for each track to get the right balance of voice, sound effects, and music.
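
For anyone who hasn’t used it, ducking simply means pulling the music down whenever dialogue is present. A crude Python sketch of the idea looks like this – it has nothing to do with Adobe’s Sensei implementation, and the threshold and depth values are arbitrary.

```python
import numpy as np

def duck(music, voice, sample_rate, depth_db=-12.0, win_s=0.25):
    """Lower 'music' wherever 'voice' has energy. Both are mono float
    arrays of the same length at the same sample rate."""
    win = int(win_s * sample_rate)
    # short-term RMS envelope of the voice track
    env = np.sqrt(np.convolve(voice ** 2, np.ones(win) / win, mode="same"))
    ducked_gain = 10 ** (depth_db / 20.0)
    gain = np.where(env > 0.01, ducked_gain, 1.0)    # naive "voice present" test
    # smooth the gain so the music ramps down and back up instead of jumping
    gain = np.convolve(gain, np.ones(win) / win, mode="same")
    return music * gain
```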

Once I have a decent balance to my ears, I will temporarily drop in the TC Electronic Radar loudness plug-in (included with Premiere Pro) to make sure my mix is CALM-compliant. This is where the submix bus comes in. If I like the overall balance, but I need to bring everything down, it’s an easy matter to simply lower the submix level and remeasure.

Likewise, it’s customary to deliver web versions with louder volume levels than the broadcast mix. Again the submix bus will help, because you cannot raise the volume on the master – only lower it. If you simply want to raise the overall volume of the broadcast mix for web delivery, simply raise the submix fader. Note that when I say louder, I’m NOT talking about slamming the VUs all the way to the top. Typically, a mix that hits -6 is plenty loud for the web. So, for web delivery, I will set a hard limit at -6, but adjust the mix for an average of about -10.
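
As a quick sanity check outside the NLE, a few lines of Python can report the peak and average level of an exported mix. This measures simple dBFS peak and RMS – not the LKFS loudness that the Radar plug-in and the CALM Act actually use – so treat it only as a rough check against the -6/-10 targets mentioned above.

```python
import numpy as np

def peak_and_rms_dbfs(samples):
    """samples: float audio scaled to +/-1.0 (mono, or a flattened stereo array)."""
    peak = 20 * np.log10(np.max(np.abs(samples)) + 1e-12)
    rms = 20 * np.log10(np.sqrt(np.mean(samples ** 2)) + 1e-12)
    return peak, rms

# for a web mix, aim for peaks around -6 dBFS and an average around -10 dBFS
```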

Hopefully this short explanation has provided some insight into mixing within Premiere Pro and will help you make sure that your next project sounds great.

©2018 Oliver Peters

FCPX Color Wheels Take 2

Prior to version 10.4, the color correction tools within Final Cut Pro X were very basic. You could get a lot of work done with the color board, but it just didn’t offer tools competitive with other NLEs – not to mention color plug-ins or a dedicated grading app like DaVinci Resolve. With the release of 10.4, Apple upped the game by adding color wheels and a very nice curves implementation. However, for those of us who have been doing color correction for some time, it quickly became apparent that something wasn’t quite right in the math or color science behind these new FCPX color wheels. I described those anomalies in this January post.

To summarize that post, the color wheels tool seems to have been designed according to the lift/gamma/gain (LGG) correction model. The standard behavior for LGG is evident with a black-to-white gradient image. On a waveform display, this appears as a diagonal line from 0 to 100. If you adjust the highlight control (gain), the line appears to be pinned at the bottom with the higher end pivoting up or down as you shift the slider. Likewise, the shadow control (lift) leaves the line pinned at the top with the bottom half pivoting. The midrange control (gamma) bends the middle section of the line inward or outward, with no effect on the two ends, which stay pinned at 0 and 100, respectively. In addition to luminance values, when you shift a hue offset to an extreme edge – like moving the midrange puck completely to yellow – you should still see some remaining black and white at the two ends of the gradient.
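
There is no single canonical LGG formula, but a minimal Python sketch like the one below (one common formulation, not Apple’s actual math) reproduces the pinning behavior described above when applied to a 0-1 gradient.

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 5)    # the black-to-white test ramp

def lgg(x, lift=0.0, gamma=1.0, gain=1.0):
    x = x + lift * (1.0 - x)     # lift: shadows rise, white stays pinned at 1.0
    x = x * gain                 # gain: highlights scale, black stays pinned at 0.0
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)   # gamma: midtones bend, ends unchanged

print(lgg(gradient, lift=0.2))    # bottom of the ramp rises; the top stays at 1.0
print(lgg(gradient, gain=1.2))    # values scale upward (clipped here at 1.0); the bottom stays at 0.0
print(lgg(gradient, gamma=1.5))   # middle values lift; 0.0 and 1.0 are untouched
```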

That’s how LGG is supposed to work. In FCPX version 10.4, each color wheel control also altered the levels of everything else. When you adjusted midrange, it also elevated the shadow and highlight ranges. In the hue offset example, shifting the midrange control to full-on yellow tinted the entire image to yellow, leaving no hint of black or white. As a result, the color wheels correction tool was unpredictable and difficult to use, unless you were doing only very minor adjustments. You ended up chasing your tail, because when one correction was made, you’d have to go back and re-adjust one of the other wheels to compensate for the unwanted changes made by the first adjustment.

With the release of FCPX 10.4.1 this April, Apple engineers have changed the way the color wheels tool behaves. Corrections now correspond to the behavior that everyone accepts as standard LGG functionality. In other words, the controls mostly only affect their part of the image without also adjusting all other levels. This means that the shadows (lift) control adjusts the bottom, highlights (gain) will adjust the top end, and midrange (gamma) will lighten or darken the middle portion of the image. Likewise, hue offsets don’t completely contaminate the entire image.

One important thing to note is that existing FCPX Libraries created or promoted under 10.4 will now be promoted again when opened in 10.4.1. In order that your color wheel corrections don’t change to something unexpected when promoted, Projects in these Libraries will behave according to the previous FCPX 10.4 color model. This means that the look of clips where color wheels were used – and their color wheel values – haven’t changed. More importantly, the behavior of the wheels when inside those Libraries will also be according to the “old” way, should you make any further corrections. The new color wheels behavior will only begin within new Libraries created under 10.4.1.

Comparison images accompanying the original post illustrate how the 10.4.1 adjustments now work.

©2018 Oliver Peters

Viva Las Vegas – NAB 2018

As more and more folks get all of their information through internet sources, the running question is whether or not trade shows still have value. A show like the annual NAB (National Association of Broadcasters) Show in Las Vegas is both fun and grueling, typified by sensory overload and folks in business attire with sneakers. Although some announcements are made before the exhibits officially open – and nearly all are pretty widely known before the week ends – there still is nothing quite like being there in person.

For some, other shows have taken the place of NAB. The annual HPA Tech Retreat in the Palm Springs area is a gathering of technical specialists, researchers, and creatives that many consider the TED Talks for our industry. For others, the Cine Gear Expo in LA is the prime showcase for grip, lighting, and camera offerings. RED Camera has focused on Cine Gear instead of NAB for the last couple of years. And then, of course, there’s IBC in Amsterdam – the more humane version of NAB in a more pleasant setting. But for me, NAB is still the main event.

First of all, the NAB Show isn’t merely about the exhibit floor at the sprawling Las Vegas Convention Center. Actual NAB members can attend various sessions and workshops related to broadcasting and regulations. There are countless sidebar events specific to various parts of the industry. For editors that includes Avid Connect – a two-day series of Avid presentations in the weekend leading into NAB; Post Production World – a series of workshops, training sessions, and presentations managed by Future Media Concepts; as well as a number of keynote presentations and artist gatherings, including SuperMeet, FCPexchange, and the FCPX Guru Gathering. These are places where you’ll rub shoulders with some well-known editors, colorists, artists, and mixers, learn about new technologies like HDR (high dynamic range imagery), and occasionally see some new product features from vendors who might not officially be on the show floor with a booth, like Apple.

One of the biggest benefits I find in going to NAB is simply walking the floor, checking out the companies and products that might not get a lot of attention. These newcomers often have the most innovative technologies, and they are the finds that were never on your radar prior to that week.

The second benefit is connection. I meet up again in person with friends that I’ve made over the years – both other users, as well as vendors. Often it’s a chance to meet people that you might only know through the internet (forums, blogs, etc.) and to get to know them just a bit better. A bit more of that might make the internet more friendly, too!

Here are some of my random thoughts and observations from Las Vegas.

__________________________________

Editing hardware and software – four As and a B

Apple uncharacteristically pre-announced their new features just prior to the show, culminating with App Store availability on Monday when the NAB exhibits opened. This includes new Final Cut Pro X/Motion/Compressor updates and the official number of 2.5 million FCPX users. That’s a growth of 500,000 users in 2017, the biggest year to date for Final Cut. The key new feature in FCPX is a captioning function to author, edit, and export both closed and embedded (open) captions. There aren’t many great solutions for captioning and the best to date have been expensive. I found that the Apple approach was now the best and easiest to use that I’ve seen. It’s well-designed and should save time and money for those who need to create captions for their productions – even if you are using another brand of NLE. Best of all, if you own FCPX, you already have that feature. When you don’t have a script to start out, then manual or automatic transcription is required as a starting point. There is now a tie-in between Speedscriber (also updated this week) and FCPX that will expedite the speech-to-text function.

The second part of Apple’s announcement was the introduction of a new camera raw codec family – ProResRAW and ProResRAW HQ. These are acquisition codecs designed to record the raw sensor data from Bayer-pattern sensors (prior to debayering the signal into RGB information) and make that available in post, just like RED’s REDCODE RAW or CinemaDNG. Since this is an acquisition codec and NOT a post or intermediate codec, it requires a partnership on the production side of the equation. Initially this includes Atomos and DJI. Atomos supplies an external recorder, which can record the raw output from various cameras that offer the ability to record raw data externally. This currently includes their Shogun Inferno and Sumo 19 models. As this is camera-specific, Atomos must then create the correct profile by camera to remap that sensor data into ProResRAW. At the show, this included several Canon, Sony, and Panasonic cameras. DJI does this in-camera on the Inspire 2.

The advantage with FCPX is that ProResRAW is optimized for post, thus allowing for more streams in real-time. ProResRAW data rates (variable) fall between those of ProRes and ProRes HQ, while the less compressed ProResRAW HQ rates are between ProRes HQ and ProRes 4444. It’s very early with this new codec, so additional camera and post vendors will likely add ProResRAW support over the coming year. It is currently unknown whether or not any other NLEs can support ProResRAW decode and playback yet.

As always, the Avid booth was quite crowded and, from what I heard, Avid Connect was well attended with enthused Avid users. The Avid offerings are quite broad and hard to encapsulate into any single blog post. Most, these days, are very enterprise-centric. But this year, with a new CEO at the helm, Avid’s creative tools have been reorganized into three strata – First, standard, and Ultimate. This applies to Sibelius, Pro Tools, and Media Composer. In the case of Media Composer, there’s Media Composer | First – a fully functioning free version, with minimal restrictions; Media Composer; and Media Composer | Ultimate – includes all options, such as PhraseFind, ScriptSync, NewsCutter, and Symphony. The big difference is that project sharing has been decoupled from Media Composer. This means that if you get the “standard” version (just named Media Composer) it will not be enabled for collaboration on a shared storage network. That will require Media Composer | Ultimate. So Media Composer (standard) is designed for the individual editor. There is also a new subscription pricing structure, which places Media Composer at about the same annual cost as Adobe Premiere Pro CC (single app license). The push is clearly towards subscription, however, you can still purchase and/or maintain support for perpetual licenses, but it’s a little harder to find that info on Avid’s store website.

Though not as big news, Avid is also launching the Avid DNxID capture/export unit. It is custom-designed by Blackmagic Design for Avid and uses a small form factor. It was created for file-based acquisition, supports 4K, and includes embedded DNx codecs for onboard encoding. Connections are via component analog, HDMI, as well as an SD card slot.

The traffic around Adobe’s booth was thick the entire week. The booth featured interesting demos that were front and center in the middle of one of the South Hall’s main thoroughfares, generally creating a bit of a bottleneck. The newest Creative Cloud updates had preceded the show, but were certainly new to anyone not already using the Adobe apps. Big news for Premiere Pro users was the addition of automatic ducking that was brought over from Audition, and a new shot matching function within the Lumetri color panel. Both are examples of Adobe’s use of their Sensei AI technology. Not to be left out, Audition can now also directly open sequences from Premiere Pro. Character Animator had been in beta form, but is now a full-fledged CC product. And for puppet control Adobe also introduced the Advanced Puppet Engine for After Effects. This is a deformation tool to better bend, twist, and control elements.

Of course, when it comes to NLEs, the biggest buzz has been over Blackmagic Design’s DaVinci Resolve 15. The company has an extensive track record of buying up older products whose companies weren’t doing so well, reinvigorating the design, reducing the cost, and breathing new life into them – often for a new, wider customer base. Nowhere is this more evident than with Resolve, which has now grown from a leading color correction system into a powerful, all-in-one edit/mix/effects/color solution. We had previously seen the integration of the Fairlight audio mixing engine. This year, Fusion visual effects were added. As before, each one of these disparate tools appears on its own page with a specific UI optimized for that task.

A number of folks have quipped that someone had finally resurrected Avid DS. Although all-in-ones like DS and Smoke haven’t been hugely successful in the past, Resolve’s price point is considerably more attractive. The Fusion integration means that you now have a subset of Fusion running inside of Resolve. This is a node-based compositor, which makes it easy for a Resolve user to understand, since it, too, already uses nodes in the color page. At least for now, Blackmagic Design intends to also maintain a standalone version of Fusion, which will offer more functions for visual effects compositing. Resolve also gained new editorial features, including tabbed sequences, a pancake timeline view, captioning, and improvements in the Fairlight audio page.

Other Blackmagic Design news includes updates to their various mini-converters, updates to the Cintel Scanner, and the announcement of a 4K Pocket Cinema Camera (due in September). They have also redesigned and modularized the Fairlight console mixing panels. These are now more cost-effective to manufacture and can be combined in various configurations.

This was the year for a number of milestone anniversaries, such as the 100th for Panasonic and the 25th for AJA. There were a lot of new product announcements at the AJA booth, but a big one was the push for more OpenGear-compatible cards. OpenGear is an open source hardware rack standard that was developed by Ross and embraced by many manufacturers. You can purchase any OpenGear version of a manufacturer’s product and then mix and match a variety of OpenGear cards into any OpenGear rack enclosure. AJA’s cards also offer Dashboard support, which is a software tool to configure and control the cards. There are new KONA SDI and HDMI cards, HDR support in the IO 4K Plus, and HDR capture and playback with the KiPro Ultra Plus.

HDR

It’s fair to say that we are all learning about HDR, but from what I observed on the floor, AJA is one of the only companies with a number of hardware product offerings that will allow you to handle HDR. This is thanks to their partnership with ColorFront, who is handling the color science in these products. This includes the FS | HDR – an up/down/cross, SDR/HDR synchronizer/converter. It also includes support for the Tangent Element Kb panel. The FS | HDR was a tech preview last year, but is a shipping product now. This year, the tech preview product is the HDR Image Analyzer, which offers waveform and histogram monitoring at up to 4K/60fps.

Speaking of HDR (high dynamic range) and SDR (standard dynamic range), I had a chance to sit in on Robbie Carman’s (colorist at DC Color, Mixing Light) Post Production World HDR overview. Carman has graded numerous HDR projects and, from his presentation – coupled with exhibits on the floor – it’s quite clear that HDR is the wild, wild west right now. There is much confusion about color space and dynamic range, not to mention what current hardware is capable of versus the maximums expressed in the tech standards. For example, the BT 2020 spec doesn’t inherently mean that an image is HDR; nor is it widely understood that you must be working in 4K to also have HDR, and that the set must accept the HDMI 2.0 standard.

High dynamic range grading absolutely requires HDR-compatible hardware, such as the proper I/O device and a display that can receive the metadata that turns on and sets its target HDR values. This means investing in a device like AJA’s Io 4K Plus or Blackmagic’s UltraStudio 4K Extreme 3. It also means purchasing a true grading monitor costing tens of thousands of dollars, like one from Sony, Canon, or Flanders Scientific. You CANNOT properly grade HDR based on the image of ANY computer display. So while the latest version of FCPX can handle HDR, and the iMac Pro screen boasts a high brightness rating in nits, you cannot rely on this screen to see proper HDR.

LG was a sponsor of the show and LG displays were visible in many of the exhibits. Many of their newest products meet the minimum HDR spec, but for the most part, the images shown on the floor were simply bright and not HDR – no matter what the sales reps in the booths were saying.

One interesting fact that Carman pointed out is that HDR displays cannot be driven across the full screen at their highest value. You cannot display a full screen of white at 1,000 nits on a 1,000-nit display without causing damage. Therefore, automatic gain adjustments in the set’s electronics dim the screen. Only a smaller percentage of the image (maybe 20%) can be driven at full value before dimming occurs. Another point Carman made was that standard lift/gamma/gain controls may be too coarse to grade HDR images with finesse. His preference is to use Resolve’s log grading controls, because you can make more precise adjustments to highlight and shadow values.
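To see why that distinction matters, here is a rough Python sketch. It is not Resolve’s actual math – the formulas, function names, and pivot value are simply my own illustrative assumptions – but it shows how a classic gain adjustment touches every pixel, while a pivot-based, log-style control confines the change to one end of the tonal range.

import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    # Classic primary correction: every pixel in the 0-1 range is affected,
    # so pulling down the highlights also shifts the midtones.
    x = gain * (x + lift * (1.0 - x))
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

def log_style_highlight(x, amount=0.0, pivot=0.7):
    # Log-wheel style adjustment: only values above the pivot are scaled,
    # leaving shadows and midtones untouched.
    y = x.copy()
    above = y > pivot
    y[above] = pivot + (y[above] - pivot) * (1.0 + amount)
    return np.clip(y, 0.0, 1.0)

ramp = np.linspace(0.0, 1.0, 5)            # five sample pixel values
print(lift_gamma_gain(ramp, gain=0.8))     # the whole image darkens
print(log_style_highlight(ramp, -0.5))     # only the top of the range is pulled down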

Cameras

I’m not a camera guy, but there was notable camera news at the show. Many folks really like the Panasonic colorimetry for which the VariCam products are known. For people who want a full-featured camera in a small form factor, look no further than the Panasonic AU-EVA1. It’s a 4K, Super35, handheld cinema camera featuring dual native ISOs, and Panasonic claims 14 stops of latitude. It takes EF lenses and can output camera raw data, and when paired with an Atomos recorder it will be able to record ProRes RAW.

Another new camera is Canon’s EOS C700 FF. This is a new full-frame model, available in both EF and PL lens mount versions. Like the standard Super35 C700, it records ProRes or XF-AVC at up to 4K resolution onboard to CFast cards. The full-frame sensor offers higher resolution and a shallower depth of field.

Storage

Storage is of interest to many editors, and as costs come down, collaboration is easier than ever. The direct-attached vendors, like G-Tech, LaCie, OWC, Promise, and others, were all there with new products. So were the traditional shared storage vendors, like Avid, Facilis, Tiger, 1 Beyond, and EditShare. But three of the newer companies had my interest.

In my editing day job, I work extensively with QNAP, which, in my experience, currently offers the best price/performance ratio of any system. It’s reliable and cost-effective, and it provides reasonable JKL response when cutting HD media with Premiere Pro in a shared editing installation. But it’s not the most responsive, and it struggles with 4K media in spite of plenty of bandwidth – especially when the editors are all banging away. This has me looking at both Lumaforge and OpenDrives.

Lumaforge is known to many Final Cut Pro X editors, because the developers have optimized the system for FCPX and have had early successes with many key installations. Since then they have also pushed into more Premiere-based installations. Because these units are engineered for video-centric facilities, as opposed to data-centric ones, they promise a better shared storage experience for video editing.

Likewise, OpenDrives made its name as the storage provider for high-profile film and TV projects cut on Premiere Pro. Last year they came to the show with their highest-performance, all-SSD systems. Those units are pricey and, therefore, don’t have broad appeal. This year they brought a few systems that are applicable to a wider user base, including spinning-disk and hybrid products. All are truly optimized for Premiere Pro.

The cloud

In other storage news, “the cloud” garners a ton of interest. The biggest vendors are Microsoft, Google, IBM, and Amazon. While each of these offers relatively easy ways to use cloud-based services for back-up and archiving, if you want a full cloud-based installation for all of your media needs, then actual off-the-shelf solutions are not readily available. The truth of the matter is that each of these companies offers APIs, which are then handed off to other vendors – often for totally custom solutions.

Avid and Sony seem to have the most complete offerings, with Sony Ci being the best one-size-fits-all answer for customer-facing services. Of course, if review-and-approval is your only need, then Frame.io leads the pack and will roll out new features during the year. IBM/Aspera is a great option for standard archiving, because fast Aspera upload and download transfers are included, and you get your choice of IBM or other (Google, Amazon, etc.) cloud storage. They even offer a 30-day trial with up to 100GB of IBM storage free. Backblaze is a competing archive solution with many partnering applications. For example, you can tie it in with Archiware’s P5 suite of tools for back-up, archiving, and server synchronization to the cloud.

Naturally, when you talk about the “cloud”, many people interpret that to mean software that runs in the cloud – SaaS (software as a service). In most cases, that is nowhere close to happening. However, the exception is The Foundry, which was showing Athera, a suite of its virtualized applications, like Nuke, running on the Google Cloud Platform. They demoed it running inside the Chrome browser, thanks to this partnership with Google, and The Foundry had a pod in the Google partners pavilion.

In short, you can connect to the internet with a laptop, activate a license of the tool or tools that you need, and then all media, processing, and rendering are handled in the cloud, using Google’s services and hardware. Since all of this happens on Google’s servers, only an updated UI image needs to be pushed back to the connected computer’s display. This concept is ideal for the visual effects world, where the work is generally done on an individual shot basis without a lot of media being moved in real time. The target is the Nuke-centric shop that may need to add a few freelancers quickly, and who may or may not be able to work on-premises.

Interesting newcomers

As I mentioned at the beginning, part of the joy of NAB is discovering the small vendors who seek out the show to make their mark. One example this year is Lumberjack Systems, a venture by Philip Hodgetts and Greg Clarke of Intelligent Assistance. They were in the Lumaforge suite demonstrating Lumberjack Builder, which is a text-based NLE. In the simplest of explanations, your transcription or scripted text is connected to the media. As you re-arrange or trim the text, the associated picture is edited accordingly. Newly-written text for voiceovers turns into spoken-word media, courtesy of the computer’s built-in system voice. Once your text-based rough cut is complete, an FCPXML is sent to Final Cut Pro X for further finessing and final editing.
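Conceptually, the idea is simple: each chunk of text knows which clip it came from and where. Here is a toy Python sketch of that mapping – purely my own illustration, not Lumberjack’s implementation or its FCPXML export, and with made-up clip names and timecodes – showing how reordering text segments effectively reorders the cut.

from dataclasses import dataclass

@dataclass
class TextSegment:
    words: str      # transcript text the editor sees and rearranges
    clip: str       # source media file the text was spoken in
    start: float    # in-point within the source clip (seconds)
    end: float      # out-point within the source clip (seconds)

def build_cut(segments):
    # Turn the ordered text into a simple cut list; rearranging or
    # trimming the text is what actually edits the picture.
    timeline, record_in = [], 0.0
    for seg in segments:
        timeline.append((record_in, seg.clip, seg.start, seg.end, seg.words))
        record_in += seg.end - seg.start
    return timeline

# Two interview bites, reordered by the editor in text form.
segments = [
    TextSegment("so we shot the whole thing handheld", "A002_interview.mov", 71.0, 76.5),
    TextSegment("the film started out as a joke", "A001_interview.mov", 12.0, 17.0),
]
for record_in, clip, src_in, src_out, words in build_cut(segments):
    print(f"{record_in:6.2f}s  {clip}  {src_in:.2f}-{src_out:.2f}  \"{words}\"")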

Another new vendor I encountered was Quine, co-founded by Norwegian DoP Grunleik Groven. Their QuineBox IoT device attaches to the back of a camera, where it can record and upload “conformable” dailies (ProRes, DNxHD) to your SAN, as well as proxies to the cloud via its internal wi-fi system. Script notes can also be incorporated. The unit has already been battle-tested on the Netflix/NRK production of “Norsemen”.

Closing thoughts

It’s always interesting to see, year over year, which companies are not at the show. This isn’t necessarily indicative of a company’s health, but it can signal a change in that company’s direction or in that of the industry. Sometimes companies opt for smaller suites at an area hotel in lieu of the show floor (Autodesk). Or they are a smaller part of a reseller or partner’s booth (RED). But often, they are simply gone. For instance, in past years drones were all the rage, with a lot of different manufacturers exhibiting. DJI has largely captured that market for both the vehicles and the camera systems, and while there were a few other drone vendors besides DJI, GoPro and Freefly weren’t at the show at all.

Another surprise change for me was the absence of SAM (Snell Advanced Media) – the hybrid company formed out of Snell & Wilcox and Quantel. SAM products are now part of Grass Valley, which, in turn, is owned by Belden (the cable manufacturer). Separate Snell products appear to have been absorbed into the broader Grass Valley product line. Quantel’s Go and Rio editors continue in Grass Valley’s editing line, alongside Edius – as the entry-level, midrange, and advanced NLE products. A bit sad actually. And very ironic. Here we are in the world of software and file-based video, yet the company that still has money to make acquisitions is the one with a heavy investment in copper (I know, not just copper, but you get the point).

Speaking of “putting a fork in it”, I would have to say that stereo 3D and 360 VR are pretty much dead in the film and video space. I understand that there is a market – potentially quite large – in gaming, education, simulation, engineering, training, etc. But for more traditional entertainment projects, it’s just not there. Vendors were down to a few, and even though the leading NLEs have ways of working with 360 VR projects, the image quality still looks awful. When you view a 4K image within even the best goggles, the qualitative experience is like watching a 1970s-era TV set from a few inches away. For now, it continues to be a novelty looking for a reason to exist.

A few final points… It’s always fun to see which computers are being used in the booths. Apple is again a clear winner, with plenty of MacBook Pros and iMac Pros all over the LVCC wherever creative products or demos were involved. eGPUs are of interest, with Sonnet being the main vendor. However, an eGPU is not a solution for every problem. For example, you will see more benefit by adding an eGPU to a lesser-powered machine, like a 13” MacBook Pro, than to one with more horsepower, like an iMac Pro. Each eGPU takes up a Thunderbolt 3 bus, so realistically you are likely to add only one eGPU to a computer. None of the NLE vendors could really tell me how much of a boost their application would gain from an eGPU. Finally, if you are looking for great-looking, large OLED displays that are pretty darned accurate and won’t break the bank, then LG is the place to look.

©2018 Oliver Peters