Luca Visual FX builds Mystery & Suspense

For most editors, creating custom music scores tends to fall into the “above my pay grade” category. If you are a whizz with GarageBand or Logic Pro X, then you might dip into Apple’s loop resources. But most commercials and corporate videos are easily serviced by myriad stock music sites, like Premium Beat and Music Bed. Some music customization is also possible with tracks from companies like SmartSound.

Yet, none of the go-to music library sites offer curated, genre-based, packages of tracks and elements that make it easy to build up a functional score for longer dramatic productions. Such projects are usually the work of composers or a specific music supervisor, sound designer, or music editor doing a lot of searching and piecing together from a wide range of resources.

Enter Luca Visual FX – a developer best known for visual effects plug-ins, such as Light Kit 2.0. It turns out that Luca Bonomo is also a composer. His first offering is the Mystery & Suspense Music and Sound Library, a collection of 500 clips comprising music themes, atmospheres, drones, loops, and sound effects. This is a complete toolkit designed to make it easy to combine elements in order to create a custom score for dramatic productions in the mystery or suspense genre.

These tracks are available for purchase as a single library through the LucaVFX website. They are downloaded as uncompressed, stereo AIF files in a 24-bit/96kHz resolution. This means they are of top quality and compatible with any Mac or PC NLE or DAW application. Best of all is the price: $79. The package is licensed for a single user and may be used for any audio or video production, including for commercial purposes.

Thanks to LucaVFX, I was able to download and test out the Library on a recent short film. The story is a suspense drama in the style of a Twilight Zone episode, so creating a non-specific, ethereal score fits perfectly. Drones, dissonance, and other suspenseful sounds are completely in line, which is where this collection shines.

Although I could have used any application to build this, I opted for Apple’s Final Cut Pro X. Because of its unique keyword structure, it made sense to first set up a separate FCPX library for only the Mystery & Suspense package. During import, I let FCPX create keyword collections based on the Finder folders. This keeps the Mystery & Suspense FCPX library organized in the same way the clips were originally grouped. Doing so facilitates fast, easy sorting and previewing of any of the 500 clips within the music library. Then I created a separate FCPX library for the production itself. With both FCPX libraries open, I could quickly preview and place clips from my music library into the edit sequence for the film, located within the other FCPX library.

Final Cut uses Connected Clips instead of tracks. This means that you can quickly build up and align overlapping atmospheres, transitions, loops, and themes for a densely layered music score in a very freeform manner. I was able to build up a convincing score for a half-hour-long piece in less than an afternoon. Granted, this isn’t mixed yet, but at least I now have the musical elements that I want, where I want them. I feel this style of working is definitely faster in Final Cut Pro X – and more conducive to creative experimentation – but it would certainly work just as well in other applications.

The Mystery & Suspense Library is definitely a winner, although I do have a few minor quibbles. First, the music and effects are in keeping with the genre, but don’t go beyond it. When creating a score for this kind of production, you also need some “normal” or “lighter” moods for certain scenes or transitions. I felt that was missing and I would still have to step outside of this package to complete the score. Secondly, many of the clips have a synthesized or electronic tone to them, thanks to the instruments used to create the music. That’s not out of character with the genre, but I still would have liked some of these to include more natural instruments than they do. In fairness to LucaVFX, if the Mystery & Suspense Library is successful, then the company will create more libraries in other genres, including lighter fare.

In conclusion, this is a high quality library perfectly in keeping with its intended genre. Using it is fast and flexible, making it possible for even the most musically-challenged editor to develop a convincing, custom score without breaking the bank.

©2018 Oliver Peters

HDR and RAW Demystified, Part 2

(Part 1 of this series is linked here.) One of the surprises of NAB 2018 was the announcement of Apple ProRes RAW. This brought camera raw video to the forefront for many who had previously discounted it. To understand the ‘what’ and ‘why’ about raw, we first have to understand camera sensors.

For quite some years now, cameras have been engineered with a single CMOS sensor. Most of these sensors use a Bayer-pattern array of photosites, named for Bryce Bayer, the Kodak color scientist who developed the system. Photosites are the light-receiving elements of a sensor. The Bayer pattern is a checkerboard filter that separates light according to red/blue/green wavelengths, so each photosite captures light as monochrome data corresponding to a single color component. In doing so, the camera captures a wide exposure latitude as linear data. In this native form, that latitude is greater than what can be squeezed into standard video. There is a correlation between physical photosite size and resolution. With smaller photosites, more can fit on the sensor, yielding greater native resolution. But with fewer, larger photosites, the sensor has better low-light capabilities. In short, resolution and exposure latitude are a trade-off in sensor design.
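As a thumbnail sketch, that checkerboard layout can be modeled in a few lines of code. This assumes the common RGGB variant of the pattern; real sensors vary in layout and, of course, in how the data is processed:

```python
# A minimal sketch of a Bayer (RGGB) filter mosaic, illustrating how each
# photosite records only one color component. RGGB layout is an assumption;
# actual sensor designs differ.

def bayer_color(row, col):
    """Return the color filter covering the photosite at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print a 4x4 corner of the mosaic. Green sites outnumber red and blue
# two to one, matching the eye's greater sensitivity to green.
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
```

Reconstructing a full RGB value for every pixel from this mosaic is the demosaicing (de-Bayering) step discussed later.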

Log encoding

Typically, raw data is converted into RGB video by the internal electronics of the camera. It is then subsequently converted into component digital video and recorded using a compressed or uncompressed codec and one of the various color sampling schemes (4:4:4, 4:2:2, 4:1:1, 4:2:0). These numbers express a ratio that represents YCrCb – where Y = luminance (the first number) and CrCb = two difference signals (the second two numbers) used to derive color information. You may also see this written as YUV, Y/R-Y/B-Y or other forms. In the conversion, sampling, and compression process, some information is lost. For instance, a 4:4:4 codec preserves twice as much color information as a 4:2:2 codec. Two methods are used to preserve wide color gamuts and extended dynamic range: log encoding and camera raw capture.
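The arithmetic behind that “twice as much” claim is easy to check. In the J:a:b notation, the sample counts for a four-pixel-wide, two-line block work out as follows (an illustrative sketch only):

```python
# Sample counts per 4-pixel-wide, two-line block for common J:a:b
# chroma subsampling schemes. Illustrative arithmetic only.

def samples_per_block(j, a, b):
    """Luma and chroma sample counts in a J-wide, two-line block."""
    y = j * 2              # one luma sample per pixel, across both lines
    chroma = 2 * (a + b)   # a chroma samples on line 1 and b on line 2,
                           # times two channels (Cb and Cr)
    return y, chroma

for j, a, b in [(4, 4, 4), (4, 2, 2), (4, 1, 1), (4, 2, 0)]:
    y, c = samples_per_block(j, a, b)
    print(f"{j}:{a}:{b} -> {y} luma + {c} chroma samples")
```

4:4:4 yields 16 chroma samples per block against 4:2:2’s 8, which is exactly the two-to-one ratio mentioned above.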

Most camera manufacturers offer some form of logarithmic video encoding, but the best-known is ARRI’s Log-C. Log encoding applies a logarithm to linear sensor data in order to compress that data into a “curve”, which will fit into the available video signal “bucket”. Log-C video, when left uncorrected and viewed in Rec. 709, will appear to lack contrast and saturation. To correct the image, a LUT (color look-up table) must be applied, which is the mathematical inverse of the process used to encode the Log-C signal. Once restored, the image can be graded to use and/or discard as much of the data as needed, depending on whether you are working in an SDR or HDR mode.
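The principle can be sketched with a toy curve. To be clear, this is not ARRI’s actual Log-C formula; the gain constant here is arbitrary and purely illustrative:

```python
import math

# A toy log encode and its inverse, illustrating the principle behind
# curves like Log-C. NOT ARRI's formula; the constant is arbitrary.

A = 95.0  # assumed gain, chosen only so mid-gray lands well up the curve

def log_encode(linear):
    """Compress linear scene light (0..1) into a 0..1 code value."""
    return math.log1p(A * linear) / math.log1p(A)

def log_decode(code):
    """The mathematical inverse, conceptually what a corrective LUT undoes."""
    return math.expm1(code * math.log1p(A)) / A

# Round-tripping restores the original linear value (within float error),
# which is why a proper LUT can fully "un-flatten" a log image.
mid_gray = 0.18
assert abs(log_decode(log_encode(mid_gray)) - mid_gray) < 1e-9
```

Note how 18% gray encodes to a code value well above 0.5, which is why uncorrected log footage looks washed out on a Rec. 709 display.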

Remember that the conversion from a flat, log image to full color will only look good when you have bit-depth precision. This means that if you are working with log material in an 8-bit system, you only have 256 steps between black and white. That may not be enough and the grade from log to full color may result in banding. If you work in a 10-bit system, then you have 1024 steps instead of only 256 between the same black and white points. This greater precision yields a smoother transition in gradients and, therefore, no banding. If you work with ProRes recordings, then according to Apple, “Apple ProRes 4444 XQ and Apple ProRes 4444 support image sources up to 12 bits and preserve alpha sample depths up to 16 bits. All Apple ProRes 422 codecs support up to 10-bit image sources, though the best 10-bit quality is obtained with the higher-bit-rate family members – Apple ProRes 422 and Apple ProRes 422 HQ.”
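A quick calculation shows why the extra bits matter, especially once a grade stretches only part of the recorded range across the full output:

```python
# Quantization steps available at different bit depths, and what remains
# after a grade stretches a portion of the recorded range to full output.

def code_values(bits):
    """Discrete levels between black and white at a given bit depth."""
    return 2 ** bits

for bits in (8, 10, 12):
    print(f"{bits}-bit: {code_values(bits)} steps")

# If a grade stretches one quarter of the recorded range across the full
# output, only that quarter's steps remain to describe the gradient.
stretch = 0.25
for bits in (8, 10):
    print(f"{bits}-bit after stretch: {int(code_values(bits) * stretch)} usable steps")
```

With 8-bit material, that stretch leaves only 64 usable steps, which is squarely in banding territory; 10-bit still leaves 256.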

Camera raw

RAW is not an acronym. It’s simply shorthand for camera raw information. Before video, camera raw was first used in photography, typified by Canon raw (.cr2) and Adobe’s Digital Negative (.dng) formats. The latter was released as an open standard and is widely used in video as Cinema DNG.

Camera raw in video cameras made its first practical introduction when RED Digital Cinema introduced their RED ONE cameras equipped with REDCODE RAW. While not the first with raw, RED’s innovation was to record a compressed data stream as a movie file (.r3d), which made post-production significantly easier. The key difference between raw and non-raw workflows is that with raw, the conversion into video no longer takes place in the camera or an external recorder. This conversion happens in post. Since the final color and dynamic range data is not “baked” into the file, the post-production process can be improved in future years, making an even better result possible with an updated software version.

Camera raw data is usually proprietary to each manufacturer. In order for any photographic or video application to properly decode a camera raw signal, it must have a plug-in from that particular manufacturer. Some of these are included with a host application and some require that you download and install a camera-specific add-on. Such add-ons or plug-ins are considered to be a software “black box”. The decoding process is hidden from the host application, but the camera supplier will enable certain control points that an editor or colorist can adjust. For example, with RED’s raw module, you have access to exposure, the demosaicing (de-Bayering) resolution, RED’s color science method, and color temperature/tint. Other camera manufacturers will offer less.

Apple ProRes RAW

The release of ProRes RAW gives Apple a raw codec that is optimized for multi-stream playback performance in Final Cut Pro X and on the newest Apple hardware. This is an acquisition codec, so don’t expect to be able to export a timeline from your NLE and record it into ProRes RAW. Although I wouldn’t count out a transcode from another raw format into ProRes RAW, or possibly an export from FCPX when your timeline consists only of ProRes RAW content, that’s not possible today. In fact, you can only play ProRes RAW files in Final Cut Pro X or Apple Motion, but only FCPX displays the correct color information at default settings.

Currently ProRes RAW has only been licensed by Apple to Atomos and DJI. The Atomos Inferno and Sumo 19 units are equipped with ProRes RAW. This is only active with certain Canon, Panasonic, and Sony camera models that can send their raw signal out over an SDI cable. Then the Atomos unit will remap the camera’s raw values to ProRes RAW and encode the file. DJI’s Zenmuse X7 gimbal camera has also been updated to support ProRes RAW. With DJI, the acquisition occurs in-camera, rather than via an external recorder.

Like RED’s REDCODE, Apple ProRes RAW is a variable bit-rate, compressed codec with different quality settings. ProRes RAW and ProRes RAW HQ fall in line with data rates similar to those of ProRes and ProRes HQ. Unlike RED, no controls are exposed within Final Cut Pro X to access specific raw parameters. Therefore, Final Cut Pro X’s color processing controls may or may not take effect prior to the conversion from raw to video. At this point, that’s an unknown.

(Read more about ProRes RAW here.)

Conclusion

The main advantage of the shift to using movie file formats for camera raw – instead of image sequence files – is that processing is faster and the formats are conducive to working natively in most editing applications.

It can be argued whether or not there is really much difference in starting with a log-encoded versus a camera raw file. Leading feature films presented at the highest resolutions have originated both ways. Nevertheless, both methods empower you with extensive creative control in post when grading the image. Both accommodate a move into HDR and wider color gamuts. Clearly log and raw workflows future-proof your productions for little or no additional investment.

Originally written for RedShark News.

©2018 Oliver Peters

HDR and RAW Demystified, Part 1

Two buzzwords have been the highlight of many tech shows within this past year – HDR and RAW. In this first part, I will attempt to clarify some of the concepts surrounding video signals, including High Dynamic Range (HDR). In part 2, I’ll cover more about camera raw recordings.

Color space

Four things define the modern video signal: color space (aka color gamut), white point, gamma curve, and dynamic range. The easiest way to explain color space is with the standard triangular plot of the color spectrum, known as a chromaticity diagram. This chart defines the maximum colors visible to most humans when visualized on an x,y grid. Within it are numerous ranges that define a less-than-full range of colors for various standards. These represent the technical color spaces that cameras and display systems can achieve. On most charts, the most restrictive ranges are sRGB and Rec. 709. The former is what many computer displays have used until recently, while Rec. 709 is the color space standard for high definition TV. (These recommendations were developed by the International Telecommunications Union, so Rec. 709 is simply shorthand for ITU-R Recommendation BT.709.)

Next out is P3, a standard adopted for digital cinema projection and, more recently, new computer displays, like those on the Apple iMac Pro. While P3 doesn’t display substantially more color than Rec. 709, colors at the extremes of the range do appear different. For example, the P3 color space will render more vibrant reds with a more accurate hue than Rec. 709 or sRGB. With UHD/4K becoming mainstream, there’s also a push for “better pixels”, which has brought about the Rec. 2020 standard for 4K video. This standard covers about 75% of the visible spectrum, although it’s perfectly acceptable to deliver 4K content that was graded in a Rec. 709 color space. That’s because most current displays that are Rec. 2020 compatible can’t actually display 100% of the colors defined in this standard yet.

The center point of the chromaticity diagram is white. However, different systems consider a slightly different color temperature to be white. Color temperature is measured in Kelvin degrees. Displays are a direct illumination source and for those, 6500-degrees (more accurately 6504) is considered pure white. This is commonly referred to as D-65. Digital cinema, which is a projected image, uses 6300-degrees as its white point. Therefore, when delivering something intended for P3, it is important to specify whether that is P3 D-65 or P3 DCI (digital cinema).

Dynamic range

Color space doesn’t live on its own, because the brightness of the image also defines what we see. Brightness and contrast are expressed as dynamic range. Up until the advent of UHD/4K, we have been viewing displays in SDR (standard dynamic range). If you think of the chromaticity diagram as lying flat, with dynamic range as a column extending upward from the chart on the z-axis, you can quickly see that the combination of color space and dynamic range forms a volume. With SDR, that “column” goes from 0 IRE up to 100 IRE (also expressed as 0-100 percent).

Gamma is the function that changes linear brightness values into the weighted values that are translated to our screens. It maps a numerical pixel value to an actual brightness. By increasing or decreasing gamma values, you are, in effect, bending the straight line between darkest and lightest values into a curve. This changes the midtone of the displayed image, making the image appear darker or lighter. Gamma values are applied to both the original image and to the display system. When they don’t match, you run into situations where the image will look vastly different when viewed on one system versus another.
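As a rough sketch, gamma encoding and decoding can be modeled with a pure power function. Real standards such as Rec. 709 add a linear segment near black, and the exponent here is an assumption, not a spec value:

```python
# A simplified gamma model using a pure power law. Real transfer
# functions (Rec. 709, sRGB) also include a linear toe near black.

GAMMA = 2.4  # assumed display-side exponent; not taken from any standard

def gamma_encode(linear):
    """Linear light (0..1) -> encoded code value (0..1)."""
    return linear ** (1.0 / GAMMA)

def gamma_decode(code):
    """Encoded code value -> linear light; the display applies this curve."""
    return code ** GAMMA

# Mid-gray is lifted by the encode, then restored by the display's decode.
# When the two exponents don't match, the midtones shift darker or lighter.
mid = 0.18
assert abs(gamma_decode(gamma_encode(mid)) - mid) < 1e-9
```

The mismatch case is exactly what the paragraph above describes: encode with one exponent, decode with another, and the midtones land in the wrong place.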

With the advent of UHD/4K, users have also been introduced to HDR (high dynamic range), which allows us to display brighter images and recover the overshoot elements in a frame, like bright lights and reflections. It is important to understand that HDR video is not the same as HDR photography. HDR photos are created by capturing several bracketed exposures of the same image and then blending those into a composite – either in-camera or via software, like Photoshop or Lightroom. HDR photos often yield hyper-real results, such as when high-contrast sky and landscape elements are combined.

HDR video is quite different. HDR photography is designed to work with existing technology, whereas HDR video actually takes advantage of the extended brightness range made possible in new displays. It is also only visible with the newest breed of UHD/4K TV sets that are HDR-capable. Display illumination is measured in nits. One nit equals one candela per square meter – in other words, the light of a single candle spread over a square meter. SDR displays have been capable of up to 100 nits. Modern computer displays, monitors, and consumer television sets can now display brightness in the range of 500 to 1,000 nits and even brighter. Anything over 1,000 nits is considered HDR. But that’s not the end of the story, as there are currently four competing standards: Dolby Vision, HDR10, HDR10+, and HLG. I won’t get into the weeds about the specifics of each, but they all apply different peak brightness levels and methods. Their nit levels range from 1,000 up to Dolby Vision’s theoretical limit of 10,000 nits.

Just because you own a high-nits display doesn’t mean you are seeing HDR. It isn’t simply turning up the brightness “to 11”, but rather providing the headroom to extend the parts of the image that exceed the normal range. These peaks can now be displayed with detail, without compressing or clipping them, as we do now. When an HDR master is created, metadata is stored with the file that tells the display device that the signal is an HDR signal and to turn on the necessary circuitry. That metadata is carried over HDMI. Therefore, every device in the playback chain must be HDR-capable.

HDR also means more hardware to work with it accurately. Although you may have grading software that accommodates HDR – and you have a 500 nits display, like those in an iMac Pro – you can’t effectively see HDR in order to properly grade it. That still requires proper capture/playback hardware from Blackmagic Design or AJA, along with a studio-grade, external HDR monitor.

Unfortunately, there’s one dirty little secret with HDR. Monitors and TV sets cannot display a full-screen image at maximum brightness. You can’t display a totally white background at 1,000 nits on a 1,000-nit display. These displays employ gain circuitry to darken the image in those cases. The responsiveness of any given display model will vary widely depending on how much of the screen is at full brightness and for how long. No two models will be at exactly the same brightness for any given percentage at peak level.

Today HDR is still the “wild west” and standards will evolve as the market settles in on a preference. The good news is that cameras have been delivering content that is “HDR-ready” for several years. This brings us to camera raw and log encoding, which will be covered in Part 2.

(Here is some additional information from SpectraCal and AVForums.)

Originally written for RedShark News.

©2018 Oliver Peters

Wild Wild Country

Sometimes real life is far stranger than fiction. Such is the tale of the Rajneeshees – disciples of the Indian guru Bhagwan Shree Rajneesh – who moved to Wasco County, Oregon in the 1980s. Their goal was to establish a self-contained, sustainable, utopian community of spiritual followers, but the story quickly took a dark turn. Conflicts with the local Oregon community escalated, including the largest bioterror attack in United States history, when a group of followers poisoned 751 people at ten local restaurants through intentional salmonella contamination.

Additional criminal activities included attempted murder, conspiracy to assassinate the U.S. Attorney for the District of Oregon, arson, and wiretapping. The community was largely controlled by Bhagwan Shree Rajneesh’s personal secretary, Sheela Silverman (Ma Anand Sheela), who served 29 months in federal prison on related charges. She moved to Switzerland upon her release. Although the Rajneeshpuram community is no more and its namesake is now deceased, the community of followers lives on as the Osho International Foundation. This slice of history has now been chronicled in the six-part Netflix documentary Wild Wild Country, directed by Chapman and Maclain Way.

Documentaries are truly an editor’s medium. More so than any other cinematic genre, the final draft of the script is written in the cutting room. I recently interviewed Wild Wild Country’s editor, Neil Meiklejohn, about putting this fascinating tale together.

Treasure in the archives

Neil Meiklejohn explains, “I had worked with the directors before to help them get The Battered Bastards of Baseball ready for Sundance. That is also an Oregon story. While doing their research at the Oregon Historical Society, the archivist turned them on to this story and the footage available. The 1980s was an interesting time in local broadcast news, because that was a transition from film to video. Often stories were shot on film and then transferred to videotape for editing and airing. Many times stations would simply erase the tape after broadcast and reuse the stock. The film would be destroyed. But in this case, the local stations realized that they had something of value and held onto the footage. Eventually it was donated to the historical society.”

“The Rajneeshees on the ranch were also very proud of what they were doing – farming and building a utopian city – so, they would constantly invite visitors and media organizations onto the ranch. They also had their own film crews documenting this, although we didn’t have as much access to that material. Ultimately, we accumulated approximately 300 hours of archival media in all manner of formats, including Beta-SP videotape, ripped DVDs, and the internet. It also came in different frame rates, since some of the sources were international. On top of the archival footage, the Ways also recorded another 100 hours of new interviews with many of the principals involved on both sides of this story. That was RED Dragon 6K footage, shot in two-camera, multi-cam set-ups. So, pretty much every combination you can think of went into this series. We just embraced the aesthetic defects and differences – creating an interesting visual texture.”

Balancing both sides of the story

“Documentaries are an editor’s time to shine,” continues Meiklejohn. “We started by wanting to tell the story of the battle between the cult and the local community without picking sides. This really meant that each scene had to be edited twice. Once from each perspective. Then those two would be combined to show both sides as point-counterpoint. Originally we thought about jumping around in time. But, it quickly became apparent that the best way to tell the story was as a linear progression, so that viewers could see why people did what they did. We avoided getting tricky.”

“In order to determine a structure to our episodes, we first decided the ‘ins’ and ‘outs’ for each and then the story points to hit within. Once that was established, we could look for ‘extra gold’ that might be added to an episode. We would share edits with our executive producers and Netflix. On a large research-based project like this, their input was crucial to making sure that the story had clarity.”

Managing the post production

Meiklejohn normally works as an editor at LA post facility Rock Paper Scissors. For Wild Wild Country, he spent ten months in 2017 at an ad hoc cutting room located at the offices of the film’s executive producers, Jay and Mark Duplass. His set-up included Apple iMacs running Adobe Creative Cloud software, connected to an Avid ISIS shared storage network. Premiere Pro was the editing tool of choice.

Meiklejohn says, “The crew was largely the directors and myself. Assistant editors helped at the front end to get all of the media organized and loaded, and then again when it came time to export files for final mastering. They also helped to take my temp motion graphics – done in Premiere – and then polish them in After Effects. These were then linked back into the timeline using Dynamic Link between Premiere and After Effects. Chapman and Maclain [Way] were very hands-on throughout, including scanning in stills and prepping them in Photoshop for the edit. We would discuss each new segment to sort out the best direction the story was taking and to help set the tone for each scene.”

“Premiere Pro was the ideal tool for this project, because we had so many different formats to deal with. It dealt well with the mess. All of the archival footage was imported and used natively – no transcoding. The 6K RED interview footage was transcoded to ProRes for the ‘offline’ editing phase. A lot of temp mixing and color correction was done within Premiere, because we always wanted the rough cuts to look smooth with all of the different archival footage. Nothing should be jarring. For the ‘online’ edit, the assistants would relink to the full-resolution RED raw files. The archival footage was already linked at its native resolution, because I had been cutting with that all along. Then the Premiere sequences were exported as DPX image sequences with notched EDLs and sent to E-Film, where color correction was handled by Mitch Paulson. Unbridled Sound handled the sound design and mix – and then Encore handled mastering and 1080p deliverables.”

Working with 400 hours of material and six hour-long episodes in Premiere might be a concern for some, but it was flawless for Meiklejohn. He continues, “We worked the whole series as one large project, so that at any given time, we could go back to scenes from an earlier episode and review and compare. The archival material was organized by topic and story order, with corresponding ‘selects’ sequences. As the project became bigger, I would pare it down by deleting unnecessary sequences and saving a newer, updated version. So, no real issue by keeping everything in a single project.”

As with any real-life event, where many of the people involved are still alive, opinions will vary as to how balanced the storytelling is. Former Rajneeshees have both praised and criticized the focus of the story. Meiklejohn says, “Sheela is one of our main interview subjects and in many ways, she is both the hero and the villain of this story. So, it was interesting to see how well she has been received on social media and in the public screenings we’ve done.”

Wild Wild Country shares a pointed look into one of the most bizarre clashes in the past few decades. Meiklejohn says, “Our creative process was really focused on the standoff between these two groups and the big inflection points. I tried to let the raw emotions that you see in these interviews come through and linger a bit on-screen to help inform the events that were unfolding. The story is sensational in and of itself, and I didn’t want to distract from that.”

For more information, check out Steve Hullfish’s interview at Art of the Cut.

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters

Editing the FX Series Atlanta

Atlanta just wrapped its second season on the FX Network. The brainchild of actor/writer/producer/director Donald Glover, Atlanta is the story of Earn Marks, a Princeton drop-out who returns home to Atlanta, where he decides to manage his cousin’s rap career. The show is very textural and plot is secondary. It loosely follows Earn and the people in his life – specifically his cousin, Paper Boi, an up-and-coming rapper, and his friend and posse-mate, Darrius.

The visual architect of the show is director Hiro Murai, who has directed the majority of the episodes. He has set an absurdist tone for much of the story. Any given episode can be wildly different from the episodes that come on either side of it. The episodes taken as a whole make up what the series is about.

I recently had a chance to interview the show’s editors, Kyle Reiter and Isaac Hagy, about working on Atlanta and their use of Adobe Premiere Pro CC to edit the series.

Isaac Hagy: “I have been collaborating with Hiro for years. We went to college together and ever since then, we’ve been making short films and music videos. I started out doing no-budget music videos, then eventually moved into documentaries and commercials, and now television. A few years ago, we made a short film called Clapping for the Wrong Reasons, starring Donald. That became kind of an aesthetic precursor that we used in pitching this show. It served as a template for the tone of Atlanta.”

“I’ve used pretty much every editing software under the sun – cutting short films in high school on iMovie, then Avid in college when I went to film school at USC. Once I started doing short film projects, I found Final Cut Pro to be more conducive to quick turnarounds than Avid. I used that for five or six years, but then they stopped updating it, so I needed to switch over to a more professional alternative. Premiere Pro was the easiest transition from Final Cut Pro and, at that time, Premiere was starting to be accepted as a professional platform. A lot of people on the show come from a very DIY background, where we do everything ourselves. Like with the early music videos – I would color and Hiro would do effects in After Effects. So, Premiere was a much more natural fit. I am on a show using [Avid] Media Composer right now and it feels like a step backwards.”

With a nod to their DIY ethos, post-production for Atlanta also follows a small, collective approach. 

Kyle Reiter: “We rent a post facility that is just a single-story house. We have a DIY server called a NAS that one of our assistants built and all the media is stored there. It’s just a tower. We brought in our own desktop iMacs with dual monitors that we connect to the server over Ethernet. The show is shot with ARRI Amira cameras in a cinema 2K format. Then that is transcoded to proxy media for editing, which makes it easy to manage. The color correction is done in Resolve. Our assistant editors online it for the colorist, so there’s no grading in-house.” Atlanta airs on the FX Network in the 720p format.

The structure and schedule of this production make it possible to use a simple team approach. Projects aren’t typically shared among multiple editors and assistants, so a more elaborate infrastructure isn’t required to get the job done. 

Isaac Hagy: “It’s a pretty small team. There’s Kyle and myself. We each have an assistant editor. We just split the episodes, so I took half of the season and Kyle the other half. We were pretty self-contained, but because there were an odd number of episodes, we ended up sharing the load on one of them. I did the first cut of that episode and Kyle took it through the director’s cut. But other than that, we each had our individual episodes.”

Kyle Reiter: “They’re in Atlanta for several months shooting. We’ll spend five to seven days doing our cut and then typically move on to the next thing, before we’re finished. That’s just because they’re out of town for several months shooting and then they’ll come back and continue to work. So, it’s actually quite a bit of time calendar-wise, but not a lot of time in actual work hours. We’ll start by pulling selects and marking takes. I do a lot of logging within Premiere. A lot of comments and a lot of markers about stuff that will make it easy to find later. It’s just breaking it down to manageable pieces. Then from there, going scene-by-scene, and putting it all together.”

Many scripted television series that are edited on Avid Media Composer rely on Avid’s script integration features. This led me to wonder whether Reiter and Hagy missed such tools in Premiere Pro.

Isaac Hagy: “We’re lucky that the way in which the DP [Christian Sprenger] and the director shoot the series is very controlled. The projects are never terribly unwieldy, so really simple organizing usually does the trick.”

Kyle Reiter: “They’re never doing more than a handful of takes and there aren’t more than a handful of set-ups, so it’s really easy to keep track of everything. I’ve worked with editors who marked every line and designated a line number; but we don’t on this show. These episodes are very economical in how they are written and shot, so that sort of thing isn’t needed. It would be nice to have an Avid ScriptSync type of tool within Premiere Pro. However, we don’t get an unwieldy amount of footage, so frankly it’s almost not necessary. If it were a different sort of show, where I needed that, then absolutely I would do it. But this is the sort of show where I can get away with not doing it.”

Kyle Reiter: “I’m on a show right now being cut on Media Composer, where there are 20 to 25 takes of everything. Having ScriptSync is a real lifesaver on that one.”

Both editors are fans of Premiere Pro’s advanced features, including the ability to use it with After Effects, along with the new sound tools added in recent versions.

Isaac Hagy: “In the offline, we create some temp visual effects to set the concepts. Some of the simpler effects do make it into the show. We’ll mock it up in Premiere and then the assistant editors will bring it into After Effects and polish the effect. Then it will be linked back into the Premiere timeline via Dynamic Link.”

“We probably go deeper on the sound than any other technical aspect of the show. In fact, a lot of the sound that we temp for the editor’s cut will make it to the final mix stage. We not only try to source sounds that are appropriate for a scene, but we also try to do light mixing ourselves – whether it’s adding reverb or putting the sound within the space – just giving it some realism. We definitely use the sound tools in Premiere quite a bit. Personally, I’ve had scenes where I was using 30 tracks just for sound effects.”

“I definitely feel more comfortable working with sound in Premiere than in Media Composer – and even more than I did in Final Cut. It’s way easier working with filters, mixing, panning, and controlling multiple tracks at once. This season we experimented with the Essential Sound panel quite a bit. It was actually very good at putting a song into the background or placing sound effects outside of a room – just creating spaces.”

When a television series or film is about the music industry, the music in the series plays a principal role. Sometimes that is achieved with a composed score and on other shows, the soundtrack is built from popular music.

Kyle Reiter: “There’s no score on the show that’s not diegetic music, so we don’t have a composer. We had one episode this year where we did have score. Flying Lotus and Thundercat are two music friends of Donald’s that scored the episode. But other than that, everything else is just pop songs that we put into the show.”

Isaac Hagy: “The decision of which music to use is very collaborative. Some of the songs are written in the script. A lot are choices that Kyle and I make. Hiro will add some. Donald will add some. We also have two great music supervisors. We’re really lucky that we get nearly 90% of the music that we fall in love with cleared. But when we don’t, our music supervisors recommend some great alternatives. We’re looking for an authenticity to the world, so we try to rely on tracks that exist in the real world.”

Atlanta provides an interesting look at the fringes of the city’s hip-hop culture. A series that has included an alligator and Donald Glover in weird prosthetic make-up – and where Hiro Murai takes inspiration from The Shining – certainly isn’t your run-of-the-mill television series. It definitely leaves fans wanting more, but a third season has not yet been announced.

This interview was recorded using the Apogee MetaRecorder for iOS application and transcribed thanks to Digital Heaven’s SpeedScriber.

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters