Affinity Publisher

The software market offers numerous alternatives to Adobe Photoshop, but few companies have taken on the challenge of going further and creating a competitive suite of graphics tools – until now. Serif has completed the circle with the release of Affinity Publisher, a full-featured desktop publishing application. It joins a toolkit that already includes Affinity Photo (an image editor) and Affinity Designer (a vector-based illustration app). All three applications support Windows and macOS, and Photo and Designer are also available as full-fledged pro applications for the iPad. Collectively, this graphic design toolkit constitutes an alternative to Adobe Photoshop, Illustrator, and InDesign.

Personas and StudioLink

The core user interface feature of the Affinity applications is that various modules are presented as Personas, which are accessed by the icons in the upper left corner of the interface. For example, in Affinity Photo basic image manipulation happens in the Photo Persona, but for mesh deformations, you need to shift to the Liquify Persona.

Affinity Publisher starts with the Publisher Persona. That’s where you set up page layouts, import and arrange images, create text blocks, and handle print specs and soft proofs. However, with Publisher, Affinity has taken Personas a step further through a technology they call StudioLink. If you also have the Photo and Designer applications installed on the same machine, then a subset of each is directly accessible within Publisher as the Photo and/or Designer Persona. If you have both Photo and Designer installed, then the controls for both Personas are functional in Publisher; if you only have one of the two installed, then just that Persona offers additional controls.

Users of Adobe InDesign know that to edit an image within a document you have to “open in Photoshop,” which launches the full Photoshop application, where you make the changes and then roundtrip back to InDesign. With Affinity Publisher the process is more straightforward, because the Photo Persona is right there. Just select the image within the document and click the Photo Persona button in the upper left, which shifts the UI to display the image processing tools. Likewise, clicking on the Designer Persona will display vector-based drawing tools. In effect, Serif has done with Affinity Publisher what Blackmagic Design has done with the various pages in DaVinci Resolve: click a button and shift to the function designed for the task at hand, without the need to change to a completely different application.

Document handling

All of the Affinity apps are layer-based, so while you are working in any of the three Personas within Publisher, you can see the layer order on the right to let you know where you are in the document. Affinity Photo offers superb compatibility with layered Photoshop PSD files, which means that your interchange with outside designers – who may use Adobe Photoshop – will be quite good.

Affinity Publisher documents are based on Master Pages and Pages, similar to the approach taken by many website design applications. When you create a document, you can set up a Master Page to define a uniform style template for that document. From there you build individual Pages. Any changes made to a Master Page automatically update the corresponding design elements on every Page in the rest of that document. Since Affinity Publisher is designed for desktop publishing, single and multi-page document creation and export settings are both web and print-friendly. Publisher also offers a split-view display, which presents your document in a vector view on the left and as a rasterized pixel view on the right.

Getting started

Any complex application can be daunting at first, but I find the Affinity applications offer a very logical layout that makes it easy to get up to speed. In addition, when you start any of these applications you will first see a launch page that offers direct links to various tutorials, sample documents, and/or layered images. A beginner can quickly download these samples in order to dissect the layers and see exactly how they were created. Aside from these links, you can simply go to Serif’s website, where you’ll find extensive, detailed video tutorials for each step of the process in any of these three applications.

If you are seeking to shake off subscriptions or are simply not bound to Adobe’s design tools for work, then these Affinity applications offer a great alternative. Affinity Publisher, Photo, and Designer are standalone applications, but the combination of the three forms a comprehensive image and design collection. Whether you are a professional designer or just someone who needs to generate the occasional print document, Affinity Publisher is a solid addition to your software tools.

©2019 Oliver Peters


Free Solo

Every now and then a documentary comes along that simply blows away the fictional superhero feats of action films. Free Solo is a testament to the breathtaking challenges real life can offer. It chronicles Alex Honnold’s free solo climb (no ropes) of El Capitan’s 3,000-foot-high sheer rock face – the first and, so far, only successful free solo ascent of the wall.

Free Solo was produced by the filmmaking team of Elizabeth Chai Vasarhelyi and Jimmy Chin, who is renowned as both an action-adventure cinematographer/photographer and mountaineer. It was produced in partnership with National Geographic Documentary Films and has garnered numerous awards, including Oscar and BAFTA awards for best documentary, as well as an ACE Eddie award for its editor, Bob Eisenhardt, ACE. Free Solo enjoyed IMAX and regular theatrical distribution and can now be seen on the National Geographic Television streaming service.

Bob Eisenhardt is a well-known documentary film editor with over 60 films to his credit. Along with his ACE award for Free Solo, Eisenhardt is an editing nominee at this year’s Emmy Awards for his work in cutting the documentary. I recently had a chance to speak with him, and what follows is that conversation.

_________________________________________

[OP] You have a long history in the New York documentary film scene. Please tell me a bit about your background.

[BE] I’ve done a lot of different kinds of films. The majority is cinema vérité work, but some films use a lot of archival footage and some are interview-driven. I’ve worked on numerous films with the Maysles, Barbara Kopple, Matt Tyrnauer, a couple of Alex Gibney’s films – and I often did more than one film with people. I also teach in the documentary program at the New York Film Academy, which is interesting and challenging. It’s really critiquing their thesis projects and discussing some general editing principles. I went to architecture school. Architectural design is taught by critique, so I understand that way of teaching.

[OP] It’s interesting that you studied architecture. I know that a lot of editors come from a musical background or are amateur musicians and that influences their approach to cutting. How do you think architecture affects your editing style?

[BE] They say architecture is frozen music, so that’s how I was taught to design. I’m very much into structure – thinking about the structure of the film and solving problems. Architecture is basically problem solving and that’s what editing is, too. How do I best tell this story with these materials that I have or a little bit of other material that I can get? What is the essence of that and how do I go about it?

[OP] What led to you working on Free Solo?

[BE] This is the second film I’ve made with Chai and Jimmy. The first was Meru. So we had some experience together and it’s the second film about climbing. I did learn about the challenges of climbing the first time and was familiar with the process – what the climbing involved and how you use the ropes. 

Meru was very successful, so we immediately began discussing Free Solo. But the filming took about a year-and-a-half. That was partly due to accidents and injuries Alex had. It went into a second season and then a third season of climbing and you just have to follow along. That’s what documentaries are all about. You hitch your wagon to this person and you have to go where they take you. And so, it became a much longer project than initially thought. I began editing six months before Alex made the final climb. At that point they had been filming for about a year. So I came on in January and he made the climb in June – at which point I was well into the process of editing.

[OP] There’s a point in Free Solo, where Alex had started the ascent once and then stopped, because he wasn’t feeling good about it. Then it was unclear whether or not he would even attempt it again. Was that the six-month point when you joined the production?

[BE] Yes, that’s it. It’s very much the climbers’ philosophy that you have to feel it, or you don’t do it. That’s very true of free soloing. We wanted him to signal the action, “This is what I plan to do.” And he wouldn’t do it – ever – because that’s against the mentality of climbing. “If I feel it, I may do it. Otherwise, not.” It’s great for climbing, but not so good for film production.

[OP] Unlike any other film project, failure in this case would have meant Alex’s death. In that event you would have had a completely different film. That was touched on in the film, but what was the behind-the-scenes thinking about the possibility of such a catastrophe? Any Plan B?

[BE] In these vérité documentaries you never know what’s going to happen, but this is an extreme example. He was either going to do it and succeed, decide he wasn’t going to do it, or die trying, and that’s quite a range. So we didn’t know what film we were making when I started editing. We were going to go with the idea of him succeeding and then we’d reconsider if something else happened. That was our mentality, although in the back of our minds we knew this could be quite different.

When they started, it wasn’t with the intention of making this film. Jimmy knew Alex for 10 years. They were old friends and had done a lot of filming together. He thought Alex would be a great subject for a documentary. That’s what they proposed to Nat Geo – just a portrait of Alex – and Alex said, “If you are going to do that, then I’ve got to do something worthwhile. I’m going to try to free solo El Cap.” He told that to Chai while Jimmy wasn’t there. Chai is not a climber and she thought, “Great, that sounds like it will be a good film.” Jimmy completely freaked out when he found out, because he knew what it meant.

It’s an outrageous concept even to climbers. They actually backed off and had to reconsider whether this was something they wanted to get involved in. Do you really want to see your friend jeopardize his life for this? Would the filming add additional pressure on Alex? They had to deal with this even before they started shooting, which is why that was part of the film. I felt it was a very important idea to get across. Alex is taciturn, so you needed ways to understand him and what he was doing. The crew as a character really helped us do that. They were people Alex could interact with and the audience could identify with.

The other element that I felt was very important was Sanni [McCandless, Alex Honnold’s girlfriend], who suddenly came onto the scene after the filming began. This felt like a very important way to get to know Alex. It also became another challenge for Alex – whether he would be able not only to climb this mountain, but also to have a relationship with this woman. And aren’t those two diametrically opposed? Being able to open yourself up emotionally to someone, but also control your emotions enough to be able to hang by your fingertips 2,000 feet in the air on the side of a cliff.

[OP] Sanni definitely added a lot of humanity to him. Before the climb they discuss the possibility of his falling to his death and Alex’s point of view is that’s OK. “If I die, I die.” I’m not sure he really believed that deep inside. Or did he?

[BE] Alex is very purposeful and lives every day with intention. That’s what’s so intriguing. He knows any minute on the wall could be his last and he’s comfortable with that. He felt like he was going to succeed. He didn’t think he was going to fall. And if he didn’t feel that way he wasn’t going to do it. Seeing the whole thing through Sanni’s eyes allowed us as the audience to get closer to and identify with Alex. We call that moment the ‘Take me into consideration’ scene, which I felt was vitally important.

[OP] Did you have any audience screenings of the rough cuts? If so, how did that inform your editing choices?

[BE] We did do some screenings and it’s a tricky thing. Nat Geo was a great partner throughout. Most companies wouldn’t be able to deal with this going on for a year-and-a-half. It’s in Nat Geo’s DNA to fund exploration and make exploratory films. They were completely supportive, but they did decide they wanted to get into Sundance and we were a month from the deadline. We brought in three other editors (Keiko Deguchi, Jay Freund, and Brad Fuller) to jump in and try to make it. Even though we got an extension and we did a great job, we didn’t get in. The others left and I had another six months to work on the film and make it better. Because of all of this, the screenings were probably too early. The audience had trouble understanding Alex, understanding what he’s trying to do – so the first couple screenings were difficult.

We knew when we saw the initial climbing footage that the climb itself was going to be amazing. By the time we showed it to an audience, we were completely immune to any tension from the climb – I mean, we’d seen it 200 times. It was no longer as scary to us as it had been the first time we saw it. In editing you have to remember the initial reaction you had to the footage so that you can bring it to bear later on. It was a real struggle to make the rest of the story as strong as possible to keep you engaged, until we got to the climb. So we were pleasantly surprised to see that people were so involved and very tense during the climb. We had underestimated that.

We also figured that everyone would already know how this thing ends. It was well-publicized that he successfully climbed El Cap. The film had to be strong enough that people could forget they knew what happened. Although I’ve had people tell me they could not have watched the climb if they hadn’t known the outcome.

[OP] Did you end up emphasizing some aspects over others as a result of the screenings?

[BE] The main question to the audience is, “Do you understand what we are trying to say?” And then, “What do you think of him or her as a character?” That’s interesting information that you get from an audience. We really had to clarify what his goal was. He never says at the beginning, “I’m going to do this thing.” In fact, I couldn’t get him to say it after he did it. So it was difficult to set up his intention. And then it was also difficult to make clear what the steps were. Obviously we couldn’t cover the whole 3,000 feet of El Capitan, so they had to concentrate on certain areas.

We decided to cover five or six of the most critical pitches – sections of the climb – to concentrate on those and really cover them properly during the filming. These were challenging to explain and it took a lot of effort to make that clear. People ask, “How did you manage to cut the final climb – it was amazing.” Well, it worked because of the second act that explains what he is trying to do. We didn’t have to say anything in the third act. You just watch because you understand. 

When we started, people didn’t understand what free soloing is. At first we were calling the film Solo. The nomenclature of climbing is confusing. Soloing is actually climbing with a rope, but only for protection. So then we’d have to explain what free soloing was, as opposed to soloing. However, Han Solo came along and stole our title, so it was much easier to call it Free Solo. Explaining the mentality of climbing, the history of climbing, the history of El Capitan, and then what exactly the steps were for him to accomplish what he was trying to do – all of that took a long time to get right and a lot of it came out of good feedback from the audience.

Then, “Do you understand the character?” At one point we didn’t have enough of Sanni and then we had too much of Sanni. It became this love story and you forgot that he was going to climb. So the balancing was tricky.

[OP] Since you were editing before the final outcome and production was still in progress, did you have an opportunity to request more footage or that something in particular be filmed that you were missing in the edit?

[BE] That was the big advantage to starting the edit before the filming was done. I often end up coming into projects that are about 80-90% shot on average. So they have the ability to get pick-ups if people are alive or if the event can still be filmed in some way. This one was more ‘in progress.’ For instance, he practiced a specific move a lot for the most difficult pitch and I kept asking for more of that. We wanted to show how many times he practiced it in order to get the feel of it.

[OP] Let’s switch gears and talk about the technical side. Which edit system did you use to cut Free Solo?

[BE] We were using Avid Media Composer 8.8.5 with Nexis shared storage. Avid is my first choice for editing. I’ve done about four films on the old Final Cut – Meru being one of them – but, I much prefer Avid. I’ve often inherited projects that were started on something else, so you are stuck. On this one we knew going in that we would do it on Avid. Their ScriptSync feature is terrific. Any long discussions or sit-down interviews were transcribed. We could then word-search them, which was invaluable. My associate editor, Simona Ferrari, set up everything and was also there for the output.

[OP] Did you handle the finishing – color correction and sound post – in-house or go outside to another facility?

[BE] We up-rezzed in the office on [Blackmagic Design DaVinci] Resolve and then took that to Company 3 for finishing and color correction. Deborah Wallach did a great job sound editing and we mixed with Tommy Fleischman [Hugo, The Wolf of Wall Street, BlacKkKlansman]. They shot this on about every camera, aspect ratio, and frame rate imaginable. But if they’re hanging 2,000 feet in the air and didn’t happen to hit the right button for the frame rate – you really can’t complain too much! So there was an incredibly wide range and Simona managed to handle all of that in the finishing. There wasn’t a lot of archival footage, but there were photos for the backstory of the family.

The other big graphic element was the mountain itself. We needed to be able to trace his route up the mountain and that took forever. It wasn’t just to show his climb, but also to connect the pitches that we had concentrated on, since there wasn’t much coverage between them. Making this graphic became very complicated. We tried one house and they couldn’t do it. Finally, Big Star, who was doing the other graphics – photomontages and titles – took this on. It was the very last thing done and was dropped in during the color correction session.

For the longest time in the screenings, the audience was watching a drawing that I had shot off of the cutting room wall and traced in red. It was pretty lame. For the screenings, it was a shot of the mountain and then I would dissolve through to get the line moving. After a while we had some decent in and out shots, but nothing in-between, except this temporary graphic that I created. 

[OP] I caught Free Solo on the plane to Las Vegas for NAB and it had me on the edge of my seat. I know the film was also released in IMAX, so I can only imagine what that experience was like.

[BE] The film wasn’t made for IMAX – that opportunity came up later. It’s a different film on IMAX. Although there is incredible high-angle photography, it’s an intimate story, so it worked well on a moderately big screen. But in IMAX it becomes a spectacle, because you can really see all those details in the high-angle shots. I have cut an IMAX film before and you do pace them differently, because of the ability to look around. However, there wasn’t a different version of Free Solo made for IMAX – we didn’t have the freedom to do that. Of course, the whole film is largely handheld, so we did stabilize a few shots. IMAX merely used their algorithm to bump it up to their format. I was shocked – it was beautiful.

[OP] Let’s talk a bit about your process as an editor. For instance, music. Different editors approach music differently. Do you cut with temp music or wait until the very end to introduce the score?

[BE] Marco Beltrami [Fantastic Four, Logan, Velvet Buzzsaw] was our composer, but I use temp music from very early on. I assemble a huge library of scratch music – from other films or from the potential composers’ past films. I use that until we get the right feel for the music and that’s what we show to the composer. It gives us something to talk about. It’s much easier to say, “We like what the music is doing here, but it’s the wrong instrumentation.” Or, “This is the right instrument, but the wrong tempo.” It’s a baseline.

[OP] How do you tackle the footage at the very beginning? Do you create selects or Kem rolls or some other approach?

[BE] I create a road map to know where I’m going. I go through all the dailies and pull the stuff that I think might be useful – everything from the good-looking shots to a taste of something that I may never use, but want to remember. Then I screen selects reels. I try to do that with the director. Sometimes we can schedule that and sometimes not. On Free Solo there were over 700 hours of footage, so it’s hard to get your arms around that. By the time you get through looking at the 700th hour, you’ve forgotten the first one. That’s why the selecting process is so important to me. The selects amount to maybe a third of the dailies footage. After screening the selects, I can start to see the story and how to tell it.

I make index cards for every scene and storyboard the whole thing. By that I mean arrange the cards on a wall. They are color-coded for places, years, or characters. It allows me to stand back and see the flow of the film, to think about the structure, and the points that I have to hit. I basically cut to that. Of course, if it doesn’t work, I re-arrange the index cards (laugh).

A few years ago, I did a film about the Dixie Chicks [Shut Up & Sing] at the time they got into trouble for comments they had made about President Bush. We inherited half of the footage and shot half. The Dixie Chicks went on to produce a concert and an album based upon their feelings about the whole experience. It was kind of present and past, so there were basically two different colors to the cards. It was not cut in chronological order, so you could see very quickly whether you were in the past or the present just by looking at the wall. There were four editors working on Shut Up & Sing and we could look at the wall, discuss, and decide if the story was working or not. If we moved this block of cards, what would be the consequences of telling the story in a different order?

[OP] Were Jimmy or Chai very hands-on as directors during the edit – in the room with you every day, or only at the end?

[BE] Chai and Jimmy are co-directors and so Jimmy tended to be more in the field and Chai more in the edit room. Since we had worked together before, we had built a common language and a trust. I would propose ideas to Chai and try them and she would take a look. My feeling is that the director is very close to it and not able to see the dailies with fresh eyes. I have the fresh perspective. I like to take advantage of that and let them step back a little. By the end, I’m the one that’s too close to it and they have a little distance if they pace themselves properly.

[OP] To wrap it up, what advice would you have for young editors tackling a documentary project like this?

[BE] Well, don’t climb El Cap – you probably won’t make it (laugh)! I always preach this to my students: I encourage them to make an outline and work towards it. You can make index cards like I do, you can make a Word document, a spreadsheet; but try to figure out what your intentions are and how you are going to use the material. Otherwise, you are just going to get lost. You may be cutting things that are lovely, but then don’t fit into the overall structure. That’s my big encouragement.

Sometimes with vérité projects there’s a written synopsis, but for Free Solo there was nothing on paper at the beginning. They went in with one idea and came out with a different film. You have to figure out what the story is and that’s all part of the editing process. This goes back to the Maysles’ approach. Go out and capture what happened and then figure out the story. The meaning is found in the cutting room.

Images courtesy of National Geographic and Bob Eisenhardt.

©2019 Oliver Peters

Black Mirror: Bandersnatch

Bandersnatch was initially conceived as an interactive episode within the popular Black Mirror anthology series on Netflix. Instead, Netflix decided to release it as a standalone, spin-off film in December 2018. It’s the story of programmer Stefan Butler (Fionn Whitehead) as he adapts a choose-your-own-adventure novel into a video game. Set in 1984, the story lets viewers make decisions for Butler’s actions, which then determine the next branch of the story shown to them. They can go back through Bandersnatch and opt for different decisions, in order to experience other versions of the story.

Bandersnatch was written by show creator Charlie Brooker (Black Mirror, Cunk on Britain, Cunk on Shakespeare), directed by David Slade (American Gods, Hannibal, The Twilight Saga: Eclipse), and edited by Tony Kearns (The Lodgers, Cardboard Gangsters, Moon Dogs). I recently had a chance to interview Kearns about the experience of working on such a unique production.

__________________________________________________

[OP] Please tell me a little about your editing background leading up to cutting Bandersnatch.

[TK] I started out almost 30 years ago editing music videos in London. I did that full-time for about 15 years working for record companies and directors. At the tail end of that a lot of the directors I was working with moved into doing commercials, so I started editing commercials more and more in Dublin and London. In Dublin I started working on long form, feature film projects and cut about 10 projects that were UK or European co-productions with the Irish Film Board.

In 2017 I got a call from Black Mirror to edit the Metalhead episode, which was directed by David Slade. He was someone I had worked with on music videos and commercials 15 years previously, before he moved to the United States. That was a nice circularity – we were working together again, but on a completely different type of project: drama, on a really cool series like Black Mirror. It went very well, so David and I were asked to get involved with Bandersnatch, which we jumped at, because it was such an amazing, different kind of project. It was unlike anything either of us – or anyone else, for that matter – had ever done at that level of complexity.

[OP] Other attempts at interactive storytelling – with the exception of the video game genre – have been hit-or-miss. What were your initial thoughts when you read the script for the first time?

[TK] I really enjoyed the script. It was written like a conventional script, but with software called Twine, so you could click on it and go down different paths. Initially I was overwhelmed at the complexity of the story and the structure. It wasn’t that I was like a deer in the headlights, but it gave me a sense of scale of the project and [writer/show runner] Charlie Brooker’s ambition to take the interactive story to so many layers.

On my own time I broke down the script and created spreadsheets for each of the eight sections in the script and wrote descriptions of every possible permutation, just to give me a sense of what was involved and to get it in my head what was going on. There are so many different narrative paths – it was helpful to have that in my brain. When we started editing, that would also help me to keep a clear eye at any point.

[OP] How long of a schedule did you have to post Bandersnatch?

[TK] 17 weeks was the official edit time, which isn’t much longer than on a low-budget feature. When I mentioned that to people, they felt that was a really short amount of time; but, we did a couple of weekends, we were really efficient, and we knew what we were doing.

[OP] Were you under any running length constraints, in the same way that a TV show or a feature film editor often wrestles with on a conventional linear program?

[TK] Not at all. This is the difference – linear doesn’t exist. The length depends on the choices that are made. The only direction was for it not to be a sprawling 15-hour epic – that there would be some sort of ballpark time. We weren’t constrained, just that each segment had to feel right – tight, but not rushed.

[OP] With that in mind, what sort of process did you go through to get it to feel right?

[TK] Part of each edit review was to make it as tight or as lean as it needed to be. Netflix developed their own software, called Branch Manager, which allowed people to review the cut interactively by selecting the choice points. My amazing assistant editor, John Weeks, is also a coder, so he acquired an extra job, which was to take the exports and do the coding in order to have everything work in Branch Manager. He’s a very robust person, but I think we almost broke him (laughs), because there were up to 100 Branch Manager versions by the end. The coding was hanging on by a thread. He was a bit like Scotty in Star Trek, “The engines can’t hold it anymore, Captain!”

By using Branch Manager, people could choose a path and view it and give notes. So I would take the notes, make the changes, and it would be re-exported. Some segments might have five cuts while others would be up to 13 or 14. Some scenes were very straightforward, but others were more difficult to repurpose.

Originally there were more segments in the script, but after the first viewings it was felt that there were too many in there. It was on the borderline of being off-putting for viewers. So we combined a few, but I made sure to keep track of that so it was in the system. There was a lot of reviewing, making notes, updating spreadsheets, and then making sure John had the right version for the next Branch Manager creation. It was quite an involved process.
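As an aside, Netflix’s Branch Manager is proprietary and its actual data model was never made public. Purely as an illustration of the kind of structure being described here – segments linked by choice points, with version numbers for each re-exported cut – here is a minimal, hypothetical sketch; every segment name, field, and choice label is invented for the example:

```python
# Hypothetical sketch only -- Netflix's Branch Manager is proprietary and its
# real data model was not published. This just illustrates segments linked by
# choice points, with a version number to track re-exported cuts.
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str                                    # invented segment name
    cut_version: int = 1                         # which export of the edit is current
    choices: dict = field(default_factory=dict)  # choice label -> next segment name

# A tiny, invented fragment of a branching structure (not the actual script).
story = {
    "opening": Segment("opening", cut_version=5,
                       choices={"option A": "path_a", "option B": "path_b"}),
    "path_a": Segment("path_a"),                 # no choices = an ending
    "path_b": Segment("path_b"),
}

def walk(story, start, picks):
    """Follow one viewer's path through the graph, for review purposes."""
    node = story[start]
    path = [node.name]
    for pick in picks:
        next_name = node.choices.get(pick)
        if next_name is None:
            break                                # reached an ending
        node = story[next_name]
        path.append(node.name)
    return path

print(walk(story, "opening", ["option B"]))      # ['opening', 'path_b']
```

Even a structure this small hints at why flowcharts became unmanageable: each added choice point multiplies the paths that have to be reviewed and re-exported.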

[OP] How were you able to keep all of this straight? Did you use the common technique of scenes cards on the wall or something different?

[TK] If you looked at flowcharts your head would explode, because it would be like looking at the wiring diagram of an old-fashioned telephone exchange. There wouldn’t have been enough room on the wall. For us, it would just be on paper – notebooks and spreadsheets. It was more in our heads – our own sense of what was happening – that made it less confusing. If you had the whole thing as a picture, you just wouldn’t know where to look.

[OP] In a conventional production an editor always has to be mindful that when something is removed, it may have ramifications to the story later on. In this case, I would imagine that those revisions affected the story in either direction. How were you able to deal with that?

[TK] I’ve been asked how we knew that each path would have a sense of a narrative arc. We couldn’t think of it as one, total narrative arc. That’s impossible. You’d have to be a genius to know that it’s all going to work. We felt the performances were great and the story was strong, but it doesn’t have a conventional flow. There are choice points, which act as a propellant into the next part of the film, creating an unconventional experience compared with the straight story arc of conventional films or episodes. Although there wasn’t a traditional arc, it still had to feel like a well-told story – one where you would have empathy and a sense of engagement, so that it wasn’t a gimmick.

[OP] How did the crew and actors manage to keep the story straight in their minds as scenes were filmed?

[TK] As with any production, the first few days are finding out what you’ve let yourself in for. This was a steep learning curve in that respect. Only three weeks of the seven-week shoot was in the same studio complex where I was working, so I wasn’t present. But there was a sense that they needed to make it easier for the actors and the crew. The script supervisor, Marilyn Kirby, was amazing. She was the oracle for the whole shoot. She kept the whole show on the road, even when it was quite complicated. The actors got into the swing of it quickly, because I had no issues with the rushes. They were fantastic.

[OP] What camera formats were used and what is your preparation process for this footage prior to editing?

[TK] It had the widest variety of camera formats I’ve ever worked with: ARRI Alexa 65 and RED, but also 1980s Ikegami TV cameras, Super 8mm, 35mm, 16mm, and VHS. Plus, all of the print stills were shot on black-and-white film. The data lab handled the huge job of keeping this all organized and providing us with the rushes. So, when I got them, they were ready to go. The look was obviously different between the sources, but otherwise it was the same as a regular film. Each morning there was a set of ProRes Proxy rushes ready for us. John synced and organized them and handed them over. And then I started cutting. Considering all the prep the DIT and the data lab had to go through, I think I was in a privileged position!

[OP] What is your method when first starting to edit a scene?

[TK] I watch all of the rushes and can quickly see which take might be the bedrock framing for a scene – which is best for a given line. At that point I don’t just slap things together on a timeline. I try to get a first assembly to be as good as possible, because it just helps anyone who sees it. If you show a director or a show runner a sloppy cut, they’ll get anxious and I don’t want that to happen. I don’t want to give the wrong impression.

When I start a scene, I usually put the wide down end-to-end, so I know I have the whole scene. Then I’ll play it and see what I have in the different framings for each line – and then the next line and the next and so on. Finally, I go back and take out angles where I think I may be repeating a shot too much, extend others, and so on. It’s a build-it-up process in an effort to get to a semi-fine cut as quickly as possible.

[OP] Were you able to work with circle takes and director’s notes on Bandersnatch?

[TK] I did get circle takes, but no director’s notes. David and I have an intuitive understanding, which I hope to fulfill each time – that when I watch the footage he shoots, I’ll get what he’s looking for in the scene. With circle takes, I have to find out very quickly whether the script supervisor is any good or not. Marilyn is brilliant, so whenever she’s doing that, I know that take is the one. David is a very efficient director, so there weren’t a massive number of takes – usually two or three takes for each set-up. Everything was shot with two cameras, so I had plenty of coverage. I understand what David is looking for and he trusts me to get close to that.

[OP] With all of the various formats, what sort of shooting ratio did you encounter? Plus, you had mentioned two-camera scenes. What is your approach to that in your edit application?

[TK] I believe the various story paths totaled about four-and-a-half hours of finished material. There was a 3:1 shooting ratio, times two cameras – so maybe 6:1 or even 9:1. I never really got a final total of what was shot, but it wasn’t as big as you’d expect. 

When I have two-camera coverage I deal with it as two individual cameras. I can just type in the same timecode for the other matching angle. I just get more confused with what’s there when I use multi-cam. I prefer to think of it as that’s the clip from the clip. I hope I’m not displaying an anti-technology thing, but I’m used to it this way from doing music videos. I used to use group clips in Avid and found that I could think about each camera angle more clearly by dealing with them separately.

[OP] I understand that you edited Bandersnatch on Adobe Premiere Pro. Is that your preferred editing software?

[TK] I’ve used Premiere Pro on two feature films, which I cut in Dublin, and a number of shorts and TV commercials. If I am working where I can set up my own cutting room, then I’m working with Premiere. I use both Avid and Adobe, but I find I’m faster on Premiere Pro than on Media Composer. The tools are tuned to help me work faster.

The big thing on this job was that you can have multiple sequences open at the same time in Premiere. That was going to be the crunch thing for me. I didn’t know about Branch Manager when I specified Premiere Pro, so I figured that would be the way we would need to review the segments – simply click on a sequence tab and play it as a rudimentary way to review a story path. The company that supplied the gear wasn’t as familiar with Premiere [as they were with Avid], so there were some issues, but it was definitely the right choice.

[OP] Media Composer’s strength is in multi-editor workflows. How did you handle edit collaboration in Premiere Pro?

[TK] We used Adobe’s shared projects feature, which worked, but wasn’t as efficient as working with Avid in that version of Premiere. It also wasn’t ideal that we were working from Avid Nexis as the shared storage platform. In the last couple of months I’ve been in contact with the people at Adobe and I believe they are sorting out some of the issues we were having in order to make it more efficient. I’m keen for that to happen.

In the UK and London in particular, the big player is Avid and that’s what people know, so anything different, like Premiere Pro, is seen with a degree of suspicion. When someone like me comes in and requests something different, I guess I’m viewed as a bit of a pain in the ass. But there shouldn’t just be one behemoth. If you had worked on the old Final Cut Pro, then Premiere Pro is a natural fit – only more advanced and supported by a company that didn’t want to make smartphones and tablets.

[OP] Since Adobe Creative Cloud offers a suite of compatible software tools, did you tap into After Effects or other tools for your edit?

[TK] That was another real advantage – the interaction with the graphics user interface and with After Effects. When we mocked up the first choice points, it was so easy to create, import, and adjust. That was a huge advantage. Our VFX editor was able to build temp VFX in After Effects and we could integrate that really easily. He wasn’t just using an edit system’s effects tool, but actual VFX software, which seamlessly integrated with Premiere. Although these weren’t final effects at full 4K resolution, he was able to do some very complex things, so that everyone could go, “Yes, that’s it.”

[OP] In closing, what take-away would you offer an editor interested in tackling an interactive story as compared to a conventional linear film?

[TK] I learned to love spreadsheets (laugh). I realized I had to be really, really organized. When I saw the script I knew I had to go through it with a fine-tooth comb and get a sense of it. I also realized you had to unlearn some things you knew about conventional episodic TV. You can’t think of some things in the same way. A practical thing for the team is that you have to have someone who knows coding, if you are using a similar tool to Branch Manager. It’s the only way you will be able to see it properly.

It’s a different kind of storytelling pressure that you have to deal with, mostly because you have to trust your instincts even more that it will work as a coherent story across all the narrative paths. You also have to be prepared to unlearn some of the normal methods you might use. One example is that you have to cut the opening of different segments differently to work with the last shot of the previous choice point, so you can’t just go for one option, you have to think more carefully what the options are. The thing is not to walk in thinking it’s going to be the same as any other production, because it ain’t.

For more on Bandersnatch, check out these links: postPerspective, an Art of the Guillotine interview with Tony Kearns, and a scene analysis at This Guy Edits.

Images courtesy of Netflix and Tony Kearns.

©2019 Oliver Peters

Mind your TCO

TCO = total cost of ownership.

When fans argue PCs versus Macs, the argument tends to focus only on the purchase price of the hardware. But owning a computer is also about the total cost of owning and operating it over its lifespan. In the corporate world, IBM has already concluded that Mac deployments are cheaper for its IT department. For video editors, a significant part of the equation is the software we run. Here, all things are not equal, since there are options for the Mac that aren’t available to PC users. Yes, I know that Avid, Adobe, and Blackmagic Design offer cross-platform tools, but this post is a thought exercise, so bear with me.

If you are a PC user, odds are that you will be using Adobe Creative Cloud software, which is only available in the form of a subscription. Sure, you could be using Media Composer or Edius, but likely it will be Premiere Pro and the rest of the Creative Cloud tools, such as Photoshop and After Effects. Avid offers both perpetual and subscription plans, but the perpetual licenses require an annual support renewal to stay current with the software. The operating costs of Avid and Adobe end up in a very similar place over time.

Mac users could use the same tools, of course, but they have significant alternatives in non-subscription software, like Apple’s own pro applications. In addition, macOS bundles productivity software that PC users would have to purchase separately. The bottom line is that you have to factor in the cost of the subscription over the lifespan of the PC, which adds to its TCO*.

For this exercise, I selected two 15″ laptops – a Dell and a MacBook Pro – and configured each as closely to the other as possible, with the exception that Dell only offers a 6-core CPU, whereas the new MacBook Pros use 8-core chips. That comes to $2395 for the Dell and $3950 for the Apple – a pretty big difference. But now let’s add software tools.

For the PC’s suite of tools, I have included the full Adobe Creative Cloud bundle, along with a copy of Microsoft Office. Adobe’s current subscription rate for individuals comes to $636/year (when paid annually, up front). You would also have to add Microsoft Office to get Word, Excel, and PowerPoint. Even though Microsoft is moving to subscriptions, you can still buy Office outright – a home/small business license is $250.

You could, of course, make the same choices for the Mac, but that’s not the point of this exercise. I’m also not trying to make the argument that one set of tools is superior to the other. This post is strictly about comparing cost. If you decide to add alternative software to the Mac in order to parallel the Adobe Creative Cloud bundle, you would have to purchase Final Cut Pro X, Motion, Compressor, and Logic Pro X. To cover Photoshop/Illustrator/InDesign tasks, add Affinity Photo, Designer, and Publisher. You can decide for yourself whether or not macOS Photos is a viable substitute for Lightroom; but, for the sake of argument, let’s add ON1 Photo RAW to our alternative software package. Some Adobe tools, like Character Animator, won’t have an equivalent, but that’s an application that most editors have probably never touched anyway. Of course, macOS comes with Pages, Numbers, and Keynote, so there’s no need to add Microsoft Office for the MacBook Pro. Total all of this together and the ballpark sum comes to $820 – but these are perpetual licenses that do not require annual subscription payments.

In the first year of ownership, PC users clearly have the edge. In fact, up until year three, the TCO is cheaper for PC owners. Odds are you’ll own your laptop longer than three years. I’m typing this on a mid-2014 15″ MacBook Pro, which is also my primary editing machine for any work I do at home. Once you cross into the fourth year and longer, the Mac is cheaper to own and operate, purely because of the differences in software license models.
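To make the break-even point concrete, here is the arithmetic as a quick script, using the article’s own rounded figures (the two hardware prices, the $636/year Creative Cloud subscription, $250 for Office, and roughly $820 of one-time Mac software):

```python
# Back-of-the-envelope TCO comparison using the rounded figures cited above.
# Ignores taxes, discounts, resale value, and hardware upgrades.
pc_hardware, mac_hardware = 2395, 3950
pc_one_time = 250        # Microsoft Office perpetual license
pc_per_year = 636        # Adobe Creative Cloud subscription
mac_one_time = 820       # FCPX, Motion, Compressor, Logic, Affinity apps, ON1

for year in range(1, 7):
    pc_tco = pc_hardware + pc_one_time + pc_per_year * year
    mac_tco = mac_hardware + mac_one_time
    winner = "PC" if pc_tco < mac_tco else "Mac"
    print(f"Year {year}: PC ${pc_tco:,} vs. Mac ${mac_tco:,} -> {winner} cheaper")

# Years 1-3 favor the PC ($3,281 / $3,917 / $4,553 vs. a flat $4,770);
# from year 4 on ($5,189 and up), the Mac wins.
```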

Remember this is a simple thought exercise and you can mix and match software combinations any way you would like. These are worthwhile considerations when comparing products. It’s just not as simple as saying one hardware product is cheaper than the other, which is why a TCO analysis can be very important.

*Totals shown have been rounded for simplicity.

©2019 Oliver Peters

Accusonus ERA4

It’s fair to say that most video editors, podcast producers, and audio enthusiasts don’t possess the level of intimate understanding of audio filters that a seasoned recording engineer does. Yet each still has the need to present the cleanest, most professional-sounding audio as part of their productions. Some software developers have sought to serve this market with plug-ins that combine the controls into a single knob or slider. The first of these was Waves Audio with their OneKnob series. Another company using the single-knob approach is a relative newcomer, Accusonus.

I first became aware of Accusonus through Avid. Media Composer license owners have been offered loyalty add-ons, such as plug-ins, which recently included the Accusonus ERA3 Voice Leveler plug-in. I found this to be a very useful tool, so when Accusonus offered to send the new ERA4 Bundle for my evaluation, I was more than happy to give the rest of the package a test run. ERA4 was released at the end of June in Standard and Pro bundles, with discounted introductory pricing available until the end of July. You may also purchase each of these filters individually.

Audio plug-ins typically come in one of four formats: AU (Mac only), VST (Mac or Windows), VST3 (Mac or Windows), and AAX (for Avid Media Composer and Pro Tools). When you purchase audio filters, they don’t necessarily come in all flavors. Sometimes plug-ins will be AU and VST/VST3, but leave out AAX – or only come in AAX, or only AU. Accusonus plug-ins are installed as all four types on a Mac, which means that a single purchase covers most common DAWs and NLEs (check their site for supported hosts). For example, my Macs include Final Cut Pro X, Logic Pro X, Audition, Premiere Pro, and Media Composer. The ERA4 plug-ins work in all of these.

I ran into some issues with Resolve. The plug-ins worked fine on the Fairlight page of the Resolve 16 Studio beta on my home machine. However, the Macs at work are running the Mac App Store version of Resolve 15 Studio. There, only the VST versions could be applied, and I had to re-enter each filter’s activation code and relaunch. I would conclude from this that Resolve is fine as a host, although there may be some conflicts in the Mac App Store version – likely due to differences between it and the software you download directly from Blackmagic Design.

Another benefit is that Accusonus permits each license key to be used on up to three machines. If a user has both a laptop and a desktop computer, the plug-in can be installed and activated on each without the need to swap authorizations through an online license server or move an iLok dongle between machines. The ERA4 installers include all of the tools in the bundle, even if you only purchased one. You can ignore the others, uninstall them, or test them out in a trial mode. The complete bundle is available and fully functional for a 14-day free trial.

ERA4 Bundles

I mentioned the Waves OneKnob filters at the top, but there’s actually little overlap between these two offerings. The OneKnob series is focused on EQ and compression tasks, whereas the ERA4 effects are designed for audio repair. As such, they fill a similar need as iZotope’s RX series.

The ERA4 Standard bundle includes six audio plug-ins: Noise, Reverb, and Plosive Removers, De-Esser, De-Clipper, and the Voice Leveler. The Pro bundle adds two more: the more comprehensive De-Esser Pro and ERA-D, a combined noise and reverb filter for more advanced processing than the two individual filters. If you primarily work with well-recorded studio voice-overs or location dialogue, then most likely the Standard bundle will be all you need. However, the two extra filters in the Pro bundle come in handy with more problematic audio. Even productions with high production values occasionally get stuck with recordings done in challenging environments and last-minute VOs recorded on iPhones. It’s certainly worth checking out the full package as a trial.

Accusonus does use a single-control approach, but it’s a bit simplistic to say that you are tied to only one control knob. Some of the plug-ins offer more depth, so you can tailor your settings. For instance, the Noise Remover filter offers five preset curves to determine the frequencies that are affected. Each filter includes additional controls for the task at hand.

In use

Accusonus ERA4 filters are designed to be easy to use and work well in real-time. When all I need to do is improve audio that isn’t a basket case, then the ERA filters at their default settings do a wonderful job. For example, a VO recording might require a combination of Voice Leveler (smooth out dynamics), De-Esser (reduce sibilance), and Plosive Remover (clean up popping “p” sounds). Using the default control level (40%) or even backing off a little improved the sound.

It was with the more problematic audio that ERA4 was good, but not necessarily always the best tool. In one case I tested a very heavily clipped VO recording. When I used ERA4 De-Clipper in Final Cut Pro X, I was able to get similar results to the same tool from iZotope RX6. However, doing the same comparison in Audition yielded different results. Audition is designed to preview an effect and then apply it. The RX plug-in at its extreme setting crackled in real-time playback, but yielded superior results compared with the ERA4 De-Clipper after the effect was applied (rendered). Unfortunately, FCPX has no equivalent “apply,” “render and replace,” or audio bounce function, so audio has to stay real-time, which gives Accusonus a performance edge in FCPX. For most standard audio repair tasks, Accusonus’ plug-ins were equal or better than most other options, especially those built into the host application.

I started out talking about the Voice Leveler plug-in, because that’s an audio function I perform often, especially with voice-overs. It helps to make the VO stand out in the mix against music and sound effects. This is an intelligent compressor, which means it tries to bring up all audio and then compress peaks over a threshold. But learn the controls before diving in. For example, it includes a breath control. Engaging this will prevent the audio from pumping up in volume each time the announcer takes a breath. As with all of the ERA4 filters, there is a small, scrolling waveform in the plug-in’s control panel. Areas that were adjusted by the filter are highlighted, so you can see when it is active.
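Accusonus hasn’t published how Voice Leveler works internally, but as a rough, hypothetical illustration of the “raise everything, then compress peaks over a threshold” idea described above, here is a toy sketch. Real levelers operate on smoothed loudness envelopes with attack and release times, not raw samples, and all of the numbers here are arbitrary:

```python
# Toy leveling sketch -- NOT Accusonus' algorithm, just the general idea:
# apply makeup gain to lift quiet passages, then compress anything that
# now exceeds a threshold.
import numpy as np

def toy_leveler(samples, makeup_db=6.0, threshold_db=-10.0, ratio=4.0):
    gain = 10 ** (makeup_db / 20)        # dB to linear makeup gain
    thresh = 10 ** (threshold_db / 20)   # dB to linear threshold
    y = samples * gain
    over = np.abs(y) > thresh
    # Reduce the portion above the threshold by the compression ratio.
    y[over] = np.sign(y[over]) * (thresh + (np.abs(y[over]) - thresh) / ratio)
    return y

rng = np.random.default_rng(0)
vo = 0.05 * rng.standard_normal(48_000)  # one second of quiet "voice"
vo[20_000:22_000] *= 10                  # a loud burst
print(np.abs(vo).max(), np.abs(toy_leveler(vo)).max())  # the peak is tamed
```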

Voice Leveler is a good VO tool, but leveling is one of the more subjective audio tasks. Some editors or audio engineers compress, some limit, and others prefer to adjust levels manually. My all-time favorite is Waves’ Vocal Rider. Unlike a compressor, it dynamically raises and lowers audio levels between two target points. To my ears, this method yields a more open sound than heavy compression – but its normal MSRP is pretty expensive. I also like the Logic Pro X Compressor, which is available in Final Cut Pro X. It mimics various vintage compressors, like the Focusrite Red or the DBX 160X. I feel that it’s one of the nicest sounding compressors, but it is only available in the Apple pro apps. Adobe users – you are out of luck on that one.

From my point-of-view, the more tools the better. You never know when you might need one. The Accusonus ERA4 bundle offers a great toolset for anyone who has to turn around a good-sounding mix quickly. Each bundle is easy to install and activate and even easier to use. Operation is real-time, even when you stack several together. Accusonus’ current introductory price for the bundles is about what some individual plug-ins cost from competing companies, plus the 14-day trial is a great way to check them out. If you need to build up your audio toolbox, this is a solid set to start out with.

Check out Accusonus’ blog for tips on using the ERA plug-ins.

©2019 Oliver Peters

The 2019 Mac Pro Truck

In 2010 Steve Jobs famously provided us with the analogy that traditional computers are like trucks in the modern era. Not that trucks were going away; they simply were no longer a necessity for most of us, now that the majority of the populace wasn’t engaged in farming. While trucks would continue to be purchased and used, far fewer people actually needed them, because the car covered their needs. The same was true, he felt, of traditional computers.

Jobs is often characterized as being a consumer market-driven guy, but I believe the story is more nuanced. After all, he founded NeXT Computer, which clearly made high-end workstations. Jobs also became the major shareholder in Pixar Animation Studios – a company that not only needed advanced, niche computing power, but also developed some of its own specialized graphics hardware and software. So a mix of consumer and advanced computing DNA runs throughout Apple.

By the numbers

Unless you’ve been under a rock, you know that Apple revealed its new 2019 Mac Pro at WWDC earlier this month. This year’s WWDC was an example of a stable, mature Apple firing on all cylinders. iPhone unit sales have not been growing – revenue has, but that’s because prices have been going up. Now it’s time to push all of the company’s businesses, including iPad, services, software, and the Mac. Numbers are hard to come by, although Apple has acknowledged that the Mac unit by itself is nearly a $25 billion business and that it would be close to being in the Fortune 100 on its own. The ratio of Mac laptops to desktops is 80/20. For comparison to the rest of the PC world, Apple’s market share is around 7%, ranking fourth behind Lenovo, HP, and Dell, but just ahead of Acer. There are 100 million active macOS users (Oct 2018), although Windows 10 adoption alone runs eight times larger (Mar 2019).

We can surmise from this information that there are 20 million active Mac Pro, iMac, iMac Pro, and Mac mini users. It’s fair to assume that a percentage of those are in the market for a new Mac Pro. I would project that maybe 1% of all Mac users would be interested in upgrading to this machine – i.e. around 1 million prospective purchasers. I’m just spit-balling here, but at a starting price of $6,000, that’s a potential market of $6 billion in sales before factoring in any upgrade options or new macOS users!
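Spelling out that back-of-the-envelope math with the figures above:

```python
# The article's own spit-ball estimate, spelled out.
active_macos_users = 100_000_000           # Apple's Oct 2018 figure
desktop_users = active_macos_users * 0.20  # 80/20 laptop-to-desktop split
prospects = active_macos_users * 0.01      # ~1% interested in a Mac Pro
base_price = 6_000                         # entry price, before upgrades
print(f"{desktop_users:,.0f} desktop users")              # 20,000,000
print(f"${prospects * base_price / 1e9:.0f}B potential")  # $6B
```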

A funny thing happened on the way to the WWDC

Apple went through a computing platform progression from the old Quadra 950 and 9600 towers to the first Intel Mac Pro towers over the course of the mid-1990s to 2006. The second generation of the older Mac Pro was released during 2009. So in a dozen-plus years, Apple customers saw seven major processor/platform changes and had come to expect a constant churn – in essence, plan on replacing your system every few years. However, from 2009 onward, customers who bought those Mac Pros had a machine that could easily last, be productive, and still be somewhat competitive ten years later. The byproduct of this was the ability to plan a longer life expectancy for the hardware you buy – no longer an automatic two-to-three-year replacement cycle.

Even the 2013 Mac Pro has lasted until now (six years later) and remains competitive with most machines. The miscalculation that Apple made with the 2013 Mac Pro was that pro customers would prefer external expandability versus internal hardware upgrades. Form over function. That turned out to be wrong. I’m probably one of the few who actually likes the 2013 Mac Pro under the right conditions. It’s an innovative design, but unfortunately one that can’t be readily upgraded.

The second major change in computing hardware is that now “lesser” machines are more than capable of doing the work required in media and entertainment. During those earlier days of the G3/G4/G5 PowerMacs and the early Intel Mac Pros, Apple didn’t make laptops and all-in-ones that had enough horsepower to handle video editing and the like. Remember the colorful, plastic iMacs and white eMacs? Or what about the toilet-seat-like iBook laptop? Good enough for e-mail, but not what you would want for editing.

Now, we have a wide range of both Mac and PC desktop computers and laptops that are up to the task. In the past, if you needed a performance machine, then you needed a workstation class computer. Nothing else would do. Today, a general purpose desktop PC that isn’t necessarily classed as a workstation is more than sufficient for designers, editors, and colorists. In the case of Apple, there’s a range of laptops and all-in-ones that cover those needs at many different price points.

The 2019 Mac Pro Reveal

Let me first say that I didn’t attend WWDC and I haven’t seen the new Mac Pro in person. I hope to be able to do a review at some point in the future. The bottom line is that this is purely an opinion piece for now.

There have certainly been a ton of internet comments about this machine – both positive and negative. Price is the biggest pain point. Clearly Apple intends this to be a premium product for the customer with demanding computing requirements. You can spin the numbers any way you like, and people have. Various sites have speculated that a fully loaded machine could drive the price from the $6,000 starting point to as high as $35K to $50K. The components that Apple defines in the early tech information do not perfectly match equivalent model numbers available on the suppliers’ websites. No one knows for sure how the specific Intel Xeon being used by Apple equates to other Xeons listed on Intel’s site. Therefore, exact price extrapolations are simply guesses for now.

In late 2009 I purchased an entry model 8-core Mac Pro. With some storage and memory upgrades, AppleCare, sales tax, and a small business discount, I paid around $4,000. The inflation difference over the decade is about 17%, so that same hardware should cost me $4,680 today. In fairness, Apple has a different design in this new machine and there are technologies not in my base 2009 machine, such as 10GigE, Thunderbolt 3, a better GPU, etc. Even though this new machine may be out of my particular budget right now, it’s still an acceptable value when compared with the older Mac Pros.
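
As a sanity check on that inflation adjustment, here is the arithmetic as a short Python sketch. The 17% cumulative figure is my rough estimate for the decade, not an official CPI calculation:

    # Adjusting my 2009 Mac Pro purchase for a decade of inflation (rough)
    price_2009 = 4_000            # what I paid in late 2009, all-in
    cumulative_inflation = 0.17   # approximate 2009-2019 cumulative rate
    price_2019 = price_2009 * (1 + cumulative_inflation)

    print(f"2019 equivalent: ${price_2019:,.0f}")  # $4,680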

Likewise, if you compare the 2019 Mac Pro to comparable name-brand workstations, like an HP Z8, you’ll quickly find that the HP will cost more. One clear difference, though, is that HP also offers smaller, less costly workstation models, such as the Z2, Z4, and Z6. The PC world also offers many high-quality custom solutions, such as Puget Systems, which I have reviewed.

One design decision that could have mitigated the cost a bit is the choice of CPU. Apple has opted to install Xeon chips in all of its Mac Pro designs – same with the iMac Pro. However, Intel also offers very capable Core i9 CPUs. The i9 chips offer faster core speeds and high core counts, while Xeons are designed to run flat out 24/7. In the case of video editing, After Effects, and so on, the Core i9 may well be the better solution. These apps really thrive on fast single-core speeds, so having a 12-core or 28-core CPU, where each core has a slower clock speed, may not give you the best results. Regardless of benefit, Xeons do add to Apple’s hard costs in building the machine. Xeons are more expensive than Core chips – in some direct comparisons, a Xeon can command $1,000 more than Intel’s retail price for the equivalent Core CPU.

The ultimate justification for buying a Mac Pro tower isn’t necessarily performance alone, but rather longevity and expandability. As I outlined above, customers have now been conditioned to expect the system to last and be productive for at least a decade. That isn’t necessarily true of an all-in-one or a laptop. This means that if you amortize the investment in a 2019 Mac Pro over a ten-year period – the $6,000 base model works out to $600 a year – it’s actually quite reasonable.

The shame – and this is where much of the internet ire is coming from – is that Apple didn’t offer any intermediate models, like HP’s Z4 or Z6. I presume that Apple is banking on those customers buying iMacs, iMac Pros, Mac minis, or MacBook Pros instead. Couple one of these models with an external GPU and fast external storage and you will have plenty of power for your needs today. It goes without saying that comparing this Mac Pro to a custom PC build (which may be cheaper) is a non-starter. A customer for this Mac Pro will buy one, pure and simple. Demand in this niche of the market is built-in and largely price-inelastic. Apple knows that and the customers know it.

Nuts and bolts

The small details haven’t been fully revealed, so we probably won’t know everything about these new Mac Pros until September (the rumored release). Apple has once again adopted a signature case design, which, like the earlier tower case, has been dubbed a “cheese grater.” Unlike the previous model, where the holes were simply there for ventilation, the updated model (or would that be the retro model?) uses a lattice system in the case to direct the airflow. The 2019 model is about the same size as its “cheese grater” predecessor, but 20 pounds lighter.

There is very little rocket science in how you build a workstation, so items like Xeon CPUs, GPU cards, RAM, and SSD system drives are well understood and relatively standard for a modern PC system.

The short hardware overview consists of:

8, 12, 16, 24, and 28-core Xeon CPU options

Memory from 32GB to 1.5TB of DDR4 ECC RAM

Up to four AMD GPU cards

1.4 kW power supply

Eight PCIe expansion slots (one used for the Apple i/o card)

System storage options from 256GB to 4TB

Four Thunderbolt 3 ports (2 top and 2 back) plus two USB 3 ports (back)

(Note – more ports available with the upgraded GPU options)

Two 10Gb Ethernet ports

WiFi, Bluetooth, built-in speakers, headphone jack

So far, so good. Any modern workstation would have similar choices. There are several key unknowns, and that’s where the questions come in. First, the GPU cards appear to be custom-designed AMD cards installed into a new MPX (Mac Pro expansion) module – a mounting/connecting cage used to install and connect the hardware. However, if you wanted to add your own GPU card, would it fit into such a module? Would you have to buy a blank module from Apple for your card? Or would your card simply fit into a PCIe slot and screw in like on any other tower? The answer to the last question appears to be yes, but will there be proper Nvidia support?

The second big question relates to internal storage. The old “cheese grater” had sleds to install four internal drives. Up to six could be installed if you used the optical drive bays. The 2019 Mac Pro appears to allow up to four drives within an MPX chassis. Promise has already announced two products specifically for the Mac Pro. One would include four RAIDed 8TB drives for a 32TB capacity. 14TB HDDs are already available, so presumably this internal capacity will go up. 

The unknown is whether or not you can add drives without purchasing an MPX module. The maximum internal GPU option seems to be four cards, which are mounted inside two MPX modules. This is also the space required for internal drives. Therefore, if you have both MPX modules populated with GPU cards, then I would imagine you can’t add internal storage. But I may be wrong. As with most things tech, I predict that if blank MPX modules are required, a number of vendors will quickly offer cheaper aftermarket MPX modules for GPUs, storage, etc.

One side issue that a few blogs have commented on is power draw. Because of the size of the power supply, the general feeling is that the Mac Pro should be plugged into a standard electrical circuit by itself, plus maybe a monitor. A typical 15-amp, 120-volt US circuit tops out at 1,800 watts, so a 1.4 kW power supply doesn’t leave much headroom. In other words, don’t put it on a circuit with a bunch of other electrical devices, otherwise you might start blowing breakers.

Afterburner

A new hardware item from Apple is the optional Afterburner ProRes and ProRes RAW accelerator card. This uses an FPGA (field programmable gate array) – a chip that can be programmed for various specific functions and potentially updated in the future. Anyone who has worked with the RED Rocket or RED Rocket-X card in the past will be quite familiar with what the Afterburner is.

The Afterburner will decode the ProRes and ProRes RAW codecs on-the-fly when this media is played in Final Cut Pro X, QuickTime Player X, and any other application re-coded to support the card. This would be especially beneficial with camera raw codecs, because the raw sensor data is debayered at full resolution via hardware acceleration, instead of by the CPU. Other camera raw manufacturers, like RED, ARRI, Canon, and Blackmagic Design, might add support for this card to accelerate their codecs as well. What is not known is whether the Afterburner can also be used to offload true background functions, like exports and transcoding within Final Cut Pro X.

An FPGA card offers the promise of being future-proof, because you can always update its function later. However, in actual practice, the hardware capabilities of any card are eventually outstripped as the technology changes. This happened with the RED Rocket card and others. We’ll see if Apple has any better luck over time.

Performance

Having lots of cores is great, but with most media and entertainment software the GPU can be key. Apple has been at a significant disadvantage with many applications, like After Effects, because of its stance on Nvidia and CUDA acceleration. Apple prefers that a manufacturer support Metal, which is Apple’s way of leveraging the combined power of all CPUs and GPUs in the system. This all sounds great, but the reality is that it’s one proprietary technology versus another. In the benchmark tests I ran with the Puget PC workstation, the CUDA performance in After Effects easily trounced any Mac that I compared it against.

Look at Apple’s website for a chart representing the relative GPU performance of a 2013 Mac Pro, an iMac Pro, and the new 2019 Mac Pro, each tested with its respective top-of-the-line GPU option. The iMac Pro is 1.5x faster than the 2013 Mac Pro. The 2019 Mac Pro is twice as fast as the iMac Pro and 3x faster than the 2013 Mac Pro. While that certainly looks impressive, that 2x improvement over the iMac Pro comes thanks to two upgraded GPU cards instead of one. Well, duh! Of course, at this time we have no idea what these cards and MPX units will cost. (Note – I am not totally sure whether this testing used two GPUs in one MPX module or a total of four GPUs in two modules.)

We won’t know how well these machines really perform until the first units get out into the wild – especially how they compare against comparable PCs with high-powered Nvidia cards. I may be going out on a limb, but I would be willing to bet that many people who buy the base configuration for $6K – thinking that they will get a huge boost in performance – are going to be very disappointed. I don’t mean to trash the entry-level machine. It’s got solid specs, but in that configuration it isn’t the best performer. At $6K, you are buying a machine that will have longevity and can be upgraded in the future. In short, the system can grow with you as workload demands increase. That’s something which has not been available to Mac owners since the end of 2012.

Software

To take the most advantage of the capabilities of this new machine, software developers (both applications and plug-ins) will have to update their code. All of the major brands like Adobe, Avid, Blackmagic Design, and others seem to be on board with this. Obviously, so are the in-house developers at Apple who create the Pro Applications. Final Cut Pro X and Logic Pro X are obvious examples. Logic is increasing the track count and number of software instruments you can run. Updates have already been released.

Final Cut Pro X has a number of things that appear in need of change. Up until now, in spite of being based around Metal, Final Cut has not taken advantage of multiple GPUs when present. If you add an eGPU to a Mac today, you must toggle a preference setting to use one GPU or the other as the primary GPU (Mojave). Judging by Activity Monitor, it appears to be an either-or thing, which means the other GPU is loafing. Clearly, when you have four GPUs present, you will want to tap into the combined power of all four.

With the addition of the Afterburner option, FCPX (or any other NLE) has to know that the card is present and how to offload media to the card during playback (and render?). Finally, the color pipeline in Final Cut Pro X is being updated to work in 16-bit float math, as well as optimized for fast 8K workflows.

All of this requires new code and development work. With the industry now talking about 16K video, is 8K enough? Today, 4K delivery is still years away for many editors, so 8K is yet that much further. I suspect that if and when 16K gets serious traction, Apple will be ready with appropriate hardware and software technology. In the case of the new Mac Pro, this could simply mean a new Afterburner card instead of an entirely new computer.

The Apple Pro Display XDR

In tandem with the 2019 Mac Pro, Apple has also revealed the new Pro Display XDR – a 6K, 32″ Retina display. It uses a similar design aesthetic to the Mac Pro, complete with a matching ventilation lattice. This display comes calibrated and is designed for HDR, with 1,000 nits of sustained, fullscreen brightness and a 1,600-nit peak. It will be interesting to see how this actually looks. Recent Final Cut Pro X updates have added HDR capabilities, but you can never get an accurate view of HDR on a UI display. Furthermore, the 500-nit, P3 displays used in the iMac Pros are some of the least color-accurate UI displays of any Mac that I work with. I really hope Apple gets this one right.

To sell the industry on this display, Apple is making cost and feature comparisons between the Pro Display XDR and actual HDR color reference displays costing in the $30K-$40K range – think Flanders Scientific or Sony. The dirty little HDR secret is that when you display an image at the maximum nit level across the entire screen, the display will dim in order to prevent damage. Only the most expensive displays are more tolerant of this. I would presume that the Pro Display XDR will also dim when presented with a fullscreen image at 1,600 nits, which is why the spec lists 1,000 nits fullscreen – the minimum HDR spec. Of course, if you are grading real-world images properly, then in my opinion you should rarely have important picture elements at such high levels. Most of the image should sit in a range very similar to SDR, with the extended range used to preserve highlight information, like a bright sky.

Some colorists are challenging the physics behind some of Apple’s claims. The concern is whether or not the display will suffer from bloomed highlights. Apple’s own marketing video points out that the design reduces blooming, but it doesn’t say that it completely eliminates it. We’ll see. I don’t quite see how this display fits as a reference display. It only has Thunderbolt connections – no SDI or HDMI – so it won’t connect in most standard color correction facilities without additional hardware. And if, like all computer displays, the user can adjust the brightness, then that goes against the concept of an HDR reference display. At 32″, it’s also much too small to be used as a client display stuck on the wall.

Why did Apple choose to introduce this as a user interface display? Making a great HDR reference display would have made some sense – even a great specialty display, like those often found in photography or fine-print work. I understand that it will likely display accurate, fullscreen video directly from Final Cut Pro X, or maybe even Premiere Pro, without the need and added cost of an AJA or BMD i/o device or card. But as a general purpose computer display? That feels like it simply misses the mark, no matter how good it is. Not to mention that a brightness level of 1,000 to 1,600 nits is way too bright for most edit suites. I find that to be the case even with the iMac Pro’s 500-nit display, when you crank it up.

This display is listed at $5K without a stand. Add another $1K if you want a matte finish. Oh, and if you want the stand, add another $1K! I don’t care how seductively Jony Ive pronounces “all-u-minium,” that’s taxing the good will of your customers. Heck, make it $5,500 and toss in the stand at cost. Remember, the stand has an articulating arm, which will probably lose its tension in a few years. I hope that a number of companies will make high-quality knock-offs for a couple of hundred bucks.

If you compare the Apple Pro Display XDR to another UI display with a similar mission, then it’s worth looking at the HP DreamColor Z31x Studio Display – a 32″, 4K, calibrated display with an MSRP right at $3,200. But it doesn’t offer HDR specs, Retina density, or 6K resolution. Factor in those features and Apple’s brand premium, and the entry price isn’t that far out of line – except for that stand.

I imagine that Apple’s thought process is that if you don’t want to buy this display, then there are plenty of cheaper choices, like an LG, HP, Asus, or Dell. And speaking of LG, where’s Apple’s innovative spirit to try something different with a UI display? Maybe something like an ultrawide. LG now has a high-resolution 49″ display for about $1,400. This size enables one large canvas across the full width, or two views, like having two displays side-by-side. However, maybe a high-density (Retina) display isn’t possible with such a design, which could be Apple’s hang-up.

Final thoughts

The new 2019 Mac Pro clearly demonstrates that Apple has not left the high-end user behind. I view relevant technology through the lens of my needs with video; however, this model will appeal to a wide range of design, scientific, and engineering users. It’s a big world out there. While it may not be the most cost-effective choice for the individual owner/editor, there are still plenty of editors, production companies, and facilities that will buy one.

There is a large gap between the Mac mini and this new Mac Pro. I still believe there’s a market for a machine similar to some of those concept designs for a Mac Pro. Or maybe a smaller version of this machine that starts at $3,000. But there isn’t such a model from Apple. If you like the 2013 “trash can” Mac Pro, then you can still get it – at least until the 2019 model is officially released. Naturally, iMacs and iMac Pros have been a superb option for that in-between user and will continue to be so.

If you are in the market for the 2019 Mac Pro, then don’t cut yourself short. Think of it as an investment for at least ten years. Unless money is tight and you can only afford the base model, I would recommend budgeting in the $10K range. I don’t have an exact configuration in mind, but that will likely be the sweet spot for demanding work. Once I get a chance to properly review the 2019 Mac Pro, I’ll be more than happy to come back with a real evaluation.

©2019 Oliver Peters

Good Omens

Fans of British television comedies have a new treat in Amazon Prime’s Good Omens. The six-part mini-series is a co-production of BBC Studios and Amazon Studios. It is the screen adaptation of the 1990 hit novel by the late Terry Pratchett and Neil Gaiman, entitled Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch. Just imagine if the Book of Revelation had been written by Edgar Wright or the Coen brothers. Toss in a bit of The Witches of Eastwick and I think you’ll get the picture.

The series stars Michael Sheen (Masters of Sex, The Good Fight) as Aziraphale (an angel) and David Tennant (Mary Queen of Scots, Doctor Who) as Crowley (a demon). Although on opposing sides, the two have developed a close friendship going back to the beginning of humanity. Now it’s time for the Antichrist to arrive and bring about Armageddon. Except that the two have grown fond of humans and their life on Earth, so Crowley and Aziraphale aren’t quite ready to see it all end. They form an unlikely alliance to thwart the End Times. Naturally this gets off to a bad start, when the Antichrist child is mixed up at birth and ends up misplaced with the wrong family. The series also stars an eclectic supporting cast, including Jon Hamm (Baby Driver, Mad Men), Michael McKean (Veep, Better Call Saul), and Frances McDormand (Hail, Caesar!, Fargo) as the voice of God.

Neil Gaiman (Lucifer, American Gods) was able to shepherd the production from novel to screen by adapting the screenplay and serving as showrunner. Douglas Mackinnon (Doctor Who, Sherlock) directed all six episodes. I recently had a chance to speak with Will Oswald (Doctor Who, Torchwood: Children of Earth, Sherlock) and Emma Oxley (Liar, Happy Valley), the two editors who brought the production over the finish line.

_____________________________________________________

[OP] Please tell me a bit about your editing backgrounds and how you landed this project.

[Will] I was the lead editor for Doctor Who for a while and got along well with the people. This led to Sherlock. Douglas had worked on both and gave me a call when this came up.

[Emma] I’ve been mainly editing thrillers and procedurals and was looking for a completely different script, when out of the blue I received a call from Douglas. I had worked with him as an assistant editor in 2007 on an adaptation of the Jekyll and Hyde story, and I was fortunate that a couple of Douglas’s main editors were not available for Good Omens. When I read the script I thought, this is a dream come true.

[OP] Had either of you read the book before?

[Will] I hadn’t, but when I got the gig, I immediately read the book. It was great, because this is a drama-comedy. How good a job is that? You are doing everything you like. It’s a bit tricky, but it’s a great atmosphere to work in.

[Emma] I was the same, but within a week I had read it. Then the scripts came through and they were pretty much word for word – you don’t expect that. But since it was six hours instead of feature length, the book could remain intact.

[OP] I know that episodic series often divide up the editorial workload in many different ways. Who worked on which episode and how was that decided?

[Will] Douglas decided that I would do the first three episodes and Emma would edit the last three. The series happened to split very neatly in the middle. The first three episodes really set up the back story and the relationship between the characters and then the story shifts tone in the last three episodes.

[Emma] Normally in TV the editors would leapfrog each other. In this case, as Will said, the story split nicely into two three-hour sections. It was a nice experience not to have to jump backwards and forwards.

[Will] The difficult thing for me in the first half is that the timeline is so complicated. In the first three episodes you have to develop the back story, which in this case goes back and forth through the centuries – literally back to the beginning of time. You also have to establish the characters’ relationship to each other. By the end of episode three, they really start falling apart, even though they do really like each other. It’s a bit like Butch Cassidy and the Sundance Kid. Of course, Emma then had to resolve all the conflicts in her episodes. But it was nice to go rocking along from one episode to the next.

[OP] What was the post-production schedule like?

[Emma] Well, we didn’t really have a schedule. That’s why it worked! (laugh) Will and I were on it from the very start, and once we decided to split up the edit as two blocks of three episodes, there were days when I wouldn’t get any rushes, so I could focus on getting a cut done – and vice versa with Will. When Douglas came in, we had six pretty good episodes that were cut according to the script. Douglas said he wanted to treat it like a six-hour film, so we did a full pass on all six episodes before Neil came in and then, finally, the execs. They allowed us the creative freedom to do that.

[Will] When Douglas came back, we basically had a seven and a half hour movie, which we ran in a cinema on a big screen. Then we went through and made adjustments in order. It was the first time I’ve had both the show runner and the director in with me every day. Neil had promised Terry that he would make sure it happened. Terry passed away before the production, but he had told Neil – and I’m paraphrasing here – don’t mess it up! So this was a very personal project for him. That weighed heavily on me, because when I reread the book, I wanted to make sure ‘this’ was in and ‘that’ was in as I did my cut.

[OP] What sort of changes were made as you were refining the episodes?

[Will] There were a lot of structural changes in episodes one and two that differed a lot from the script. It was a matter of working out how best to tell the story. Episode one was initially 80 minutes long, so there was quite a lot of work to get down to the hour-long final version. Episode three was much easier.

[Emma] By the time it got to episode four, the pattern had been established, so we had to deal more with visual effects challenges in the second half. We had a number of large set pieces and a limited visual effects budget. So we had to be clever about using visual effects moments without losing the impact, but still maximizing the effects we did have. And at the same time keeping it as good as we could. For example, there’s a flying saucer scene, but the plate shot didn’t match the saucer shot and it was going to take a ton of work to match everything. So we combined it with a shot intended for another part of the scene. Instead of a full screen effects shot, it’s seen through a car window. Not only did it save a lot of money, but more importantly, it ended up being a better way for the ship to land and more in the realm of Good Omens storytelling. I love that shot.

[Will] Visual effects are just storytelling points. You want to be careful not to lose the plot. For example, the Hellhound changes into a puppy dog and that transformation was originally intended to be a big visual effect. But instead, we went with a more classic approach. Just a simple cut and the camera tilts down to reveal the smaller dog. It turned out to be a much better way of doing it and makes me laugh every time I see it.

[OP] I noticed a lot of music from Queen used throughout. Any special arrangement to secure that for the series?

[Will] Queen is in the book. Every time Crowley hears music, even if it’s Mozart, it turns into Queen. Fortunately Neil knows everybody!

[Emma] And it’s one of Douglas’ favorite bands of all time, so it was a treat for him to put as much Queen music in as possible. At one point we had it over many more moments.

[Will] Also working with David Arnold [series composer] was great. There’s a lot of his music as well and he really understands what we do in editing.

[OP] Since this was a large effort with a lot of complex work involved, did you have a large team of assistant editors on the job with you?

[Emma] This is the UK. We don’t have a huge team! (laugh)

[Will] We had one assistant, Cat Gregory, and then much later on, a couple more for visual effects.

[Emma] They were great. Cat, our first assistant, had an adjoining room to us and she was our ‘take barometer.’ If you put in an alt line and she didn’t laugh, you knew it wasn’t as good. But if there was a chuckle coming out of her room, it would more often stay.

[OP] How do you work with your assistants? For example, do you let assistants assemble selects, or cut in sound effects or music?

[Will] It was such a heavy schedule with a huge amount of material, so there was a lot of work just to get that in and organized. Just giving us an honest opinion was invaluable. But music and sound effects – you really have to do that yourself.

[Emma] Me, too. I cut my own music and assemble my own rushes.

[OP] Please tell me a bit about your editorial set-up and editing styles.

[Will] We were spread over two or four upstairs/downstairs rooms at the production company’s office in Soho. These were Avid Media Composer systems with shared storage. We didn’t have the ScriptSync option. We didn’t even have Sapphire plug-ins until late in the day, although those might have been nice for some of the bigger scenes with a lot of explosions. I don’t really have an editing style – I think it’s important not to have one as an editor. Style comes out of the content. I think the biggest challenge on this show was how to get the English humor across to an American audience.

[Emma] I wouldn’t say I have an editing style either. I come in, read the notes, and then watch the rushes with that information in my head. There wasn’t a lot of wild variation in the takes and David’s and Michael’s performances were just dreamy. So the material kind of cut itself.

[Will] The most important thing is to familiarize yourself with the material and review the selected takes. Those are the ones the director wanted. That also gives you a fixed point to start from. The great thing about software these days is that you can have multiple versions.

[OP] I know some directors like to calibrate their actors’ performances, with each take getting more extreme in emotion. Others like to have each take be very different from the one before it. What was Mackinnon’s style on this show as a director?

[Emma] In the beginning you always want to figure out what they are thinking. With Douglas it’s easy to see from the material he gives you. He’s got it all planned. He really gets the performance down to a tee in the rehearsal.

[Will] Douglas doesn’t push for a wide range of emotion from one take to the next. As Emma mentioned, Douglas works through that in rehearsal. Actors like David and Michael work that out, too, and they’re bouncing off each other. Douglas has a fantastic visual sense. You can look at the six episodes and go, “Wow, how did you get all of that in?” It’s a lot of material and he found a way to tell that story. There’s a very natural flow to the structure.

[OP] Since both Douglas Mackinnon and Will worked on Doctor Who, and David Tennant was one of the Doctors during the series, was there a conscious effort to stay away from anything that smacked of Doctor Who in Good Omens?

[Will] It never crossed my mind. I always try to do something different, but as I said, the style comes out of the material. It has jeopardy and humor like Doctor Who, but it’s really quite different. I did 32 episodes of Doctor Who and each of those was very different from the others. David Tennant is in it, of course, but he is not even remotely playing the Doctor. Crowley is a fantastic new character for him.

[OP] Are there any final thoughts you’d like to share about working on Good Omens?

[Will] It was a pleasure to work on a world-famous book, and it is very funny. Doing it justice was really all we were trying to do. I was going back every night and rereading the book, marking things up. Hopefully the fans like it. I know Neil does, and I hope Terry is watching it.

[Emma] I’m just proud that the fans of the book are saying that it’s one of the best adaptations they’ve ever watched on the screen. That’s a success story and it gives me a warm feeling when I think about Good Omens. I’d go back and cut it again, which I rarely say about any other job.

©2019 Oliver Peters