Affinity Publisher

The software market offers numerous alternatives to Adobe Photoshop, but few companies have taken on the larger challenge of creating a competitive suite of graphics tools – until now. Serif has completed the circle with the release of Affinity Publisher, a full-featured desktop publishing application. This adds to a toolkit that already includes Affinity Photo (an image editor) and Affinity Designer (a vector-based illustration app). All three applications support Windows and macOS, and Photo and Designer are also available as full-fledged pro applications for the iPad. Collectively, this graphic design toolkit constitutes an alternative to Adobe Photoshop, Illustrator, and InDesign.

Personas and StudioLink

The core user interface feature of the Affinity applications is that various modules are presented as Personas, which are accessed by the icons in the upper left corner of the interface. For example, in Affinity Photo basic image manipulation happens in the Photo Persona, but for mesh deformations, you need to shift to the Liquify Persona.

Affinity Publisher starts with the Publisher Persona. That’s where you set up page layouts, import and arrange images, create text blocks, and handle print specs and soft proofs. However, with Publisher, Affinity has taken Personas a step further through a technology they call StudioLink. If you also have the Photo and Designer applications installed on the same machine, then a subset of these applications is directly accessible within Publisher as the Photo and/or Designer Persona. If you have both Photo and Designer installed, then the controls for both Personas are functional in Publisher; but, if you only have one of the others installed, then just that Persona offers additional controls.

Users of Adobe InDesign know that to edit an image within a document you have to “open in Photoshop,” which launches the full Photoshop application, where you make the changes and then roundtrip back to InDesign. With Affinity Publisher the process is more straightforward, because the Photo Persona is right there. Just select the image within the document and click the Photo Persona button in the upper left, which shifts the UI to display the image processing tools. Likewise, clicking on the Designer Persona will display vector-based drawing tools. Effectively, Serif has done with Affinity Publisher what Blackmagic Design has done with the various pages in DaVinci Resolve: click a button and shift to the toolset designed for the task at hand, without the need to switch to a completely different application.

Document handling

All of the Affinity apps are layer-based, so while you are working in any of the three Personas within Publisher, you can see the layer order on the right to let you know where you are in the document. Affinity Photo offers superb compatibility with layered Photoshop PSD files, which means that your interchange with outside designers – who may use Adobe Photoshop – will be quite good.

Affinity Publisher documents are based on Master Pages and Pages, similar to the approach taken by many website design applications. When you create a document, you can set up a Master Page to define a uniform style template for that document and then build individual Pages from it. Any change made to a Master Page automatically updates the corresponding design elements on every Page of that document. Since Affinity Publisher is designed for desktop publishing, single and multi-page document creation and export settings are both web and print-friendly. Publisher also offers a split-view display, which presents your document as a vector view on the left and as a rasterized pixel view on the right.

Getting started

Any complex application can be daunting at first, but I find the Affinity applications offer a very logical layout that makes it easy to get up to speed. In addition, when you start any of these applications you will first see a launch page that offers direct links to various tutorials, sample documents, and layered images. A beginner can quickly download these samples in order to dissect the layers and see exactly how they were created. Aside from these links, you can simply go to the Affinity website, where you’ll find extensive, detailed video tutorials for each step of the process in any of these three applications.

If you are seeking to shake off subscriptions or simply not bound to using Adobe’s design tools for work, then these Affinity applications offer a great alternative. Affinity Publisher, Photo, and Designer are standalone applications, but the combination of the three forms a comprehensive image and design collection. Whether you are a professional designer or just someone who needs to generate the occasional print document, Affinity Publisher is a solid addition to your software tools.

©2019 Oliver Peters

Black Mirror: Bandersnatch

Bandersnatch was initially conceived as an interactive episode within the popular Black Mirror anthology series on Netflix. Instead, Netflix released it as a standalone, spin-off film in December 2018. It’s the story of programmer Stefan Butler (Fionn Whitehead) as he adapts a choose-your-own-adventure novel into a video game. Set in 1984, the film lets viewers make decisions for Butler, which then determine the next branch of the story shown to them. Viewers can go back through Bandersnatch and opt for different decisions, in order to experience other versions of the story.

Bandersnatch was written by show creator Charlie Brooker (Black Mirror, Cunk on Britain, Cunk on Shakespeare), directed by David Slade (American Gods, Hannibal, The Twilight Saga: Eclipse), and edited by Tony Kearns (The Lodgers, Cardboard Gangsters, Moon Dogs). I recently had a chance to interview Kearns about the experience of working on such a unique production.

__________________________________________________

[OP] Please tell me a little about your editing background leading up to cutting Bandersnatch.

[TK] I started out almost 30 years ago editing music videos in London. I did that full-time for about 15 years working for record companies and directors. At the tail end of that a lot of the directors I was working with moved into doing commercials, so I started editing commercials more and more in Dublin and London. In Dublin I started working on long form, feature film projects and cut about 10 projects that were UK or European co-productions with the Irish Film Board.

In 2017 I got a call from Black Mirror to edit the Metalhead episode, which was directed by David Slade. He was someone I had worked with on music videos and commercials 15 years previously, before he had moved to the United States. That was a nice circularity. We were working together again, but on a completely different type of project – drama, on a really cool series, like Black Mirror. It went very well, so David and I were asked to get involved with Bandersnatch, which we jumped at, because it was such an amazing, different kind of project. It was unlike anything either of us – or anyone else, for that matter – had ever done at that level of complexity.

[OP] Other attempts at interactive storytelling – with the exception of the video game genre – have been hit-or-miss. What were your initial thoughts when you read the script for the first time?

[TK] I really enjoyed the script. It was written like a conventional script, but with software called Twine, so you could click on it and go down different paths. Initially I was overwhelmed at the complexity of the story and the structure. It wasn’t that I was like a deer in the headlights, but it gave me a sense of scale of the project and [writer/show runner] Charlie Brooker’s ambition to take the interactive story to so many layers.

On my own time I broke down the script and created spreadsheets for each of the eight sections in the script and wrote descriptions of every possible permutation, just to give me a sense of what was involved and to get it in my head what was going on. There are so many different narrative paths – it was helpful to have that in my brain. When we started editing, that would also help me to keep a clear eye at any point.

[OP] How long of a schedule did you have to post Bandersnatch?

[TK] 17 weeks was the official edit time, which isn’t much longer than on a low-budget feature. When I mentioned that to people, they felt that was a really short amount of time; but, we did a couple of weekends, we were really efficient, and we knew what we were doing.

[OP] Were you under any running length constraints, in the same way that a TV show or a feature film editor often wrestles with on a conventional linear program?

[TK] Not at all. This is the difference – linear doesn’t exist. The length depends on the choices that are made. The only direction was for it not to be a sprawling 15-hour epic – that there would be some sort of ballpark time. We weren’t constrained, just that each segment had to feel right – tight, but not rushed.

[OP] With that in mind, what sort of process did you go through to get it to feel right?

[TK] Part of each edit review was to make it as tight or as lean as it needed to be. Netflix developed their own software, called Branch Manager, which allowed people to review the cut interactively by selecting the choice points. My amazing assistant editor, John Weeks, is also a coder, so he acquired an extra job, which was to take the exports and do the coding in order to have everything work in Branch Manager. He’s a very robust person, but I think we almost broke him (laughs), because there were up to 100 Branch Manager versions by the end. The coding was hanging on by a thread. He was a bit like Scotty in Star Trek, “The engines can’t hold it anymore, Captain!”

By using Branch Manager, people could choose a path and view it and give notes. So I would take the notes, make the changes, and it would be re-exported. Some segments might have five cuts while others would be up to 13 or 14. Some scenes were very straightforward, but others were more difficult to repurpose.

Originally there were more segments in the script, but after the first viewings it was felt that there were too many in there. It was on the borderline of being off-putting for viewers. So we combined a few, but I made sure to keep track of that so it was in the system. There was a lot of reviewing, making notes, updating spreadsheets, and then making sure John had the right version for the next Branch Manager creation. It was quite an involved process.

[OP] How were you able to keep all of this straight? Did you use the common technique of scene cards on the wall or something different?

[TK] If you looked at flowcharts your head would explode, because it would be like looking at the wiring diagram of an old-fashioned telephone exchange. There wouldn’t have been enough room on the wall. For us, it would just be on paper – notebooks and spreadsheets. It was more in our heads – our own sense of what was happening – that made it less confusing. If you had the whole thing as a picture, you just wouldn’t know where to look.

[OP] In a conventional production an editor always has to be mindful that when something is removed, it may have ramifications for the story later on. In this case, I would imagine that those revisions affected the story in either direction. How were you able to deal with that?

[TK] I have been asked how we knew that each path would have a sense of a narrative arc. We couldn’t think of it as one, total narrative arc. That’s impossible. You’d have to be a genius to know that it’s all going to work. We felt the performances were great and the story was strong, but it doesn’t have a conventional flow. There are choice points, which act as a propellant into the next part of the film, creating an experience unlike the straight story arc of conventional films or episodes. Although there wasn’t a traditional arc, it still had to feel like a well-told story – and that you would have empathy and a sense of engagement, that it wasn’t a gimmick.

[OP] How did the crew and actors manage to keep the story straight in their minds as scenes were filmed?

[TK] As with any production, the first few days are finding out what you’ve let yourself in for. This was a steep learning curve in that respect. Only three weeks of the seven-week shoot were in the same studio complex where I was working, so I wasn’t present. But there was a sense that they needed to make it easier for the actors and the crew. The script supervisor, Marilyn Kirby, was amazing. She was the oracle for the whole shoot. She kept the whole show on the road, even when it was quite complicated. The actors got into the swing of it quickly, because I had no issues with the rushes. They were fantastic.

[OP] What camera formats were used and what is your preparation process for this footage prior to editing?

[TK] It’s the widest variety of camera formats I’ve ever worked with. ARRI Alexa 65 and RED, but also 1980s Ikegami TV cameras, Super 8mm, 35mm, 16mm, and VHS. Plus, all of the print stills were shot on black-and-white film. The data lab handled the huge job to keep this all organized and provide us with the rushes. So, when I got them, they were ready to go. The look was obviously different between the sources, but otherwise it was the same as a regular film. Each morning there was a set of ProRes Proxy rushes ready for us. John synced and organized them and handed them over. And then I started cutting. Considering all the prep the DIT and the data lab had to go through, I think I was in a privileged position!

[OP] What is your method when first starting to edit a scene?

[TK] I watch all of the rushes and can quickly see which take might be the bedrock framing for a scene – which is best for a given line. At that point I don’t just slap things together on a timeline. I try to get a first assembly to be as good as possible, because it just helps anyone who sees it. If you show a director or a show runner a sloppy cut, they’ll get anxious and I don’t want that to happen. I don’t want to give the wrong impression.

When I start a scene, I usually put the wide down end-to-end, so I know I have the whole scene. Then I’ll play it and see what I have in the different framings for each line – and then the next line and the next and so on. Finally, I go back and take out angles where I think I may be repeating a shot too much, extend others, and so on. It’s a build-it-up process in an effort to get to a semi-fine cut as quickly as possible.

[OP] Were you able to work with circle takes and director’s notes on Bandersnatch?

[TK] I did get circle takes, but no director’s notes. David and I have an intuitive understanding, which I hope to fulfill each time – that when I watch the footage he shoots, I’ll get what he’s looking for in the scene. With circle takes, I have to find out very quickly whether the script supervisor is any good or not. Marilyn is brilliant, so whenever she’s doing that, I know that take is the one. David is a very efficient director, so there weren’t a massive number of takes – usually two or three takes for each set-up. Everything was shot with two cameras, so I had plenty of coverage. I understand what David is looking for and he trusts me to get close to that.

[OP] With all of the various formats, what sort of shooting ratio did you encounter? Plus, you had mentioned two-camera scenes. What is your approach to that in your edit application?

[TK] I believe the various story paths totaled about four-and-a-half hours of finished material. There was a 3:1 shooting ratio, times two cameras – so maybe 6:1 or even 9:1. I never really got a final total of what was shot, but it wasn’t as big as you’d expect. 

When I have two-camera coverage I deal with it as two individual cameras. I can just type in the same timecode for the other matching angle. I just get more confused with what’s there when I use multi-cam. I prefer to think of it as that’s the clip from the clip. I hope I’m not displaying an anti-technology thing, but I’m used to it this way from doing music videos. I used to use group clips in Avid and found that I could think about each camera angle more clearly by dealing with them separately.

[OP] I understand that you edited Bandersnatch on Adobe Premiere Pro. Is that your preferred editing software?

[TK] I’ve used Premiere Pro on two feature films, which I cut in Dublin, and a number of shorts and TV commercials. If I am working where I can set up my own cutting room, then I’m working with Premiere. I use both Avid and Adobe, but I find I’m faster on Premiere Pro than on Media Composer. The tools are tuned to help me work faster.

The big thing on this job was that you can have multiple sequences open at the same time in Premiere. That was going to be the crunch thing for me. I didn’t know about Branch Manager when I specified Premiere Pro, so I figured that would be the way we would need to review the segments – simply click on a sequence tab and play it as a rudimentary way to review a story path. The company that supplied the gear wasn’t as familiar with Premiere [as they were with Avid], so there were some issues, but it was definitely the right choice.

[OP] Media Composer’s strength is in multi-editor workflows. How did you handle edit collaboration in Premiere Pro?

[TK] We used Adobe’s shared projects feature, which worked, but wasn’t as efficient as working with Avid in that version of Premiere. It also wasn’t ideal that we were working from Avid Nexis as the shared storage platform. In the last couple of months I’ve been in contact with the people at Adobe and I believe they are sorting out some of the issues we were having in order to make it more efficient. I’m keen for that to happen.

In the UK and London in particular, the big player is Avid and that’s what people know, so anything different, like Premiere Pro, is seen with a degree of suspicion. When someone like me comes in and requests something different, I guess I’m viewed as a bit of a pain in the ass. But, there shouldn’t just be one behemoth. If you had worked on the old Final Cut Pro, then Premiere Pro is a natural fit – only more advanced and supported by a company that didn’t want to make smart phones and tablets.

[OP] Since Adobe Creative Cloud offers a suite of compatible software tools, did you tap into After Effects or other tools for your edit?

[TK] That was another real advantage – the interaction with the graphics user interface and with After Effects. When we mocked up the first choice points, it was so easy to create, import, and adjust. That was a huge advantage. Our VFX editor was able to build temp VFX in After Effects and we could integrate that really easily. He wasn’t just using an edit system’s effects tool, but actual VFX software, which seamlessly integrated with Premiere. Although these weren’t final effects at full 4K resolution, he was able to do some very complex things, so that everyone could go, “Yes, that’s it.”

[OP] In closing, what take-away would you offer an editor interested in tackling an interactive story as compared to a conventional linear film?

[TK] I learned to love spreadsheets (laughs). I realized I had to be really, really organized. When I saw the script I knew I had to go through it with a fine-tooth comb and get a sense of it. I also realized you had to unlearn some things you knew about conventional episodic TV. You can’t think of some things in the same way. A practical thing for the team is that you have to have someone who knows coding, if you are using a tool similar to Branch Manager. It’s the only way you will be able to see it properly.

It’s a different kind of storytelling pressure that you have to deal with, mostly because you have to trust your instincts even more that it will work as a coherent story across all the narrative paths. You also have to be prepared to unlearn some of the normal methods you might use. One example is that you have to cut the opening of different segments differently to work with the last shot of the previous choice point, so you can’t just go for one option, you have to think more carefully what the options are. The thing is not to walk in thinking it’s going to be the same as any other production, because it ain’t.

For more on Bandersnatch, check out these links: postPerspective, an Art of the Guillotine interview with Tony Kearns, and a scene analysis at This Guy Edits.

Images courtesy of Netflix and Tony Kearns.

©2019 Oliver Peters

Did you pick the right camera? Part 3

Let me wrap up this three-parter with some thoughts on the media side of cameras. The switch from videotape recording to file-based recording has added complexity with not only specific file formats and codecs, but also the wrapper and container structure of the files themselves. The earliest file-based camera systems from Sony and Panasonic created a folder structure on their media cards that allowed for audio and video, clip metadata, proxies, thumbnails, and more. FAT32 formatting was adopted, so a 4GB file limit was imposed, which added the need for clip-spanning any time a recording exceeded 4GB in size.
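To see why spanning kicks in so often in practice, a quick back-of-the-envelope calculation shows how fast a recording reaches the FAT32 ceiling. The 50 Mb/s bitrate here is just an assumed example, not a value from the article:

```python
# FAT32 imposes a 4GB limit per file, so cameras split ("span")
# long recordings into multiple clips. The bitrate is an assumption.
limit_bytes = 4 * 1024**3          # 4 GB FAT32 file-size ceiling
bitrate_mbps = 50                  # hypothetical 50 Mb/s recording
seconds_per_span = limit_bytes * 8 / (bitrate_mbps * 1_000_000)
print(round(seconds_per_span / 60, 1))  # minutes of footage per spanned clip
```

At that data rate a new spanned segment appears roughly every eleven and a half minutes, which is why long interviews on these cards always arrive as multi-part clips.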

As a result, these media cards contain a complex hierarchy of spanned files, folders, and subfolders. They often require a special plug-in for each NLE to be able to automatically interpret the files as the appropriate format of media. Some of these are automatically included with the NLE installation while others require the user to manually download and install the camera manufacturer’s software.

This became even more complicated with RED cameras, which added additional QuickTime reference files at three resolutions, so that standard media players could be used to read the REDCODE RAW files. It got even worse when digital still photo cameras added video recording capabilities, thus creating two different sets of folder paths on the card for the video and the still media. Naturally, none of these manufacturers adopted the same architecture, leaving users with a veritable Christmas tree of discovery every time they popped in one of these cards to copy/ingest/import media.

At the risk of sounding like a broken record, I am totally a fan of ARRI’s approach with the Alexa camera platform. By adopting QuickTime wrappers and the ProRes codec family (or optionally DNxHD as MXF OP1a media), Alexa recordings use a simple folder structure containing a set of uniquely-named files. These movie files include interleaved audio, video, and timecode data without the need for subfolders, sidecar files, and other extraneous information. AJA has adopted a similar approach with its Ki Pro products. From an editor’s point of view, I would much rather be handed Alexa or Ki Pro media files than any other camera product, simply because these are the most straightforward to deal with in post.

I should point out that in a small percentage of productions, the incorporated metadata does have value. That’s often the case when high-end VFX are involved and information like lens data can be critical. However, in some camera systems, this is only tracked when doing camera raw recordings. Another instance is with GoPro 360-degree recordings. The front and back files and associated data files need to stay intact so that GoPro’s stitching software can properly combine the two halves into a single movie.

You can still get the benefit of the simpler Alexa-style workflow in post with other cameras if you do a bit of media management prior to ingesting the files for the edit. My typical routine for the various Panasonic, Canon, Sony, and prosumer cameras is to pull all of the media files out of their various Clip or Private folders and move them to the root folder (usually labelled by camera roll or date). I trash all of those extra folders, because none of it is useful. (RED and GoPro 360 are the only formats for which I skip this step.) When it’s a camera that doesn’t generate unique file names, I will run a batch renaming application in order to generate unique file names. There are a few formats (generally drones, ‘action’ cameras, smart phones, and image sequences) that I will transcode to some flavor of ProRes. Once I’ve done this, the edit and the rest of post become smooth sailing.
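A routine like this can also be scripted. The sketch below is a hypothetical illustration of the flatten-and-rename step, not a tool mentioned in the article; the function name, the roll-name prefix scheme, and the list of video extensions are all assumptions you would adapt to your own cameras:

```python
import shutil
from pathlib import Path

# Video extensions to collect; extend as needed for a given camera (assumption).
VIDEO_EXTS = {".mov", ".mp4", ".mxf"}

def flatten_card(card_root: str, roll_name: str) -> list[Path]:
    """Move every video file found under card_root (e.g. inside CLIP or
    PRIVATE subfolders) up to the root, renaming each with the roll name
    plus a counter so that file names stay unique across cards."""
    root = Path(card_root)
    # Sort for a stable, predictable numbering order.
    clips = sorted(p for p in root.rglob("*")
                   if p.is_file() and p.suffix.lower() in VIDEO_EXTS)
    moved = []
    for i, clip in enumerate(clips, start=1):
        dest = root / f"{roll_name}_{i:04d}{clip.suffix.lower()}"
        shutil.move(str(clip), str(dest))
        moved.append(dest)
    # Remove the now-empty camera folder hierarchy, deepest folders first.
    for sub in sorted((p for p in root.rglob("*") if p.is_dir()),
                      key=lambda p: len(p.parts), reverse=True):
        if not any(sub.iterdir()):
            sub.rmdir()
    return moved
```

Run against a copy of the card, never the original media, and note that a real version would also need to handle spanned clips before renaming breaks their ordering.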

While part of your camera buying decision should be based on its impact on post, don’t let that be a showstopper. You just have to know how to handle it and allow for the necessary prep time before starting the edit.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.
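The desqueeze arithmetic behind that 6K figure is easy to check, using the numbers from the Alexa 4:3 example above:

```python
sensor_w, sensor_h = 2880, 2160    # Alexa 4:3 sensor mode
squeeze = 2.0                      # 2:1 anamorphic lens factor
desqueezed_w = int(sensor_w * squeeze)
aspect = desqueezed_w / sensor_h
print(desqueezed_w, sensor_h)      # 5760 2160
print(round(aspect, 2))            # 2.67 – room to crop in to 2.39:1 'scope
```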

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to clip highlights unrecoverably in your recorded image. Or in some cases the highlights aren’t digitally clipped, but there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you can have good iso/temp/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the iso and/or exposure settings. There’s no free lunch and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate. However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been CinemaDNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras used it until it was replaced by Blackmagic RAW. Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module), as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve – or at least with far less effort. Unfortunately, you are stuck making that settings decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters

Did you pick the right camera? Part 1

There are tons of great cameras and lenses on the market. While I am not a camera operator, I have been a videographer on some shoots in the past. Relevant production and camera logistical issues are not foreign to me. However, my main concern in evaluating cameras is how they impact me in post – workflow, editing, and color correction. First – biases on the table. Let me say from the start that I have had the good fortune to work on many productions shot with ARRI Alexas and that is my favorite camera system in regards to the three concerns offered in the introductory post. I love the image, adopting ProRes for recording was a brilliant move, and the workflow couldn’t be easier. But I also recognize that ARRI makes an expensive albeit robust product. It’s not for everyone. Let’s explore.

More camera choices – more considerations

If you are going to shoot with only a single camera system, then that simplifies the equation. As an editor, I long for the days when directors would only shoot single-camera. Productions were more organized and there was less footage to wade through. And most of that footage was useful – not cutting room fodder. But cameras have become cheaper and production timetables condensed, so I get it that having more than one angle for every recording can make up for the compressed schedule. What you will often see is one expensive 'hero' camera as the A-camera for a shoot and then cheaper/lighter/smaller cameras as the B and C-cameras. That can work, but the success comes down to the ingredients that the chef puts into the stew. Some cameras go well together and others don't. That's because all cameras use different color science.

Lenses are often forgotten in this discussion. If the various cameras being used don’t have a matched set of lenses, the images from even the exact same model cameras – set to the same settings – will not match perfectly. That’s because lenses have coloration to them, which will affect the recorded image. This is even more extreme with re-housed vintage glass. As we move into the era of HDR, it should be noted that various lens specialists are warning that images made with vintage glass – and which look great in SDR – might not deliver predictable results when that same recording is graded for HDR.

Find the right pairing

If you want the best match, use identical camera models and matched glass. But that's not practical or affordable for every company or every production. The next best thing is to stay within the same brand. For example, Canon is a favorite among documentary producers. Projects using cameras from the EOS Cinema line (C300, C300 MkII, C500, C700) will end up with looks that match better in post between cameras. Generally the same holds true for Sony or Panasonic.

It’s when you start going between brands that matching looks becomes harder, because each manufacturer uses their own ‘secret sauce’ for color science. I’m currently color grading travelogue episodes recorded in Cuba with a mix of cameras. A and B-cameras were ARRI Alexa Minis, while the C and D-cameras were Panasonic EVA1s. Additionally, a Panasonic GH5, a Sony A7SII, and various drone cameras were used. Panasonic appears to use a color science similar to ARRI’s, although their log color space is not as aggressive (flat). With all cameras set to shoot with a log profile and the appropriate REC709 LUT applied to each in post (LogC and V-Log, respectively), I was able to get a decent match between the ARRI and Panasonic cameras, including the GH5. Not so close with the Sony or drone cameras, however.
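Those log profiles are simply transfer functions, which is why a LUT can undo them mathematically. As a sketch, here is a decode from LogC to scene-linear using ARRI's published LogC (V3, EI 800) constants – verify these values against ARRI's current documentation before relying on them:

```python
def logc_to_linear(t):
    """Decode an ARRI LogC (V3, EI 800) code value to scene-linear.

    Constants are ARRI's published EI 800 parameters; check current
    ARRI documentation before production use.
    """
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d = 0.247190, 0.385537
    e, f = 5.367655, 0.092809
    if t > e * cut + f:
        # Logarithmic segment covering most of the exposure range.
        return (10 ** ((t - d) / c) - b) / a
    # Linear segment near black.
    return (t - f) / e
```

A LogC code value of roughly 0.391 decodes to 0.18 – middle grey – which is exactly why log footage looks so flat and grey before a viewing LUT is applied.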

Likewise, I’ve graded a lot of Canon C300 MkII/C500 footage and it looks great. However, trying to match Canon to ARRI shots just doesn’t come out right. There is too much difference in how blues are rendered.

The hardest matches are when professional production cameras are married with prosumer DSLRs, such as a Sony FS5 and a Fujifilm camera. Not even close. And smartphone cameras – yikes! But as I said above, the GH5 does seem to provide passable results when used with other Panasonic cameras and, in our case, the ARRIs. However, my experience there is limited, so I wouldn’t guarantee that in every case.

Unfortunately, there’s no way to really know when different brands will or won’t create a compatible A/B-camera combination until you start a production. Or rather, when you start color correcting the final. Then it’s too late. If you have the luxury of renting or borrowing cameras and doing a test first, that’s the best course of action. But as always, try to get the best you can afford. It may be better to get a more advanced camera, but only one. Then restructure your production to work with a single-camera methodology. At least then, all of your footage should be consistent.

Click here for the Introduction.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Intro

My first facility job after college at a hybrid production/post company included more than just editing. Our largest production effort was to produce, post, and dub weekly price-and-item retail TV commercials for a large, regional grocery chain. This included two to three days a week of studio production for product photography (product displays, as well as prepared food shots).

Early on, part of my shift included being the video shader for the studio camera being used. The video shader in a TV station operation is the engineering operator who makes sure the cameras are set up and adjusts video levels during the actual production. However, in our operation (as would be the case in any teleproduction facility of that time) this was a more creative role – more akin to a modern DIT (digital imaging technician) than a video engineer. It didn’t involve simply adjusting levels, but also ‘painting’ the image to get the best-looking product shots on screen. Under the direction of the agency producer and our lighting DP/camera operator, I would use both the RGB color balance controls of the camera, along with a built-in 6-way secondary color correction circuit, to make each shot look as stylistic – and the food as appetizing – as possible. Then I rolled tape and recorded the shot.

This was the mid-1970s when RCA dominated the broadcast camera market. Production and gear options were either NTSC, PAL, or film. We owned an RCA TK-45 studio camera and a TKP-45 ‘portable’ camera that was tethered to a motor home/mobile unit. This early RCA color correction system of RGB balance/level controls for lift/gamma/gain ranges, coupled with a 6-way secondary color correction circuit (sat/hue trim pots for RGBCMY), was used in RCA cameras and telecines. It became the basis for nearly all post-production color correction technology to follow. I still apply those early fundamentals that I learned back then in my work today as a colorist.
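Those lift/gamma/gain fundamentals survive essentially unchanged in today's grading software. A minimal per-channel sketch – this is one common simplified textbook formulation, and every real grading application differs in the details:

```python
def lift_gamma_gain(v, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a primary correction to a normalized (0.0-1.0) channel value.

    Simplified formulation: gain scales the whole range, lift offsets
    the blacks, gamma bends the midtones. Real grading tools vary in
    exactly how these three controls interact.
    """
    v = v * gain + lift          # scale, then offset the signal
    v = max(0.0, min(1.0, v))    # clip to the legal range
    return v ** (1.0 / gamma)    # midtone bend; gamma > 1 brightens
```

Running this independently on the R, G, and B channels is the 'painting' described above: nudging the red lift warms the shadows, trimming the blue gain cools the highlights, and so on.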

Options = Complexity

In the intervening decades, the sheer number of camera vendors has blossomed, far surpassing RCA, Philips, and the few other companies of the 1970s. Naturally, we are well past the simple concerns of NTSC or PAL; and film-based production is an oddity, not the norm. This has introduced a number of challenges:

1. More and cheaper options mean that shooting with multiple cameras is a given.

2. Camera raw and log recording, along with modern color correction methods, give you seemingly infinite possibilities – often making it even harder to dial in the right look.

3. There is no agreement on file format/container standards, so file-based recording adds workflow complexity that never existed in the past.

In the next three blog posts, I will explore each of these items in greater depth.

©2019 Oliver Peters

Minimalism versus Complexity in Post

The prevailing wisdom is that Apple might preview the next Mac Pro at its annual WWDC event coming in a couple of weeks. Then the real product would likely be available by the end of the year. It will be interesting to see what that brings, given that the current Mac Pro was released in 2013 with no refreshes since. And older Mac Pro towers (mid-2009 to 2012) are still competitive (with upgrades) against the current run of Apple’s Mac product line.

Many professional users are hoping for a user-upgradeable/expandable machine, like the older towers. But that hasn’t been Apple’s design and engineering trend. MacBooks, MacBook Pros, iMacs, and iMac Pros are more sealed and non-upgradeable than their predecessors. The eGPU and eGPU Pro units manufactured by Blackmagic Design are, in fact, largely an Apple design with Apple engineering specifications intended to meet power, noise and heat parameters. As such, you can’t simply pop in a newer, faster GPU chip, as you can with GPU cards and the Sonnet eGPU devices.

What do we really need?

Setting emotions aside, the real question is whether such expandability is needed any longer. Over the years, I’ve designed, built, and worked in a number of linear edit suites, mixing rooms, and other environments that required a ton of outboard gear. The earliest nonlinear suites (even up until recently) were hardware-intensive. But is any of this needed any longer? My own home rig had been based on a mid-2009 Mac Pro tower. Over the years, I’ve increased RAM, swapped out three GPU cards, changed the stock hard drives for two SSDs and two 7200 RPM media drives (RAID-0), as well as added PCIe cards for eSATA/USB3 and Blackmagic Design monitor display. While each of those moves was justified at the time, I do have to wonder whether that money would have been better spent on newer computer models.

Today that same Mac Pro sits turned off next to my desk. While still current with most of the apps and the OS (not Mojave, though), it can’t accept Thunderbolt peripherals, and a few apps, like Pixelmator Pro, won’t install, because they require Metal 2 (only available with newer hardware). So my home suite has shifted to a mid-2014 MacBook Pro. In doing so, I have adopted the outboard modular solution over the cards-in-the-tower approach. This is largely possible today because small, compact computers – such as laptops – have become sufficiently powerful to deal with today’s video needs.

I like this solution because I can easily shift from location to home by simply plugging in one Thunderbolt cable linked to my OWC dock. The dock connects my audio interface, a few drives, and my primary 27″ Dell display. An additional plus is that I no longer have to sync my personal files and applications between my two machines (I prefer to avoid cloud services for personal documents). I bought a Rain Design laptop stand and a TwelveSouth BookArc, so that under normal use (with one display), the MBP tucks behind the Dell in clamshell mode sitting in the BookArc cradle. When I need a dual-display configuration, I simply bring out the Rain stand and open up the MBP next to the Dell.

Admittedly, this solution isn’t for everyone. If I never needed a mobile machine, I certainly wouldn’t buy a laptop. And if I needed heavy horsepower at home, such as for intensive After Effects work or grading 4K and 8K feature films, then I would probably go for a tower – maybe even one of the Puget Systems PCs that I reviewed. But most of what I do at home is standard editing with some grading, which nearly any machine can handle these days.

Frankly, if I were to start from scratch today, instead of the laptop, tower, and an iPad, I would be tempted to go with a fully-loaded 13″ MacBook Pro. For home, add the eGPU Pro, an LG 5K display, dock, audio I/O and speakers, and drives as needed. This makes for a lighter, yet capable editor in the field. When you get home, one Thunderbolt 3 cable from the eGPU Pro into the laptop would connect the whole system, including power to the MBP.

Of course, I like simple and sleek designs – Frank Lloyd Wright, Bauhaus, Dieter Rams, Scandinavian furniture, and so on. So the Jobs/Ive approach to industrial design does appeal to me. Fortunately, for the most part, my experience with Apple products has been a positive one. However, it’s often hard to make that work in a commercial post facility. After all, that’s where horsepower is needed. But does that necessarily mean lots of gear attached to our computers?

How does this apply to a post facility?

At the day job, I usually work in a suite with a 2013 Mac Pro. Since I do a lot of the Resolve work, along with editing, that Mac Pro cables up to two computer displays plus two grading displays (calibrated and client), a CalDigit dock, a Sonnet 10GigE adapter, a Promise RAID, a TimeMachine drive, the 1GigE house internet, and an audio interface. Needless to say, the intended simplicity of the Mac Pro design has resulted in a lot of spaghetti hanging off of the back. Clearly the wrong design for this type of installation.

Conversely, the same Mac Pro, in a mixing room might be a better fit – audio interface, video display, Thunderbolt RAID. Much less spaghetti. Our other edit stations are based around iMacs/iMac Pros with few additional peripherals. Since our clients do nearly all of their review-and-approval online, the need for a large, client-friendly suite has been eliminated. One room is all we need for that, along with giving the rest of the editors a good working environment.

Even the Mac Pro room could be simplified, if it weren’t for the need to run Resolve and Media Composer on occasion. For example, Premiere Pro and Final Cut Pro X both send real video to an externally connected desktop display. If you have a reasonably accurate display, like a high-end consumer LED or OLED flat panel, then all editing and even some grading and graphic design can be handled without an additional, professional video display and hardware interface. Any room configured this way can easily be augmented with a roving 17″-34″ calibrated display and a mini-monitor device (AJA or BMD) for those ad hoc needs, like more intense grading sessions.

An interesting approach has been discussed by British editor Thomas Grove Carter, who cuts at London’s Trim, a commercial editorial shop. Since they are primarily doing the creative edit and not the finishing work, the suites can be simplified. For the most part, they only need to work with proxy or lighter-weight ProRes files. Thus, there are no heavy media requirements, as might be required with camera RAW or DPX image sequences. As he has discussed in interviews and podcasts (generally related to his use of Final Cut Pro X), Trim has been able to design edit rooms with a light hardware footprint. Often Trim’s editors are called upon to start editing on-site and then move back to Trim to continue the edit. So mobility is essential, which means the editors are often cutting with laptops. Moving from location or home to an edit suite at Trim is as simple as hooking up the laptop to a few cables: a large display for interface or video, plus fast, portable SSDs with all of the project’s media.

An installation built with this philosophy in mind can be further simplified through the use of a shared storage solution. Unlike in the past, when shared storage systems were complex, hard to install, and confusing to manage – today’s systems are designed with average users in mind. If you are moderately tech savvy, you can get a 10GigE system up and running without the need for an IT staff.

At the day-job shop, we are running two systems – QNAP and LumaForge Jellyfish Rack. We use both for different reasons, but either system by itself is good for nearly any installation – especially Premiere Pro shops. If you are principally an FCPX shop, then Jellyfish will be the better option for you. A single ethernet cable to each workstation from a central server ‘closet’ is all that’s required for a massive amount of media storage available to every editor. No more shuffling hard drives, except to load location footage. Remember that shared storage allows for a distributed workflow. You can set up a simple Mac mini bay for assistant editors and general media management without the need to commandeer an edit suite for basic tasks.

You don’t have to look far to see that the assumptions of the past few decades in computer development and post-production facility design aren’t entirely valid any longer. Client interactions have changed and computer capabilities have improved. All of the extra add-ons and doodads we thought we had to have are no longer essential – and no longer the driver for the way in which computers have to be built today.

©2019 Oliver Peters