The 2019 Mac Pro Truck

In 2010 Steve Jobs famously offered the analogy that traditional computers are like trucks in the modern era. Not that trucks were going away, but that they were simply no longer a necessity for most of us, now that the majority of the populace wasn’t engaged in farming. While trucks would continue to be purchased and used, far fewer people actually needed them, because the car covered their needs. The same was true, he felt, of traditional computers.

Jobs is often characterized as being a consumer market-driven guy, but I believe the story is more nuanced. After all, he founded NeXT Computer, which clearly made high-end workstations. Jobs also became the major shareholder in Pixar Animation Studios – a company that not only needed advanced, niche computing power, but also developed some of its own specialized graphics hardware and software. So a mix of consumer and advanced computing DNA runs throughout Apple.

By the numbers

Unless you’ve been under a rock, you know that Apple revealed its new 2019 Mac Pro at the WWDC earlier this month. This year’s WWDC was an example of a stable, mature Apple firing on all cylinders. iPhone unit sales have not been growing. The revenue has, but that’s because the prices have been going up. Now it’s time to push all of the company’s businesses, including iPad, services, software, and the Mac. Numbers are hard to come by, although Apple has acknowledged that the Mac unit by itself is nearly a $25 billion business and that it would be close to being in the Fortune 100 on its own. Mac sales split roughly 80/20 between laptops and desktops. For comparison to the rest of the PC world, Apple’s market share is around 7%, ranking fourth behind Lenovo, HP, and Dell, but just ahead of Acer. There are 100 million active macOS users (Oct 2018), although Windows 10 adoption alone runs eight times larger (Mar 2019).

We can surmise from this information that there are 20 million active Mac Pro, iMac, iMac Pro, and Mac mini users. It’s fair to assume that a percentage of those are in the market for a new Mac Pro. I would project that maybe 1% of all Mac users would be interested in upgrading to this machine – i.e. around 1 million prospective purchasers. I’m just spit-balling here, but at a starting price of $6,000, that’s a potential market of $6 billion in sales before factoring in any upgrade options or new macOS users!
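
Just to show the spit-ball math in one place, here it is as a few lines of Python. Every input is a rough assumption pulled from the figures above, not an Apple-published number.

```python
# Back-of-the-envelope Mac Pro market estimate, using the rough figures above.
# All inputs are guesses from this post, not Apple-published numbers.

active_mac_users = 100_000_000      # Apple's Oct 2018 active macOS figure
desktop_share = 0.20                # assumed 80/20 laptop-to-desktop split
interested_rate = 0.01              # ~1% of all Mac users shopping for a Mac Pro
base_price = 6_000                  # announced starting price in USD

desktop_users = active_mac_users * desktop_share      # ~20 million
prospects = active_mac_users * interested_rate        # ~1 million
potential_sales = prospects * base_price              # ~$6 billion

print(f"Desktop users: {desktop_users:,.0f}")
print(f"Prospective buyers: {prospects:,.0f}")
print(f"Potential base-config sales: ${potential_sales:,.0f}")
```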

A funny thing happened on the way to the WWDC

Apple went through a computing platform progression from the old Quadra 950 and Power Macintosh 9600 towers to the first Intel Mac Pro towers over the course of the mid-1990s to 2006. The second generation of the older Mac Pro was released during 2009. So in a dozen-plus years, Apple customers saw seven major processor/platform changes and had come to expect a constant churn. In essence, plan on replacing your system every few years. However, from 2009 onward, customers who bought those Mac Pros had a machine that could easily last, be productive, and still be somewhat competitive ten years later. The byproduct was that you could plan on a longer life expectancy for the hardware you buy, instead of an automatic two-to-three-year replacement cycle.

Even the 2013 Mac Pro has lasted until now (six years later) and remains competitive with most machines. The miscalculation that Apple made with the 2013 Mac Pro was betting that pro customers would prefer external expandability over internal hardware upgrades. Form over function. That turned out to be wrong. I’m probably one of the few who actually likes the 2013 Mac Pro under the right conditions. It’s an innovative design, but unfortunately one that can’t be readily upgraded.

The second major change in computing hardware is that now “lesser” machines are more than capable of doing the work required in media and entertainment. During those earlier days of the G3/G4/G5 PowerMacs and the early Intel Mac Pros, Apple didn’t make laptops and all-in-ones that had enough horsepower to handle video editing and the like. Remember the colorful, plastic iMacs and white eMacs? Or what about the toilet-seat-like iBook laptop? Good enough for e-mail, but not what you would want for editing.

Now, we have a wide range of both Mac and PC desktop computers and laptops that are up to the task. In the past, if you needed a performance machine, then you needed a workstation class computer. Nothing else would do. Today, a general purpose desktop PC that isn’t necessarily classed as a workstation is more than sufficient for designers, editors, and colorists. In the case of Apple, there’s a range of laptops and all-in-ones that cover those needs at many different price points.

The 2019 Mac Pro Reveal

Let me first say that I didn’t attend WWDC and I haven’t seen the new Mac Pro in person. I hope to be able to do a review at some point in the future. The bottom line is that this is purely an opinion piece for now.

There have certainly been a ton of internet comments about this machine – both positive and negative. Price is the biggest pain point. Clearly Apple intends this to be a premium product for the customer with demanding computing requirements. You can spin the numbers any way you like and people have. Various sites have speculated that a fully-loaded machine could drive the price from the $6,000 starting point to as high as $35K to $50K. The components that Apple defines in the early tech information do not perfectly match equivalent model numbers available on the suppliers’ websites. No one knows for sure how the specific Intel Xeon being used by Apple equates to other Xeons listed on Intel’s site. Therefore, exact price extrapolations are simply guesses for now.

In late 2009 I purchased an entry model 8-core Mac Pro. With some storage and memory upgrades, AppleCare, sales tax, and a small business discount, I paid around $4,000. The inflation difference over the decade is about 17%, so that same hardware should cost me $4,680 today. In fairness, Apple has a different design in this new machine and there are technologies not in my base 2009 machine, such as 10GigE, Thunderbolt 3, a better GPU, etc. Even though this new machine may be out of my particular budget right now, it’s still an acceptable value when compared with the older Mac Pros.

Likewise, if you compare the 2019 Mac Pro to comparable name brand workstations, like an HP Z8, you’ll quickly find that the HP will cost more. One clear difference, though, is that HP also offers smaller, less costly workstation models, such as the Z2, Z4, and Z6. The PC world also offers many high quality custom solutions, such as Puget Systems, which I have reviewed.

One design decision that could have mitigated the cost a bit is the choice of CPU chips. Apple has opted to install Xeon chips in all of its Mac Pro designs. Same with the iMac Pro. However, Intel also offers very capable Core i9 CPUs. The i9 chips offer faster core speeds and high core counts. The Xeons are designed to be run flat out 24/7. However, in the case of video editing, After Effects, and so on, the Core i9 chip may well be the better solution. These apps really thrive on fast single-core speeds, so having a 12-core or 28-core CPU, where each core has a slower clock speed, may not give you the best results. Regardless of benefit, Xeons do add to Apple’s hard costs in building the machine. Xeons are more expensive than Core chips. In some direct comparisons, a Xeon can cost $1,000 more than Intel’s retail price for the equivalent Core CPU.

The ultimate justification for buying a Mac Pro tower isn’t necessarily performance alone, but rather longevity and expandability. As I outlined above, customers have now been conditioned to expect the system to last and be productive for at least a decade. That isn’t necessarily true of an all-in-one or a laptop. This means that if you amortize the investment in a 2019 Mac Pro over a ten-year period, it’s actually quite reasonable.

The shame – and this is where much of the internet ire is coming from – is that Apple didn’t offer any intermediate models, like HP’s Z4 or Z6. I presume that Apple is banking on those customers buying iMacs, iMac Pros, Mac minis, or MacBook Pros instead. Couple one of these models with an external GPU and fast external storage and you will have plenty of power for your needs today. It goes without saying that comparing this Mac Pro to a custom PC build (which may be cheaper) is a non-starter. A customer for this Mac Pro will buy one, pure and simple. There is built-in price elasticity to this niche of the market. Apple knows that and the customers know it.

Nuts and bolts

The small details haven’t been fully revealed, so we probably won’t know everything about these new Mac Pros until September (the rumored release). Apple once again adopted a signature case design, which like the earlier tower case has been dubbed a “cheese grater.” Unlike the previous model, where the holes were simply holes for ventilation, the updated model (or would that be the retro model?) uses a lattice system in the case to direct the airflow. The 2019 is about the same size as its “cheese grater” predecessor, but 20 pounds lighter.

There is very little rocket science in how you build a workstation, so items like Xeon CPUs, GPU cards, RAM, and SSD system drives are well understood and relatively standard for a modern PC system.

The short hardware overview consists of:

8, 12, 16, 24, and 28-core Xeon CPU options

Memory from 32GB to 1.5TB of DDR4 ECC RAM

Up to four AMD GPU cards

1.4 kW power supply

Eight PCIe expansion slots (one used for Apple i/o card)

System storage options from 256GB to 4TB

Four Thunderbolt 3 ports (2 top and 2 back) plus two USB 3 ports (back)

(Note – more ports available with the upgraded GPU options)

Two 10Gb Ethernet ports

WiFi, Bluetooth, built-in speakers, headphone jack

So far, so good. Any modern workstation would have similar choices. There are several key unknowns and that’s where the questions come in. First, the GPU cards appear to be custom-designed AMD cards installed into a new MPX (Mac Pro expansion) module. This is a mounting/connecting cage to install and connect the hardware. However, if you wanted to add your own GPU card, would it fit into such a module? Would you have to buy a blank module from Apple for your card? Would your card simply fit into the PCIe slot and screw in like on any other tower? The last question does appear to be possible, but will there be proper Nvidia support?

The second big question relates to internal storage. The old “cheese grater” had sleds to install four internal drives. Up to six could be installed if you used the optical drive bays. The 2019 Mac Pro appears to allow up to four drives within an MPX chassis. Promise has already announced two products specifically for the Mac Pro. One would include four RAIDed 8TB drives for a 32TB capacity. 14TB HDDs are already available, so presumably this internal capacity will go up. 

The unknown is whether or not you can add drives without purchasing an MPX module. The maximum internal GPU option seems to be four cards, which are mounted inside two MPX modules. This is also the space required for internal drives. Therefore, if you have both MPX modules populated with GPU cards, then I would imagine you can’t add internal storage. But I may be wrong. As with most things tech, I predict that if blank MPX modules are required, a number of vendors will quickly offer cheaper aftermarket MPX modules for GPUs, storage, etc.

One side issue that a few blogs have commented on is the power draw. Because of the size of the power supply, the general feeling is that the Mac Pro should be plugged into a standard electrical circuit by itself, plus maybe a monitor. In other words, not a circuit with a bunch of other electrical devices, otherwise you might start blowing breakers.

Afterburner

A new hardware item from Apple is the optional Afterburner ProRes and ProRes RAW accelerator card. This uses an FPGA (field programmable gate array), which is a chip that can be programmed for various specific functions. It can potentially be updated in the future. Anyone who has worked with the RED Rocket or RED Rocket-X card in the past will be quite familiar with what the Afterburner is.

The Afterburner will decode ProRes and ProRes RAW codecs on-the-fly when this media is played in Final Cut Pro X, QuickTime Player X, and any other application updated to support the card. This would be especially beneficial with camera raw codecs, because the card debayers the raw sensor data via hardware acceleration at full resolution, instead of using the CPU. Other camera manufacturers with their own raw formats, like RED, ARRI, Canon, and Blackmagic Design, might add support for this card to accelerate their codecs, as well. What is not known is whether the Afterburner card can also be used to offload true background functions, like background exports and transcoding within Final Cut Pro X.

An FPGA card offers the promise of being future-proofed, because you can always update its function later. However, in actual practice, the hardware capabilities of any card become outstripped as the technology changes. This happened with the RED Rocket card and others. We’ll see if Apple has any better luck over time.

Performance

Having lots of cores is great, but with most media and entertainment software the GPU can be key. Apple has been at a significant disadvantage with many applications, like After Effects, because of their stance with Nvidia and CUDA acceleration. Apple prefers that a manufacturer support Metal, which is their way of leveraging the combined power of all CPUs and GPUs in the system. This all sounds great, but the reality is that it’s one proprietary technology versus another. In the benchmark tests I ran with the Puget PC workstation, the CUDA performance in After Effects easily trounced any Mac that I scored it against.

Look at Apple’s website for a chart representing the relative GPU performance of a 2013 Mac Pro, an iMac Pro, and the new 2019 Mac Pro. Each was tested with its respective top-of-the-line GPU option. The iMac Pro is 1.5x faster than the 2013 Mac Pro. The 2019 Mac Pro is twice as fast as the iMac Pro and 3x faster than the 2013 Mac Pro. While that certainly looks impressive, that 2x improvement over the iMac Pro comes thanks to two upgraded GPU cards instead of one. Well, duh! Of course, at this time we have no idea what these cards and MPX units will cost. (Note – I am not totally sure whether this testing used two GPUs in one MPX module or a total of four GPUs in two modules.)

We won’t know how well these really perform until the first units get out into the wild, especially how they compare against comparable PCs with high-powered Nvidia cards. I may be going out on a limb, but I would be willing to bet that many people who buy the base configuration for $6K – thinking that they will get a huge boost in performance – are going to be very disappointed. I don’t mean to trash the entry-level machine. It’s got solid specs, but in that configuration it isn’t the best performer. At $6K, you are buying a machine that will have longevity and which can be upgraded in the future. In short, the system can grow with you over time as the workload demands increase. That’s something which has not been available to Mac owners since the end of 2012.

Software

To take full advantage of this new machine’s capabilities, software developers (of both applications and plug-ins) will have to update their code. All of the major brands like Adobe, Avid, Blackmagic Design, and others seem to be on board with this. Obviously, so are the in-house developers at Apple who create the Pro Applications. Final Cut Pro X and Logic Pro X are obvious examples. Logic is increasing the track count and number of software instruments you can run. Updates have already been released.

Final Cut Pro X has a number of areas that appear to be in need of change. Up until now, in spite of being based around Metal, Final Cut has not taken advantage of multiple GPUs when present. If you add an eGPU to a Mac today, you must toggle a preference setting to use one GPU or the other as the primary GPU (Mojave). Judging by Activity Monitor, it appears to be an either-or thing, which means the other GPU is loafing. Clearly, when you have four GPUs present, you will want to tap into the combined power of all four.

With the addition of the Afterburner option, FCPX (or any other NLE) has to know that the card is present and how to offload media to the card during playback (and render?). Finally, the color pipeline in Final Cut Pro X is being updated to work in 16-bit float math, as well as optimized for fast 8K workflows.

All of this requires new code and development work. With the industry now talking about 16K video, is 8K enough? Today, 4K delivery is still years away for many editors, so 8K is yet that much further. I suspect that if and when 16K gets serious traction, Apple will be ready with appropriate hardware and software technology. In the case of the new Mac Pro, this could simply mean a new Afterburner card instead of an entirely new computer.

The Apple Pro Display XDR

In tandem with the 2019 Mac Pro, Apple has also revealed the new Pro Display XDR – a 6K 32″ Retina display. It uses a similar design aesthetic to the Mac Pro, complete with a matching ventilation lattice. This display comes calibrated and is designed for HDR, with 1,000 nits of sustained, full-screen brightness and a 1,600-nit peak. It will be interesting to see how this actually looks. Recent Final Cut Pro X updates have added HDR capabilities, but you can never get an accurate view of it on a UI display. Furthermore, the 500 nit, P3 displays used in the iMac Pros are some of the least color-accurate UI displays of any Mac that I work with. I really hope Apple gets this one right.

To sell the industry on this display, Apple is making the cost and feature comparison between this new display and actual HDR color reference displays costing in the $30K-40K range. Think Flanders Scientific or Sony. The dirty little HDR secret is that when you display an image at the maximum nit level across the entire screen, the display will dim in order to prevent damage. Only the most expensive displays are more tolerant of this. I would presume that the Pro Display XDR will also dim when presented with a fullscreen image of 1,600 nits, which is why their spec lists 1,000 nits fullscreen. That level is the minimum HDR spec. Of course, if you are grading real world images properly, then in my opinion, you rarely should have important picture elements at such high levels. Most of the image should be in a very similar range to SDR, with the extended range used to preserve highlight information, like a bright sky.

Some colorists are challenging the physics behind some of Apple’s claims. The concern is whether or not the display will result in bloomed highlights. Apple’s own marketing video points out that the design reduces blooming, but it doesn’t say that it completely eliminates it. We’ll see. I don’t quite see how this display fits as a reference display. It only has Thunderbolt connections – no SDI or HDMI – so it won’t connect in most standard color correction facilities without additional hardware. If, like all computer displays, the user can adjust the brightness, then that goes against the concept of an HDR reference display. At 32″, it’s much too small to be used as a client display to stick on the wall.

Why did Apple choose to introduce this as a user interface display? If they had wanted to make a great HDR reference display, that would make some sense. The same goes for a great specialty display, like those often found in photography or fine print work. I understand that it will likely display accurate, fullscreen video directly from Final Cut Pro X or maybe even Premiere Pro without the need and added cost of an AJA or BMD i/o device or card. But as a general purpose computer display? That feels like it simply misses the mark, no matter how good it is. Not to mention that a brightness level of 1,000 to 1,600 nits is way too bright for most edit suites. I even find that to be the case with the iMac Pro’s 500 nit displays when you crank them up.

This display is listed as $5K without a stand. Add another $1K if you want a matte finish. Oh, and if you want the stand, add another $1K! I don’t care how seductively Jony Ive pronounces “all-u-minium,” that’s taxing the goodwill of your customer. Heck, make it $5,500 and toss in the stand at cost. Remember, the stand has an articulating arm, which will probably lose its tension in a few years. I hope that a number of companies will make high-quality knock-offs for a couple of hundred bucks.

If you compare the Apple Pro Display XDR to another UI display with a similar mission, then it’s worth comparing it to the HP Dreamcolor Z31x Studio Display. This is a 32″ 4K, calibrated display with an MSRP of right at $3,200. But it doesn’t offer HDR specs, Retina density, or 6K resolution. Factor in those features and Apple’s brand premium and then the entry price isn’t that far out of line – except for that stand.

I imagine that Apple’s thought process is that if you don’t want to buy this display, then there are plenty of cheaper choices, like an LG, HP, Asus, or Dell. And speaking of LG, where’s Apple’s innovative spirit to try something different with a UI display? Maybe something like an ultra wide. LG now has a high-resolution 49″ display for about $1,400. This size enables one large canvas across the width; or two views, like having two displays side-by-side. However, maybe a high-density display (Retina) isn’t possible with such a design, which could be Apple’s hang-up.

Final thoughts

The new 2019 Mac Pro clearly demonstrates that Apple has not left the high-end user behind. I view relevant technology through the lens of my needs with video; however, this model will appeal to a wide range of design, scientific, and engineering users. It’s a big world out there. While it may not be the most cost-effective choice for the individual owner/editor, there are still plenty of editors, production companies, and facilities that will buy one.

There is a large gap between the Mac mini and this new Mac Pro. I still believe there’s a market for a machine similar to some of those concept designs for a Mac Pro. Or maybe a smaller version of this machine that starts at $3,000. But there isn’t such a model from Apple. If you like the 2013 “trash can” Mac Pro, then you can still get it – at least until the 2019 model is officially released. Naturally, iMacs and iMac Pros have been a superb option for that in-between user and will continue to be so.

If you are in the market for the 2019 Mac Pro, then don’t cut yourself short. Think of it as an investment for at least 10 years. Unless money is tight and you can only afford the base model, I would recommend budgeting in the $10K range. I don’t have an exact configuration in mind, but that will likely be a sweet spot for demanding work. Once I get a chance to properly review the 2019 Mac Pro, I’ll be more than happy to come back with a real evaluation.

©2019 Oliver Peters

Good Omens

Fans of British television comedies have a new treat in Amazon Prime’s Good Omens. The six-part mini-series is a co-production of BBC Studios and Amazon Studios. It is the screen adaptation of the 1990 hit novel by the late Terry Pratchett and Neil Gaiman, entitled Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch. Just imagine if the Book of Revelation had been written by Edgar Wright or the Coen brothers. Toss in a bit of The Witches of Eastwick and I think you’ll get the picture.

The series stars Michael Sheen (Masters of Sex, The Good Fight) as Aziraphale (an angel) and David Tennant (Mary Queen of Scots, Doctor Who) as Crowley (a demon). Although on opposing sides, the two have developed a close friendship going back to the beginning of humanity. Now it’s time for the Antichrist to arrive and bring about Armageddon. Except that the two have grown fond of humans and their life on Earth, so Crowley and Aziraphale aren’t quite ready to see it all end. They form an unlikely alliance to thwart the End Times. Naturally this gets off to a bad start, when the Antichrist child is mixed up at birth and ends up misplaced with the wrong family. The series also stars an eclectic supporting cast, including Jon Hamm (Baby Driver, Mad Men), Michael McKean (Veep, Better Call Saul), and Frances McDormand (Hail, Caesar!, Fargo) as the voice of God.

Neil Gaiman (Lucifer, American Gods) was able to shepherd the production from novel to the screen by adapting the screenplay and serving as show runner. Douglas Mackinnon (Doctor Who, Sherlock) directed all six episodes. I recently had a chance to speak with Will Oswald (Doctor Who, Torchwood: Children of Earth, Sherlock) and Emma Oxley (Lair, Happy Valley), the two editors who brought the production over the finish line.

_____________________________________________________

[OP] Please tell me a bit about your editing backgrounds and how you landed this project.

[Will] I was the lead editor for Doctor Who for a while and got along well with the people. This led to Sherlock. Douglas had worked on both and gave me a call when this came up.

[Emma] I’ve been mainly editing thrillers and procedurals and was looking for a completely different script, and out of the blue I received a call from Douglas. I had worked with him as an assistant editor in 2007 on an adaptation of the Jekyll and Hyde story and I was fortunate that a couple of Douglas’s main editors were not available for Good Omens. When I read the script I thought this is a dream come true.

[OP] Had either of you read the book before?

[Will] I hadn’t, but when I got the gig, I immediately read the book. It was great, because this is a drama-comedy. How good a job is that? You are doing everything you like. It’s a bit tricky, but it’s a great atmosphere to work in.

[Emma] I was the same, but within a week I had read it. Then the scripts came through and they were pretty much word for word – you don’t expect that. But since it was six hours instead of feature length the book could remain intact.

[OP] I know that episodic series often divide up the editorial workload in many different ways. Who worked on which episode and how was that decided?

[Will] Douglas decided that I would do the first three episodes and Emma would edit the last three. The series happened to split very neatly in the middle. The first three episodes really set up the back story and the relationship between the characters and then the story shifts tone in the last three episodes.

[Emma] Normally in TV the editors would leapfrog each other. In this case, as Will said, the story split nicely into two, three-hour sections. It was a nice experience not to have to jump backwards and forwards.

[Will] The difficult thing for me in the first half is that the timeline is so complicated. In the first three episodes you have to develop the back story, which in this case goes back and forth through the centuries – literally back to the beginning of time. You also have to establish the characters’ relationship to each other. By the end of episode three, they really start falling apart, even though they do really like each other. It’s a bit like Butch Cassidy and the Sundance Kid. Of course, Emma then had to resolve all the conflicts in her episodes. But it was nice to go rocking along from one episode to the next.

[OP] What was the post-production schedule like?

[Emma] Well, we didn’t really have a schedule. That’s why it worked! (laugh) Will and I were on it from the very start and once we decided to split up the edit as two blocks of three episodes, there were days when I wouldn’t get any rushes, so I could focus on getting a cut done and vice versa with Will. When Douglas came in, we had six pretty good episodes that were cut according to the script. Douglas said he wanted to treat it like a six hour film, so we did a full pass on all six episodes before Neil came in and then finally the execs. They allowed us the creative freedom to do that.

[Will] When Douglas came back, we basically had a seven and a half hour movie, which we ran in a cinema on a big screen. Then we went through and made adjustments in order. It was the first time I’ve had both the show runner and the director in with me every day. Neil had promised Terry that he would make sure it happened. Terry passed away before the production, but he had told Neil – and I’m paraphrasing here – don’t mess it up! So this was a very personal project for him. That weighed heavily on me, because when I reread the book, I wanted to make sure ‘this’ was in and ‘that’ was in as I did my cut.

[OP] What sort of changes were made as you were refining the episodes?

[Will] There were a lot of structural changes in episodes one and two that differed a lot from the script. It was a matter of working out how best to tell the story. Episode one was initially 80 minutes long. There was quite a lot of work to get down to the hourlong final version. Episode three was much easier. 

[Emma] By the time it got to episode four, the pattern had been established, so we had to deal more with visual effects challenges in the second half. We had a number of large set pieces and a limited visual effects budget. So we had to be clever about using visual effects moments without losing the impact, but still maximizing the effects we did have. And at the same time keeping it as good as we could. For example, there’s a flying saucer scene, but the plate shot didn’t match the saucer shot and it was going to take a ton of work to match everything. So we combined it with a shot intended for another part of the scene. Instead of a full screen effects shot, it’s seen through a car window. Not only did it save a lot of money, but more importantly, it ended up being a better way for the ship to land and more in the realm of Good Omens storytelling. I love that shot.

[Will] Visual effects are just storytelling points. You want to be careful not to lose the plot. For example, the Hellhound changes into a puppy dog and that transformation was originally intended to be a big visual effect. But instead, we went with a more classic approach. Just a simple cut and the camera tilts down to reveal the smaller dog. It turned out to be a much better way of doing it and makes me laugh every time I see it.

[OP] I noticed a lot of music from Queen used throughout. Any special arrangement to secure that for the series?

[Will] Queen is in the book. Every time Crowley hears music, even if it’s Mozart, it turns into Queen. Fortunately Neil knows everybody!

[Emma] And it’s one of Douglas’ favorite bands of all time, so it was a treat for him to put as much Queen music in as possible. At one point we had it over many more moments.

[Will] Also working with David Arnold [series composer] was great. There’s a lot of his music as well and he really understands what we do in editing.

[OP] Since this was a large effort with a lot of complex work involved, did you have a large team of assistant editors on the job with you?

[Emma] This is the UK. We don’t have a huge team! (laugh)

[Will] We had one assistant, Cat Gregory, and then much later on, a couple more for visual effects.

[Emma] They were great. Cat, our first assistant, had an adjoining room to us and she was our ‘take barometer.’ If you put in an alt line and she didn’t laugh, you knew it wasn’t as good. But if there was a chuckle coming out of her room, it would more often stay.

[OP] How do you work with your assistants? For example, do you let assistants assemble selects, or cut in sound effects or music?

[Will] It was such a heavy schedule with a huge amount of material, so there was a lot of work just to get that in and organized. Just giving us an honest opinion was invaluable. But music and sound effects – you really have to do that yourself.

[Emma] Me, too. I cut my own music and assemble my own rushes.

[OP] Please tell me a bit about your editorial set-up and editing styles.

[Will] We were spread over two or four upstairs/downstairs rooms at the production company’s office in Soho. These were Avid Media Composer systems with shared storage. We didn’t have the ScriptSync option. We didn’t even have Sapphire plug-ins until late in the day, although that might have been nice with some of the bigger scenes with a lot of explosions. I don’t really have an editing style, I think it’s important not to have one as an editor. Style comes out of the content. I think the biggest challenge on this show was how do you get the English humor across to an American audience.

[Emma] I wouldn’t say I have an editing style either. I come in, read the notes, and then watch the rushes with that information in my head. There wasn’t a lot of wild variation in the takes and David’s and Michael’s performances were just dreamy. So the material kind of cut itself.

[Will] The most important thing is to familiarize yourself with the material and review the selected takes. Those are the ones the director wanted. That also gives you a fixed point to start from. The great thing about software these days is that you can have multiple versions.

[OP] I know some directors like to calibrate their actors’ performances, with each take getting more extreme in emotion. Others like to have each take be very different from the one before it. What was Mackinnon’s style on this show as a director?

[Emma] In the beginning you always want to figure out what they are thinking. With Douglas it’s easy to see from the material he gives you. He’s got it all planned. He really gets the performance down to a tee in the rehearsal.

[Will] Douglas doesn’t push for a wide range in the emotion from one take to the next. As Emma mentioned, Douglas works through that in rehearsal. Someone like David and Michael work that out, too, and they’re bouncing off each other. Douglas has a fantastic visual sense. You can look at the six episodes and go, “Wow, how did you get all of that in?” It’s a lot of material and he found a way to tell that story. There’s a very natural flow to the structure.

[OP] Since both Douglas Mackinnon and Will worked on Doctor Who, and David Tennant was one of the Doctors during the series, was there a conscious effort to stay away from anything that smacked of Doctor Who in Good Omens?

[Will] It never crossed my mind. I always try to do something different, but as I said, the style comes out of the material. It has jeopardy and humor like Doctor Who, but it’s really quite different. I did 32 episodes of Doctor Who and each of those was very different from the other. David Tennant is in it, of course, but he is not even remotely playing the Doctor. Crowley is a fantastic new character for him.

[OP] Are there any final thoughts you’d like to share about working on Good Omens?

[Will] It was a pleasure to work on a world-famous book and it is very funny. To do it justice was really all we were doing. I was going back every night and reading the book marking up things. Hopefully the fans like it. I know Neil does and I hope Terry is watching it.

[Emma] I’m just proud that the fans of the book are saying that it’s one of the best adaptations they’ve ever watched on the screen. That’s a success story and it gives me a warm feeling when I think about Good Omens. I’d go back and cut it again, which I rarely say about any other job.

©2019 Oliver Peters

Did you pick the right camera? Part 3

Let me wrap up this three-parter with some thoughts on the media side of cameras. The switch from videotape recording to file-based recording has added complexity with not only specific file formats and codecs, but also the wrapper and container structure of the files themselves. The earliest file-based camera systems from Sony and Panasonic created a folder structure on their media cards that allowed for audio and video, clip metadata, proxies, thumbnails, and more. FAT32 formatting was adopted, so a 4GB file limit was imposed, which added the need for clip-spanning any time a recording exceeded 4GB in size.

As a result, these media cards contain a complex hierarchy of spanned files, folders, and subfolders. They often require a special plug-in for each NLE to be able to automatically interpret the files as the appropriate format of media. Some of these are automatically included with the NLE installation while others require the user to manually download and install the camera manufacturer’s software.

This became even more complicated with RED cameras, which added additional QuickTime reference files at three resolutions, so that standard media players could be used to read the REDCODE RAW files. It got even worse when digital still photo cameras added video recording capabilities, thus creating two different sets of folder paths on the card for the video and the still media. Naturally, none of these manufacturers adopted the same architecture, leaving users with a veritable Christmas tree of discovery every time they popped in one of these cards to copy/ingest/import media.

At the risk of sounding like a broken record, I am totally a fan of ARRI’s approach with the Alexa camera platform. By adopting QuickTime wrappers and the ProRes codec family (or optionally DNxHD as MXF OP1a media), Alexa recordings use a simple folder structure containing a set of uniquely-named files. These movie files include interleaved audio, video, and timecode data without the need for subfolders, sidecar files, and other extraneous information. AJA has adopted a similar approach with its KiPro products. From an editor’s point of view, I would much rather be handed Alexa or KiPro media files than any other camera product, simply because these are the most straightforward to deal with in post.

I should point out that in a small percentage of productions, the incorporated metadata does have value. That’s often the case when high-end VFX are involved and information like lens data can be critical. However, in some camera systems, this is only tracked when doing camera raw recordings. Another instance is with GoPro 360-degree recordings. The front and back files and associated data files need to stay intact so that GoPro’s stitching software can properly combine the two halves into a single movie.

You can still get the benefit of the simpler Alexa-style workflow in post with other cameras if you do a bit of media management of files prior to ingesting these for the edit. My typical routine for the various Panasonic, Canon, Sony, and prosumer cameras is to rip all of the media files out of their various Clip or Private folders and move them to the root folder (usually labelled by camera roll or date). I trash all of those extra folders, because none of it is useful. (RED and GoPro 360 are the only formats to which I don’t do this.) When it’s a camera that doesn’t generate unique file names, I’ll run a batch renaming application to give every clip a unique name. There are a few formats (generally drones, ‘action’ cameras, smart phones, and image sequences) that I will transcode to some flavor of ProRes. Once I’ve done this, the edit and the rest of post becomes smooth sailing.
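
For what it’s worth, that flattening-and-renaming routine is easy to script. Below is a minimal Python sketch of the idea, not a recipe for any particular camera: the extensions, folder paths, and naming pattern are assumptions you would adjust, and it copies rather than moves files so the original card contents stay untouched.

```python
#!/usr/bin/env python3
"""Minimal sketch: flatten a camera card's Clip/Private folder maze into one
roll folder and give every clip a unique name. Folder paths, extensions, and
the naming pattern are illustrative assumptions - adjust for your cameras.
Copies (rather than moves) so the original card structure stays intact."""

import shutil
from pathlib import Path

VIDEO_EXTS = {".mov", ".mp4", ".mxf"}   # assumed media types of interest

def flatten_card(card_root: str, dest_root: str, roll_name: str) -> None:
    card = Path(card_root)
    dest = Path(dest_root) / roll_name
    dest.mkdir(parents=True, exist_ok=True)

    clips = sorted(p for p in card.rglob("*")
                   if p.is_file() and p.suffix.lower() in VIDEO_EXTS)

    for index, clip in enumerate(clips, start=1):
        # Prefix with the roll name and a counter so names stay unique
        # even when the camera recycles file names (e.g. C0001.MP4).
        new_name = f"{roll_name}_{index:04d}_{clip.name}"
        shutil.copy2(clip, dest / new_name)
        print(f"{clip} -> {dest / new_name}")

if __name__ == "__main__":
    # Example usage - hypothetical paths.
    flatten_card("/Volumes/CARD_A", "/Volumes/Media/Ingest", "A001_20190615")
```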

While part of your camera buying decision should be based on its impact on post, don’t let that be a showstopper. You just have to know how to handle it and allow for the necessary prep time before starting the edit.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.
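
To put a number on that photosite point, here’s a tiny, deliberately simplified numpy sketch of an RGGB Bayer mosaic. It ignores optical low-pass filtering and real demosaic algorithms; it simply shows that the sensor records one color sample per photosite, so two-thirds of the RGB data at every output pixel has to be interpolated.

```python
import numpy as np

# Toy 4x4 RGGB Bayer mosaic: each photosite records ONE color sample.
# A simplification for illustration only - real sensors add an optical
# low-pass filter and cameras use far more sophisticated demosaicing.

h, w = 4, 4
scene = np.random.rand(h, w, 3)          # "true" RGB light hitting the sensor

mosaic = np.zeros((h, w))                # what the sensor actually records
mosaic[0::2, 0::2] = scene[0::2, 0::2, 0]   # R photosites
mosaic[0::2, 1::2] = scene[0::2, 1::2, 1]   # G photosites
mosaic[1::2, 0::2] = scene[1::2, 0::2, 1]   # G photosites
mosaic[1::2, 1::2] = scene[1::2, 1::2, 2]   # B photosites

samples_recorded = mosaic.size            # 16 single-color samples
samples_needed = scene.size               # 48 values for a full-RGB image
print(f"Recorded {samples_recorded} samples, need {samples_needed};")
print("the rest is interpolated (demosaiced), so photosites != RGB pixels.")
```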

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to capture unrecoverable highlights in your recorded image. Or, in some cases, the highlights aren’t digitally clipped, but rather there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you can have good iso/temp/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the iso and/or exposure settings. There’s no free lunch and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate.  However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been Cinema DNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras had used that until replaced by Blackmagic RAW.  Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that setting decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters

Did you pick the right camera? Part 1

There are tons of great cameras and lenses on the market. While I am not a camera operator, I have been a videographer on some shoots in the past. Relevant production and camera logistical issues are not foreign to me. However, my main concern in evaluating cameras is how they impact me in post – workflow, editing, and color correction. First – biases on the table. Let me say from the start that I have had the good fortune to work on many productions shot with ARRI Alexas and that is my favorite camera system in regards to the three concerns offered in the introductory post. I love the image, adopting ProRes for recording was a brilliant move, and the workflow couldn’t be easier. But I also recognize that ARRI makes an expensive albeit robust product. It’s not for everyone. Let’s explore.

More camera choices – more considerations

If you are going to only shoot with a single camera system, then that simplifies the equation. As an editor, I long for the days when directors would only shoot single-camera. Productions were more organized and there was less footage to wade through. And most of that footage was useful – not cutting room fodder. But cameras have become cheaper and production timetables condensed, so I get it that having more than one angle for every recording can make up for this. What you will often see is one expensive ‘hero’ camera as the A-camera for a shoot and then cheaper/lighter/smaller cameras as the B and C-cameras. That can work, but the success comes down to the ingredients that the chef puts into the stew. Some cameras go well together and others don’t. That’s because all cameras use different color science.

Lenses are often forgotten in this discussion. If the various cameras being used don’t have a matched set of lenses, the images from even the exact same model cameras – set to the same settings – will not match perfectly. That’s because lenses have coloration to them, which will affect the recorded image. This is even more extreme with re-housed vintage glass. As we move into the era of HDR, it should be noted that various lens specialists are warning that images made with vintage glass – and which look great in SDR – might not deliver predictable results when that same recording is graded for HDR.

Find the right pairing

If you want the best match, use identical camera models and matched glass. But, that’s not practical or affordable for every company nor every production. The next best thing is to stay within the same brand. For example, Canon is a favorite among documentary producers. Projects using cameras from the EOS Cinema line (C300, C300 MkII, C500, C700) will end up with looks that match better in post between cameras. Generally the same holds true for Sony or Panasonic.

It’s when you start going between brands that matching looks becomes harder, because each manufacturer uses their own ‘secret sauce’ for color science. I’m currently color grading travelogue episodes recorded in Cuba with a mix of cameras. A and B-cameras were ARRI Alexa Minis, while the C and D-cameras were Panasonic EVA1s. A Panasonic GH5, a Sony A7SII, and various drone cameras were also used. Panasonic appears to use a similar color science to ARRI’s, although their log color space is not as aggressive (flat). With all cameras set to shoot with a log profile and the appropriate Rec 709 LUT applied to each in post (LogC and V-Log respectively), I was able to get a decent match between the ARRI and Panasonic cameras, including the GH5. Not so close with the Sony or drone cameras, however.

Likewise, I’ve graded a lot of Canon C300 MkII/C500 footage and it looks great. However, trying to match Canon to ARRI shots just doesn’t come out right. There is too much difference in how blues are rendered.

The hardest matches are when professional production cameras are married with prosumer DSLRs, such as a Sony FS5 and a Fujifilm camera. Not even close. And smartphone cameras – yikes! But as I said above, the GH5 does seem to provide passable results when used with other Panasonic cameras and, in our case, the ARRIs. However, my experience there is limited, so I wouldn’t guarantee that in every case.

Unfortunately, there’s no way to really know when different brands will or won’t create a compatible A/B-camera combination until you start a production. Or rather, when you start color correcting the final. Then it’s too late. If you have the luxury of renting or borrowing cameras and doing a test first, that’s the best course of action. But as always, try to get the best you can afford. It may be better to get a more advanced camera, but only one. Then restructure your production to work with a single-camera methodology. At least then, all of your footage should be consistent.

Click here for the Introduction.

Click here for Part 2.

©2019 Oliver Peters

Did you pick the right camera? Intro

My first facility job after college at a hybrid production/post company included more than just editing. Our largest production effort was to produce, post, and dub weekly price-and-item retail TV commercials for a large, regional grocery chain. This included two to three days a week of studio production for product photography (product displays, as well as prepared food shots).

Early on, part of my shift included being the video shader for the studio camera being used. The video shader in a TV station operation is the engineering operator who makes sure the cameras are set up and adjusts video levels during the actual production. However, in our operation (as would be the case in any teleproduction facility of that time) this was a more creative role – more akin to a modern DIT (digital imaging technician) than a video engineer. It didn’t involve simply adjusting levels, but also ‘painting’ the image to get the best-looking product shots on screen. Under the direction of the agency producer and our lighting DP/camera operator, I would use the camera’s RGB color balance controls, along with a built-in 6-way secondary color correction circuit, to make each shot look as stylistic – and the food as appetizing – as possible. Then I rolled tape and recorded the shot.

This was the mid-1970s, when RCA dominated the broadcast camera market. Production and gear options were either NTSC, PAL, or film. We owned an RCA TK-45 studio camera and a TKP-45 ‘portable’ camera that was tethered to a motor home/mobile unit. This early RCA color correction system of RGB balance/level controls for lift/gamma/gain ranges, coupled with a 6-way secondary color correction circuit (sat/hue trim pots for RGBCMY), was used in RCA cameras and telecines. It became the basis for nearly all post-production color correction technology to follow. I still apply those early fundamentals that I learned back then in my work today as a colorist.
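
Those lift/gamma/gain fundamentals map almost directly onto modern grading math. As a rough illustration (using the ASC CDL’s slope/offset/power form, a standardized descendant of those controls, not a model of the RCA circuit itself), a per-channel primary correction boils down to something like this:

```python
import numpy as np

def primary_correct(rgb, slope, offset, power):
    """ASC CDL-style primary grade: out = clamp(in * slope + offset) ** power.
    Slope/offset/power roughly correspond to gain/lift/gamma per channel.
    Illustrative only - not a model of the RCA hardware described above."""
    rgb = np.asarray(rgb, dtype=float)
    graded = rgb * slope + offset
    graded = np.clip(graded, 0.0, 1.0)      # keep values in a legal 0-1 range
    return graded ** power

# Example: warm up a mid-gray pixel by raising red gain and trimming blue.
pixel = [0.5, 0.5, 0.5]
print(primary_correct(pixel,
                      slope=np.array([1.10, 1.00, 0.95]),
                      offset=np.array([0.00, 0.00, 0.00]),
                      power=np.array([1.00, 1.00, 1.05])))
```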

Options = Complexity

In the intervening decades, the sheer number of camera vendors has blossomed far beyond RCA, Philips, and the few other companies of the 1970s. Naturally, we are well past the simple concerns of NTSC or PAL; and film-based production is an oddity, not the norm. This has introduced a number of challenges:

1. More and cheaper options mean that productions using multiple cameras are a given.

2. Camera raw and log recording, along with modern color correction methods, give you seemingly infinite possibilities – often making it even harder to dial in the right look.

3. There is no agreement on file format/container standards, so file-based recording adds workflow complexity that never existed in the past.

In the next three blog posts, I will explore each of these items in greater depth.

©2019 Oliver Peters

Minimalism versus Complexity in Post

The prevailing wisdom is that Apple might preview the next Mac Pro at its annual WWDC event coming in a couple of weeks. Then the real product would likely be available by the end of the year. It will be interesting to see what that brings, given that the current Mac Pro was released in 2013 with no refreshes in between. And older Mac Pro towers (mid-2009-2012) are still competitive (with upgrades) against the current run of Apple’s Mac product line.

Many professional users are hoping for a user-upgradeable/expandable machine, like the older towers. But that hasn’t been Apple’s design and engineering trend. MacBooks, MacBook Pros, iMacs, and iMac Pros are more sealed and non-upgradeable than their predecessors. The eGPU and eGPU Pro units manufactured by Blackmagic Design are, in fact, largely an Apple design with Apple engineering specifications intended to meet power, noise and heat parameters. As such, you can’t simply pop in a newer, faster GPU chip, as you can with GPU cards and the Sonnet eGPU devices.

What do we really need?

Setting emotions aside, the real question is whether such expandability is needed any longer. Over the years, I’ve designed, built, and worked in a number of linear edit suites, mixing rooms, and other environments that required a ton of outboard gear. The earliest nonlinear suites (even up until recently) were hardware-intensive. But is any of this needed any longer? My own home rig had been based on a mid-2009 Mac Pro tower. Over the years, I’ve increased RAM, swapped out three GPU cards, changed the stock hard drives for two SSDs and two 7200 RPM media drives (RAID-0), as well as added PCIe cards for eSATA/USB3 and Blackmagic Design monitor display. While each of those moves was justified at the time, I do have to wonder whether that money would have been better spent on newer computer models.

Today that same Mac Pro sits turned off next to my desk. While still current with most of the apps and the OS (not Mojave, though), it can’t accept Thunderbolt peripherals, and a few apps, like Pixelmator Pro, won’t install because they require Metal 2 (only available with newer hardware). So my home suite has shifted to a mid-2014 MacBook Pro. In doing so, I have adopted the outboard modular solution over the cards-in-the-tower approach. This is largely possible today because small, compact computers – such as laptops – have become sufficiently powerful to deal with today’s video needs.

I like this solution because I can easily shift from location to home by simply plugging in one Thunderbolt cable linked to my OWC dock. The dock connects my audio interface, a few drives, and my primary 27″ Dell display. An additional plus is that I no longer have to sync my personal files and applications between my two machines (I prefer to avoid cloud services for personal documents). I bought a Rain Design laptop stand and a TwelveSouth BookArc, so that under normal use (with one display), the MBP tucks behind the Dell in clamshell mode sitting in the BookArc cradle. When I need a dual-display configuration, I simply bring out the Rain stand and open up the MBP next to the Dell.

Admittedly, this solution isn’t for everyone. If I never needed a mobile machine, I certainly wouldn’t buy a laptop. And if I needed heavy horsepower at home, such as for intensive After Effects work or grading 4K and 8K feature films, then I would probably go for a tower – maybe even one of the Puget Systems PCs that I reviewed. But most of what I do at home is standard editing with some grading, which nearly any machine can handle these days.

Frankly, if I were to start from scratch today, instead of the laptop, tower, and an iPad, I would be tempted to go with a fully-loaded 13″ MacBook Pro. For home, add the eGPU Pro, an LG 5K display, dock, audio i/o and speakers, and drives as needed. This makes for a lighter, yet capable editor in the field. When you get home, one Thunderbolt 3 cable from the eGPU Pro into the laptop would connect the whole system, including power to the MBP.

Of course, I like simple and sleek designs – Frank Lloyd Wright, Bauhaus, Dieter Rams, Scandinavian furniture, and so on. So the Jobs/Ive approach to industrial design does appeal to me. Fortunately, for the most part, my experience with Apple products has been a positive one. However, it’s often hard to make that work in a commercial post facility. After all, that’s where horsepower is needed. But does that necessarily mean lots of gear attached to our computers?

How does this apply to a post facility?

At the day job, I usually work in a suite with a 2013 Mac Pro. Since I do a lot of the Resolve work, along with editing, that Mac Pro cables up to two computer displays plus two grading displays (calibrated and client), a CalDigit dock, a Sonnet 10GigE adapter, a Promise RAID, a TimeMachine drive, the 1GigE house internet, and an audio interface. Needless to say, the intended simplicity of the Mac Pro design has resulted in a lot of spaghetti hanging off of the back. Clearly the wrong design for this type of installation.

Conversely, the same Mac Pro, in a mixing room might be a better fit – audio interface, video display, Thunderbolt RAID. Much less spaghetti. Our other edit stations are based around iMacs/iMac Pros with few additional peripherals. Since our clients do nearly all of their review-and-approval online, the need for a large, client-friendly suite has been eliminated. One room is all we need for that, along with giving the rest of the editors a good working environment.

Even the Mac Pro room could be simplified, if it weren’t for the need to run Resolve and Media Composer on occasion. For example, Premiere Pro and Final Cut Pro X both send real video to an externally connected desktop display. If you have a reasonably accurate display, like a high-end consumer LED or OLED flat panel, then all editing and even some grading and graphic design can be handled without an additional, professional video display and hardware interface. Any room configured this way can easily be augmented with a roving 17″-34″ calibrated display and a mini-monitor device (AJA or BMD) for those ad hoc needs, like more intense grading sessions.

An interesting approach has been discussed by British editor Thomas Grove Carter, who cuts at London’s Trim, a commercial editorial shop. Since they are primarily doing the creative edit and not the finishing work, the suites can be simplified. For the most part, they only need to work with proxy or lighter-weight ProRes files. Thus, there are no heavy media demands, as there would be with camera raw or DPX image sequences. As he has discussed in interviews and podcasts (generally related to his use of Final Cut Pro X), Trim has been able to design edit rooms with a light hardware footprint. Often Trim’s editors are called upon to start editing on-site and then move back to Trim to continue the edit. So mobility is essential, which means the editors are often cutting with laptops. Moving from location or home to an edit suite at Trim is as simple as hooking up the laptop to a few cables: a large display for interface or video, plus fast, portable SSDs with all of the project’s media.

An installation built with this philosophy in mind can be further simplified through the use of a shared storage solution. Unlike in the past, when shared storage systems were complex, hard to install, and confusing to manage – today’s systems are designed with average users in mind. If you are moderately tech savvy, you can get a 10GigE system up and running without the need for an IT staff.

At the day-job shop, we are running two systems – QNAP and LumaForge Jellyfish Rack. We use both for different reasons, but either system by itself is good for nearly any installation – especially Premiere Pro shops. If you are principally an FCPX shop, then Jellyfish will be the better option for you. A single ethernet cable to each workstation from a central server ‘closet’ is all that’s required for a massive amount of media storage available to every editor. No more shuffling hard drives, except to load location footage. Remember that shared storage allows for a distributed workflow. You can set up a simple Mac mini bay for assistant editors and general media management without the need to commandeer an edit suite for basic tasks.

You don’t have to look far to see that the assumptions of the past few decades in computer development and post-production facility design aren’t entirely valid any longer. Client interactions have changed and computer capabilities have improved. All of the extra add-ons and doodads we thought we had to have are no longer essential, and they’re no longer the driver for the way in which computers have to be built today.

©2019 Oliver Peters