Frame.io Brings FiLMiC Pro to the Cloud

It’s not news that Frame.io has been pioneering camera-to-cloud (C2C) workflows. However, one of the newsworthy integrations announced last week was the addition of C2C capabilities for iPhone and Android users with the FiLMiC Pro camera application. The update will have already appeared in your FiLMiC Pro settings if you’ve kept the app current. Frame’s C2C feature requires FiLMiC’s Cinematographer Kit (an in-app purchase) and a Frame.io Pro or Adobe Creative Cloud account.

Professional filming with iPhones has become common in many market sectors for both primary and secondary videography. The FiLMiC Pro/C2C workflow can prove worthwhile when fast turnaround and remote access become factors in your production.

Understanding the FiLMiC Pro C2C integration

FiLMiC Pro’s C2C integration is a little different from Frame’s other camera-to-cloud workflows, which are tied to Teradek devices. In those situations, the live video stream from the camera is simultaneously encoded into a low-res proxy file by the Teradek device. High-res OCF (original camera files) media is stored on the camera card and the proxies on the Teradek. The proxies are uploaded to Frame. There is some latency in triggering the proxy generation, so start and end times do not match perfectly between the OCF media and the proxies. Accurate relinks between file versions are made possible by common timecode.

An Android phone or iPhone does not require any extra hardware to handle the proxy creation or uploading. FiLMiC Pro encodes the proxy file after the recording is stopped, not simultaneously. Both high and low-res files are stored on the phone within FiLMiC’s clip library and have identical lengths and start/stop times. FiLMiC Pro won’t add a timecode track in this mode, so all files start from zero, albeit with unique file names. If you are shooting with multiple iPhones or double-system sound, then be sure to slate the clips so the editor can sync the files.
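Since there’s no common timecode, conforming these clips ultimately comes down to matching file names between the proxies and the originals. As a rough sketch (the folder names and “_proxy” suffix here are my own placeholders, not FiLMiC’s or Frame.io’s actual naming scheme), an automated relink pass might look something like this:

```python
# Hypothetical sketch: pair cloud proxies with the original camera files by
# shared base name, since there is no timecode to relink against in this mode.
# Folder names and the "_proxy" suffix are assumptions -- adjust to whatever
# your clips are actually called.
from pathlib import Path

def pair_clips(ocf_dir: str, proxy_dir: str) -> dict:
    """Map each proxy file to its matching high-res original by file stem."""
    originals = {p.stem: p for p in Path(ocf_dir).glob("*.mov")}
    pairs = {}
    for proxy in Path(proxy_dir).glob("*.mov"):
        stem = proxy.stem.removesuffix("_proxy")  # strip the assumed suffix
        if stem in originals:
            pairs[proxy.name] = originals[stem]
        else:
            print(f"No original found for {proxy.name} -- check slates/logs")
    return pairs

if __name__ == "__main__":
    for proxy_name, original in pair_clips("OCF", "FrameIO_Proxies").items():
        print(f"{proxy_name}  ->  {original}")
```

Anything that doesn’t pair up is exactly the kind of clip you’ll be glad was slated.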

Testing the workflow

I have an iPhone SE and the software, so it was time to run some workflow tests in a hypothetical scenario. Here’s the premise – three members of the production team (in reality, all me, of course), each located in a different city. The videographer is on site. The producer is in another city, and he’s going to turn around a quick cut for story content. The editor is in yet a third location, and he’ll conform the high-res camera files, add effects, graphics, and color correction, and finish the mix to deliver the final product.

Click here to read the rest of the article at Pro Video Coalition

Click here for a more in-depth article about mobile filmmaking with FiLMiC Pro

©2022 Oliver Peters

Thoughts on Apple’s Spring Event

The Apple Event has come and gone and proceeded largely as expected. Yes, AirTags, Apple TV improvements, and a purple iPhone 12. All ho-hum for me, but then I’m probably not the target market for those. More importantly, which new features or products benefit content creators?

24” M1 iMac

We now have the latest machine in the transition to Apple’s Arm-based SoC M1 processor – the new 24” M1 iMac. This is the same integrated chip used in the other M1 machines – the laptops and the Mac mini. Apple launched it with a bouquet of seven colors, harkening back to the original Bondi iMacs. Of course, the iMac itself is the natural descendant of the first Mac. Along with color options, there are color-coordinated accessories, including the mouse and a new round-edge keyboard with Touch ID.

This is the first in what is presumably a line of several new iMacs. It’s targeted at consumers; however, the M1 Macs have proven to be more than capable enough for editing, especially with Apple Final Cut Pro. This model has a 24” screen and is 11.5mm thick. That’s only slightly thicker than the 12.9” iPad Pro! It replaces the former 21.5” model. With slimmer bezels, you get 4.5K Retina resolution in roughly the same screen real estate as the older model. There’s an upgraded audio system (mics, speakers, and Dolby Atmos support) and a 1080p camera. According to Apple, the M1 iMac will also drive an external 6K Pro Display XDR.

I reviewed the M1 Mac mini, which has similar specs, so I would expect similar performance to that of the mini. There are two chip configurations (differing in GPU core count), with 8GB of RAM standard and 16GB as an option. Storage goes up to 2TB. Ports are slim, with only two Thunderbolt/USB 4 ports (USB-C style plugs) and, on some configurations, two additional USB 3 ports. USB-A ports are gone, so expect to buy adapters and/or a dock, especially if you use thumb drives or license dongles. One point to note is that this new iMac only supports 1GbE Ethernet, same as the M1 mini when it first launched. However, since then, Apple has quietly added a 10GbE option to the mini’s configure-to-order options.

An interesting design choice is the return of the puck-like power adapter. This enables a magnetic power plug, à la the older MagSafe plugs. Ethernet connects to the adapter, instead of the back of the iMac. Personally, I feel this is a poor design choice. I had to deal with those on Apple displays over the years and have been more than happy not to have them on the Intel iMacs. I’d rather see a slightly thicker iMac design without the adapter. That said, many do like this design, because the ultra-slim profile has a cool factor. I can also appreciate that Apple designers wanted to get rid of the bump at the back of the iMac. It just seems to me that there might have been a middle ground that didn’t require the puck and would be equally stunning. Either way, it’s certainly not a showstopper.

These new iMacs become available in the second half of May, but special configurations have not yet been listed on the pricing page. If predictions are correct, then later this year Apple will likely release a more powerful iMac, featuring the next iteration of the M-series chip. M1X, M2, some other name – who knows? Obviously this will include a larger screen (27”, 30”, 32”?), but given the addition of XDR technology to the iPad Pro, one now has to wonder whether such an iMac would also include an XDR screen. Since the iMac color scheme no longer includes space gray, will the more advanced iMac be offered in space gray? And would such a model be called iMac Pro again or maybe iMac XDR? All just speculation at this point.

iPad Pro

The iPad Pro’s A-series processor has now been upgraded to an M1 chip. Since these were already cousins, it’s not really clear to me what the performance difference will be. Apple speaks of huge gains in performance, but it’s not clear what those comparisons are based on. There are a number of enhancements, like 5G and the 12MP ultra wide camera, but I’ll focus on two production-related upgrades.

The iPad Pro does now support Thunderbolt connectivity, enabled by the M1. However, it may or may not work in the same way as connecting a drive to a Mac. But you will be able to connect it to an external display like the XDR. You can run some iPadOS apps on a Mac with Big Sur, but I doubt that you’ll be able to run a macOS app on an iPad Pro. That may be a limitation of the more stripped-down iPadOS. It could also be because of the different design choices that have to be made for touch versus keyboard/mouse interaction.

The big improvement is in the 12.9” iPad Pro, which gains a new Liquid Retina XDR display. It’s based on the same technology as the Pro Display XDR, which should result in stunning images on a tablet. It will offer 1,000 nits of full-screen brightness and 1,600 nits of peak brightness. This will be interesting for location productions if DPs adopt the iPad Pro as a device to proof shots and create looks.

A final point to note is that Apple has successfully introduced a processor architecture that scales from tablet to desktop with the same chip. Gone are the Intel Core i3 through i9 configurations. Obviously, more powerful Macs will require more powerful chip versions, but we’ll have to see whether these future configuration options become as varied as with Intel or AMD. I’m sure that will become clearer by the end of 2021. All in all, the Spring event had some nice new products along with some incremental updates. Let’s see how this Apple Silicon transition continues to shape up.

©2021 Oliver Peters

Trusting Apple Displays?

In the “good old days” of post, directors, cinematographers, and clients would all judge final image quality in an online edit or color correction suite using a single, calibrated reference monitor. We’ve moved away from rooms that look like the bridge of the Enterprise into more minimalist set-ups. This is coupled with the current and possibly future work-from-home and general remote post experiences. Without everyone looking at the same reference display, it becomes increasingly difficult to be sure that what everyone sees is actually the proper appearance of the image. For some, clients rarely come into the suite anymore. Instead, they are often making critical judgements based on what they see on their home or work computers and/or devices.

The lowest common denominator

Historically, a common item in most recording studios was a set of Auratone sound cubes. These small, single speaker monitors, which some mixers dubbed “awful-tones,” were intended to provide a representation of the mix as it would sound on radios and cheaper hi-fi audio set-ups. TV show re-recording mixers would also use these to check a mix in order to hear how it would translate to home TV sets.

Today, smart phones and tablets have become the video equivalent of that cheap hi-fi set-up. Generally that means Apple iPhones or iPads. In fact, thanks to Apple’s color management, videos played back on iPads and iPhones do approximate the correct look of your master file. As editors or colorists, we often ask clients to evaluate the image on an Apple device, not because they are perfect (they aren’t), but rather because they are the best of the many options out in the consumer space. In effect, checking against an iPhone has become the modern video analog of the Auratone sound cubes.

Apple color management

Apple’s color management includes several techniques that are helpful, but can also trip you up. If you are going to recommend that your clients use an iPhone, iPad, or even an iMac to judge the material, then you also want to make sure they have correctly set up their device. This also applies to you, the editor, if you are creating videos and only making judgements on an iMac (or XDR) display, without any actual external video (not computer) display.

Apple computers enable the use of different color profiles and the ability to make adjustments according to calibration. If you have a new iMac, then you are generally better off leaving the color profile set to the default iMac setting instead of fiddling with other profiles. New Apple device displays are set to P3 D65 color with a higher brightness capacity (up to 500 nits – more with XDR). You cannot expect them to perfectly reproduce an image that looks 100% like a Rec 709, 100-nit TV set. But they do get close.

I routinely edit/grade with Media Composer, Premiere Pro, DaVinci Resolve, and Final Cut Pro on iMacs and iMac Pros. Of these four, only Final Cut Pro shows an image in the edit viewer window that is relatively close to the way that image appears on the video output to a monitor. This is thanks to Apple’s color management and the broader Apple hardware/software ecosystem. The viewer image for the other three may look darker, be more saturated, have richer reds, and/or show more contrast.

User control

Once you get past the color profile (Mac only), most Apple devices offer two or three additional user controls (depending on OS version). Obviously there’s brightness, which can be manual or automatic. When set to automatic, the display will adjust brightness based on the ambient light. Generally auto will be fine, unless you really need to see crucial shadow detail. For example, the pluge portion of a test pattern (the darkest gray patches) may not be discernible unless you crank up the brightness or are in a dark room.

The next two are gotchas. Along with the user interface dark mode, Apple introduced Night Shift and True Tone in an effort to reduce eye fatigue after long computer use. These are based on the theory that blue light from computer and device screens is fatiguing, harmful, and/or can impact sleep patterns. Such health concerns, as they relate to computer use, are not universally supported by the medical community.

Nevertheless, they do have a pleasing effect, because these features make the display warmer or cooler based on the time of day or the color temperature of the ambient light in the room. Typically the display will appear warmer at night or in a dimmer room. If you are working with a lot of white on the screen, such as documents, then these modes do feel more comfortable on your eyes (at least for me). However, your brain adjusts to the color temperature shift of the display when using something like True Tone. The screen doesn’t register in your mind as being obviously warm.

If you are doing anything that involves judging color, the LAST thing you want to use is True Tone or Night Shift. This applies to editing, color correction, art, photography, etc. It’s important to note that these settings only affect the way the image is displayed on the screen. They don’t actually change the image itself. Therefore, if you take a screen grab with True Tone or Night Shift set very cool or warm, the screen grab itself will still be neutral.

In my case, I leave these off for all of the computers I use, but I’m OK with leaving them on for my iPhone and iPad. However, this does mean I need to remember to turn the setting off whenever I use the iPhone or iPad to remotely judge videos. And there’s the rub. If you are telling your client to remotely judge a video using an Apple device – and color is part of that evaluation – then it’s imperative that you ask them (and maybe even teach them how) to turn off those settings. Unless they are familiar with the phenomena, the odds are that True Tone and/or Night Shift has been enabled on their device(s) and they’ve never thought twice about it simply because the mind adjusts.

QuickTime

QuickTime Player is the default media player for many professionals and end users, especially those using Macs. The way QuickTime displays a compatible file to the screen is determined by the color profile embedded into the file metadata. If I do a color correction session in Resolve, with the color management set to Rec 709 2.4 gamma (standard TV), then when I render a ProRes file, it will be encoded with a color profile of 1-2-1 (the 2 indicates 2.4 gamma).

If I export that same clip from Final Cut Pro or Premiere Pro (or re-encode the Resolve export through one of those apps), the resulting ProRes now has a profile of 1-1-1. The difference through QuickTime Player is that the Resolve clip will look darker in the shadows than the clip exported from FCP or Premiere Pro. Yet the image data in both files is exactly the same; only the metadata differs. It’s merely how QuickTime Player displays it to the screen based on that metadata. If I open both clips in different players, like Switch or VLC, which don’t use this same metadata, then they will both appear the same, without any gamma shift.
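If you’d rather verify what a master is actually tagged with than eyeball the playback, the metadata is easy to read. Here’s a minimal sketch using ffprobe (assumed to be installed as part of FFmpeg; the file names are placeholders), which reports the primaries/transfer/matrix triplet that QuickTime Player keys off of:

```python
# Sketch: inspect the color metadata (the primaries/transfer/matrix triplet)
# that QuickTime Player uses to decide how a file is displayed.
# Assumes ffprobe (part of FFmpeg) is installed; file names are placeholders.
import json
import subprocess

def color_tags(path: str) -> dict:
    """Return the color tags of the first video stream as a dictionary."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=color_primaries,color_transfer,color_space",
         "-of", "json", path],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)["streams"][0]

print(color_tags("resolve_export.mov"))   # e.g. tagged for a 2.4 gamma display
print(color_tags("premiere_export.mov"))  # e.g. plain bt709 across the board
```

Comparing the two reports makes it obvious that only the tags differ, not the pictures.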

Client recommendations

How should one deal with such uncertainties? Obviously, it’s a lot easier to tackle when everyone is in the same room. Unfortunately, that’s a luxury that may become totally obsolete. It already has for many. Fortunately most people aren’t as sensitive to color issues as the typical editor, colorist, or DP. In my experience, people tend to have greater issues with the mix than they do with color purity. But that doesn’t preclude you from politely educating your client and making sure certain best practices are followed.

First, make sure that features like True Tone and Night Shift are disabled, so that a neutral image is being viewed. Second, if you use a review-and-approval service, like Frame.io or Vimeo, then you can upload test chart image files (color bars, grayscale, etc.). These may be used whenever you need to check the image with your client. Is the grayscale a neutral gray in appearance or is it warmer or cooler? Can you see separation in the darkest and brightest patches of these charts? Or are they all uniformly black or white? Knowing the answers will give you a better idea about what the client is seeing and how to guide them to change or improve their settings for more consistent results. (One way to generate such a grayscale chart is sketched below.)

Finally, if their comments seem to relate to a QuickTime issue, then suggest using a different player, such as Switch (free with watermarks will suffice) or VLC.
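For the test chart suggestion above, here’s a minimal sketch of generating a grayscale step wedge – the patch values are my own arbitrary picks, not a broadcast-standard pattern – with deliberately tight near-black and near-white steps, so a client’s inability to see separation shows up immediately:

```python
# Sketch: generate a simple 8-bit grayscale step chart (with near-black and
# near-white patches) to upload to a review service as a client check.
# Patch values are arbitrary choices, not a broadcast-standard test pattern.
import numpy as np
from PIL import Image

def step_chart(width=1920, height=1080, steps=None):
    """Build a horizontal step wedge and return it as a PIL image."""
    if steps is None:
        # Include crushed-shadow and clipped-highlight neighbors on purpose.
        steps = [0, 4, 8, 16, 32, 64, 96, 128, 160, 192, 224, 239, 247, 251, 255]
    chart = np.zeros((height, width), dtype=np.uint8)
    patch_w = width // len(steps)
    for i, value in enumerate(steps):
        chart[:, i * patch_w:(i + 1) * patch_w] = value
    return Image.fromarray(chart, mode="L")

step_chart().save("grayscale_step_chart.png")
```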

The brain, eyes, and glasses

Some final considerations… No two people see colors in exactly the same way. Many people suffer from mild color blindness, i.e. color vision deficiencies. This means they may be more or less sensitive to shades of some colors. Eyeglasses also affect your eyesight. For example, many lenses, depending on the coatings and material, will yellow over time. I cannot use polycarbonate lenses, because I see chromatic aberration on highlights wearing this material, even though most opticians and other users don’t see that at all. CR-39 (optical plastic) or glass (no longer sold) are the only eyeglass materials that work for me.

If I’m on a flight in a window seat, then the eye closest to the window is being bombarded with a different color temperature of light than the eye toward the plane’s interior. This can be exacerbated with sunglasses. After extended exposure to such a differential, I can look at something neutral and, when I close one eye or the other, I will see the image with a drastically different color temperature for one eye versus the other. This eventually normalizes itself, but it’s an interesting situation.

The bottom line of such anecdotes is that people view things differently. The internet’s “what color is the dress?” debate is an example of this. So when a client gives you color feedback that just doesn’t make sense to you, it might just be them and not their display!

Check out my follow-up article at PVC about dealing with color management in Adobe Premiere Pro.

©2021 Oliver Peters

Time to Rethink ProRes RAW?

The Apple ProRes RAW codec has been available for several years at this point, yet we have not heard of any professional cinematography camera adding the ability to record ProRes RAW in-camera. I covered ProRes RAW in some detail in these three blog posts (HDR and RAW Demystified, Part 1 and Part 2, and More about ProRes RAW) back in 2018. But the industry has changed over the past few years. Has that changed any thoughts about ProRes RAW?

Understanding RAW

Today’s video cameras evolved their sensor design from a three-CCD array for RGB into a single sensor, similar to those used in still photo cameras. Most of these sensors are built using a Bayer pattern of photosites. This pattern is an array of monochrome receptors that are filtered to receive incoming green, red, and blue wavelengths of light. Typically the green photosites cover 50% of this pattern, while red and blue each cover 25%. These photosites capture linear light, which is turned into data that is then meshed and converted into RGB pixel information. Lastly, it’s recorded into a video format. Photosites do not correlate in a 1:1 relationship with output pixels. You can have more or fewer total photosite elements in the sensor than the recorded pixel resolution of the file.
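To make the conversion from photosites to pixels concrete, here’s a toy sketch of the demosaicing step – a simple bilinear interpolation over an assumed RGGB layout. Real cameras use far more sophisticated algorithms, plus scaling, noise reduction, and sharpening, but it shows how sparse, single-color photosite data becomes full RGB pixels:

```python
# Toy illustration of demosaicing: turn a single-channel RGGB Bayer mosaic
# into an RGB image with simple bilinear interpolation.
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(mosaic: np.ndarray) -> np.ndarray:
    """mosaic: 2D float array of linear photosite values, RGGB layout."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)              # green covers 50% of the sites

    rgb = np.zeros((h, w, 3), dtype=np.float64)
    k_rb = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])
    for ch, (mask, kernel) in enumerate(
            [(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        plane = np.where(mask, mosaic, 0.0)   # keep only this channel's sites
        weight = convolve(mask.astype(np.float64), kernel, mode="mirror")
        rgb[..., ch] = convolve(plane, kernel, mode="mirror") / weight
    return rgb
```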

The process of converting photosite data into RGB video pixels is done by the camera’s internal electronics. This process also includes scaling, gamma encoding (Rec709, Rec 2020, or log), noise reduction, image sharpening, and the application of that manufacturer’s proprietary color science. The term “color science” implies some type of neutral mathematical color conversion, but that isn’t the case. The color science that each manufacturer uses is in fact their own secret sauce. It can be neutral or skewed in favor of certain colors and saturation levels. ARRI is a prime example of this. They have done a great job in developing a color profile for their Alexa line of cameras that approximates the look of film.

All of this image processing adds cost, weight, and power demands to the design of a camera. If you offload the processing to another stage in the pipeline, then design options are opened up. Recording camera raw image data achieves that. Camera raw is the monochrome sensor data prior to the conversion into an encoded video signal. By recording a camera raw file instead of an encoded RGB video file, you defer the processing to post.

To decode this file, your operating system or application requires some type of framework, plug-in, or decoding/developing software to properly interpret that data into a color image. In theory, using a raw file in post provides greater control over ISO/exposure and temperature/tint values in color grading. Depending on the manufacturer, you may also apply a variety of different camera profiles. All of this is possible while still producing a camera file that is smaller than its encoded RGB counterpart.
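As a simple illustration of why deferring the processing matters: exposure and white balance on linear raw data are just multiplications, so they can be changed after the fact without anything being baked in. The gain values below are arbitrary examples, not metadata from any real camera:

```python
# Illustrative sketch of adjusting linear raw data in post. Exposure (in
# stops) and white balance are per-pixel / per-channel multiplications here;
# the specific gain values are arbitrary examples.
import numpy as np

def develop(linear_rgb: np.ndarray, exposure_ev: float = 0.0,
            wb_gains=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Apply exposure (in stops) and per-channel white-balance gains."""
    out = linear_rgb * (2.0 ** exposure_ev)           # +1 EV doubles the light
    out = out * np.asarray(wb_gains).reshape(1, 1, 3)  # warm or cool the image
    return np.clip(out, 0.0, None)

# Example: open up one stop and warm the shot slightly.
image = np.random.rand(1080, 1920, 3)                  # stand-in for decoded raw
adjusted = develop(image, exposure_ev=1.0, wb_gains=(1.15, 1.0, 0.9))
```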

In-camera recording, camera raw, and RED

Camera raw recording preceded the introduction of the RED One camera, but those early workflows usually consisted of uncompressed movie files or image sequences recorded to an external recorder. RED introduced the ability to record a wavelet-compressed, 4K camera raw signal at 24fps. This was a movie file recorded onboard the camera itself. RED was granted a number of patents around these processes, which preclude any other camera manufacturer from doing that exact same thing, unless they enter into a licensing agreement with RED. So far these patents have been successfully upheld against Sony and Apple, among others.

In 2007 – part way through the Final Cut Pro product run – Apple introduced its family of ProRes codecs. ProRes was Apple’s answer to Avid’s DNxHD codec, but with some improvements, like resolution independence. ProRes not only became Apple’s default intermediate codec, but also gained stature as the mastering and delivery codec of choice, regardless of which NLE you were using. (Apple was awarded an Engineering Emmy Award this year for the ProRes codecs.)

By 2010 Apple was successful in convincing ARRI to use ProRes as its internal recording codec with the introduction of the (then new) line of Alexa cameras. (ARRI camera raw recording was a secondary option using ARRIRAW and a Codex recorder.) Shooting with an Alexa, recording high-quality ProRes files, and posting those directly within FCP or any other compatible NLE created the simplest and smoothest capture-edit-deliver pipeline of any professional post workflow. That remains unchanged even today.

Despite ARRI’s success, only a few other camera manufacturers have adopted ProRes as an internal recording option. To my knowledge these include some cameras from AJA, JVC, Blackmagic Design, and RED (as a secondary file to REDCODE). The lack of widespread adoption is most likely due to Apple’s licensing arrangement, coupled with the fact that ProRes is a proprietary Apple format. It may be a de facto industry standard, but it’s not an official standard sanctioned by an industry standards committee.

The introduction of Apple’s ProRes RAW codecs has led many in the industry to wait with bated breath for cameras to also adopt ProRes RAW as their internal camera raw option. ARRI would obviously be a candidate. However, the RED patents would seem to be an impediment. But what if Apple never had that intention in the first place?

Do we have it all wrong?

When Apple introduced ProRes RAW, it did so in partnership with Atomos. Just like Sony, ARRI, and Panasonic recording their camera raw signals to an external recorder, sending a camera raw signal to an external Atomos monitor/recorder is a viable alternative to in-camera recording. Atomos’ own disagreements with RED have now been settled. Therefore, embedding the ProRes RAW codec into their products opens up that recording format to any camera manufacturer. The camera simply has to be capable of sending a compatible camera raw signal (as data) over SDI or HDMI to the connected Atomos recorder.

The desire to see ProRes RAW in-camera stems from the history of ProRes adoption by ARRI and the impact that had on high-end production and post. However, that came at a time when Apple was pushing harder into various pro film and video markets. As we’ve learned, that course was corrected by Steve Jobs, leading to the launch of Final Cut Pro X. Apple has always been about ease and democratization – targeting the middle third of a bell curve of users, not necessarily the top or bottom thirds. For better or worse, Final Cut Pro X refocused Apple’s pro video direction with that in mind.

In addition, during this past decade or more, Apple has also changed its approach to photography. Aperture was a tool developed with semi-pro and pro DSLR photographers in mind. Traditional DSLRs have lost photography market share to smart phones – especially the iPhone. Online sharing methods – Facebook, Flickr, Instagram, cloud picture libraries – have become the norm over the traditional photo album. And so, Aperture bit the dust in favor of Photos. From a corporate point-of-view, the rethinking of photography cannot be separated from Apple’s rethinking of all things video.

Final Cut Pro X is designed to be forward-thinking, while cutting the cord with many legacy workflows. I believe the same can be applied to ProRes RAW. The small form factor camera, rigged with tons of accessories including external displays, is probably more common these days than the traditional, shoulder-mounted, one-piece camcorder. By partnering with Atomos (and maybe others in the future), Apple has opened the field to a much larger group of cameras than it could by handling the task one camera manufacturer at a time.

ProRes RAW is automatically available to cameras that were previously stuck recording highly-compressed M-JPEG or H.264/265 formats. Video-enabled DSLRs from manufacturers like Nikon and Fujifilm join Canon and Panasonic cinematography cameras. Simply send a camera raw signal over HDMI to an Atomos recorder. And yet, it doesn’t exclude a company like ARRI either. They simply need to enable Atomos to repack their existing camera raw signal into ProRes RAW.

We may never see a camera company adopt onboard ProRes RAW and it doesn’t matter. From Apple’s point-of-view and that of FCPX users, it’s all the same. Use the camera of choice, record to an Atomos, and edit as easily as with regular ProRes. Do you have the depth of options as with REDCODE RAW? No. Is your image quality as perfect in an absolute (albeit non-visible) sense as ARRIRAW? Probably not. But these concerns are for the top third of users. That’s a category that Apple is happy to have, but not crucial to their existence.

The bottom line is that you can’t apply classic Final Cut Studio/ProRes thinking to Final Cut Pro X/ProRes RAW in today’s Apple. It’s simply a different world.

____________________________________________

Addendum

The images I’ve used in this post come from Patrik Pettersson. These clips were filmed with a Nikon Z6 DSLR recording to an Atomos Ninja V. He’s made a few sample clips available for download and testing. More at this link. This brings up an interesting issue, because most other forms of camera raw are tied to a specific camera profile. But with ProRes RAW, you can have any number of cameras. Once you bring those into Final Cut Pro X, you don’t have the correct camera profile with a color science that matches that model for each and every camera.

In the case of these clips, FCPX doesn’t offer any Nikon profiles. (Note: This was corrected with the FCPX 10.4.9 update.) I decided to decode the clip (a RAW-to-log conversion) using a Sony profile. This gave me the best possible results for the Nikon images and effectively gave me a log clip similar to that from a Sony camera. Then for the grade I worked in Color Finale Pro 2, using its ACES workflow. To complete the ACES workflow, I used the matching S-Log3-to-Rec 709 conversion.
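For the curious, the log-to-linear math that such an ACES pipeline performs under the hood is simple enough to sketch. The constants below follow Sony’s published S-Log3 formula as I understand it – verify against Sony’s current documentation before trusting it for real grading work:

```python
# Sketch of the log-to-linear step hiding inside an ACES-style workflow.
# Constants follow Sony's published S-Log3 formula (verify against Sony's
# current white paper before relying on this).
import numpy as np

PIVOT = 171.2102946929  # 10-bit S-Log3 code value where the curve segments meet

def slog3_to_linear(code):
    """Convert normalized (0..1) S-Log3 code values to scene-linear values."""
    code = np.asarray(code, dtype=np.float64)
    log_part = (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    lin_part = (code * 1023.0 - 95.0) * 0.01125 / (PIVOT - 95.0)
    return np.where(code >= PIVOT / 1023.0, log_part, lin_part)

# 18% gray encodes to roughly code value 420/1023 in S-Log3.
print(float(slog3_to_linear(420.0 / 1023.0)))   # ~0.18
```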

The result is nice and you do have a number of options. However, the workflow isn’t as straightforward as Apple would like you to believe. I think these are all solvable challenges, but 1) Apple needs to supply the proper camera profiles for each of the compatible cameras; and 2) Apple needs to publish proper workflow guides that are useful to a wide range of users.

©2020 Oliver Peters

Apple Pivots – WWDC 2020

Monday saw Apple’s first virtual WWDC keynote presentation. This was a concise, detail-packed 108-minute webcast covering the range of operating system changes affecting all of Apple’s product lines. I shared my initial thoughts and reactions here and here. If you want some in-depth info, then John Gruber’s follow-up interview with Craig Federighi and Greg Joswiak of Apple is a good place to start. With the dust settling, I see three key takeaways from WWDC for Mac users.

Apple Silicon

Apple Silicon becomes the third processor transition for the Mac. Apple has been using Intel CPUs for 15 years. This shift moves Mac computers to the same CPU family as the mobile platforms. The new CPUs are based on Arm SoC (system on chip) technology. Arm originally stood for Acorn RISC Machine and is a technology developed by Arm Holdings PLC. They license the technology to other companies who are then free to develop their own chip designs. As far as we know, the Apple Arm chips will be manufactured in foundries owned by TSMC in Taiwan. While any hardware shift can be disconcerting, the good news is that Apple already has more than a decade-long track record with these chips, thanks to the iPhone and iPad. (Click here for more details on RISC versus CISC chip architectures.)

macOS software demos were ostensibly shown on a Mac with 16GB RAM, running on an A12Z Bionic chip – the same CPU as in the current iPad Pro. That’s an 8-core, 64-bit processor with an integrated GPU. It also includes cores for the Neural Engine, which Apple uses for machine learning functions.

Apple is making available a developer transition kit (DTK) that includes a Mac mini in this configuration. This would imply that was the type of machine used for the keynote demo. It’s hard to know if that was really the case. Nevertheless, performance appeared good – notably with Final Cut Pro X showing three streams of 4K ProRes at full resolution. Not earth-shattering, but decent if that is actually the level of machine that was used. Federighi clarified in the Gruber interview that this DTK is only to get developers comfortable with the new chip, as well as the new OS version. It is not intended to represent a machine that is even close to what will eventually be shipped. Nor will the first shipping Apple Silicon chips in Macs be A12Z CPUs.

The reason to shift to Arm-based CPUs is the promise of better performance with lower power consumption. Lower power also means less heat, which is great for phones and tablets. That’s also great for laptops, but less critical on desktop computers, where the extra thermal headroom instead allows for higher clock speeds. According to Apple, the transition to Apple Silicon will take two years. Presumably that means by the end of two years, all new Macs released going forward will use Arm-based CPUs.

Aside from the CPU itself, what about Thunderbolt and the GPU? The A12Z has an integrated graphics unit, just like the Intel chip. However, for horsepower Apple has been using AMD – and in years past, NVIDIA. Will the relationship with AMD continue or will Apple Silicon also cover the extra graphics grunt? Thunderbolt is technology licensed by Intel. Will that continue and will Apple be able to integrate Thunderbolt into its new Macs? The developer Mac mini does not include Thunderbolt 3 ports. Can we expect some type of new i/o format in these upcoming Macs, like USB 4, for example? (Click here for another concise hardware overview.)

macOS Big Sur – Apple dials it to 11

The second major reveal (for Mac users) was the next macOS after Catalina – Big Sur. This is the first OS designed natively for Arm-based Macs, but is coded to be compatible with Intel, too. It will also be out by the end of the year. Big Sur is listed as OS version 11.0, thus dropping the OS X (ten) product branding. A list of compatible Macs has been posted by Apple and quite a few users are already running a beta version of Big Sur on their current Intel Macs. According to anecdotes I’ve seen and heard, most indicate that it’s stable and the majority of applications (not all) work just fine.

Apple has navigated such transitions before – particularly from PowerPC to Intel. (The PowerPC used a RISC architecture.) They are making developer tools available (Universal 2) that will allow software companies to compile apps that run natively on both Intel and Arm platforms. In addition, Big Sur will include Rosetta 2, a technology designed to run Intel apps under emulation. In some cases, Apple says that Rosetta 2 will actually convert some apps into native versions upon installation. Boot Camp appears to be gone, but virtualization of Linux was mentioned. This makes sense because some Linux distributions already run on Arm processors. We’ll have to wait and see whether virtualization will also apply to Windows. Federighi did appear to say that directly booting into a different OS would not be possible.

Naturally, users want to know if their favorite application is going to be ready on day one. Apple has been working closely with Adobe and Microsoft to ensure that Creative Cloud and Office applications will run natively. Those two are biggies. If you can’t run Photoshop or Illustrator correctly, then you’ve lost the entire design community – photographers, designers, web developers, ad agencies, etc. Likewise, if Word, Excel, or PowerPoint don’t work, you’ve lost every business enterprise. Apologies to video editors, but those two segments are more important to Mac sales than all the video apps combined.

Will the operating systems merge?

Apple has said loudly and clearly that they don’t intend to merge macOS and iOS/iPadOS. Both the Mac and mobile businesses are very profitable and those products fulfill vastly different needs. It doesn’t make any business sense to mash them together in some way. You can’t run Big Sur on an iPad Pro nor can you run iPadOS 14 on a Mac. Yet, they are more than cousins under the hood. You will be able to run iOS/iPadOS applications, such as games, on an Arm-based Mac with Big Sur. This is determined by the developer, because they will retain the option to have some apps be iOS-only at the time the application is entered into the App Store.

Technology aside, the look, feel, style, and design language is now very close between these two operating system branches. Apple has brought back widgets to the Mac. You now have notification and control centers on all platforms that function in the same manner. The macOS Finder and iPad Files windows look similar to each other. Overall, the design language has been unified, as typified by a return to more color and a common chiclet design for the icons. After all, nearly every Apple product has had a similar style with rounded corners, going back to the original 1984 Mac.

Are we close to actually having true macOS on an iPad Pro? Or a Mac with touch or Apple Pencil support? Probably not. I know a lot of Final Cut Pro X users have speculated about seeing “FCPX Lite” on the iPad, especially since we have Adobe’s Rush and LumaTouch’s LumaFusion. The speculation is that some stripped-down version of FCPX on the iPad should be a no-brainer. Unfortunately iPads lack a proper computer-style file system, standard i/o, and the ability to work with external storage. So, there are still reasons to optimize operating systems and applications differently for each hardware form factor. But a few years on – who knows?

Should you buy a Mac now?

If you need a Mac to get your work done and can’t wait, then by all means do so. Especially if you are a “pro” user requiring beefier hardware and performance. On the other hand, if you can hold off, you should probably wait until the first new machines are out in the wild. Then we’ll all have a better idea of price and performance. Apple still plans to release more Intel Macs within the next two years. The prevailing wisdom is that the current state of Apple’s A-series chips will dictate lower-end consumer Macs first. Obviously a new 12″ MacBook and the MacBook Air would be prime candidates. After that, possibly a Mac mini or even a smaller iMac.

If that in fact is what will happen, then high-end MacBook Pros, iMacs, iMac Pros, and the new Mac Pro will all continue running Intel processors until comparable (or better) Apple Silicon chips become available. Apple is more willing to shake things up than any other tech company. In spite of that, they have a pretty good track record of supporting existing products for quite a few years. For example, Big Sur is supposed to be compatible with MacBook Pros and Mac Pros going back to 2013. Even though the FCPX product launch was rough, FCP7 continued to work for many years after that through several OS updates.

I just don’t agree with people who grouse that Apple abandons its existing customers whenever one of these changes happens. History simply does not bear that out. Change is always a challenge – more for some types of users than others. But we’ve been here before. This change shouldn’t cause too much hand-wringing, especially if Apple Silicon delivers on its promise.

Click here for a nice recap of some of the other announcements from WWDC.

©2020 Oliver Peters