Photo by Paul Hudson [CC BY 2.0], via Wikimedia Commons
New Mac Pro Slated for 2019; May Be Modular
A year ago, Apple called five prominent journalists to Cupertino for a meeting in which the company acknowledged problems with the Mac Pro and provided some insight into its pro strategy (see “Maca Culpa: Apple Admits Mac Pro Missteps and Promises More Transparency,” 4 April 2017). Matthew Panzarino of TechCrunch has now returned to Apple to follow up on the company’s progress. In his most recent discussions, he learned that the revamped Mac Pro that Apple promised won’t appear in 2018 but should ship sometime in 2019. It’s likely to be modular in some way, and although what that means isn’t yet known, Apple appears to be focusing on external GPU support. For the moment, Apple is mostly watching how creative professionals work to identify the pain points in their workflows: are the problems in the application, the operating system, the drivers, or the hardware?
I’ve long wondered if the future Mac Pro and the Mac mini might have a shared future within an overall modular framework, like a set of Lego bricks you build with, adding modules with additional power or functionality depending on need. I could see a base Mac mini module, a range of processor upgrade modules, a range of graphics card modules, an I/O module for video, an I/O module for audio, a memory module, and a storage module, all stackable with a connective backbone.
And a monitor please…
They’ve always had an understanding of rack mounting, and those hard drives you could slip into the cheesegrater Macs. But I think they’ve also developed a practice of sealed units: the idea of a device with a lifespan, not upgradeable once configured. I could see a modular snap-on system meeting both of those.
Building around a Mac mini is an extremely interesting concept, since it could let people start small and then add on the particular components they need for more power. The hard part, I’m guessing, with a modular system, is cooling.
This, coupled with the rumor from Gurman about Apple making its own chips for Macs has me thinking about the eventual convergence of macOS and iOS.
To borrow from John Siracusa, “on an infinite time-scale”, it does not make sense for Apple to have two operating systems; it makes more sense for them to unify them in some way at some point. Is that in a couple of years? Probably not. 5 years? More likely. 10 years? Seems reasonable. 20 years? You’d have to think almost certainly.
So, if the plan is to unify at some point, then Apple, being Apple, is more likely to do it sooner rather than later. The first step in that is the rumored Marzipan, giving Macs the ability to run iOS apps (or more likely, giving iOS apps the ability to run on a Mac, which is a subtle difference).
So, what’s a really simple way to do this? How about adding an A12X chip to a Mac laptop so that it can be an x64 and an ARM machine at the same time? Have macOS and iOS both run on the machine and seamlessly swap between one and the other? We already know Apple can do this; it’s how the Touch Bar works and how a lot of things on the iMac Pro work.
This also allows Apple to offload a lot of low-demand processing to the A12X (or whatever) and clock down the x64 chip to conserve battery, only kicking that processor on when needed. This could give you a MacBook with a 15- to 20-hour battery, the ability to run macOS fully, and the ability to run iOS iPad apps.
This gives Apple a multi-platform device that can guide the hardware into a future AppleOS (er, appleOS?) that takes the best parts of iOS and macOS into a single operating system that is capable of scaling from a multi-5K display computer all the way down to a watch screen.
The more I think about this, the more likely I think it is that this has been the plan for at least the last 5 years, and that in the next 5 years we will see major moves along these lines.
The other very real possibility is that Apple is also going to start making its own x86 chips, moving away from Intel, since Intel is really not providing what Apple wants at this point. That doesn’t change anything above, though; we could easily see a Mac with an Apple x64 and an Apple ARM64 (or several of each, honestly) and the beginnings of a unified OS.
And while Mac fans might be horrified at this, I’m looking at this and seeing a bright future at this point.
The sticking point right now is the high-end computers, but Apple could easily stick with Intel’s Xeons and still use its own x64 chips in everything else. And frankly, the Xeons are a tiny, tiny percentage of Apple’s Mac market and always have been.
What do you think, sirs?
Cooling, yes, but the modules would be separated, allowing each to get what it needs individually.
Yes, I agree — and especially with the “at some point” relatively far in the future. From Apple’s perspective, macOS is a mess (look at APFS) because it has evolved so organically over so long, making it a lot harder to control than iOS and all the variants like watchOS, tvOS, and HomePod.
I was going to say that the main barrier here would be cost, but when I went to look up the price of an A11 processor, it turned out to be only $27.50. That seems low enough that Apple could indeed add it in. Who knows what Apple pays for Intel CPUs, but it’s probably more than that.
I don’t see this happening, purely because I think Apple can get the processing power it needs from its ARM-based chips and maintaining two chip architectures feels messy.
Yes, the top-of-the-line could remain different, at least until such time as Apple had its own chips at the necessary performance or compatibility level.
The trouble is that many things don’t appear to be possible with ARM, chiefly Thunderbolt 3, which I believe still requires an Intel Southbridge controller. I do not know enough about TB3 to know if adding that to Apple’s ARM designs is feasible or even possible.
Imagine a MacBook Pro with two quad core A12Xs in it and an i9 that can be ramped down to nearly 0 watts most of the time.
Ah, I didn’t realize there were hardware limitations for Apple’s ARM chips. That would require a pair of chips, which seems like less of a win. But perhaps Apple has Thunderbolt 3 support in the works for a future chip — they have to be thinking about this.
Sheesh, Apple. Fix CUDA† and bring OpenGL up to date and you’re 95% there! Everything else (industrial design, etc.) is gravy.
Seriously, it’s that ‘simple’.
† i.e., allow Nvidia cards back in - just kiss and make up; the OpenCL gamble didn’t work out for ANYONE else outside of Final Cut Pro. At least two “CUDA cards” via direct PCIe would be ideal.
Bringing OpenGL up to date solves nothing, and Vulkan is stalled without a workable solution. Apple will not be letting Nvidia in because they are no longer allowing third parties to write drivers. If Nvidia wants in, they know what they need to do, but they have no interest in that.
Apple has already moved on from OpenGL/Vulkan with Metal, which does everything that OpenGL ever promised but never delivered.
I agree. The whole nVidia spat IMHO is Apple being petty. There are more professional uses of GPUs than video editing or games and the sooner Apple gets that through its head the better.
OpenGL is used extensively in the mainstream CAD/ArchVIS industries.
The underlying libraries that, say, AutoCAD, Maya, etc. use are optimised for this. There is no major incentive, and thus no resources, for these apps to re-architect to Metal (it may happen, but don’t hold your breath).
Meanwhile, their PC equivalents keep slaying the Mac.
It’s the same with CUDA for Adobe: their major market is the Windows PC, so they “port” the PC libraries (as best they can) across to the Mac. Metal support is there, but a hackintosh (or 5,1 Mac Pro) with Nvidia cards will perform better (if you stick with the 106 drivers, at the moment).
You and I may vastly prefer the Mac (“cut me and I bleed in six colours”), but show a Managing Director a few “benchies”: this PC costs $3000 and takes 1 hour to render “x”.
This Mac costs $5500 and renders it in 1.5 hours, in, say, Premiere.
The MD is thinking: so I spend $27,500 on 5 Macs and get 50% worse performance, or I spend $15,000 on Win 10 equivalents AND get 50% better performance. No brainer.
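To make the MD’s back-of-envelope maths explicit, here is a quick sketch using the hypothetical prices and render times from the example above (the 8-hour day is my own assumption):

```python
# Hypothetical figures from the example above
pc_cost, pc_hours_per_render = 3000, 1.0    # Windows workstation
mac_cost, mac_hours_per_render = 5500, 1.5  # Mac, same job in Premiere

fleet = 5  # machines per fleet

pc_fleet_cost = fleet * pc_cost    # $15,000 for the PC fleet
mac_fleet_cost = fleet * mac_cost  # $27,500 for the Mac fleet

# Renders each fleet completes in an assumed 8-hour working day
pc_renders_per_day = fleet * 8 / pc_hours_per_render    # 40 renders
mac_renders_per_day = fleet * 8 / mac_hours_per_render  # about 26.7 renders

print(pc_fleet_cost, mac_fleet_cost)
print(pc_renders_per_day, round(mac_renders_per_day, 1))
```

The PC fleet both costs less and finishes more renders per day, which is exactly the “no brainer” the MD sees.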
We can’t use the “But the mac OS is vastly superior” argument - Windows 10, for the ‘average’ operator, is not significantly worse, not in the Adobe suite.
In 30 years of Mac consulting to Advertising/Architecture and Production houses, I’ve never seen such a swing away as in the past 2 to 4 years. I don’t like it, but that’s how it is.
The nail in the coffin was the unexpandable 2013 Mac Pro/OpenCL, then Windows 10 getting “good enough”, and then Pascal cards utterly destroying their predecessors.
Yes, but what matters to 99.9% of users is graphics performance in gaming, where OpenGL is just irrelevant.
It’s not just a petty spat. Nvidia insists that they write the drivers, Apple is not allowing anyone to write system-level drivers and has made it nearly impossible for anyone to install system-level drivers.
Context: This thread is about the Mac Pro.
Pro users need better OpenGL and CUDA support.
Pro users know this and have, reluctantly, left for Windows.
(OpenCL is well served on Apple already)
Not many gamers will be getting a Mac Pro.
Apple is not going to make exceptions for macOS for the Mac Pro, certainly not at the level of subverting SIP and allowing someone else to write video drivers. There may be reasons why they should, but there are more reasons why they won’t.
All I’m saying is it is not Apple being petty, there are real reasons for this and laying the blame on Apple is ignoring half the issue.
Whatever the reasons (wilful negligence?), Apple is leaving money on the table by not addressing OpenGL and CUDA for Pro users.
Have a casual browse through the forums of, say, Octane, Vectorworks, AutoCad, SketchUp, Blender, Red User, Cinema 4d, Mudbox, Maya et al. The informed technically savvy users are all singing the same tune. (Same with the users I support).
Of course it’ll find/make a ‘market’ (small studios/YouTubers, perhaps?), and the apps optimised for it will find a very happy, if minuscule (a subset of a subset of a subset), audience.
I believe you are right. I’m afraid, however, that this is just one more instance of Apple not caring much anymore about its pro Mac users. Apparently, selling $1k phones to teenagers in China is just much more of an appealing business these days.
The entire Mac business is 8% of Apple’s revenue. Nearly all of that comes from laptops, and the best-selling Mac is the MacBook Air.
The Pro space is a tiny fraction of the Mac business, and is represented by a wildly disproportionate number of products already.
Now, what’s the percentage of Pro users who care about CUDA and OpenGL?
Apple is a business, and some of their decisions are business decisions. That said, Apple is much more likely than most companies to make a non-business decision it believes in; this just isn’t one of those cases.
I agree with you. I would merely like to point out that cause and effect are easy to confuse here. After Apple alienates its pro users time and again, it’s no surprise pro users start representing less and less of their business.
But if/once the iPhone business (or if you prefer: hiphop radio station, TV/movie production studio, insert whatever other fad Apple appears keen to try out) dries up, the question then becomes what they will resort to. Pro Mac users of the past were a very loyal base rendering a sustainable business. iPhone buying teenagers are by comparison highly volatile.
But even among pro users, those who need (or care) about CUDA and OpenGL are a small fraction.
Perhaps, but with Apple neglecting such pro users, that fraction can only become even smaller, until finally there are no such pros left using a Mac at all. Looks like a self-fulfilling prophecy to me.
Kreme and I have different definitions of Pro. In my practice it’s the opposite: CUDA and OpenGL are the majority (even if the end user doesn’t know it). Put a ‘modest’ GTX 1060 in your rig and watch Premiere Pro and After Effects fly. Ditto ALL the Pro sectors I’ve linked above.
Although, I do empathise with that point of view, because, well, I used to be that, er, “passionate” too, for the first 25 years of my Mac support career.
Things changed about 3 years ago, when I had to “roll my own” for a client the 2013 Mac Pro was not suited to (and they were adamant about staying on the Mac, call us crazy!). Getting to grips with “chipsets”, ACPI specs, instruction sets, hexadecimal notation, “strings”, “arrays”, kexts, etc…
You could say I took the red pill. Not an instant epiphany, more like the slow burn of an expat who went to SE Asia, met the Girl Of His Dreams, only to find out 20 years later she’s been squirreling away ALL the money and now she’s bought a house and dumped you.
But. We. Had. A. Connection.
I think this is one of the key points to understand about this entire discussion. Lots of people have different definitions, and they’re all right. That’s part of the problem, since Apple has to (a) realize this and (b) decide if they’re going to do anything about it.
The majority of Pros are iOS developers. After that, it fractures into many people in many types of jobs.
People tend to think of “Pro” users along the lines of “well, I think I am a pro user and I do this, so obvs that is most pro users.”
I used to be a pro user, but that stopped for me quite a while back. It’s not that I changed, but that the consumer machines became more than enough to cover my needs. My primary computers are a 2012 iMac and a somewhat broken 2015 MBP that sits behind a monitor and is definitely no longer mobile.
As for what is a pro user? I’d say it is a user who produces content of any sort on their computer for money or to reduce costs. Not simply something that can be done equally well on any computer (email, reports, books, etc.) but one where the performance of the computer makes enough of a difference in the work to justify the cost.
Jason Snell has an iMac Pro because, in the course of editing podcasts, he runs a de-noise filter that takes a long time to run. AFAIK, this is the only thing he does on his iMac Pro that is significantly faster for him than his previous iMac (you can fact-check me over at sixcolors.com, where he wrote up quite a bit of this), but that is enough to make him a pro user, because that one task saves him multiple hours a week, which means he has time for more podcasting!
Precisely. No GPU architecture should be mutually exclusive to any other.
Metal/CUDA/OpenCL/OpenGL should all be optimised, updated, and enhanced out of the box to take advantage of the product’s current underlying ‘performant’ (sorry) preference.
Once sales pick up (let’s take AutoCAD as an example), then there is budget for the Autodesk dev team to perfect Metal support… down the road.
Metal/OpenCL is great; I hope to see a bigger take-up, sooner, just not at the expense of deeply developed infrastructure around other toolsets.
… We’re in the same boat at home: a mix of Ivy Bridge and Haswell iMacs (a little ‘tweaked’: SSDs, i7s, etc.) that serve our personal/prosumer needs extremely well.
The bit in the ‘tour’ that jumped out at me was:
“but we have found an issue where that window was taking like 6 to 10 seconds to open and they’re doing that 100 times a day, right? Like ‘I can’t work on a machine like this, it’s too slow.’”
I see this daily too, particularly in “ArchVIS” (architectural visualisation): an operator takes 3D data from, say, SketchUp, Vectorworks, AutoCAD et al, then passes it through a standalone render program like V-Ray.
It’s the “mini renders” the designer does multiple times to get light, shade, and perspective right, before committing to the much longer final render, that are the real bottlenecks. Mini renders are relatively low-res, but take 3, 5, 10, even 15 minutes apiece. Do those 10 times a day and you have a “pain point”.
Take the same 3D data, pass it along to OTOY’s Octane with a GTX 1080 Ti, and it will output the same mini render in 30 seconds. Add a second 1080 Ti, and it drops to 15 seconds. Perfect scaling, and it works with ALL the major 3D packages. But it’s CUDA-optimized.
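As a rough sketch of what that means for a working day (the 10-minute CPU figure is just the middle of the range quoted above, and near-linear GPU scaling is the claim made for Octane, not a guarantee):

```python
RENDERS_PER_DAY = 10  # mini renders a designer does daily, per the example above

def daily_minutes(seconds_per_render: float, n_gpus: int = 1) -> float:
    """Minutes per day spent waiting on mini renders.

    Assumes Octane-style near-linear scaling with GPU count, as claimed above.
    """
    return RENDERS_PER_DAY * seconds_per_render / n_gpus / 60

cpu_wait = daily_minutes(10 * 60)       # ~10-minute CPU mini renders
one_gpu = daily_minutes(30)             # one GTX 1080 Ti under Octane
two_gpus = daily_minutes(30, n_gpus=2)  # a second card halves it again

print(cpu_wait, one_gpu, two_gpus)  # 100, 5, and 2.5 minutes per day
```

Turning 100 minutes of daily waiting into 5 is the kind of number that shifts a studio’s toolset.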
For the curious, this is “mini” rendering (the relevant bit starts at 5:33, in case the timestamp is broken): https://www.youtube.com/watch?v=6xE3J56pabk
This one package has shifted the toolset for a lot of the “ArchVIS” market.
Octane is locked out of a current off-the-shelf Mac. It’s a Big Deal in ArchVIS/games/3D/Hollywood.
If you could put one, modern, Mac Pro equipped with just one 1080ti in, say, an architectural office with 20 PC workstations, it’ll ‘prove’ its a good corporate citizen - files being passed back and forth with ease.
That’s the beginning of convincing The Powers That Be that the Mac is perhaps a little pricey, but it’s OK to get some to placate the Apple cognoscenti. In two or three product refresh cycles, this theoretical firm may pick up a few more Macs, while cascading down those from the previous cycle, establishing Mac Pros as viable again.
If Apple have conviction, diligence. Courage?
I’m railing on here not to start a war with anyone else’s insights. There are many valid ways to view the world. I have loved the community spirit here on TidBITS for over 20 years.
Just hoping that someone at Apple hears the plea. The wider SME (Small Medium Enterprise) Pro community needs CUDA back.
If Apple’s budget for the 2019 Pro is, say, $1 billion, then 5% of that for some solid CUDA/GL drivers would be nothing compared to a 10-fold increase in sales.
It would be the cheapest (line item) way to get the biggest result. It’s ‘just’ software.