I doubt it. The PowerPC to Intel switch was really painful because the desktop platform has the perpetual ball and chain of backward compatibility. I doubt Apple would try to beat Intel at their own high-performance game anyway.
I don't think that's necessarily the case; if Apple switched architectures now, there would be far fewer issues than there were with the Intel switch.
Over the past few years, Apple has done a lot of work making the same system APIs available across multiple processor architectures; at a base level, iOS and OS X share very similar cores. You can see this in the ease of the ARMv7 to ARMv8 transition, which in most cases required nothing more than a recompile.
As general-purpose applications have migrated to higher-level APIs, the difficulty of porting them to a new processor architecture has dropped. If an application is Cocoa-based and currently compiled for x64, and those same Cocoa APIs are available on an ARMv8 platform, it can be compiled natively for that platform.
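To make that concrete, here's a minimal sketch of what "just a recompile" looks like with Apple's toolchain, assuming an SDK that ships system libraries for both slices (the file name and program are invented for illustration):

    /* hello.c -- a trivial, architecture-independent program.
     *
     * With Apple's clang the same source can be compiled for several CPU
     * architectures in one go and glued into a single "fat" (universal)
     * binary, e.g.:
     *
     *   clang -arch x86_64 -arch arm64 -o hello hello.c
     *   lipo -info hello     # lists the slices in the resulting binary
     *
     * Nothing in the source cares which ISA it targets; the real work is
     * having the frameworks built for each slice, which is exactly the
     * ground Apple has been preparing.
     */
    #include <stdio.h>

    int main(void) {
        printf("Hello from whichever architecture this slice targets.\n");
        return 0;
    }

The hard cases are the apps that dip below those APIs: inline assembly, architecture-specific intrinsics, or anything that quietly assumes pointer size or endianness.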
Apple has started requiring Mac App Store apps to be submitted in intermediate-representation form (bitcode), allowing Apple to recompile them on their end. If that's not a glaring hint at working towards ARM, what would be?
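For reference, this is roughly what the compiler side of that looks like -- a sketch, not the store's actual pipeline; -fembed-bitcode simply stashes clang's LLVM bitcode alongside the machine code:

    /* ir_demo.c -- what "shipping the intermediate representation" means
     * at the compiler level.
     *
     * Build with the bitcode embedded in the object file:
     *
     *   clang -fembed-bitcode -c ir_demo.c -o ir_demo.o
     *
     * The bitcode lands in a __LLVM segment of the Mach-O object, so a
     * later build step can re-optimize or re-lower the code without the
     * original source. (Strictly speaking, bitcode still carries a target
     * triple, so it's better suited to re-tuning for a CPU family than to
     * a wholesale architecture jump -- but it shows the direction.)
     */
    #include <stdio.h>

    int main(void) {
        printf("Nothing architecture-specific to see here.\n");
        return 0;
    }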
Having previously gone through the 68k-to-PowerPC switch, I found the PowerPC-to-x86 switch to be pretty much a non-event. The desktop platform most certainly does not have a perpetual backward-compatibility obligation; Apple has always been far more willing than Microsoft to break old stuff after a few years if it happens to conflict with their new stuff.
I would expect that they have already been building OS X for ARM internally for several years, and that they'd prefer to avoid switching again but would certainly do it if they ever felt like the use of Intel's architecture was creating a problem for their business.
> Apple has always been far more willing than Microsoft to break old stuff after a few years if it happens to conflict with their new stuff.
Sidetracking this conversation a little, but I'm wondering more and more whether MS actually has a backward-compatibility track record that is as good as claimed, or if it's just a nice story. Granted, they communicate a lot about how hard they work on that subject, and they even have a guy who blogs about it and about how great he is because he injects patches into third-party programs to keep them working on new OS versions, but the end result is just... random. -- Well, maybe that guy should work on compatibility between MS products before other people's...
First, no medium or large company would think of upgrading the OS without months or even years of studies and trials -- they likely could do, and probably already are doing, the same with OS X. Then, MS actually deprecates a lot of stuff all the time (even whole sub-architectures, like Win16 not being available on Win64 installs). They also have so many technologies and products that are dead on arrival that it isn't even funny anymore. And finally, even when they don't mean to, their very own products in more or less the same line are often broken by next versions that are supposed to install side by side, or even by mere patches (example: the Windows SDK 7.1, which gets upset if you try to install it with anything other than the .NET 4 RTM preinstalled, and then gets very upset again during builds if you upgrade to .NET 4.6 -- or, on a completely different subject, the compatibility of recent Word versions with old .doc files, which is not stellar).
And finally, on the technical design side, some choices are just plain crap. Why would you, I don't know, leverage UMDF (which is especially well suited to USB drivers, for example) to allow 32-bit drivers to run on 64-bit Windows, when you can just not give a fuck, force people to keep their old consumer or dedicated pro hardware on their old computer, and let them throw everything in the trash when it eventually fails? I mean, during the 16 -> 32 bit transition they actually got far more insane things working (at least kind of working), while here everything would be neatly isolated, yet they manage to... not even attempt it.
I won't even begin to talk about the .dll story, which gets even more complicated each time they try to fix it, because you still have to support the old methods, sometimes through some kind of virtualisation. And then, like I said, they just decide to change their mind and go back to the old, replaced method again (e.g. the .NET 4 => 4.5/4.6 mess explained above), which breaks things again because they are still not THAT good at backward compat. (In a cringeful way: has anybody heard of symbol versioning?)
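For anyone who hasn't, here's a rough sketch of what ELF symbol versioning looks like with the GNU toolchain -- the library and symbol names are made up:

    /* libdemo.c -- a toy shared library that changes the behaviour of
     * demo_add() without breaking binaries that linked against the old one.
     *
     * Version script (libdemo.map):
     *   VERS_1.0 { global: demo_add; local: *; };
     *   VERS_2.0 { global: demo_add; } VERS_1.0;
     *
     * Build:
     *   gcc -shared -fPIC -Wl,--version-script=libdemo.map -o libdemo.so libdemo.c
     */

    /* Old behaviour, kept around for binaries linked against VERS_1.0. */
    int demo_add_v1(int a, int b) { return a + b; }
    __asm__(".symver demo_add_v1, demo_add@VERS_1.0");

    /* New behaviour; '@@' marks this as the default for new links. */
    int demo_add_v2(int a, int b) { return a + b + 1; /* pretend the semantics changed */ }
    __asm__(".symver demo_add_v2, demo_add@@VERS_2.0");

An executable linked against the old library keeps resolving demo_add to the v1 code forever, while anything linked today gets v2 -- which is exactly the mechanism that saves you from duplicating a whole userspace in the Linux scenario below.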
So maybe Apple is doing worse (I don't know much about them), but a Linux system, if you are skilled enough, is something you can administer carefully enough to make any random old piece of application crap REALLY work on a modern install (you might need to duplicate a complete userspace to do that, though not always, thanks to symbol versioning; it is not necessarily huge when you do; and at least you can do it at all).
At one point AppKit and Foundation Kit supported four architectures (68k, x86, HP PA-RISC, and SPARC); most Cocoa-based applications were just a recompile away.
The main issue with the PowerPC-to-x86 transition was Carbon, which was never designed to be a cross-platform toolkit in the same way that Cocoa was. Given that Carbon 64 never got off the starting blocks and Carbon 32 was deprecated way back in 10.8, switching architectures will be less painful this time around.
The MacBook was never supposed to be a "power" machine. The whole purpose was a very portable laptop with long battery life, and it does that very, very well.
I would not call that giving up on performance... they just designed something for a purpose, and it fits that purpose well.
Dude, it has a 5-watt processor and a motherboard the size of a Raspberry Pi. I don't think their featherweight offering is meant to enter the performance fight.