We wanted a cross-platform C++ layer and a native Cocoa front end. Objective-C++ wasn’t a thing then, and having built a plain C shim previously I didn’t want to repeat the experience.
We built our own bridge by registering our C++ classes with the Obj-C runtime, generating selectors for all the methods so you could send messages to (carefully constructed) C++ objects using Obj-C syntax, or even subclass from C++ to Obj-C.
It was a pretty neat trick, but would’ve been difficult to port to the Obj-C 2 runtime.
> The 12.9-inch Liquid Retina XDR display has an IPS LCD panel supporting a resolution of 2732x2048 pixels for a total of 5.6 million pixels with 264 pixels per inch. [1]
"Liquid Retina XDR" is just a high end LCD. Micro LED isn't yet on anything they sell. All apple devices phone-size and smaller currently use OLED, and everything larger uses IPS LCD.
I thought Apple also used Mini-LED though, with quite a large number of zones compared to most competitors? At least in the 14- and 16-inch MacBook Pros?
Yes I’m aware that Mini and Micro-LED are different technology
Yes, but the main benefit of more zones is that you can make the display thinner (it is hard to design very thin light pipes for zonal displays). You also get some power-saving benefits, and less haloing when displaying very bright and very dark things next to each other.
I worked on 360 cameras / software for many years. Looking forward to the day when an article like this comes out for one of the products I worked on.
When you’re in the heart of it it’s so easy to take pride in the technical challenges you’ve overcome, but completely miss the realities of the marketplace.
I really love my 360 camera. I use it every day and fill my 128 GB memory card multiple times a week. I use it on my helmet when biking for work, and on a selfie stick when skiing so that it looks like I have a personal cameraman following me (example: https://www.instagram.com/reel/CnFtVsMJd39/?igshid=YmMyMTA2M... )
But yeah, they haven't taken off that much compared to traditional action cams. The frame rate and resolution after reframing is the blocker for many, and that's hard to fix with the currently available sensors. No one right now wants to gamble on spending much R&D on launching the next generation.
The problem is that you’re always at a sensor deficit, because you need so much more resolution to cover a wide field of view. Whenever sensors and chipsets would finally catch up, user expectations would grow to match. Now we’re up against physics: you can’t add resolution past the diffraction limit, and larger sensors are impractical for super-fisheye lenses.
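To make the diffraction-limit point concrete, here's a back-of-envelope sketch. The sensor size and f-number are my assumptions (a typical small action-cam sensor at f/2), not figures from the comment above:

```python
# Rough diffraction-limit estimate: past a certain pixel pitch,
# smaller pixels resolve no additional detail.
wavelength = 550e-9   # green light, metres
f_number = 2.0        # assumed fisheye aperture

# Radius to the first zero of the Airy disk: r = 1.22 * lambda * N
airy_radius = 1.22 * wavelength * f_number   # ~1.34 micrometres

# Assumed 1/2.3" sensor dimensions (common in action cams), metres
sensor_w, sensor_h = 6.17e-3, 4.55e-3

# Crude ceiling on useful pixel count if pitch matches the Airy radius
max_px = (sensor_w / airy_radius) * (sensor_h / airy_radius)

print(round(airy_radius * 1e6, 2))   # pitch in micrometres
print(round(max_px / 1e6, 1))        # rough megapixel ceiling
```

Under these assumptions the useful resolution tops out in the mid-teens of megapixels per lens, which is why "just add pixels" stops working for a dual-fisheye 360 camera.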
I mean this is an intriguing rationalization, but I don’t buy it.
Buying votes simply doesn’t cost this much, by orders of magnitude. Burning your empire to ashes as a loyalty test doesn’t hold water either: It’s politicians that partake in loyalty tests, not donors.
It’s interesting: my recollection of that period was I rarely stored anything on the desktop. The file system was so much smaller and easier to handle that I stored things in folders and didn’t have trouble finding them again. Not until OS X did I pick up the desktop-as-staging-area habit because navigation was so painful.
The Finder aided this by being spatial. If you moved a window and then closed it, it stayed there the next time you opened it. If you moved a folder or file icon around, it similarly stayed where you put it when you next opened the window.
But then how do you easily navigate / launch apps? Dig thru your folder trees each time in Finder? Most apps I’m finding have folder structures with a bunch of auxiliary files. It’s not as seamless as a dock or even a Start menu.
> But then how do you easily navigate / launch apps? Dig thru your folder trees each time in Finder?
Back in the day, the Finder used to remember whether folders were open on the desktop or "put away". It was a direct, one-to-one mapping between your spatial awareness of objects in the real world and the representation of objects on the computer. Things were left exactly where you put them on-screen, just like in the real world, so it was easy to find your applications: they would be right where you left them.
But you don't need to launch applications; you just double-click on documents. Mac OS remembered which program was associated with each document -- not each document type or extension, each document. Each file had distinct type and creator codes associated with it, so that a JPEG created in Photoshop would be opened in Photoshop, and a JPEG downloaded off the web might be opened in a browser, when double-clicked.
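The per-file association described above can be sketched in a few lines. The filenames and lookup function here are illustrative, not a real API, though '8BIM' and 'MOSS' were the actual creator codes used by Photoshop and Netscape:

```python
# Sketch: per-file creator codes vs. per-extension mappings.
# Classic Mac OS stamped a 4-character type code and creator code on
# every file, so the opening app was a property of the file itself.
files = {
    "portrait.jpg": {"type": "JPEG", "creator": "8BIM"},  # saved by Photoshop
    "download.jpg": {"type": "JPEG", "creator": "MOSS"},  # saved by Netscape
}
creators = {"8BIM": "Photoshop", "MOSS": "Netscape"}

def open_with(name: str) -> str:
    """Resolve the launching app from the file's own creator code."""
    return creators[files[name]["creator"]]

print(open_with("portrait.jpg"))  # Photoshop
print(open_with("download.jpg"))  # Netscape
```

Note that both files share the same type ("JPEG") and extension; only the per-file creator code differs, which is exactly what an extension-based mapping cannot express.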
Mac OS, pre-X, was quite simply the best UI ever designed. It took advantage of pioneering research into human-computer interaction and the underlying psychology of how humans relate to objects in a way that nobody today -- not even Apple -- is doing. It is what all UIs should aspire to be like, even today.
I think the whole spatial desktop metaphor is overhyped. Rather than go point by point on this, I encourage you to use OS 9 for a week or two to actually do work in. Relive using it versus just from memory. Then I’m curious to see if you still feel that way. My guess is you’ll realize it’s actually not all that.
More often than not I was opening documents, not apps. But with spatial windows in Finder I used to just arrange my Applications folder the way I wanted (sometimes using Aliases) and have it open on the left of my screen, then have my Documents on the right. I kept a row of Desktop icons visible with an In and Out box.
There wasn’t a default folder structure in the early days. Your hard drive had a “System” folder with merely a few hundred files in it (in a hierarchy) that you could ignore day-to-day. Otherwise the whole drive was your playground.
Oh, it can be. It's not /the/ way, but it is a way. Often, aliases (shortcuts/links) to favourite apps ended up there, or on the desktop, or in one of the various launcher utilities available.
But now we're just adding 3rd party software to change the fundamental UX. The GP was talking about how much better it was back then, but if you need 3rd party tools to make it work, then it really wasn't.
OS 8 also allowed you to drag any folder to the bottom of the screen to create a pop up tab, and switch any folder view to the At Ease view, so you could do the same thing without any third party software. But DragThing is just awesome.
As it is likely that the number of launch customers for this node is one (continuing the trend), this may be necessary simply to reach agreed upon volumes.
APIs for fractional scale factors and assets existed in early Mac OS X versions (10.4, I think). I remember building 1.25x, 1.5x, and 2x assets for an application at the time. These were never shipped to consumers, for a few reasons.
There were intractable issues with window spanning across displays with different scale factors. Ultimately this was resolved by not allowing window spanning on the platform anymore.
This display-spanning issue exists in Windows, but I think Microsoft made the right trade-off by just allowing that one thing to behave strangely. The other issue is pixel-based UIs that don’t scale fractionally without blur, but at this point that doesn’t affect any software I use except little utility programs that I’m not staring at for very long anyway.
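The fractional-scaling blur has a simple arithmetic cause, which this sketch (my illustration, not anyone's actual rendering code) makes visible: a one-logical-pixel feature maps to edges that only land on whole device pixels when the scale factor is an integer.

```python
# Why pixel-based UIs blur at fractional scale factors: a 1-logical-pixel
# line spans device-pixel coordinates [x*scale, (x+1)*scale). At integer
# scales both edges are whole device pixels; at fractional scales one
# edge falls mid-pixel and must be anti-aliased into a grey smear.
def device_edges(logical_x: int, scale: float) -> tuple[float, float]:
    return logical_x * scale, (logical_x + 1) * scale

print(device_edges(3, 2.0))   # (6.0, 8.0) -> edges on the pixel grid, crisp
print(device_edges(3, 1.5))   # (4.5, 6.0) -> half-pixel edge, blurry
```

At 2x every logical coordinate lands on the device grid, which is why integer "Retina" scaling stayed sharp while 1.25x/1.5x assets did not.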
Wasn’t there an early exploration of using vector UI elements some time around then? I have a vague recollection of it being found as a partly implemented feature.