One of the things that amazes me about NeXTstep is that it brought together so many technologies developed at Xerox PARC. Unlike the original Mac, where the dominant PARC influence was the GUI, NeXTstep absorbed other PARC-inspired technologies as well: Ethernet networking, Display PostScript (while PostScript is an Adobe product, its creators were ex-PARC researchers who had worked on a similar page description language called Interpress), and dynamic, late-binding object-oriented programming, albeit in the form of Objective-C rather than Smalltalk. These technologies created a solid foundation that is still relevant today; macOS is still more or less shaped by NeXT despite gradual changes.
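To make the "late binding" point concrete, here's a minimal sketch in modern Objective-C (Foundation here is the OpenStep-era kit, so it slightly postdates the earliest NeXT releases, but the dispatch model is the same): the selector is looked up when the message is sent, not at compile time, just as in Smalltalk.

    // Minimal late-binding sketch; build with: clang -framework Foundation latebind.m
    #import <Foundation/Foundation.h>

    int main(void) {
        id obj = @"hello";  // the dispatcher doesn't care about the static type
        // Selector constructed at runtime, e.g. from a string in a nib or config
        SEL sel = NSSelectorFromString(@"uppercaseString");
        if ([obj respondsToSelector:sel]) {    // discoverable at runtime
            id result = [obj performSelector:sel];
            NSLog(@"%@", result);              // prints HELLO
        }
        return 0;
    }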
Of course, there’s also the Unix foundation of NeXTstep, whose importance cannot be overstated, especially during the rise of the Unix workstation in the 1980s and 1990s. This gave NeXTstep (and later Mac OS X) a user base who could take advantage of PARC-inspired environments while also tapping into the vast Unix ecosystem.
Yes, the Xerox PARC and Unix philosophies are different and sometimes contradictory. However, it’s impressive how NeXT was able to bridge these worlds; the execution was great, and once again macOS serves as a living legacy.
As an aside, personally I’m curious about an alternative timeline where NeXT decided to go all-in on Smalltalk, building a “Smalltalk OS” with no Unix foundation, similar in philosophy to the Lisp machines of the era. There was certainly research going on in the 1980s to improve the speed of Smalltalk; part of this was continued with the development of Self by David Ungar and other researchers. NeXT probably still would’ve had the same market challenges, and the lack of a Unix foundation may have further hurt adoption. But from a purely technical standpoint a polished Smalltalk desktop would be amazing to see and use.
What amazes me about NeXTStep is that you could actually run that desktop, Display PostScript and all (in grayscale, admittedly, but with full-motion window dragging and everything else), on that display, with 8MB of RAM. Oh, and on a 25MHz '040.
Oh, yes, it was tight. But it was eminently usable (including Project and Interface Builder) with 20MB of RAM.
Heck, I think XClock is challenged to run in 20MB today.
NeXT and its successor, the modern Apple, have done incredible things with low amounts of resources. It feels nothing short of miraculous that the first few iOS devices could do so much in just 128MiB of RAM. No Android device has ever managed with that little!
I would say it's more the power of incentives, really. Android has historically never had the performance investment that Apple has put into their platform. If anything, early Android was much closer to iOS in terms of hardware specs and diverged quickly as they were unable to keep their resource usage down.
I wouldn't call the JIT, AOT and PGO efforts nothing, nor stuff like Project Butter.
On the other hand, going with a basic assembly-coded interpreter for Dalvik, when Nokia and Sony Ericsson already had JIT compilers for Symbian, while claiming Android was better? No comment.
Likewise when comparing Symbian Series 60 3rd edition phones with Android.
Symbian and Windows Phone did just fine with C++, and also with .NET.
The power of AOT-compiled languages instead of a plain interpreter.
Apparently the original Android, as bought by Google, was planned to use JavaScript; then they pivoted to Java, and it took years until they got any kind of good AOT/JIT story (starting with Android 5).
This is less of an issue today (although there aren't very many jailbreaks to take advantage of it) but back then it was pretty clear the device ran with very little RAM to spare. The best tweaks were fairly simple when it came to RAM usage but true multitasking or other things that stayed resident generally had a massive hit on system performance.
I mean the concept of multitasking takes basically no overhead. The question is what resources your "average app" takes. On NeXTSTEP those requirements were quite low.
And the '030 varieties performed admirably well, even after the '040 & '040 Turbo varieties appeared. Our school had a bevy of cubes, and those were the primary workstations for three years until they opened a lab full of slabs before our senior year. The best part was having our user directories on a main server, and we could log in to any campus box and have access to our account. 30 years later & I still can't easily do that with the Macs on my home network.
> Heck, I think XClock is challenged to run in 20MB today.
Assuming you mean the 'xclock' binary you'd find in Xorg installations, it seems to use about 8MB of RES memory. But keep in mind that it drags in a bunch of libraries (objdump shows Xaw, Xmu, Xt, X11, Xrender, Xft, xkbfile and the math and C libraries as direct dependencies), those drag in their own (ldd shows 26 libraries), and all of that adds to the memory overhead for their own purposes even if in the end xclock didn't use them. Chances are an xclock reimplementation that talked the X11 protocol directly, used only the minimum functionality needed to draw the clock hands, and was statically linked against some minimal C library without any external dependencies would use much less memory.
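Something along these lines, as a rough sketch (this still goes through Xlib, so it sits one shared library above the theoretical raw-protocol minimum, and the timer-less redraw-on-expose loop is a deliberate simplification):

    /* Hypothetical minimal clock; compile: cc miniclock.c -lX11 */
    #include <X11/Xlib.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 120, 40, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);
        GC gc = XCreateGC(dpy, win, 0, NULL);
        for (;;) {                      /* redraw only on Expose events */
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                char buf[32];
                time_t t = time(NULL);
                strftime(buf, sizeof buf, "%H:%M:%S", localtime(&t));
                XDrawString(dpy, win, gc, 10, 25, buf, (int)strlen(buf));
            }
        }
    }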
Though even that would probably still feel like too much, since how much memory a program uses on a Linux system depends a lot on what is already there, including the hardware and kernel drivers.
(which is also why these articles comparing memory usage between various desktops tend to make zero sense as they measure the entire system memory usage, the largest part of which is outside the control of the desktop environment in the first place and in most cases will vary between different computers even with the same distro)
Wmaker and Emacs? You did a custom kernel build, right?
For that to be something to brag about, you must have used XEmacs, I guess, or an X build of Emacs; that would be a "big" accomplishment. UXTerm plus Emacs under the CLI was "big" but manageable; the X build (even the Athena or Lucid one) wasn't a light thing at all.
Not quite, but close. 12 megabytes is still much, much more than is easily explainable. Here is xclock on a 36 megabyte m68030, hosted on the same machine (so be patient).
On the other hand, modern OSes do a pretty good job of hiding resource abuse with excellent memory management.
The really amazing thing was how, after SJ returned to Apple, the Apple dev keynotes were for a couple of years just re-hashes of previous NeXT keynote addresses.
It still kills me that Adobe reneged on their promise of a free Display PostScript license, thus killing Rhapsody and "Yellow Box" for Windows. (And it's still hilarious to me that that colour was named for Bill Gates' rude response when asked if Microsoft would develop software for NeXT: "Develop for it? I'll p** on it.")
As for Rhapsody and "Yellow Box" for Windows dying, I'm not sure it would have made a difference; OpenSTEP also did not work out that well, not even for Sun, other than being an influence on Java and Java EE (originally an Objective-C framework).
By the way, my graduation thesis was porting an OpenGL based particle engine from Objective-C/NeXTSTEP into C++/Windows 95, because the university department was getting rid of their Cubes, at this time there were no hopes for NeXT.
Yes, but there could have been new life if all the NeXT devs had gotten their promised entrée into the Windows market as scheduled. Anderson Financial Services in particular were looking forward to a big payout from selling PasteUp.app licenses.
The notable NeXT code considerations at Adobe are:
- they lost the source code to Glenn Reid's nifty "TouchType.app"
- they couldn't be bothered to revive the NeXT source code for Altsys Virtuoso, which Macromedia FreeHand was based on
It was a bit wild to see old demo videos of Project Builder, Interface Builder, and Cocoa Bindings under NeXTSTEP functioning more or less exactly like they would when I first encountered them in OS X around a decade later. It changed how I thought about the state of computing in the late 80s/early 90s, which had previously been shaped by my first computer use on System 7.5 in 1996.
Side-by-side screenshots of NeXTSTEP and Windows are particularly jarring. NeXTSTEP came out in 1989, and its contemporary was not even Windows 3.1, but 2.11.
Don't forget though that NeXTSTEP was also running on workstation-class hardware and priced accordingly (~$6500), vs. Windows running on late 80s commodity PCs (probably ~$1500-$2000) with lesser specs. For that kind of cost difference NeXTSTEP had better have looked and performed better.
A better comparison would be the Motif/CDE GUI that commercial Unix workstations were using at the time on workstation-class hardware. I think NeXTSTEP still wins on style, particularly since X desktops were a garish mix of raw X, Xt toolkit and Motif apps all coexisting.
That's a terrible comparison. Windows 2.x ran on much cheaper and weaker hardware. It required only 512KB of RAM and ran comfortably in 1MB, while the smallest amount of RAM NeXT computers came with was 8MB.
I think it gets even better if you integrate Smalltalk and Unix (or rather its successor, Plan 9), instead of using one as the foundation. Let's call it "Plan A from Userspace".
Start with a universal hierarchical namespace that subsumes memory, disk, and the network, integrated into the programming language[1]. But then don't do POSIX byte-oriented API, but rather a composable REST-like object-oriented API[2]. Add a variant of pipes/filters that doesn't just extend from bytes to (flat) objects, but can also handle hierarchies and polymorphism efficiently[3]. Combine them all: http://objective.st/Publications/
> Of course, there’s also the Unix foundation of NeXTstep, whose importance cannot be overstated, especially during the rise of the Unix workstation in the 1980s and 1990s. This gave NeXTstep (and later Mac OS X) a user base who could take advantage of PARC-inspired environments while also tapping into the vast Unix ecosystem.
A UNIX ecosystem built on top of Mach.
Both macOS and iOS continue to use a BSD-derived POSIX compatibility layer on top of a Mach microkernel - just like NeXTSTEP did.
No, not quite. The macOS/iOS kernel is extremely Frankenstein-y (not meant in a derogatory way), with the majority of the codebase being extremely Apple-specific and bits and pieces originally taken from Mach and BSD. In particular, there is no microkernel, and there never was. Mach itself was never used as a true microkernel in a commercial setting, with the first such implementation, Mach 3, showing significant real-life performance problems. As such, there is no "BSD on top of a Mach microkernel". It is and has always been a fully monolithic kernel with some subsystems originally derived from Mach (Open Group's Mk 7.3), some from BSD (FreeBSD 5), and the rest developed in-house over the years. Even the layerings aren't always clean, with "on top of" often morphing into "alongside" or "intertwined with".
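One concrete way to see the intertwining: in a single userland process on macOS, the BSD-derived and Mach-derived interfaces sit side by side as peers rather than as layers. A minimal sketch (the specific calls are just illustrative picks from each family):

    /* Build on macOS: cc layers.c */
    #include <stdio.h>
    #include <unistd.h>        /* BSD/POSIX side: getpid() */
    #include <mach/mach.h>     /* Mach side: task ports */

    int main(void) {
        pid_t pid = getpid();                 /* BSD process identity */
        mach_port_t task = mach_task_self();  /* Mach task identity */
        printf("BSD pid %d lives in Mach task port %u\n", (int)pid, task);
        return 0;
    }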
What is interesting to me about Mach is that now, 40 years later, it's returning to its roots of hosting multiple OSes on a single hardware architecture. Apple being able to design the M-series chips to paper over Mach's deficiencies and leverage its strengths is something I find very exciting.
Originally it was meant to be the foundation computing layer for a campus full of devices.
The BSD choice dates back to the original ARPA (DARPA?) grant: they wanted the Mach microkernel with a BSD interface to prove the viability of the Mach concept.
The first Smalltalk JIT was written for the 68020, so the NeXT hardware would have been suitable.
EDIT: OTOH, The Tektronix AI Workstations had already been on sale for a while by that point and had not been all that successful. They ran Smalltalk on top of a UNIX-like OS, not directly on the HW.
> As an aside, personally I’m curious about an alternative timeline…
This sorta happened at Apple (and IBM and Motorola) with the OpenDoc/Taligent initiatives. With the return of the Jedi, however, Master Steve axed OpenDoc at Apple, much to the chagrin of those involved.
Some of that hostility became internet famous, I am sure most of you have seen the video from the conference back then.
It is interesting how (in the video) Steve explains the decision: the Smalltalk way doesn't fit into an overall cohesive vision that allows you to serve the majority of people.
I think he got it right. The Mac/Win way of apps is the right way to slice and dice these issues. I do use Emacs every day. I do feel the Smalltalk way and I like it. But it's not the "right" way; it just isn't.
The best concept of integrating a terminal/shell-driven CLI environment with a GUI, together with on-the-fly manipulation of ENV variables and objects built on them, was realized by the Amiga. Unfortunately Commodore went bankrupt.
I think a Smalltalk-based version of the NeXT computer would've killed it as an internet device. With the Unix base and C compiler, NeXT users could join Usenet et al. when the device was introduced with a few downloads and make invocations (after the initial diehards did the grunt work of porting).
Nope - if NeXT arrives as a Smalltalk-based device, much open-source/internet software does NOT get ported to run on NeXT. A few years later, TBL picks some other workstation to develop WWW.
The fancy names and the layering make it a little tricky to understand. The core of the imaging model is called Quartz. It provides support for rendering 2D shapes and text. Its graphics rendering functionality is exported through the Quartz 2D API, which is implemented in Core Graphics. Quartz is also used for window management: the Quartz Compositor, a lightweight window server, is implemented partly in the WindowServer application and partly in the Core Graphics framework. Quartz 2D uses PDF as the native format for its drawing model. In other words, it stores rendered content internally as PDF, which facilitates features such as automagic PDF screenshots, export/import of PDF data natively, and rasterizing PDF data. Quartz 2D also does device-independent and resolution-independent rendering of bitmap images, vector graphics, and anti-aliased text. NEXTSTEP's window server was based on Display PostScript, as was Sun's NeWS (~1986).
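To make the drawing-model point concrete, here's a minimal sketch against the public Quartz 2D (Core Graphics) C API, rendering directly into a PDF context; the shape, colors, and output path are arbitrary choices for illustration:

    /* Build on macOS: cc pdfdemo.c -framework CoreGraphics -framework CoreFoundation */
    #include <CoreGraphics/CoreGraphics.h>
    #include <CoreFoundation/CoreFoundation.h>

    int main(void) {
        CFURLRef url = CFURLCreateWithFileSystemPath(NULL,
            CFSTR("/tmp/out.pdf"), kCFURLPOSIXPathStyle, false);
        CGRect page = CGRectMake(0, 0, 612, 792);   /* US Letter in points */
        CGContextRef ctx = CGPDFContextCreateWithURL(url, &page, NULL);
        CGPDFContextBeginPage(ctx, NULL);
        /* The same CGContext* drawing calls used for windows work here */
        CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0);
        CGContextFillEllipseInRect(ctx, CGRectMake(200, 300, 200, 200));
        CGPDFContextEndPage(ctx);
        CGPDFContextClose(ctx);
        CGContextRelease(ctx);
        CFRelease(url);
        return 0;
    }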
NeWS wasn't based on DPS. It's a full display server based on the PostScript Red Book with some extensions (canvases, lightweight processes and sync primitives, events, classes, garbage collection). This lets you write applications in this extended PostScript, or in other programming languages using a preprocessor, such as cps for C or lps for Scheme-48 and Allegro LISP, that generates PS snippets which get sent to the server.