Hacker News
A Story about Symbolics Lisp Machines (kremlin.enterprises)
110 points by kremlin_ on Sept 21, 2015 | 102 comments


"lisp machines were made by (pretty much) a single company (symbolics)"

Actually, there was a rivalry with another company that's really worth mentioning, LMI (see https://en.wikipedia.org/wiki/Lisp_machine#Commercialization...)

"the amount of engineering effort that went into these machines far outweighs what you might get in a modern computer" - "the computer in front of you was probably purchased for a sum between $100 and $1000, and it represents $100 to $1000 worth of engineering. the kind of computer representing hundreds of thousands to millions of dollars of engineering are, well, pretty slick"

That's a pretty ridiculous comparison. The design of the intel chip in your cheap computer, and the fab process to manufacture it, really are worth billions. It's just automation and scale that brings the price down to a couple hundred.


It's such a ridiculous line. Even putting aside that many laptops and desktops cost over $1000, Intel spends over $10 billion a year on R&D and machines.[0] Add to that the kind of engineering that goes into the physical form of a ThinkPad or a MacBook Air, both of which are far more impressive than a stationary monolith. Then add the kind of engineering that goes into a modern Ubuntu, Windows or Mac OS X. Someone estimated as far back as 2008 that just the Linux kernel alone represents over 1 billion dollars of R&D.[1]

[0]http://fortune.com/2014/11/17/top-10-research-development/

[1]http://radar.oreilly.com/2008/10/linux-kernel-worth-1bn.html


I think he is talking about something different.

The difference is like this: A 5 dollar plastic keyboard vs. a handmade $1000+ keyboard with mechanical switches.

The very first Lisp Machines were handmade $100000 pieces of hardware. With wire-wrapped boards made of zillions of low-integrated chips and various other components. Huge power supplies for several KW worth of electricity. Really everything was specially made, like 25m long console cables with signals for monitor, mouse, sound and keyboard.


Yeah... the author should read Hackers before writing another word about the history of computing; there is hardly a correct sentence in that blog post - http://www.amazon.com/Hackers-Computer-Revolution-Anniversar...


I'm not sure that there's a correct sentence in "Hackers." It got enough details wrong that I used the book as a source of stories that I needed to look into to see if there was a kernel of truth to them (plus, the editor apparently didn't know how to edit). There is enough about Lisp Machines available, though:

* http://www.gnu.org/gnu/rms-lisp.en.html

* http://ergoemacs.org/misc/Daniel_Weinreb_rebuttal_to_stallma... (mirror of a blog post by Weinreb; the mirror was made shortly after Weinreb died)

* http://www.dreamsongs.com/Files/Hopl2.pdf


I'm reading this book for a class right now and, personally, haven't enjoyed it so far. It treats the MIT hacker crowd as though these people are/were gods, and I think does a lot to dehumanize them. It just reads like a fluff piece that skips over technical details in favor of what feels like blind infatuation.


The writing is so bad that it distracts from the story. It once mentions that somebody had access to "the best computer in the world known to man." How many computers exist that aren't known to man? I don't blame the writer in that particular case, but the editor really should have known better.

In another case, Levy tells a story about a chess program with a bug in it. If I remember correctly, the program was in check, and moved a knight that didn't get the program out of check. In other words, it made an illegal move. Levy says the programmers were in awe of this program, wondering if it was inventing new rules to increase its enjoyment of chess. I have a hard time believing the programmers truly thought they had created a self-aware program that would modify the rules of a game to increase its own enjoyment. I'm sure they knew a bug when they saw it. Levy, on the other hand, apparently did not, or thought that his audience would overlook such a silly statement. I do blame the writer for that one, and wonder why the editor didn't flag it as well.

There are good books on computer history. And there are good books about computers from the early days (written in the early days). "Hackers" is not one of those books.


P.J. Plauger had an article once about a co-worker who coded a chess program, which had two serious flaws: he got the search algorithm wrong, so that it was very easy to beat; and he didn't program it to lose, so that it would start to add pieces back in when it was about to. Plauger wrote: if you think kids enjoyed beating it, you should have seen their glee when they got it to cheat.


> How many computers exist that aren't known to man?

That's a tough question to answer. The universe is a big place, so I'm going to go with a lot.

Although, I do understand your annoyance. The original "in the world" phrasing is stupid.


LMI wasn't even the big competitor. Texas Instruments and Xerox also produced dedicated Lisp machines.


Great acquisition! I'm jealous.

Symbolics stuff isn't as rare as the post claims; you can still buy the Symbolics OpenGenera LISP environment for running on top of DEC's UNIX on an Alpha. [0]

In addition there's a port to x86-64 Linux floating around that lets you run it on top of a 2007-era Ubuntu in a VM [1]. There was also the MacIvory series of expansion boards, an LSI-based Symbolics machine that you could fit in a Mac and run OpenGenera in a window.

Dave Schmidt at Symbolics was still selling hardware (a complete system could be had for under a couple grand) up until a couple of years ago; last time I contacted him he said hardware sales were now stopped so that what's remaining can be used for service/support contracts.

I bought a Symbolics keyboard from him before that happened and wired up an adapter using a Teensy to give it a USB interface.

I know of a company in College Station, TX that used a number of Symbolics systems when they were new, and the owner refuses to get rid of them because they cost so much new. A friend that worked there until recently ranted one day that they were using one of the chassis as a stand for the coffee machine, and I wanted to cry.

(disclaimer: I own/run http://www.lispmachine.net)

[0] http://www.symbolics-dks.com/

[1] http://www.cliki.net/VLM_on_Linux [2]

[2] http://weblog.mrbill.net/archives/2008/05/18/finally-got-ope...


In regards to the x86-64 port, you can actually use it under modern Linuxes if you use VNC as the display host. The issue was an incompatibility between XCB and Xlib causing the X server to lock up; VNC has no such issues.


> you can still buy the Symbolics OpenGenera LISP environment

I don't think this is still the case. I sent an email to their sales address and heard nothing. I also tried contacting their engineer, and even found the MIT email address of the current copyright holder; still nothing.

I also reached out to Peter Paine, but he only sells Symbolics hardware.

Is it still possible to legitimately obtain the software?


I've always gotten a response back from David Schmidt within a couple days of contacting him. Try again?


Check out #lispm on freenode for a current effort to bring a newer version of VLM up on native Linux/FreeBSD.


Is it better than #lisp? Spend just half an hour in #lisp and you'll see more assholes than a Turkish customs agent.


Symbolics LISP machines were not really all that great to use. I used the refrigerator-sized one briefly, but it was more trouble than it was worth. We at Ford Aerospace switched over to Franz LISP on Sun 2 machines for real work.

Symbolics, the company, made unreliable hardware. Too much wire wrap. 1983 was late to be introducing a wire-wrapped CPU. The Sun 2 and the Symbolics 3600 both came out in 1983, but the Sun 2 was a 680x0 machine with a printed circuit backplane and far less hand wiring, fixable by swapping boards. Symbolics machines had to be fixed by on-site Symbolics repair people, and the service was both poor and arrogant. This was at the height of the false AI boom ("expert systems") of the 1980s, and some people (I could name names on the Stanford faculty) were saying that strong AI was right around the corner. For a while, Symbolics machines were corporate status symbols. When that bubble popped, so did Symbolics.

Symbolics' original networking concept was that all machines would share some huge memory address space, with shared memory over Ethernet. That never really happened. They also really did have multi-hour garbage collections at first, until they got the garbage collector and the virtual memory to understand each other.

Most of those fancy buttons on the keyboard didn't do much of anything. SHIFT, CTL, TOP, META, SUPER, and HYPER were all shift keys, so there were 64 possible shifts for each key, any of which could be bound in EMACS. This was cool, but not useful.
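The arithmetic behind "64 possible shifts" checks out: six independent modifier keys give 2^6 subsets that can qualify a keypress. A quick sketch (key names as listed in this comment):

```python
from itertools import combinations

# Six modifiers, each independently held or not held.
mods = ["SHIFT", "CTL", "TOP", "META", "SUPER", "HYPER"]

# Count every subset of modifiers, from "none held" up to "all six held".
total = sum(len(list(combinations(mods, k))) for k in range(len(mods) + 1))
assert total == 2 ** len(mods) == 64
```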

Which is the verdict on the Symbolics machines - cool, but not useful.


> Franz LISP on Sun 2 machines for real work.

The Sun 2 was a tiny machine in comparison: 1 MB RAM, 4 MB max, 16 MB virtual memory.

> Sun 2 was a 680x0 machine

The Symbolics 3600 was also a 680x0 machine. ;-) It used one as its frontend processor.

> Symbolics' original networking concept was that all machines would share some huge memory address space, with shared memory over Ethernet.

That was never a part of the Symbolics operating system. Not sure where you got that from.

> until they got the garbage collector and the virtual memory to understand each other.

Which SUN never got.

> Most of those fancy buttons on the keyboard didn't do much of anything.

Actually they did.

> SHIFT, CTL, TOP, META, SUPER, and HYPER were all shift keys, so there were 64 possible shifts for each key

There was a logical system behind it and different shift keys were used for different purposes. Not all combinations were used. It's not too different from any modern computer. My mac has command, option, control, fn, shift keys. Five. The Symbolics keyboard had shift, control, meta, super, hyper. Five.


Early Symbolics 3600 machines had 256K words (36 bits) of memory. Later machines doubled this. Early Sun 2 machines had 1MB of RAM. Later machines had a minimum of 2MB, needed due to OS growth. So they were roughly comparable in memory capacity.


I used a Symbolics 3600 with much more memory, 4 MWords IIRC. My own 3640 has 4 MWords. A word was 36 bits wide.

> So they were roughly comparable in memory capacity.

Sure not: you could use a 3600 with 20 MB RAM and 150 MB virtual memory.


In the preceding generation of CADR Lisp Machines built by the MIT-AI lab, I seem to remember that half a megaword of 32 bit words was not uncommon, and at least one had 2 megawords.


The CADR could address 64MB of virtual memory, a fair bit more than a Sun-2.


24 bits, word addressed. Then 25 bits for the LMI LAMBDA (a microcode hack that no doubt incorporated the two-space copying GC), and 28 bits for the Symbolics 3600 family. In all cases I'm pretty sure too little ^_^.

There were a lot of Moore's law skeptics back then....


LMI had a distributed virtual memory system.


Symbolics used an object-database for that.


I'm only just now starting to get on the Lisp bandwagon a bit... I've been working my way through Practical Common Lisp bit by bit (no pun intended) and I have to say, I'm really liking this so far. I could see really getting into Lisp in a big way.

But... could one of you guys who knew the Symbolics machines help me out with something? What exactly made them so special, hardware wise? I mean, Lisp seems to run just fine on Linux on x64 hardware now. What advantage does one get from running Lisp on hardware which is custom tailored to it? Or would that even make sense nowadays IF something like the Symbolics Lisp Machines were still made?


I don't think Lisp ran well if at all on early microprocessors like the 8-bit 8086. The Lisp community was, then, as now, committed to Lisp being the One True Way, and centered around the MIT AI lab. The PC revolution was happening, so naturally they built their own CPU, OS and workstations around Lisp. Half the MIT AI lab ended up going to Symbolics, and half to LMI. The West Coast hackers out of Berkeley and Stanford were more into C and Unix. Once the 68000 and the first Sun (especially), also Apollo, SGI workstations came out, they performed well at a broad range of tasks including even Lisp. C compiled software often outperformed similar software written in Lisp. And when volume ramped up (for 68000-based Macintosh as well) they had much better cost/performance compared to relatively low-volume Lisp machines. (like $30,000 for a decent Sun workstation vs. something like twice as much for a Symbolics)

See Hackers, by Steven Levy, also some stuff in Wikipedia

https://en.wikipedia.org/wiki/Lisp_Machines

https://en.wikipedia.org/wiki/Symbolics

Had a summer job doing UI work on a Symbolics machine at MITRE, which was kind of like working in the future in summer of 1985... object oriented development, megapixel WIMP interface. I don't think Sun came close to that until UIM/X, Motif, PowerBuilder in early 90s, Visual C++, VB on PCs...like a 10-year head start.

But python especially feels like the revenge of Scheme and Lisp in the sense of being the same sort of dynamic interpreted language and environment where code is data, all the data and code (including libraries) are very accessible and amenable to inspection and tweaking.


Python is nowhere near Lisp in code as data. Also, I think what's important is to marry the Lisp programming language and the Lisp Machines. The hardware was built with Lisp in mind.

I am a fifth of the way through a 1991 book called "The Architecture of Symbolic Computers" by Peter M. Kogge, and it is like my eyes have opened wide. No more splitting hairs over Ruby, Python, Clojure, Linux, OS X and Windows: give me a machine tailored for the software running on it. The von Neumann register-based machine cannot efficiently serve all; it bottlenecks. People talk about Linux as the break away from the Windows and Mac OS crowd, but let's face it, it is all the same thing running on von Neumann architecture hardware.

There are people trying to build machines from scratch to run their own OS on. I hope somebody succeeds. I don't think we need to go back to Lisp Machines, but there are a lot of gold nuggets there to mine and re-use. Technology doesn't age linearly. Sometimes things are forgotten, and need to be rediscovered for progress to jump again.

I work in Common Lisp (SBCL, LispWorks), Racket (Scheme), and I play with PicoLisp (incredible for its size). Shen is something I am trying to wrap my head around (optional typing, optional laziness, functional, pattern matching, and there are implementations of the small instruction set it runs on for many common languages - CL (SBCL), Java, Ruby, Python, JavaScript, Haskell, Clojure...).

Here's to the coming hardware revolution!


Sun was just spinning up - but remember that first Mac (3/4M pixel WIMP interface) had become available the year before for $2400 (and I was making a living in '84/85 porting Unix to 68ks as fast as we could, every man and their dog had built a workstation in their garage)


The early Macs had tiny displays, something like 512x342. The first Macs with a resolution of 1024x768 (¾ megapixel) wouldn't appear until the 90s.


How is python homoiconic?


He didn't say it was.


> But python especially feels like the revenge of Scheme and Lisp in the sense of being the same sort of dynamic interpreted language and environment where code is data

He implied it by saying that code is data in Python. Being able to inspect code is not the equivalent.

It is also incorrect to refer to Scheme and Lisp as just dynamic interpreted languages; while they are dynamic and tend to have an interpreter they have also had compilers for quite some time if not from near the very beginning (1957 according to this http://www.softwarepreservation.org/projects/LISP/ibm/Blair-...).
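To make the "code is data" distinction concrete: in Lisp, a program is literally a list you can take apart and rebuild before evaluating it. A toy Python sketch, with nested lists standing in for s-expressions and a made-up mini-evaluator `ev` (not anything from a real Lisp):

```python
# ev walks a nested-list "s-expression" and evaluates it.
def ev(form):
    if isinstance(form, (int, float)):
        return form
    op, *args = form
    vals = [ev(a) for a in args]
    if op == '+':
        return sum(vals)
    if op == '*':
        result = 1
        for v in vals:
            result *= v
        return result
    raise ValueError("unknown operator: %r" % op)

expr = ['+', 1, ['*', 2, 3]]   # code, held as a plain list
assert ev(expr) == 7
expr[0] = '*'                  # rewrite the code like any other data
assert ev(expr) == 6
```

In Lisp this manipulation is native and the rewritten form is a first-class program; in Python you can inspect code, but programs aren't ordinarily built out of the language's own data structures like this.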


He didn't, but it was implied in "But python especially feels like the revenge of Scheme and Lisp in the sense of being the same sort of dynamic interpreted language and environment where code is data".

Python doesn't feel to me like revenge of Lisp. It's more like having a shard of a powerful magic artefact; it has some power, but a whole artefact has qualitatively more.


"I don't think Lisp ran well if at all on early microprocessors like the 8-bit 8086"

The 8086 was a 16-bit CPU, but you are correct that Lisp didn't run well on it.


my bad lol ... was thinking 8088, Z80


Yep, the Z80 was an 8-bit CPU, but the 8088 was regarded as a 16-bit CPU. It did have an 8-bit external data bus (the 8086 had 16-bit) to make it cheaper, but the register sizes and instructions were the same as the 8086.


It's not quite as straightforward as that with the Z80 either - it has 16 bit addressing, a few 16-bit registers (which can also be treated as two separate 8-bit registers), and instructions for manipulating 16-bit numbers. The 16-bit operations feel a bit like second-class citizens, which I suppose is why it's still regarded as an 8-bit CPU, but they are there.


Hey, the 80186 was 8-bit, so maybe you were thinking of that?


The 80186 was also a 16-bit chip with a 20-bit address (like the 8086).


8-bit bus interface?


The 8088 and 80188 had 8-bit buses, but I haven't met anyone who claimed that made them 8-bit chips. The narrower bus didn't really affect programmers; it was a maneuver to make the chip a bit cheaper. The 68008 was the same way, and was used in some Sinclair designs.


Right, that makes sense.

Programmatically they were still 16-bit. But as an OS engineer, the 8-bit bus was very much evident when writing drivers, moving data, initializing memory etc.


How would the code be different between the 8088 and 8086? The execution unit and internal bus are exactly the same between the chips. The only thing I can think of is that it took twice as long to write data out, but it was the exact same instructions. The prefetch queues were different sizes also.


Driver writers care about latency, program DMA devices, set up memory factors (RAS/CAS, bank addresses etc) and bus width is visible there. Also atomic operations were done with bus-lock - a 16-bit update took two bus cycles! So the high and low parts could be non-atomically updated.


Thanks. I wonder what Microsoft did for MS-DOS? I remember a few clones with 86's instead of 88's.


DOS isn't an OS at all. The BIOS did the hardware-specific stuff, including the floppy driver(!). DOS was almost pure software services.

For cases where DOS had used an instruction that would change, Intel rolled over and crippled that instruction in 16-bit machines (at least for a while) so it would behave the same.


I'm by no means a user of an actual Lisp Machine (would love to be though), but I can offer a few anecdotes I have picked up.

Firstly, computers back then were so much slower than they are now that the speed boost from running native versus interpreted is definitely nothing to scoff at.

Secondly, I now use StumpWM, a tiling window manager written in Common Lisp. This means I can connect to the lisp process that runs my window manager to inspect it, debug it, automate it, or make changes live. Now imagine you can do this with the entire machine!

Thirdly, and finally, not everyone is aware of the etymology of "car" and "cdr":

* car (short for "Contents of the Address part of Register number"),

* cdr ("Contents of the Decrement part of Register number")

* cpr ("Contents of the Prefix part of Register number")

* ctr ("Contents of the Tag part of Register number")

The original Lisp implementation was on an IBM machine that had 36-bit words broken into 4 parts: the address part (15 bits), the decrement part (15 bits), the prefix (3 bits) and the tag (3 bits).

Lisp started out very close to the machine, for being such a high level language!
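A rough sketch of that word layout in Python. The field positions here are illustrative, not a faithful IBM 704 map; the point is that car and cdr were just field extractions from one machine word:

```python
# Pack a 36-bit word: 15-bit address part, 15-bit decrement part,
# plus two 3-bit fields (positions chosen for illustration only).
def make_word(address, decrement, prefix=0, tag=0):
    assert 0 <= address < 2**15 and 0 <= decrement < 2**15
    return (prefix << 33) | (tag << 30) | (decrement << 15) | address

def car(word):
    # Contents of the Address part
    return word & 0x7FFF

def cdr(word):
    # Contents of the Decrement part
    return (word >> 15) & 0x7FFF

w = make_word(address=42, decrement=7)
assert car(w) == 42 and cdr(w) == 7
```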


"Secondly, I now use StumpWM, a tiling window manager written in Common Lisp. This means I can connect to the lisp process that runs my window manager to inspect it, debug it, automate it, or make changes live. Now imagine you can do this with the entire machine!"

A million times this! Such a machine, end-user-programmable from the ground up, is a real 'bicycle for the mind'.


To that end, I'm really excited to see where InterimOS is going. http://interim.mntmn.com/ (it was on hackernews earlier but I can't find the article)


This! InterimOS looks like a glimmer of hope in the dark. Here's to hoping it reaches its goals and we see an interim machine eventually.


And would probably make just about any marketer and/or lawyer run screaming...


Indeed.

Which just means it's a doubly worthwhile project :)

But seriously, this is the promise of computing we're talking about. It had to be put on hold for a while, because when you're dealing with a resource-constrained machine like an early 8-bit micro, IBM PC, or Mac, you're going to have to ship tightly optimised binaries and use non-native dev tools.

So, given that no-one ships such a machine today, where are their priorities?


One can dream.


They were powerful desktop workstations (first of their kind!). They had up to 40 bits per instruction word in a tagged architecture, so a word could be identified by object type without the program having to say it outright. The processor was a stack machine that ran a bytecode-compiled version of Lisp and did type checking and dispatching, in hardware, at runtime. Some of them had very powerful graphics boards that did all kinds of rendering.

tl;dr: They had special hardware that aided in dynamic language processing, like 40-bit words and hardware dynamic dispatch.
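A software caricature of what the tag hardware buys you: a few bits travel with every word, so "what type is this?" is a single mask rather than a memory lookup. Tag values and widths below are made up for illustration:

```python
# Low bits of each word carry a type tag; the rest carry the payload.
TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1
TAG_FIXNUM, TAG_CONS, TAG_SYMBOL = 0, 1, 2

def box(value, tag):
    # Attach a type tag to a payload.
    return (value << TAG_BITS) | tag

def tag_of(word):
    # Type check: one mask, no table lookup.
    return word & TAG_MASK

def untag(word):
    # Recover the payload.
    return word >> TAG_BITS

n = box(123, TAG_FIXNUM)
assert tag_of(n) == TAG_FIXNUM and untag(n) == 123
```

The Lisp Machines did this check in hardware, in parallel with the operation itself, which is why generic arithmetic and dispatch were cheap there.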

And it would not help you at all today, because the most prohibitive part of the system is really the size and efficiency of the CPU die, and C is faster than Lisp and people are totally okay with writing code in it.

EDIT: RE: getting into Lisp in a big way: https://web.archive.org/web/20080709051856/http://www.lambda... tells you all you need to know. Specifically:

"And the problem with Lisp? The answer is tailor made for the minds who program it. It is the koan of Lisp.

"The answer is that there is no problem with Lisp, because Lisp is, like life, what you make of it."

Your Lisp code (unless it's an emacs tool) will never work with anything or anyone else. You're invited to write everything from scratch, and doing so is easy; it's just difficult for anything you do to be used by or with anyone else's code.

Lisp is so beautiful, and yet so dangerous.


> Your Lisp code (unless it's an emacs tool) will never work with anything or anyone else.

I wonder how the Lisp I'm using got its Cocoa user interface. Oh, it has an Objective C bridge, so that it can create natively looking Mac applications...


CCL is dead, move on :P


CCL is not dead. It's the best choice for Windows and also for ARMs (like Raspberry Pi), because SBCL doesn't have native threading support on the latter.


I was not talking about CCL.

CCL is also not dead.


> Your Lisp code (unless it's an emacs tool) will never work with anything or anyone else. You're invited to write everything from scratch, and doing so is easy; it's just difficult for anything you do to be used by or with anyone else's code.

Not really true these days; with the proliferation of high-level languages, most libraries present a C interface, which is no harder to bind to Lisp than to other languages.


"Your Lisp code (unless it's an emacs tool) will never work with anything or anyone else. You're invited to write everything from scratch, and doing so is easy; it's just difficult for anything you do to be used by or with anyone else's code." - That might have been true once, but arguably isn't true any more, now that we have Clojure, ClojureScript, ClojureCLR, ABCL, Kawa, etc. And if you are willing to go for a multiprocess solution, there are numerous ways to integrate multiple languages over IPC or the network, from decidedly old-fashioned approaches like ONC RPC or CORBA, all the way through to the REST microservices of today, and there exist Lisp bindings for all of them.


> Your Lisp code (unless it's an emacs tool) will never work with anything or anyone else. You're invited to write everything from scratch, and doing so is easy; it's just difficult for anything you do to be used by or with anyone else's code.

Since everyone else is jumping on this point, I might as well also point out that there are things like Chicken Scheme that actually transpile to C and are therefore compatible with it (and anything else that's compatible with C).

> and C is faster than Lisp and people are totally okay with writing code in it.

While C is indeed faster than Lisp (sometimes, at least; IIRC, some Common Lisp implementations like SBCL come within the same ballpark depending on the code being worked with), people are decreasingly comfortable with it as alternatives emerge and the cost of running a high-level language decreases (originally due to Moore's law, but nowadays also due to better-optimizing compilers and interpreters). I expect that we'll eventually hit a point where - much like how it's very difficult (if not borderline-impossible) to produce hand-written assembly that's faster than what a compiler will generate - it'll also be very difficult to write any sort of low-level code in a way that's faster than what a HLL compiler will spit out.


Here's a talk from Kalman Reti using a Lisp Machine.

http://www.youtube.com/watch?v=o4-YnLpLgtk

pros: everything is lisp structure, no text serialization/deserialization to communicate between parts of the system.

cons: see above.


A Lisp Machine today would be an anachronism. Hardware type tagging could be useful, but optimizing for list manipulation would likely be futile. Today, cache is king, which means linked lists are out. CAR/CDR? Let's not.

The glory days were not that glorious. There are stories out there of runaway garbage collections that ground on for days. Today you get garbage collections in your web browser that would embarrass any Lisp Machine ever built.


> Which means, linked lists are out. CAR/CDR? Let's not.

Data structures are dictated by the requirements of what you're doing, not the machine. Linked lists are out? The C middleware that runs everything is full of "struct foo { struct foo * next; int other_field ... }".

Linked lists benefit from caching, like other kinds of data.

> Today you get garbage collections in your web browser that would embarrass any Lisp Machine ever built.

The garbage collection in your web browser is complete, utter garbage compared to what Lisp people were doing 30-40 years ago. Sorry!

(I will add to this comment later; I have to go to Task Manager and kill the web browser.)


> The garbage collection in your web browser is complete, utter garbage compared to what Lisp people were doing 30-40 years ago. Sorry!

Can you back that statement up? GC has advanced heavily over the years (and still is advancing). The first paper on generational GC was published in 1984, and I would imagine that it has drastically improved since then. I don't think generational GC even landed in SBCL until sometime around 2005.

Without anything to back that up, your post just sounds like Lisp fanboyism.


You could read the chapters on memory management and GC in the Symbolics Genera 8 manuals: areas, generations, resources, ephemeral GC, generational copying GC, full GC, ...

http://bitsavers.informatik.uni-stuttgart.de/pdf/symbolics/s...


Not the OP, but LISP machines had hardware incremental GC; I'd say GC in the hardware is pretty badass, and it was probably more predictable than modern software-based GC.


"The C middleware that runs everything is full of "struct foo { struct foo * next; int other_field ... }"."

The C data structures most commonly used are arrays of structs, trees and hashes. Linked lists are not commonly used, I imagine. I don't have any stats to back that up, though; neither did you.


It's still interesting to see the duality in both paradigms. Lisp goes for genericity with the cons-based list, while "static languages" reimplement it under encapsulated names. I'm not trying to compare per se, just to look at both sides of the coin.


> But list efficiency and manipulation would likely be futile. Today, cache is king. Which means, linked lists are out.

Out for what?

> CAR/CDR? Let's not.

Works okay even today on a typical Intel 64bit processor.

> The glory days were not that glorious. There are stories out there of runaway garbage collections, that grinded on for days.

Maybe in the early days. Lisp Machine development started in the mid 70s and ended in the early 90s. In the later days its GCs (it provided several different, cooperating GCs) were quite sophisticated.

> Today you get garbage collections in your web browser that would embarrass any Lisp Machine ever built.

It won't. Browser GCs are actually not more sophisticated than a later-day Symbolics GC. The language model of JavaScript is much more primitive than what ran on a Lisp Machine.

Lisp Machines had the same problem like we have today: a hierarchy of memory with different speeds. Very little 'fast' memory, and much much slower, but much larger, virtual memory. Actually the Symbolics Lisp Machines had pretty sophisticated GCs which you even won't find today on typical machines. Caching was all important on Lisp Machines and there was a lot done for that.

A typical Lisp Machine in the mid-80s had something like 16 MB RAM and 100 MB or more virtual memory on slow disks, locally or over the network. By the end of the 80s it was 40 MB RAM and a couple of hundred MB virtual memory on local disks. Without effective caching strategies this would not have been useful. Some special machines had more RAM, but this was very, very expensive.

These machines saved optimized images from which they booted. The memory was sorted according to object types into regions. Lists were converted into a vector-like representation (cdr coding), etc etc.

At runtime a copying GC would move objects around into generations with object-sorted regions, improving locality. An ephemeral GC tracks changed memory in RAM.

Additionally there were a lot of data structures which were under special memory management like networks packets or raster arrays.


Is there any resource you recommend to someone who wants to understand the implementation of these systems? Not the implementation of Lisp, but the implementation of these specific environments (Lisp Machine OSs) in Lisp. Especially with regard to memory management.

EDIT: you seem to have posted a relevant link in another comment as I wrote this one, but I'd love more information if available.


I am not sure if it will suffice to fully answer the os side, but I mentioned this book in a reply at the top: "The Architecture of Symbolic Computers" by Peter M. Kogge. Amazing book, THE book, at least for me. Written in 1991, it is so fascinating and clear. See if you can get a copy. They seem to have gone up in price this year - from $45 to $250! Lisp is having some sort of renaissance.


Sure. I don't think anyone is arguing that we should fire up a Symbolics machine in 2015 and try to get work done on it.

Rather, it's the spirit of the machines - inspectable and programmable from the ground up - that's worth preserving.


> inspectable and programmable from the ground up - that's worth preserving.

This is what made me curious about the Smalltalk-based Squeak VM http://squeak.org/. Its entire interface and stack was inspectable and manipulatable in real time.

In practice it seemed a little buggy, though. If you installed Squeak programs too willy-nilly, pretty soon you'd get errors due to incompatibilities.


Cdr-coding is a fine way to take advantage of locality with linked lists, especially when the garbage collector is doing it for you, transparently. https://en.wikipedia.org/wiki/CDR_coding
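To make the idea concrete, here's a toy Python sketch of cdr-coding. The tag names and two-code scheme are illustrative (real implementations had more codes, e.g. for a cdr that is a full pointer):

```python
# Cdr-coding sketch: a proper list is stored as consecutive cars in a vector,
# each tagged with a small "cdr code" instead of a full cdr pointer.

CDR_NEXT = 0   # cdr is the following cell in the vector
CDR_NIL  = 1   # cdr is nil: end of list

def compress(lst):
    """Pack a Python list (standing in for a proper Lisp list) into
    (cdr_code, car) cells -- one word per element instead of two."""
    cells = []
    for i, x in enumerate(lst):
        code = CDR_NIL if i == len(lst) - 1 else CDR_NEXT
        cells.append((code, x))
    return cells

def expand(cells, start=0):
    """Walk the cdr-coded vector from `start`, recovering the list."""
    out = []
    i = start
    while True:
        code, car = cells[i]
        out.append(car)
        if code == CDR_NIL:
            return out
        i += 1  # CDR_NEXT: the cdr lives in the next cell
```

The win is both space (half the words for a fully compressed list) and locality: traversing the list is a sequential scan instead of pointer chasing.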


And you can still have GC problem. Google ran into that with their Dalvik VM in Android.


50 years of list processing, and member is still not O(1) :(
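In fairness, that's a property of the data structure rather than of 50 years of neglect. A toy illustration, with a CL-style `member` over (car . cdr) pairs next to the O(1) alternative:

```python
# `member` on a linked list is inherently O(n): it has to walk the spine.
# A hash set answers the same question in amortized O(1).

def member(x, lst):
    """CL-style MEMBER: return the tail whose car is x, or None."""
    while lst is not None:
        car, cdr = lst
        if car == x:
            return lst
        lst = cdr
    return None

# (1 2 3) as nested (car, cdr) pairs
lst = (1, (2, (3, None)))
```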


It was the whole experience of having the full OS stack written in the same language.

Also having the REPL act as shell, which meant any Lisp library on the system, or any running application, was exposed to the shell and could be manipulated.

Imagine having something like IPython or Swift Playgrounds as your OS shell.

Also the power of having an expressive language like Lisp for both systems and business application programming.


I think the simplest way to understand the zeitgeist is this quote:

  Giving up on assembly language was the apple in our Garden of Eden:  
  Languages whose use squanders machine cycles are sinful.  
  The LISP machine now permits LISP programmers to abandon bra and fig-leaf.
                -- Alan Perlis, Epigrams in Programming, ACM SIGPLAN Sept. 1982


"Languages whose use squanders machine cycles are sinful."

But not nearly as sinful as languages whose use squanders human brain cycles.


As the availability of machine cycles has increased much faster than that of human brain cycles, it's reasonable to assume this wasn't true if you go back far enough in time.

I think it was von Neumann who didn't like assemblers, because it was wasteful to use computers for such a mechanical process that cheap labor could do.


Yes, I agree entirely. I think if you went backwards in history you'd hit the point where the additional overhead in implementing a higher-level language would render the resultant system either unusable, or unusable for a class of problem that could be solved with a lower-level language.

Edited: in fact, I started a project to implement a toy Lisp on an 8-bit micro a few years ago. I'm almost at the point of giving up on the idea because it quickly became apparent that a less resource-hungry language would allow me to achieve more with the same hardware.


Forth is by far the most bang-for-the-buck on severely resource constrained machines. Lisp isn't CPU hungry, but it tends to be quite RAM heavy.


Indeed; that's where I was thinking of going instead. The system in question has 42KiB usable RAM w/o expansions.


You could always write it in Forth, and then create a Lisp in Forth :)


What were the main issues ? the heap all the thing + GC ? the cost of cons and atomic types ? all of that ?


Don't forget that most lisps have symbol names available at runtime. IIRC MIT modified the early PDP-10s to have demand paging support for running ITS, so even Maclisp wants many kilo-words of address space.

LISP 1.5[1] did run on a 704 which had only 4096 words, but that's in many ways unrecognizable as a lisp of today.

[edit] LISP 1, not 1.5 was implemented on the 704, which included up to 16384 words on the 733 magnetic drum (core was 4096), so that already is more space than what the parent has, and it seems likely that LISP 1.5 was on a 709 or 7094 which would have had 32768 words which is about 3x the memory the parent has.

1: http://www.softwarepreservation.org/projects/LISP/book/LISP%...


IIRC MIT modified the early PDP-10s to have demand paging support for running ITS, so even Maclisp wants many kilo-words of address space.

Yes, and then convinced ARPA to fund the purchase of 1 megabyte (MiB, of 9 bit bytes) of core memory, a full address space. At the edge of the state of the art back then, many people said it couldn't be made, and I seem to remember coming across some stories about it being quite difficult to get operating.

When I showed up in 1979 it was referred to with names like "the crufty"; it wasn't all that reliable.

Those were original formula KA10s. Afterwards came the KI10 processor, which I don't know much about; none were bought by MIT. For MIT-MC, David Moon modified the KL10 firmware to do ITS-style paging.

Developing that KA10 paging unit must have been wild; a student run computer center I started inherited the hardware, and the schematics were fascinating, especially since they were from before there was a standard for logic gates. Instead, they were represented by boxes with the logic function marked in an upper corner.

A friend of mine (who I recruited into LMI and who put several "100 hour weeks" into successfully debugging and finishing their LAMBDA processor) noticed that in some cases the state of a gate, or perhaps a flip-flop?, was changed by hitting its output hard. Gates were sufficiently expensive, and the designers knew the hardware sufficiently well; you can see a picture of one 9 transistor Flip-Chip here: https://en.wikipedia.org/wiki/Flip_Chip_(PDP_module)


Adding to your question: who would accept a computer that only executed programs written in a single language? I remember people talking about CPUs that would run Java bytecode natively back when the Java hype cycle was near its peak, say 12-15 years ago. Never saw one on the shelf though, I guess once the JIT compiler started working well people lost interest.


Most of today's computers only execute programs written in a single language: its instruction set.

The original Symbolics architecture ran some form of Lisp-friendly but not really Lisp instruction set. IIRC there were compilers from other languages targeting it too.


There used to be things like this: https://en.wikipedia.org/wiki/Jazelle


Amazon Kindle 3 has an ARM CPU that claims to support this, so this has been in production as recently as 2010. Not sure if it's actually used with the stock firmware.


Modern ARM CPUs report that they support Jazelle but will just trap to an exception handler for every JVM bytecode. A Java CPU that works in a similar way to a Lisp Machine is JOP [1], it has microcode to implement the user-visible instructions.

[1] http://www.jopdesign.com/


I'm sure there are folks (archive.org?) who would love to help you make digital copies of all those tapes for historical purposes...


I'm surprised a machine explicitly designed to write and run Lisp requires a two-key combo to make a parenthesis.


Look closely at the two keys to the right of "p", or at this clearer picture of a full-fledged Space Cadet Keyboard: https://en.wikipedia.org/wiki/Space-cadet_keyboard#/media/Fi...


Did the Symbolics Lisp Machines have Z80 processors in them? Or was it another machine in the MIT labs you're talking about?

"...by someone who had hand-wrapped Z80 boards in the MIT AI labs at one point".


We use one of these in the lobby as a coffee table at my work.


You don't happen to work in College Station, TX?

(Search the rest of the current discussion for "coffee machine" ;) )


Texas yes, College Station no.


The text is horrible on this page. :(



