The IT dept installed and compiled tons of software for the various systems and AFS had an @sys string that you would put into your symbolic link and then it would dereference to the actual directory for that specific system architecture.
Yes, everything at CERN, at least in the 2000s, was in AFS. Fermilab was also using AFS extensively.
I remember compiling AFS from source for Scientific Linux 3.x because there was a weird bug that didn't let the machines mount AFS when they were integrated with LCG (before it was renamed to WLCG: https://wlcg.web.cern.ch/)
Well, I'm 50, but AFS in college was superior to all the NFS and NIS silliness I've put up with at the 8 companies I've worked at since then. I wrote another comment about our Unix groups at work and how we have a setuid-root command where we type in our password and it changes our groups dynamically.
> The IT dept installed and compiled tons of software for the various systems and AFS had an @sys string that you would put into your symbolic link and then it would dereference to the actual directory for that specific system architecture.
This sounds cool, but I've wondered - couldn't you just stick something like
(I'm actually doing something like this myself; I don't (yet) have AFS or NFS strongly in play in my environment, but of all things I've resorted to this trick to pick out binaries in ~/.local/bin when using distrobox, because Alpine and OpenSUSE are not ABI compatible)
You could do that, but @sys is resolved in the kernel - so you can use it in symlinks and just add .../bin to your path (with bin -> .bindir.@sys) and thus it works for non-PATH cases too...
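For anyone who hasn't seen the trick, here's a userspace sketch of the same idea. This is only an analogue: real AFS expands @sys inside the kernel at path-resolution time, whereas here we substitute a made-up "sysname" string ourselves when creating the link. All paths and the sysname format are illustrative assumptions, not AFS's actual naming.

```shell
# Userspace analogue of AFS's @sys: select a per-architecture bin
# directory via a symlink. Paths and the sysname format are made up.
base="${TMPDIR:-/tmp}/atsys-demo"
sys="$(uname -m)-$(uname -s | tr 'A-Z' 'a-z')"   # e.g. x86_64-linux

# One hidden directory per architecture, holding that arch's binaries.
mkdir -p "$base/.bindir.$sys"
printf '#!/bin/sh\necho "hello from %s"\n' "$sys" > "$base/.bindir.$sys/hello"
chmod +x "$base/.bindir.$sys/hello"

# With real AFS this would be:  ln -s '.bindir.@sys' bin
# and the kernel would expand @sys per client, so every architecture
# sees its own binaries through the same path.
ln -sfn ".bindir.$sys" "$base/bin"

"$base/bin/hello"
```

The payoff is the same as described above: put `$base/bin` on your PATH once, and each machine transparently gets binaries built for its own architecture.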
Not as advanced as what Domain/OS did before it - the kernel straight up evaluated arbitrary environment variables in the path resolver, which they used for things like selecting "personalities" (BSD vs SYSV vs "native") but it wasn't restricted to any particular names. "We don't make them like that anymore..."
What's really interesting about @sys is that it supports a search-path at run-time, so the resolution can walk back through a list of target systems to find an available one.
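For illustration, the search list is managed with OpenAFS's `fs sysname` command; the system names below are examples, and the exact flag syntax should be checked against the fs sysname man page.

```shell
# Show the client's current @sys value(s):
fs sysname
# Set an ordered search list; @sys resolution tries each name in turn
# until a matching directory exists (names here are examples):
fs sysname -newsys amd64_linux26 i386_linux26
# Now a symlink target like .bindir.@sys resolves to
# .bindir.amd64_linux26 if present, else falls back to
# .bindir.i386_linux26.
```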
When working on a class project it was great that I as a normal user could create an ACL group and add and remove users to it and then give them read or write or both permissions on a directory in my account.
At my job we have hundreds of projects and there are strict security requirement so we only have permissions to the projects that we are assigned to. The problem is that software and libraries are in different directories with different permissions so they can't just add us to every group as it would go over the limit for number of Unix groups. So we have another command that is setuid root that we type in our password and it changes our Unix groups on the fly. The process for adding people to the groups has to go through a web site where only the project lead can add people and even then it can take a day because some VP needs to approve it.
Last time I tried, mounting a WebDAV server in the same manner as an NFS or CIFS server was a hot mess. Some FUSE client tried to fully download and cache everything in ~/.cache or somewhere.
It's been a while, but I haven't tried anything since then.
CIFS is stupid for UNIX <--> UNIX, and NFS has that UID mess...
I tried it back when OpenBSD included a client. Hooking into the public list was amazing; you could almost see an alternate web in there, but based on the Unix filesystem instead of HTTP. Unfortunately AFS was dropped in 5.2.
The big disappointment for me at the time was that OpenBSD did not also include a server component, so it was comparatively much more difficult to use AFS in your own infrastructure. The lesson being: always make the effort to include the server side if possible. Without that you feel like a second-class citizen.
My university was similar (SGI, HP-UX, IBM AIX, Sun, Linux, SCO), but they used NFS to mount home dirs locally to the computer clusters, which wasn't as cool because it wasn't possible to mount home dir volumes remotely like on an AFS campus. They also, unfortunately, used original NIS, which let anyone easily extract all users' password hashes with a simple `getent passwd`. I proceeded to run John the Ripper against a dump of everyone and found 60 passwords in 30 seconds, including several tenured professors'.
Those were the days when portability and longevity were important and there wasn't as much of a monoculture or incompatible code/language features churn.
IMHO, HP-UX had hands down the best-written man pages I've ever seen on any UNIX, commercial or free. And I've worked with quite a few.
All the man pages were well written, nicely formatted, and easy to read, and almost all came with valuable examples that gave you a quick enough understanding to check usage. That's absolutely the thing I've missed on other *nix systems since.
There were too many things done so nicely in HP-UX, things that made it pleasant to maintain, that it's not worth trying to remember and list them all. Unfortunately, the shell environment was no match for the convenience of the GNU tools Linux had from the beginning. That is, without making the effort to install them (read: compile from source, for quite a long time) on HP-UX, if that was even allowed. At the university computing center that was no problem, but on the telco side it was a big no-no; not without getting the product owner's permission first :/
Just as an example, Ignite-UX was one of my favourites on HP-UX. The simplicity of using one simple command with a few options to make a bootable DAT tape, which could then be used either to recover the whole running, fully functional system or to clone a developed system first to a staging lab and then up to production, was a great time saver in major upgrades and migrations. None of the Linux bare-metal backup systems I've tested have been able to recover exactly the same disk layouts; usually the LVM part is poorly done. As have been VMware p2v migration tools, btw.
That Linux LVM that Sistina did first, before Red Hat bought them, is implemented pretty much exactly like what HP-UX had already had for some time by then.
I remember the hours gcc needed to compile itself on those HP servers. We needed it for all the programs that would not compile with HP's cc. We also installed some GNU userland utilities because, as you wrote, they were better than the ones in HP-UX. Those were the years around 1990.
I did some HP-UX in the late 80's: migration of servers across the country for a courier company, from NCR Towers to HP servers running HP-UX (sorry, don't recall the models offhand).
Had fun porting software across, including a radio system that we were unable to test fully except in the field (where it worked the first time, which was amazing). Had many good chats with HP engineers back then (we did a large purchase as a global company), and one I still recall was early editions of HP-UX having an error code of 8008, until somebody in senior management at HP saw it one time (no customer had ever complained about it, apparently).
I liked HP-UX, having previously worked on IBM RT systems running AIX, as well as NCR Towers with their more vanilla System V. Though we did have SMIT with AIX and SAM with HP-UX for those manual-saving moments of ease to fall back on. My favourite flavour of Unix of that time, though, would be the Pyramid systems' dual-universe OSx. You could have a BSD and an AT&T environment at once, able to use both flavours in scripts by prefixing a command with bsd or att to run it. Don't recall offhand how it handled TERMCAP/TERMINFO (that was always an area of fun back then).
Fun times, in the days when O'Reilly and magazines like Byte or Unix World were the internet, along with expensive training courses and manuals that you would use and thumb every page of in the multi-tomed encyclopedic stack they came in.
The best C development platform I used in that era was, hands down, the VAX under DCL; the profilers etc. were pure joy.
> I liked HP-UX having previously worked on IBM RT systems running AIX, as well as NCR towers with their more vanilla System V.
There's very little on the internet about those "NCR Towers."
> 1987: https://www.techmonitor.ai/hardware/ncr_marries_its_tower_un...: "Despite abandoning its effort to implement Unix on its NCR 32 chip set, NCR Corp did not abandon its ambition to bring Unix into the mainstream of its mainframe product offerings, and the company yesterday launched a facility whereby its top-end multiprocessor Series 9800 fault-tolerant mainframes can be used as servers to a network of 68020-based Tower Unix supermicros."
> 1988: https://www.techmonitor.ai/hardware/ncr_renews_its_tower_uni...: "When you sell as many machines as NCR does with the Tower, you can’t rush to incorporate a new chip as soon as it arrives because there simply aren’t enough chips to meet your needs. Accordingly the new Tower models use the 25MHz 68020 rather than the 68030."
Yeah, HP's cc was "not technically a C compiler" - the only supported use of it was to compile a couple of stub files and link the kernel on kernel configuration changes. (This led to a bunch of work in making gcc bootstrap from cc, even on top of HP-UX's weird ABI; something involving function pointers being longer than other pointers, IIRC?)
HP-UX general support, it seems, is EOL'd by the end of this year. Extended support, apparently very pricey, will last till 2028.
It would be nice if anyone who still has contacts could ask whether HPE would be willing to release at least parts of HP-UX, like the documentation, and let archive.org take them, so we could occasionally check how things were done in HP-UX as a reference.
It would be a shame if all that documentation work they did were lost and unavailable to the general public later on.
Could it be possible to copy the man pages directly from a running distribution? I'm sure that's not allowed, but if it's otherwise disappearing forever...
Sure it is, if you have the installation disks. Those are bog-standard ISO 9660 with a Rock Ridge overlay extension, which is just a hidden file in the CD's top directory that maps those silly uppercase ISO naming conventions with file versions back to real names, and Linux should be able to mount them without problems.
I don't remember any more whether the man files were preformatted and .Z-compressed, or whether the troff source files and the "an" macro package were there too. Commercial unices had a bad habit of not providing sources, so that could be the case.
But if someone has the CDs, it's not too hard to check, I believe. The installation files could be packed somehow, like compressed with cpio or tar inside; that's what I now think they would have been. But I can't remember for sure, it's a bit over 25 years since I last worked with HP-UX.
And if I remember correctly, HP also shipped some of the printed manuals on CDs. I have some kind of memory of seeing disks like that, but I never used them. We had paper manuals back then, which were then sent to the customer as part of our product. Nor do I have any idea what format those documents or whole document CDs would be in. PostScript or PDF if we're lucky, but it could be some proprietary format in the worst case.
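If someone does dig out the media, a first look from Linux might go something like this. The ISO filename, mount point, and the guess that man pages are preformatted and compress(1)-ed are all assumptions; the packages themselves are SD-UX depot trees, so the layout may differ.

```shell
# Sketch: inspecting HP-UX install media from Linux (needs root for
# the loop mount; filename and paths are made up for illustration).
mount -o loop,ro hpux-core.iso /mnt/hpux

# If man pages are shipped preformatted and .Z-compressed, something
# like this would locate them and preview the first lines of each:
find /mnt/hpux -path '*man*' -name '*.Z' \
  -exec sh -c 'echo "== $1"; zcat "$1" | head -5' _ {} \;
```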
I agree that LVM in HP-UX was far ahead of Linux back in the day. To be fair some of those advanced features in HP-UX LVM required an additional license (eg: mirroring required Enterprise Operating Environment). I haven't touched HP-UX in like 10 years however.
That is true, software licenses were a major nuisance, as they usually are. Not just where to get one, but, over time, keeping track of them and making sure the proof of purchase wasn't lost before deployment and was included in the final delivery. HP product codes, and the plague of product renaming at major version changes, were not exactly my favourite part of the work!
Many HP-UX boxen (servers) came with a default (interactive) multiuser OS license. As product differentiation, which HP sales loved, the license-castrated workstations came with only a two-user license.
The first time, I had no clue about this and was wondering why some odd network management software I was installing on a server did not restart properly, which caused some head scratching. Then I found the logs stated our license was not valid, though it had been confirmed valid in another test install.
An HP support guy I knew told me later that I probably had to install the optional two-user package, and then the software would start. Oh great, and so it was. But what the heck: that two-user license only limited things to two simultaneous serial-line users, and only the system console was serial at that time; everyone else logged in via the network. To be sure, I made the PM check whether we were still within the license because of that. He told me later: yep, no problem there, just get it done and we're ready to deploy it to the site.
Oh hey, a 9000/340 in the Cambridge area. Almost certainly that originated with the university's Engineering department, who back in the 1990s got rid of a lot of these machines that they had been using as X terminals. My notes say they had six diskless workstations to each server, and kept the monitors to use with the replacement machines, which would explain why this person's 9000/340 has no disk or monitor.
Some truly terrible quality pictures of the one I used to own are at https://www.chiark.greenend.org.uk/~pmaydell/hardware/tiroth... (I have long since disposed of it). Some of the people who got the machines had a play around with getting Linux booting on them. Amazingly some of that code is still in the kernel, eg drivers/net/ethernet/amd/hplance.c so it might even still work ;-)
Context Dependent Filesystems were one of those weird, wonderful experiments in Unix' early days that never ultimately escaped its home world. Every vendor Unix had a few. HP were true engineers in those days, so HP-UX had more than a few. But the general corporate attitude toward sharing and standardization was very different. "We want to be standards-based, but we also need some special sauce to differentiate us from the competition."
For example, HP-UX was a BSD-based Unix implementation that tried very, very hard to pretend it was UNIX System V (R2/R3). "No, no, really! I'm not one of those university kids!" But BSD was a far better foundation, with vastly better networking etc., so that's what it was underneath.
Unix of the era was billed as a multi-user shared system, but it wasn't always great at that. It desperately lacked much of the quiet robustness and workhorse-ness of the proprietary minicomputer OSs of the day (e.g. VMS, AOS, HP's own MPE). No vendor did more to fill that gap and make multi-workload a workaday reality. HP added a fair-share scheduler (FSS), the first multi-system high availability clustering in Unix (MC/ServiceGuard), and scores of refinements along the way. As a result, in practice HP-UX was admirably hardened, and it ran more users and more concurrent competing jobs per system than any other Unix system could. Often by a wide margin.
In ~1995 HP doubled down on FSS with Process Resource Manager (PRM), which could guarantee various "shares" (weighted priorities) of total machine resources. The first commercial Unix ancestor to today's containers: in production ~6 years before BSD jails and Virtuozzo, ~10 years before Solaris Zones, ~18 years before Docker/Linux containers, and ~20 or more years before containers were mainstream production vehicles.
Unfortunately for HP, its workstations (the ones OP acquired) weren't nearly as popular with universities and developers as Sun Microsystems', so you tended to find HP-UX in commercial production—larger servers, more workload, but smaller numbers. And thus smaller ability to promote its innovations or be selected because of them.
Hat tip to steely-eyed missile man Xuan Bui and the many unsung engineering stars of HP in the Unix era.
>Unfortunately for HP, its workstations (the ones OP acquired) weren't nearly as popular with universities and developers as Sun Microsystems', so you tended to find HP-UX in commercial production—larger servers, more workload, but smaller numbers. And thus smaller ability to promote its innovations or be selected because of them.
Columbia University during the 1990s was a SunOS/Solaris shop (and, before then, VAX <https://www.columbia.edu/cu/computinghistory/>). My first year, AcIS (Academic Information Systems, IT for faculty/students) set up a single computer lab in the engineering building <https://cuit.columbia.edu/computer-lab-technologies/location...> with HP workstations. Although they booted into HP-UX and its Motif window manager, MAE provided Mac emulation and, in practice, was usually used because most students were unfamiliar with X Window, of course.
The boxes used the same Kerberos authentication as the Sun systems, so I presume I must have been using context-dependent filesystems for binaries when logging into the systems locally, or when I chose to remote log into one specifically from elsewhere (just for novelty's sake; I preferred the Sun cluster, or the Sun box dedicated to staff use).
MAE—the raison d'etre for the HP boxes—was slow and unstable, and by the time I graduated, Macs, I believe, had replaced the HPs, which made the lab consistent with what most of the other computer labs had.
I would say SunOS (i.e. pre Solaris SysV and not including it) was the quality bearer for UNIX in that era. Particularly once they did the Unified Buffer Cache; HP-UX was never able to accomplish this and it makes it not an ideal file server amongst other problems.
HP-UX 10 and 11 progressively imported more SysV code and lost some of the charm that 9.x had.
I find AIX to be fascinating. Especially 3.x against contemporaries with its LVM, and a pageable kernel. A lot of people have snap judgements against it because they saw 'smit' but don't really understand anything about it.
SunOS evolved to be a great file server and "network computing" server, whereas HP-UX evolved to be a better multi-workload commercial server. Horses for courses—and they often ran on different turf.
You're also right to shout out some of the other innovators: Data General's DG/UX did a great "let's redesign the kernel for multiprocessing and NUMA." IBM's AIX had kernel threads, pageability, and preemptibility at a time when no one else did (plus JFS, LVM, and eventually LPAR isolation). And Sequent DYNIX/ptx had some impressive multiprocessing (RCU) and large-DBMS optimizations very early on. HP was by no means alone in trying to engineer away Unix's early weaknesses.
>Unfortunately for HP, its workstations (the ones OP acquired) weren't nearly as popular with universities and developers as Sun Microsystems', so you tended to find HP-UX in commercial production—larger servers, more workload, but smaller numbers
Agreed; at the university where I worked, HP systems' cost was the major reason the Computing Center purchased Sun, though we also had stray discount-priced units from almost all vendors.
We did have one HP 3000 running MPE with the VTLS library system for quite a long time. I can't remember its exact model any more, but it was first an old system filling a 160 cm rack, later replaced with a smallish 3000-series box about the size of a 9000/E35 (a thick and very heavy PC). I did not manage it, but helped its sysadmin with his 9-track autoloader issues a couple of times. I would certainly have recycled that tape unit for another use, but it was HP-IB (IEEE 488 / GPIB) connected, like the whole rack full of daisy-chained disks, which were easy to believe not to have been cheap. Too bad it was so hard to get a GPIB adapter working with other systems. The terminals used with MPE, with their local edit buffers, were weird, as was the HP Roman character set it used. All so well built that it was a shame to let them go when VTLS was retired about 30 years ago.
The maths department had better funding and kept a few HP-UX machines running for a long time. The only HP-UX box we had at the CC was a C160 workstation running the OpenView NMS, but that's it.
Yes, and on the commercial side (a telco vendor) where I worked, the customer demanded HP and there were very few Sun servers; Sun was only used if and when software was not available for HP-UX at all. What I recall, Ericsson switching systems tended to come with Sun/Solaris and Lucent 5ESS with HP/HP-UX at that time.
A friend of mine went to some conference in SF, I don't recall the year. But he came back with HP-branded sunglasses, which HP gave everyone visiting their booth, telling them: "Remember not to look at the Sun" :D
I totally missed the EOL announcement. Not that I use it, but it is one of the last few big proprietary Unixes. I thought there would always be enough paying customers to maintain it (even if sold to a third party).
Nowadays NetBSD offers something similar to a "context dependent filesystem", i.e. a special form of symbolic link that can point to different locations according to a wide range of attributes: from domainname via machine_arch to gid.
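For the curious, these are NetBSD's "magic symlinks", documented in symlink(7). A quick illustration (the /usr/pkg layout here is made up; the sysctl knob and the @-variable names are from the man page):

```shell
# Magic symlinks are off by default; enable resolution of @-variables:
sysctl -w vfs.generic.magiclinks=1

# @-variables in a symlink target are expanded at path-resolution time:
ln -s '/usr/pkg/@machine_arch/bin' ~/arch-bin
# e.g. resolves to /usr/pkg/x86_64/bin on amd64. Other variables
# include @machine, @hostname, @domainname, @osrelease, @ostype,
# @uid, @ruid, @gid, @rgid, and @emul.
```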
My first thought, upon reading that these were being given away, and seeing "Cambridge" was that they should go to the "Centre for Computing History".
I've been trying to visit this place with my daughter for 4 (or more?) years now, every time we've been in the area (roughly once per year), I forget that it isn't open on Mondays (which is the day we typically have a couple of hours before leaving the area), walk up to the doors only to realise (again) I've made the same mistake, and my daughter and I walk away disappointed.
Thanks for this. Brings back so many memories of the long hours spent in computer rooms with HP 9000s and RS/6000s back in the 90s. Seeing that SAM interface made me shiver :)
It's great that there are folks like you preserving this history
The "context dependent filesystem" concept is a bit trippy, but I think it's a pretty neat solution to "some systems need their own version of a file, other files ought to be universal".
It reminds me a little of a thing used in clustering of DEC's (later HP's) Tru64 Unix.
The clusters had a shared OS image - that is a single, shared root filesystem for all members. To allow node-specific config files, there was a type of symbolic link called a “Context Dependent Symbolic Link” (CDSL). They were just like a normal symlink, but had a `{memb}` component in the target, which was resolved at runtime to the member ID of the current system. These would be used to resolve to a path under `/cluster/members/{memb}`, so each host could have its own version of a config file.
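To make that concrete, here's roughly what a CDSL looked like on a cluster member. /etc/rc.config is a real example from the TruCluster docs as I recall them; the exact ls output shown in the comments is illustrative, and new CDSLs were created with the mkcdsl utility:

```shell
# A CDSL is an ordinary symlink whose target contains {memb}:
ls -l /etc/rc.config
# lrwxrwxrwx ... /etc/rc.config -> ../cluster/members/{memb}/etc/rc.config

# At lookup time the kernel substitutes the local member ID, so on
# member 1 the link lands on /cluster/members/member1/etc/rc.config,
# giving each host its own copy under the single shared root.

# Creating a new member-specific config file as a CDSL:
mkcdsl /etc/myapp.conf
```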
The single shared root filesystem made upgrades and patching of the OS extra fun. There was a multi-phase process where both old and new copies of files were present and hosts were rebooted one at a time, switching from the old to the new OS.
About current use of am-utils ("amd", as its running process is known) I don't have much to say, as I haven't seen it much; Linux distros at least have had autofs tools for quite a long time now. But in the '90s, am-utils was the thing we mostly used.
Adding: oh, that made me remember we also had a user-mode NFS daemon back then, which allowed re-exporting remote mounts. With the smaller disks of the time, always looking for more space, if only temporary storage, that was a great help. The current kernel-based NFS doesn't support it any more.
My significant experiences on HP-UX were HP Vault, one of the very first approaches to doing containers in UNIX, and going through the 32-bit to 64-bit transition.
If it's the model I'm thinking of, it's basically a 9000/712. An easy way to get a PA-RISC workstation from someone who doesn't realize what it actually is. :)
Oh my! Thanks for the memories - HP-UX was my first workstation-class Unix operating system (SGIs were too expensive). I remember downloading and compiling gcc on HP-UX. The idea of compiling a compiler with itself blew my mind!
>I’ve got my HP 9000 Model 340 booting over the network from an HP 9000 Model 705 in Cluster Server mode and I’ve learned some very unsettling things about HP-UX and its filesystem.
>Boot-up video at the end of the blog, where I play a bit of the original version of Columns.
My university in the 1990's had hundreds of Unix workstations from Sun, HP, DEC, IBM, SGI, and Linux.
It was all tied together using this so everything felt the same no matter what system you were on.
https://en.wikipedia.org/wiki/Distributed_Computing_Environm...
https://en.wikipedia.org/wiki/Andrew_File_System
The IT dept installed and compiled tons of software for the various systems and AFS had an @sys string that you would put into your symbolic link and then it would dereference to the actual directory for that specific system architecture.
https://docs.openafs.org/Reference/1/sys.html
https://web.mit.edu/sipb/doc/working/afs/html/subsection7.6....
"On an Athena DECstation, it's pmax_ul4; on an Athena RS6000, it's rs_aix31" and so on.