As soon as I saw the title, I thought of the furor when Gingrich and Clinton discovered that the FAA was using vacuum tube computers and radar in the air traffic control system. http://govinfo.library.unt.edu/npr/library/clinton.htm The foreign (sole) vacuum tube supplier that Clinton mentioned was Russia. Oh, the embarrassment!
I knew that there were some delays in rolling out the replacement system, so I did some googling to see when it actually occurred. My god, it's now. We are in the middle of it. The upgrade was responsible for the recent air traffic meltdown. Please somebody, point out my flawed research skills. I can't believe this.
It doesn't specifically mention vacuum tubes, but it does explain why it's entirely plausible that they're still around:
>This technology is complicated and novel, but that isn't the problem. The problem is that NextGen is a project of the FAA. The agency is primarily a regulatory body, responsible for keeping the national airspace safe, and yet it is also in charge of operating air traffic control, an inherent conflict that causes big issues when it comes to upgrades. Modernization, a struggle for any federal agency, is practically antithetical to the FAA's operational culture, which is risk-averse, methodical, and bureaucratic. Paired with this is the lack of anything approximating market pressure. The FAA is the sole consumer of the product; it's a closed loop.
I'm skeptical that the culprit is mainly to be found in specific features of the FAA, and the article doesn't seem to provide much causal evidence of that (though I recognize that it's hard to prove that kind of causality). Major upgrades/replacements/migrations either failing or coming in a decade late and billions over budget is the norm in Fortune 500 companies, too, which leads to a hypothesis that it's a more general problem with big rewrites of safety/business-critical systems in large organizations.
Wow. Their end-goal upgrade, the one that they've installed on one line and are installing on everything else, looks so antiquated.
My city's subway/elevated train system will be 30 years old this December. Train control is fully automated; there are no drivers on trains, and in normal operation there is no remote (human) operator either. It's been automated for 30 years. https://en.wikipedia.org/wiki/SkyTrain_(Vancouver)
(Side note: during the BART strike, some HN commenters suggested computers to replace drivers, and other commenters said that wouldn't be safe. Such hilarious technophobia on HN.)
I don't want to trivialize NYC's challenges and accomplishments: for one thing their system is far, far more complicated (468 stations on 34 lines, vs 47 stations on 3 lines), and for another we all know it's very hard to upgrade legacy systems.
But I'm astonished that they would work so hard to modernize their 1930s technology to 1970s standards in the 2010s.
The problem with some of these old lines is that they might have level crossings in which pedestrians or cars might occupy the tracks - so someone needs to be there to stop the train if that happens. Fully automated systems like the SkyTrain or Japan's Yurikamome line have grade-separated tracks (and in some cases platform gates as well) to ensure that sort of incursion doesn't happen.
The NYC subway system has no level crossings where pedestrians can occupy the tracks.
The only level crossing where another train might occupy the tracks is Queens-bound from the JMZ stop at Myrtle Ave & Broadway in Brooklyn. The M crosses at grade over the J/Z track.
Some less technology-minded people would actually be freaked out by automated trains. The airport where I live has automated trains to go between gates, and the cars very closely resemble the public train cars without the seats.
Most likely it's not a technical challenge, it's a bureaucratic problem. Not to mention the unions aren't going to let the transit authority obsolete their jobs.
Another thing about the NYC subway is that it runs 24 hours a day 365 days a year. Not many metros around the world can make that claim.
Living in New York, the amazing thing about the subway system is how they keep it running. It's old and ridiculously complex, and despite what natives say, the MTA by and large does a good job running it. There's no political will to fix it right now. It's going to take trains becoming unsafe or costing so much to keep safe that people eventually notice and start making noise about it, I think.
Yep. They do a pretty darn good job, considering what they have to work with.
And I'm not talking just the technology - they have 31 billion dollars of debt to service, and constantly get the shaft from Albany when it comes to funding. They continue to take on debt because they have mandated projects and lack the funding to complete them.
The D.C. Metro system was planned in the '60s and constructed in the '70s, with the first operations starting in 1976. It too is based on electro-mechanical relays, although it's more ambitious: after the operator closes the doors and sees all is fine, he hits a "go" button and the system automatically takes the train to the next station, avoiding collisions as long as the "is there a train in the next segment" sensor system is working.
The DC Metro actually (mostly) hasn't used their automatic train control system in full automatic mode since a collision in 2009 that killed nine (for which the ATC system was directly responsible; a faulty circuit both forced a train to stop and rendered it invisible to the control system).
Safe train control seems like it needs a little more sophistication than '60s or '70s tech was able to give. For one, it's common sense that a train that disappears from the system isn't really gone-- but the Metro system couldn't take that into account. Stuff like that should be pretty trivial today.
> For one, it's common sense that a train that disappears from the system isn't really gone-- but the Metro system couldn't take that into account. Stuff like that should be pretty trivial today.
It was pretty trivial in the 1970s. Require the sensor for an adjacent segment to be engaged before the sensor for an active segment will disengage.
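In software terms, the rule is something like this toy sketch (the block names and the exception are made up for illustration; the real thing is relay logic, not Python):

```python
class Block:
    """One track circuit; `occupied` mirrors the relay indication."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.occupied = False

def clear_block(block: Block, next_block: Block) -> None:
    """Release `block` only once the train has shown up in `next_block`.

    If a train 'vanishes' (sensor fault), its last known block stays marked
    occupied and following trains keep getting red signals.
    """
    if not next_block.occupied:
        raise RuntimeError(
            f"refusing to clear {block.name}: {next_block.name} sensor not engaged")
    block.occupied = False
```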
You can't do any computation with modern computers that you couldn't have done with relays in the 1970s, or the 1870s for that matter. Mechanical relays are slower, less reliable and more expensive than transistors... but they do the same thing.
They're showing a really old tower in that video. Most of the system does not use mechanical lever interlocking machines. It's mostly General Railway Signal NX, which is entirely relay-based.
NX, (for eNtry eXit), is probably the first system with an intelligent user interface. The dispatcher has a track map, with lights showing which blocks are occupied. To route a train through the controlled area, the dispatcher pushes a button on the track map at the current location of the train. Lights then come on for all the places the train can reach from there. This takes into account other trains present, other routes already set up, and track or switches locked out for maintenance. The dispatcher presses an exit button where the train is to go, which locks in the route. All the signals and switches are automatically set by that one button press, and conflicting changes and routes are locked out. As the train progresses through the control area, the switches and track behind it are released, and can be used for other trains.
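A toy sketch of that entry/exit idea, just to make the logic concrete (the track layout, names, and data structures here are invented; the real system is pure relay logic):

```python
# Entry/exit route selection, very roughly: from the pressed entry button,
# light up every block the train could legally reach, skipping anything
# occupied or already locked into another route; pressing an exit then
# locks the whole route in one step.
from collections import deque

track = {            # directed graph: block -> blocks reachable from it
    "A": ["B"],
    "B": ["C", "D"],
    "C": ["E"],
    "D": ["E"],
    "E": [],
}
occupied = {"D"}     # blocks with trains in them
locked = set()       # blocks claimed by routes already set up

def reachable_exits(entry):
    """Everything the dispatcher could select as an exit from `entry`."""
    seen, queue, exits = set(), deque([entry]), []
    while queue:
        block = queue.popleft()
        for nxt in track[block]:
            if nxt in occupied or nxt in locked or nxt in seen:
                continue
            seen.add(nxt)
            exits.append(nxt)
            queue.append(nxt)
    return exits

def set_route(path):
    """Exit button pressed: lock every block on the route at once."""
    locked.update(path)

print(reachable_exits("A"))   # ['B', 'C', 'E'] with 'D' occupied
```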
It's designed to be fail-safe. There are "train stop" devices at every signal. These raise a big metal lever at the right of the rails. On the lead truck of each subway car, there's a big metal lever connected directly to an air valve for the air brakes. If a train passes a red signal, the air valve trip lever hits the train stop, which slams on the brakes and cuts power. The signal system even checks the position of the train stop devices; there are always at least two train stop devices up and locked between two trains, and train stops can't clear until the next one is up and locked.
It's a good system. The main problem is the sheer number of relays and the amount of wiring required.
Train detection is by checking if the rails are shorted together by wheels. In keeping with the fail safe concept, the power for that check is fed into one end of the block, and it's sensed at the other end. Thus, a rail break or a power supply failure causes the system to sense a train and set signals red. On electric railroads, the rails are also the power return for traction power.
(The London system, with two power rails, is an exception.) So the sensing signals have to be different from traction power. They're usually audio frequency signals, with a different frequency used for adjacent blocks. At block ends, there's a low-pass filter capable of handling a few thousand amps, basically a big inductor. One of those failed in DC, causing a crash.
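The fail-safe convention amounts to something like this little sketch (frequencies and the threshold are invented; the point is that only a positively detected check signal may report "clear"):

```python
def block_state(tx_freq_hz: float, rx_freq_hz: float, rx_level: float,
                min_level: float = 0.5) -> str:
    """Report a track circuit's state from what the sensing end sees.

    A broken rail, a dead power supply, a failed filter, or wheels shorting
    the rails all look the same as 'no signal' -- and all of them read as
    OCCUPIED, which is the safe direction to fail.
    """
    signal_present = (rx_freq_hz == tx_freq_hz) and (rx_level >= min_level)
    return "CLEAR" if signal_present else "OCCUPIED"

print(block_state(91.0, 91.0, 0.9))   # CLEAR
print(block_state(91.0, 91.0, 0.0))   # OCCUPIED: train (or rail break)
print(block_state(91.0, 83.0, 0.9))   # OCCUPIED: hearing the adjacent block's frequency
```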
Washington, DC has a similar system, but on top of it, they have a computer-controlled system for dispatching. This is quite common - a dumb, reliable system based on trackside equipment for safety, and a computer-controlled system for scheduling and dispatching.
All this stuff has to handle snow, ice, snowplows, flooding, the huge traction currents (600VDC in New York), and lightning strikes on the rails as part of normal operation. So everything is in hulking iron and steel cases and very rugged.
There's a relay-level simulator for NX.[1] Runs on Windows.
It's possible that the computers didn't run on tubes but rather the radar systems. The two may have been conflated. I believe I've read that tubes are still quite common in high-power radio transmitters because they're more effective than solid-state components. Still a bad situation, but maybe not as sensational as it appears on the surface.
I looked at several articles before I posted and I saw several mentions of "computers and radars", which matches my recollection from the time. The computers were a special purpose design, not commercially available systems, so they could have been controllers as opposed to large data processors, but the controversy was definitely about computers and not about the high-power tubes used by the radars themselves.
Radar and satellite systems still use traveling-wave tubes in the RF signal chain. The reference to tubes could also be the radar displays, which were traditionally vector CRTs. https://en.wikipedia.org/wiki/Traveling-wave_tube
I was in the control tower of an international airport as recently as 4 years ago, and can confirm that at least some of the systems were still built on 1960's technology - the radar scopes were basically pre-CRT technology. On the other hand, for something as fault-intolerant as air traffic control there's a huge amount of value in reliable, well understood systems.
I worked in an ATC tower with the Marines, and we used the AN/FPN-63 as our precision approach radar. It came into existence around 1958, I believe. The scopes looked something like this, but even more primitive: https://upload.wikimedia.org/wikipedia/commons/b/ba/PAR_Scop...
It was maintained with duct tape, gummi bears, and black magic, as the suppliers for said parts had long, long since gone out of business.
Probably Raytheon's STAR program. I wasn't involved, but from inside it was rumored to be one of the biggest boondoggles there (and that's saying something). I think it inspired the "yesterday's technology tomorrow" motto.
When working in risk management business development, my firm was pursuing some line of business with Sabre. Most probably know, but they're in the travel logistics business primarily. A spinoff from American Airlines.
Anyway, when I was getting materials together, I reviewed the CEO's summary remarks and his look ahead to the future. Over, and over, and over, and over again he railed at the FAA and the government for not enhancing the flight systems nationwide. It would've been funny if he wasn't so right in his passionate pleas.
It is clear to me that the government is not only a criminal organization but also dangerously incompetent.
Therefore we have a moral obligation to replace government institutions with high tech systems and protocols that are not reliant on the government for support.
Ah legacy enterprise software/hardware, it keeps me employed and insane.
Where I currently work we have a system which was originally designed in '69 (coding started in '75) running on an IBM iSeries. It's old, old tech, and most of the system is kludged together so poorly that instead of being able to write patches we just write wrappers around the applications (microservices, really) that modify the program as it's running. Yeahp.
We have a system written in something called "Workstation Basic" which was designed/written in '79 (a decade before I was born). The original author no longer works for us and can only be contacted at certain times of the year because he's a snowbird.
Millions of dollars of business is done in Access databases and custom-made Excel workbooks.
I worked with a huge multinational company that was trying to replace the system they used for planning.
The system was tens of millions of lines of 16-bit assembly code written for a minicomputer in the mid-1970s. The next generation of 32-bit systems in the early 1980s was able to run the code, so they purchased one and kept it running to nearly 2000.
They had a project that was trying to rewrite everything in C targeting the DEC Alpha, but after 10 years it still wasn't ready.
Because 80s era minicomputers sucked so much electricity, we were able to sell them new Pentium servers running SCO OpenServer that could emulate them. I always wondered if they ever got their rewrite project finished.
I know of someone who is in the position of your snowbird, though still employed. He almost delights in the fact that there's been no effort to upgrade from something completely reliant on him, even though he knows the situation is crazy. And he has no personal interest in writing himself out of a job. I can't approve of that attitude.
The weird thing is that he's advocated ditching the application in favor of either a custom-written or customized off-the-shelf product; it's more that upper upper management said "if it's still making us cash, why chance it?"
Most companies that end up with that kind of hardcore legacy code a) refuse to upgrade on a reasonable basis and b) refuse to compensate their developers in proportion to the value they ultimately deliver.
So, fuck'em. Might as well enjoy the pseudo pension.
If the company has pigeonholed themselves into the situation, they clearly have the two options you stated, and that should not reflect poorly on the original author.
This is in health/medical and the employee in question is nearing retirement. I'd like to think that he should be encouraging them to consider a succession plan ahead of time to minimise risk for the thousands of people likely to be affected by this.
There's a good chance, from my understanding of the situation, that his managers don't understand the situation as well as he does. I think he should think bigger than himself.
Management's decisions are unlikely to be budget related.
This isn't specific to computer or software engineering, of course. There are still Roman aqueducts in use that were engineered thousands of years ago.
Overengineering doesn't take nearly as much skill as you'd think. It would be pretty easy for any of us to design a wall that, if left alone, would last for thousands of years.
You would think, but often things fail in unexpected ways.
https://en.wikipedia.org/wiki/Zinc_pest is destroying thousands of die-cast zinc-alloy toys from the 1930s to 1950s. Zinc alloys are pretty corrosion-resistant, so this was a surprise.
But this kind of thing sometimes happens to architectural materials, too.
https://es.wikipedia.org/wiki/Aluminosis is destroying a lot of buildings built in Spain in the 1960s and 1970s. I've seen a similar process underway in buildings from the same period in Montevideo, Uruguay, but I don't know if it's from the same cause.
Reinforced concrete, especially in chloride-rich environments, will eventually spall from oxidation of the reinforcing steel. Some kinds of aggregate, like dolomite, although resistant to weathering themselves, also have a tendency to slowly destroy concrete.
You can definitely build a wall that will last thousands of years in the absence of earthquakes, war, gray goo, or acid rain, if you make the wall out of quartz-rich or otherwise slow-weathering, low-TCE rocks like granite, cement it with lime cement instead of Portland cement, don't try to reinforce it, and don't make it too long, thin, and straight, in order to reduce thermal-cycling stresses.
Most of us would never think of most of those issues when we tried to overengineer.
The Roman aqueducts have some other interesting virtues, though: due to the arch, they used very little material. We could probably do better with stainless steel today, but they did pretty darn well with masonry. I don't know if you've ever tried building a masonry arch, but there are some nonobvious considerations, and a mistake can kill you.
Zinc and zinc alloys are also known to produce whiskers slowly (or rapidly if subjected to temperature/stress cycles) which can short circuits over time.
This is especially a problem in RoHS-compliant appliances, as the common additives (i.e. lead) used to suppress whisker formation in solder cannot be used.
Yes, although it's tin whiskers rather than the zinc whiskers that come from solder. It might be the same mechanism, but the mechanism isn't well understood yet. Which is kind of funny, given that we've been smelting tin since the beginning of the Bronze Age; you'd think we'd understand it by now.
I thought about bringing up tin whiskers in my earlier comment, but I decided I was already going over my non-architectural surprising-failure-mode word budget. :)
Yes, the Pantheon is a good example of what I was talking about — except that it's built with Roman concrete, which is substantially more weather-resistant than modern concrete, for reasons that are still not well understood. It's probably a matter of using slightly different pozzolans, but the Roman concrete manufacturing process was lost with the fall of the Western Roman Empire. So if you were to try to build a Pantheon today, you'd have a good chance it would fall down in only a century or two.
Lime cement, though, we do still know how to make, and it has the advantage of having a lower TCE, which will reduce thermal-cycling stresses on your wall when combined with other low-TCE materials like granite.
Both lime cement and Portland cement can suffer pretty badly from acid rain, a problem that was only discovered in 1853 (just yesterday, in the lifespan of the aqueducts or the Pantheon), and which I think we're well on our way to solving. It might come back, though, and it's harsher on lime cement than on Portland cement.
This should reinforce my point above about surprising failure modes in things that you think you've "overengineered" to last for the ages.
I wouldn't disagree that you and I could build a wall that would be fine for a thousand years, but could either of us build a wall that could be used daily for a large part of that time?
I realize the idea of using a wall is awkwardly phrased, and I pre apologize.
I can tell you that trillion-dollar banks also run on spreadsheets and Access databases.
What I don't get is that software designed in the '60s to run on machines of the '70s surely can't be that complex. Wouldn't it be cheaper to just re-implement it? Might not even need to change the logic, just convert it to a modern syntax.
Not for technical reasons, but my worst job in software was also my first, a part time gig in college back in the late 90s.
It was a software company owned by two people that didn't write code or do anything on a computer other than send email. They had somehow managed to hire somebody to make a moderately successful version 1.0 of their product, but now none of the original programmers were there and they were trying to come up with a brand new release.
They mentioned during the interview that they had been "burned" by every programmer they'd ever hired. In retrospect, this was a red flag that the business owners were the only common denominator there, but again, this was my first ever software job, I had no idea what to expect.
They stopped paying me after 1 month, and kept promising that I'd get paid as soon as the big contract they were pursuing landed. I quit shortly after I took a phone call from one of their creditors demanding that they fax over a check.
I got another job quickly, and it didn't hurt me too much financially (I was still living at home with my parents at the time). But you're right, I should have, if only to force them out of business and out of the next guy's misery.
You're suggesting legally chasing a few weeks salary after nearly 20 years? That's quite a grudge you're encouraging, and likely to cost much more than it would gain, even if it could be proven.
In Washington State, the state will pursue them with glee and vigor free of charge (to you). I think 20 years is a little outside how old a debt they would track, though.
Not nearly as bad, but this reminds me of some of the ridiculous requirements at the last program I worked at my previous employer.
One of those was that our code had to compile on a 32-bit SPARC pizza box running Solaris 2.6. This was in 2009. It was all about some stupid metric: our code portability was measured by how many different platforms we could compile and run on. It didn't matter how obsolete or inappropriate those platforms were; if we could compile and run it counted. Having that number go down was politically unacceptable.
There's something strange about Solaris. Around the same time, an investment bank (where the Solaris team constantly justified their existence with 'dtrace!' 'zfs!' 'zones!') was paying 50K a year per server multiplied by hundreds of servers to run Solaris 8 extended support - which didn't have any of those things, which is good, because the staff didn't know how to use them other than say they made Solaris great.
Dtrace was, and still is, awesome. Linux has come a long way in this area in the last few years, but DTrace was one of the hottest systems management capabilities I'd seen in forever, particularly on the very large SMP systems that had become Solaris's sweet spot in those waning years.
There are reasonable criticisms of Systemtap, but a lot of Sun marketing was just bizarre. They set it up on Ubuntu (which doesn't include it, and which isn't their competitor: Red Hat is) as an example of what the setup is like. They kept complaining it was unsafe, when it was only unsafe if you wanted to use raw C for tapsets. Sun had some great tech, but they were also a sales and marketing machine that had no problems spouting garbage to make a sale.
I had to port a product to Solaris 8 on similarly crusty hardware in 2011. Wasn't so bad other than having to work around an excruciatingly slow build process.
Not as vintage as this, but back in '10, I, along with a couple of other freelancers that I'd worked with in the past, got asked to work at a <legacy clothing manufacturer> in a big, old mill in Yorkshire.
We were immediately isolated from the modern-ish IT department and placed in a room at the end of the mill that contained a couple of dozen vacuum drive tape machines. So loud that they silenced the room when they were warming up.
Our task was to "check for security problems" in their ecommerce platform because there was some serious financial motivation from the men upstairs.
Their 'platform' was a decade-ish old install of osCommerce that had been hacked to death to make it support multi-site and multi-language, together with some customisations to its templating engine to provide asset reuse across sites.
It was immediately apparent that all the SQL injection and most other vulnerabilities present years ago in that old version of OSc were still there. They were alongside the many, many vulns that were introduced with the hacks. Oh, and most of the hacks were done by a mixture of programmers on the continent, so variable names and comments were easy to understand.
They didn't want to even consider upgrading, just patch it up and get outta there.
Out of professionalism it was necessary to document the many times management passed on our recommendations to fix issues we found. Looking through that file, most of those issues seem to be present even today.
How ancient this all seems . . . until you realize people are doing the exact same thing by shipping "golden" VM/container images around instead of doing proper package and configuration management.
Yes. The DevOps people seem to be saying that the sysadmin job will go away, to be replaced only by programmers, but they miss what a sysadmin is - The sysadmin job has always been about keeping the systems that actually make money up and running after the developers got bored and moved on to newer and cooler technology.
(Ugh. I'm actually pretty insulted by the incompleteness of the tools they write in an attempt to replace what they think we do. I was going through some puppet add-on to manage your ssh configuration the other day... it had a setting for "allowrootlogin" - but only allowed "yes" or "no" - no "without-password" - which is the only acceptable setting if you must allow root login at all, which you probably shouldn't.)
That's the thing... the uncool stuff that isn't making money goes away. Nobody cares. But the uncool stuff that makes money? Someone needs to actually understand it, long after the developers have moved on to the new technology of the week.
Really, the division of labor makes some sense. 90% of everything is crap nobody wants, that can safely be left to die of neglect. So when building a new application, it doesn't really make sense to engineer it to be scalable from the ground up; you're probably going to throw it out before you have to worry about scaling. And maintainability even more so. The way most companies run, five years from now, that app will be huge (and they can throw huge resources at maintaining it) or it will be dead, and the fact that it's difficult to upgrade libraries in it with known security holes won't matter. So let your developers run wild and use whatever unmaintainable crap they feel like using, why not? If it's successful, you can pay someone to deal with the problems later; if it's not, it won't matter, and you will have more developer time to spend on a project that might actually succeed.
But say you've got a hit; something that people care about, something that pays the bills. At that point? You hire people like me to prop it up. If you are smart, you hire another set of developers to re-write the system in a way that scales... but meanwhile, you need this giant pile of shitty php to keep serving the ads that pay your salary, so you hire sysadmins.
In the aughts, this was all about managing shit php code and corrupt mysql tables. I'm guessing five years from now, at the companies that manage to survive the coming carnage, people like me will be slogging our way through ancient and impossibly tangled VM images, trying to figure out all the weird versions of this and that and the other that the developer installed, and how we can set up something compatible that isn't full of known security holes.
"So when building a new application, it doesn't really make sense to engineer it to be scalable from the ground up; you're probably going to throw it out before you have to worry about scaling ... if it's successful, you can pay someone to deal with the problems later"
Agree. It's important to iterate quickly and get the intent right and not sweat the small stuff, including whatever notion people have of what's "scalable" which usually is misguided if not a fantasy. Scaling is "a good problem to have" after all; it's only necessary if people are actually using your stuff in the large.
Having said that, migrating off of a terrible language is nearly impossible and always painful. Pick stuff with a good runtime -- Java, Go, Haskell, Erlang, C#, whatever -- and you have a chance of scaling if you ever need it.
I was going to argue with you; but look at facebook. They started with PHP... instead of re-writing it in something sane, they wrote a better php runtime. So perhaps you are right.
It still seems wrong to me, though; it seems like if you've got several orders of magnitude more resources to build version 2 than you had to build version 1, you could re-do it from the ground up in something... better.
Sounds like you're mixing up DevOps and NoOps. DevOps is what happens when dev and ops (sysadmins) work more closely together. NoOps is the (probably misguided) notion that sysadmins aren't needed at all.
Think of both as ways to get a consistent baseline system which your configuration will be applied to. It's faster to spin up a VM or container than it is to do a fresh install on bare metal so you can realistically test that you really could do a clean install from the documented starting point.
Containers add even faster startup time and the option of isolation so you can have multiple containers which think they're writing to the same location but actually aren't. That's not terribly important in a one-app-per-system model but it's awesome for testing or running multiple instances of an insufficiently-configurable app on the same system.
> Call me a simpleton but I am happy with better package management and config systems.
Fundamentally (in terms of process, not technical implementation) that's all containers really are.
Run this, with this config file, and everything is pre-packaged for you in a bundle you can ship to your prod server if you like (or not, run it locally, whatever).
It's quite nice. I shared the skepticism ("so.. it's just VM's then? So what?") but having switched to a dev toolchain using it - it's actually a LOT simpler than developing locally normally.
Containers are pretty flexible (unfortunately?). They can be used as a stop-gap to everyone jumping ship to Nix (we can only pray to god) somewhere down the line in that they let you get some decent isolation properties out of existing package management.
Since you can pretty much use any config management system you like with containers, they also play nice with that too.
There is one other ancillary benefit in that they enforce a little bit tighter process isolation than you might get if you just installed a bunch of stuff on a given server. So that lets you do things like overcommit available resources (put more on the server than it can run at one time, but stochastically, it can probably handle) and it helps ensure you have "livestock" and not "pets" (servers configured automatically, repeatably, and with no concern with their demise rather than servers you need to baby).
They're pretty much just a (mostly) handy abstraction for a bunch of things that you probably want to be doing but is exceptionally frustrating to set up piece by piece.
I had similar views, until I did some contract work for a distributed systems platform.
I wrote a demo app that connects to the Twitter streaming API and pushes data to an Elasticsearch instance running on Apache Mesos. One thing that struck me was the separation-of-concerns principle: starting containers on a cluster with Marathon and Mesos is such a breeze. No complex deployments or package management; everything is isolated from everything else very neatly and thoughtfully. Of course, the complexity is orders of magnitude higher and this tech comes with its own bundle of "joy", like maintenance difficulties and monitoring issues, but what new tech doesn't?
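For flavor, here's a rough sketch of the Elasticsearch-push half of that kind of demo. The host, index name, and the fake_stream() stand-in for the Twitter streaming API are all placeholders; in the real setup the Elasticsearch instance was wherever Marathon/Mesos scheduled it, not localhost.

```python
import json
import requests

ES_URL = "http://localhost:9200/tweets/_doc"   # hypothetical endpoint

def fake_stream():
    """Stand-in for a real streaming API client; yields dict-shaped items."""
    yield {"user": "alice", "text": "hello mesos"}
    yield {"user": "bob", "text": "hello marathon"}

for doc in fake_stream():
    # Index each item as a JSON document via Elasticsearch's REST API.
    resp = requests.post(ES_URL, data=json.dumps(doc),
                         headers={"Content-Type": "application/json"})
    resp.raise_for_status()
```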
Basically, it allows you to requisition servers more quickly by providing a baseline system. If you're not doing scalable-on-demand work (and most of us aren't), it's not that useful.
Virtualisation gives you much better use of your physical resources, gives you a heap of flexibility, provides much better remote control of machines, and has been around since before the year in your handle.
The only way to stop this madness is to go on strike against it, as you did.
I work very hard to never mention online that I've had anything to do with SharePoint or ColdFusion because it will bring recruiters with a job that will kill you.
I was in our recruiting DB the other day, and came across an old version of your résumé. I am very impressed with your background and think you might be a perfect fit for an opening that we have for a Senior ColdFusion Developer. Please get back to me at your earliest convenience regarding this exciting opportunity. Thank you.
I legally changed my name over a year ago, and I still get calls from recruiters once a week or so asking for my old name.
Why? Because they have an old copy of my resume from before my name change. This also leads to me explaining that I don't work at my previous employer anymore and that I've found a new job that I love. They just don't give a shit about having up-to-date records.
My entire name was changed: first, middle, and last, and my new name doesn't resemble my old one in the slightest. It's very awkward having to explain to them that the name they asked for isn't my legal name anymore (and I'm half-tempted to be glib and say "he's dead", but I won't because that could get me in big trouble if they believe it and it gets back to the authorities that I've been using my name change to fake my death).
It's especially awkward, because it forces me to out myself to them as transgender, since my old name was unmistakeably male, and my new name is unmistakeably female. Hell, I'm very much out and proud, but it's just plain awkward to have to tell a recruiter over the phone when they're not expecting it and when it's my very first communication with them.
My resume has a "note to recruiters" attached. It is very clear if a recruiter has made any effort to read any of that. The clueless ones end up in the list above. http://www.rogerbinns.com/recruiters.html
Priceless. That ought to be a thing like the 'GPL' or other licenses, a 'license to contact'. Consider making it available under CC so others can use it too and maybe the cumulative effect would be enough to make some real changes.
I might add pierpoint.com to that list. A few hours ago, I got an email from them asking me if I was interested in a job with my immediate previous employer (which is kind of a mu question to ask, as I'm so interested in that idea that I've actually secured a start date there in less than 2 weeks--let's just say that I really liked it there and regret leaving).
If the guy there had done anything more than scrape LinkedIn for .NET stuff (which I can do and have done), he'd know that I would not need assistance in returning to $FORMER_EMPLOYER.
I still get occasional emails like this about a ten-year-old, straight EE version of my resume. Someone out there really wants to hire me as a facilities engineer. :P
People that want to avoid recruiters should just move to Brazil.
I am in Brazil, and I accept any tech job that people offer; even if someone offered me a job working with MUMPS I would accept (it's better than going hungry, at least...)
But I get one or two recruiter calls (as in, phone calls! not e-mail) per year, and usually it is useless (the last time, they told me some bullshit about their process needing adjustment and that they would call me later, and they never did... another one before that complained I was not experienced enough, but wanted something that is almost impossible to find anyway...)
Since I'm anon, I'll say that I did a few ivy league SharePoint implementations. Thankfully, their IT/IS shops had to inherit the content and provisioning responsibilities.
Btw, anyone needing Exchange or similar implementations should use http://www.coyotecrk.com/. They did Cisco's and Stanford's implementations. They'll charge arms/legs, but they'll deliver something production supportable for real shops.
Man, you are not joking. I still get the occasional ping from someone looking for me to work on TIBCO. I did a tiny bit of work on that in ~2001. It was on my resume for about a year. I'm still getting pinged for it 13 years later.
Ironically (or perhaps predictably) I've avoided adding the same two technologies you mentioned to my own resume.
Add "Crystal Reports" and "MUMPS" to the "do not put on resume" list.
I never even actually worked with Crystal Reports. A long time ago, I made a bunch of similar resumes that each included one thing extra on them, because I was curious about what I could be learning to get better response rates. The Crystal Reports variant produced a deluge of the worst-sounding jobs touted by the most clueless recruiters. And that was back when I still put my phone number on my resume, so I imagine the unlucky guy in area code 773 that got my old number is still getting calls about it.
And that is why when I got a job with a company that actually used Crystal Reports, I was very careful to not work on those portions of the code, on the off chance that some recruiter would smell some whiff of it on me, chase me down, and sit on me until I coughed up a resume in Word format.
> A long time ago, I made a bunch of similar resumes that each included one thing extra on them, because I was curious about what I could be learning to get better response rates.
Add "Crystal Reports" and "MUMPS" to the "do not put on resume" list.
And RPG/400. I took two semesters of that in college, but (thankfully) never got roped into programming in it professionally. I'm honestly not sure you could pay me enough to write RPG code, even if you were Bill Gates. Just thinking about using SEU (Source Entry Utility, IIRC) to program in that column oriented nightmare gives me the heebie-jeebies.
Somebody should create a CV with years of experience in all the worst technologies and circulate it through all the worst recruiters and see what happens.
Aha, a CV honeypot! You could harvest the names of all the recruiters that take an interest, then publish a blacklist for legitimate job seekers so they would know who not to talk to. :)
It seems like everyone wants resumes in Word format. The one time I tried to send a PDF the recruiter nearly treated it like it was some unknown format and asked me to resend as Word.
That's exactly why they want it as a Word document. That's what they use to produce their in-house resumes for distribution, and they don't want to waste ten seconds on opening an HTML or PDF resume in Word and selecting "Save As... Word document". They want to create their own resume, with your name on it, that is loosely based on your actual resume, so they can send it out to hiring managers.
It's very safe to say that if you don't know how to open and edit a PDF document, you have no business being a tech recruiter. Anyone foolish enough to be represented by such a person while looking for software jobs is going to have a bad time.
If you're seeing a demand for it in Word format, they're a recruiter and a lazy one at that.
Most in-house guys accept PDF. I explicitly tell recruiters that my resume is typset, so they have a choice: a PDF or a LaTeX file that nobody will ever make heads or tails out of. I do not even know if LaTeX has a .doc plugin.
Don't do it! Shady recruiters ask this so they can edit your resume without telling you. Only send PDFs to recruiters, the shady ones won't bother with those.
If what I have heard is correct, it is because some resume-scanning software has trouble with PDFs. They often default to asking for Word because they know it works.
I know the feeling... I mentioned on LinkedIn that I had built NodeJS and Perl integrations between SharePoint, our monitoring platforms, Graphite, and our ticketing system, and had 3 or 4 new recruiters in a month talking about SharePoint consultancies.
Aside from the bureaucracy, that actually sounds like a fun job to me, solving obscure problems and working with things almost no one has heard of, instead of churning out and maintaining boring "cookie-cutter" applications. Then again, I do like retrocomputing, reverse-engineering, and the demoscene...
I imagined myself in a job interview in 2010, trying to explain how useful my extensive knowledge of Xenix, PL/M build systems, and VMS would be to my prospective new employer.
You can phrase it as having gained that knowledge through your problem-solving skills. Working on obscure things, that you can't just Google the answer to when you get stuck, really helps build those skills.
I first read the term as "necrocomputing" instead of "retrocomputing". I think I have a preference for "necrocomputing", but then I may have been reading too much fantasy lately.
Wouldn't the sheer pointlessness of it all bother you after a while? If someone wanted to pay you to dig a ditch and fill it up again, for the right price, would you do it? I mean, I'm not trying to claim that working for that hot new startup which is going to revolutionize ___ or endlessly tweaking some megacorp's ERP system is necessarily such a boon to humanity, but still.
Most things we do are a little pointless in the end. In the long run, the universe will cool to uniformly distributed inert matter, and all of our efforts will scarcely affect the distribution.
I currently work at big company X doing cutting edge work on the latest rewrite of big system Y, used by millions of people each day. In five years, the next rewrite will begin, and probably within ten years every last line of code I've written will be deprecated and deleted.
I supported booking systems that could have bookings stretching far into the future; this made switching systems tricky, as a lot of quite complex bookings would need recreating, each taking proper time to do. The scope for automation was there, but that wasn't possible.
With the software we supported there was no expertise in the company as the original developers left a long time ago. Therefore there was no way to understand why something broke or to write a patch. Instead everything was fixed by rote, if this went wrong then you did this. If that went wrong then you did whatever was needed for that. All of these fixes were handed down in the oral tradition and you took your own notes - there was no manual.
We further outsourced to India, so there was no trace of the people responsible for the code, just us clueless idiots.
Some of these systems were MS DOS based and this was about a decade ago. What I did manage though was a means of getting on to any computer anywhere in the world even if it was totally air gapped and owned by someone that did not speak English. Even better, in most cases this would take seconds from a standing start. If I needed to access a computer that was on a remote network but turned off, I could usually Magic Packet it on and be fixing it as soon as it booted. This was a golden era for my remote access skills, nowadays I have difficulties getting my own computer online!
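For anyone curious, the Magic Packet trick mentioned above is just a UDP broadcast; something like this minimal sketch (the MAC and broadcast address are placeholders):

```python
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet to power a machine on remotely."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    # Magic packet format: 6 bytes of 0xFF followed by the MAC repeated 16 times.
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake("00:11:22:33:44:55")   # placeholder MAC
```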
There is another way of looking at this, from the perspective of what will be out there for you to do in 25 years time, I am optimistic that there will be legacy work for us if we cannot keep up with the new stuff.
> What I did manage though was a means of getting on to any computer anywhere in the world even if it was totally air gapped and owned by someone that did not speak English. Even better, in most cases this would take seconds from a standing start.
My first job in software was also my worst. I was learning a lot by building and enhancing a SaaS applicant tracking system for a small firm in Denton. But it was run by really shady people; our CEO would hire grossly unqualified people that he was friends with back in his home country... Our director of QA had previously been a high school history teacher, and had never used a computer before... it was nuts.
We were also actively screwing one of our business partners. They sued us, and my boss considered me a very dangerous witness, so he sent me on a paid vacation so I couldn't be subpoenaed. Crazy place.
We have a box running Windows 3.1 hooked up to an instrument on our plant. The answer I got when I asked about it was that it has a proprietary ISA DAQ card in it with 'flaky' drivers that prevents us from upgrading it.
It's a non-critical instrument, but it's providing 'useful' data. I think it's some type of microwave radar measurement instrument - very specialised hardware. It's been working well since the '90s, so it's not worth spending money until it fails.
"Proprietary hardware cards" are notorious in industrial control system world they inevitably lock you into some ancient infrastruture when modern hardware and O/S's stop supporting ISA slots and things like that.
As late as 2008 or so, when I was leaving university, there was still a PC running DOS for its pulse height analyzer card for spectroscopy. Same thing: it still works, and the pain to use it is not bad enough to warrant the $10k or whatever it takes to replace it with a completely new measurement system... which often won't even have better specs regarding the actual measurement.
One of our clients still has a DOS box connected to a 14.4k modem that we have to drag EDI interchanges off. You wouldn't believe how much it cost to get a modem and POTS line installed in our DC so we could talk to it. There's a Windows service written in .NET that talks to it via a serial port and POSTs it to a REST endpoint.
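The real glue service is .NET, but the shape of it is roughly this Python sketch (port name, baud rate, and URL are placeholders):

```python
import requests
import serial   # pyserial

ENDPOINT = "https://example.internal/edi/interchanges"   # hypothetical

# Read whatever the modem hands us over the serial line and forward it on.
with serial.Serial("COM3", 14400, timeout=30) as line:
    while True:
        chunk = line.read(4096)          # raw EDI data from the DOS box
        if not chunk:
            continue                     # nothing arrived before the timeout
        requests.post(ENDPOINT, data=chunk,
                      headers={"Content-Type": "application/octet-stream"})
```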
Of course you could put a small internet-connected embedded box at your client's place... (maybe even a commercial terminal server, if you trust its security) and directly emulate the whole modem+POTS line.
Although nothing compared to what was in this story, you should see the level of obsolete crap in place at adult website providers. Systems that haven't changed since the 90's that still generate revenue, so they'll never be shut off until there's a flood or a 40MB hard drive finally dies.
Have to admit, that approach makes a lot of sense to me. It's still chugging along and bringing in $, so why not just have a policy of benevolent neglect?
That Pentium-90 is probably sucking a ton of power at this point, when it could be virtualized and you could run 50 of those boxes on an $800 Dell pizza box. I mean, they are literally running the same hardware from the 90s.
There's not really any significant room for improvement on any of those factors. You might be able to save a rack unit or two, and several watts by replacing hard drives with a SSD, but if you don't have 50 boxes to consolidate, you're still facing the baseline cost of having a box taking up some amount of physical space and requiring several watts of power. Unless you make the much bigger up-front investment of migrating to a shared hosting platform.
How many person-hours is it going to cost you to do the move when a box breaks, a hard drive fails, and/or there's a non-updateable security vulnerability in the system running on those boxes?
I had a good backups business for adult websites going around 2008. It was nothing more than rsync over ssh with a couple duplicates and I would burn some dvds and drop them in the post once a month. Was a very good business. Probably still viable, because those companies valued having physically possessable images of their config files and source code.
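Something in the spirit of this sketch, run nightly from cron (paths, hosts, and the second mirror are made-up placeholders; the real thing was plain rsync over ssh, not Python):

```python
import subprocess

SOURCES = ["/var/www", "/etc", "/home/clients"]
MIRRORS = ["backup1.example.net:/backups/clientA",
           "backup2.example.net:/backups/clientA"]

# Push each source tree to every mirror over ssh; rsync only transfers changes.
for mirror in MIRRORS:
    subprocess.run(
        ["rsync", "-az", "-e", "ssh", *SOURCES, mirror],
        check=True,
    )
```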
I've read in passing that in times past they were web technology pioneers. Seem to remember some were using Freebsd when the canonical approach was Sun etc. big iron and UNIX(TM), and Linux was perhaps not quite yet mature enough.
From what I've read, the adult industry (or, more generally, the human desire to have porn) has pioneered technology in general. I can't recall the source, but I recall reading something that said that most, if not all visual media were used for porn almost immediately after their inception.
I think porn is also something where your architecture only needs to just work, so that encourages cutting corners and occasionally creative solutions.
>From what I've read, the adult industry (or, more generally, the human desire to have porn) has pioneered technology in general. I can't recall the source, but I recall reading something that said that most, if not all visual media were used for porn almost immediately after their inception.
" Various figurines exaggerate the abdomen, hips, breasts, thighs, or vulva. In contrast, arms and feet are often absent, and the head is usually small and faceless.
The original cultural meaning and purpose of these artifacts is not known. It has frequently been suggested that they may have served a ritual or symbolic function. "
yep :) Imagine an archaeologist in the year 3000 looking at backups of our days Internet and wondering what "ritual or symbolic function" all those images and videos played...
I remember upgrading a bunch of this stuff in the mid to late 00s. Likely nothing that I upgraded to then modern standards using Python 2.6 and PHP 5.x has seen an update. At the time, it was actually hard to upgrade to PHP 5 or anything other than PHP 4 or Perl 5 because so many sites were using shared hosting that only support PHP4/Perl5 + MySQL.
Did some work for a mammary related business many years ago.
A periodic job was to go through "Lois'" inbox, and clean it out. It seems their payment frontend simply generated an email for the backend payment processor (Lois). Presumably, she manually keyed the cc info into a physical xon terminal to process payments. There were usually 10s of thousands of messages consisting only of credit card info sitting in the clear for months at a stretch.
It was FreeBSD though, so they were cool in my book :)
Five characters! Luxury. I learned to program on a system where we only had two. (Not joking: on the Commodore eight bit machines, only the first two characters of variable names were significant.)
I learned to program on a BASIC system that had only one. Although strings were suffixed with $, so you could have both A and A$. Did I also mention that only upper case was available?
I do love war stories, but I think the OP has missed something:
> After the meeting one of the managers told me that it was really our job to come up with projects that the customer wanted to buy, not the other way around. And it usually couldn't just be a general project for minor improvements, it'd need clear and ideally measurable goals
I think the OP has misunderstood the nature of consulting. This is exactly what needs to be done to convince X to spend some of their cash on having Y do work for them. You need to make a business case, with concrete duration, cost, goals, and benefits. That this system is a mess is really not in dispute by anyone... but no one will pay to improve it "just because".
Mine. Pick Basic, with only an ed-like line editor, in all caps, on Pick OS running inside an emulator called VMark UniVerse on HP/UX. I lasted a year.
I worked at a large aerospace manufacturer who had their most important production systems (just for a single factory) running on an ancient VAX.
It was written in the 60's and the original author's son, who was the sole person on the planet who knew anything detailed about it - he was the architect, the dev, the BA, the sysadmin and so on - was due to retire after 35 years with the company.
The DR plan was similar to this - they had found a model of the same computer in a museum somewhere that still worked and they had dibs on it.
In the end they shut down the factory; one of the major factors in that decision was this system. Just too hard to replace it.
It's the people that make a job good or bad; the technology plays a comparatively small role.
I've worked with people I'd do anything for, including spending a couple years wading through old technology. You do it for the team, because you love your people. The same is true using new technology: it's easiest when you're doing it for your team.
In a previous life, I was a systems engineer at a company[0] that had several clients with what would be charitably termed "legacy" systems, including SCO UnixWare and "Olivetti Unix".
Yeah...pretty much everyone with a vaguely ISA/EISA/MCA box in those days just shipped a 99% unchanged reference port of i386 SVR4. It was interesting that one of the exceptions was, of all folks, Dell. They had a really nicely done SVR4 distribution with a hugely improved installer, a lot of open source stuff prebuilt, and lots of bug fixes. Ran it for a while as my primary desktop and liked it a lot. Shame Dell didn't stick with it.
I went to college at the University of Texas at Dallas.
Our online Student Information System (SIS) was one of the very rare online services with operating hours. It wasn't available at night (and possibly on weekends too, but I graduated eight years ago, so my memory is fuzzy).
At one point, I started asking around, and I found out that it was because SIS ran on an old mainframe that couldn't read and write the disk at the same time. During operating hours, it reads the disk and makes all changes in memory. When operating hours are over, it writes all its changes to disk.
This was in 2003-2007 and they still had that mainframe controlling all their class schedules and student data. UTD was considered the biggest tech-oriented university in the southwest, too.
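The pattern described there boils down to something like this toy sketch: load once at open, serve updates from memory, and write back only at close (file name and record shape are invented; the real thing was a mainframe, obviously, not Python):

```python
import json

class DeferredStore:
    """All updates stay in memory until the nightly write-back."""

    def __init__(self, path: str) -> None:
        self.path = path
        self.records = {}

    def open_for_the_day(self) -> None:
        with open(self.path) as f:          # the only disk read of the day
            self.records = json.load(f)

    def update(self, key: str, value) -> None:
        self.records[key] = value            # in memory only, until close

    def close_for_the_night(self) -> None:
        with open(self.path, "w") as f:      # the only disk write of the day
            json.dump(self.records, f)
```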
Ten or fifteen years ago, I spent a few years maintaining a "system" in FoxPro for unix on bastard hardware . . .
I once had to maintain a system using "Westi" 1.1, Westinghouse's knockoff of CICS . . . The day I completed my first serious project and linked it, I crashed the company because the operating system was named "test", so I'd just overwritten it. About the time I was fired, I figured out that the entire system was bootleg.
IF NO ONE WILL ADMIT EVEN KNOWING WHO BOUGHT THE IDIOTIC PILE OF MISMATCHED PARTS, WALK OUT IMMEDIATELY.
It's its sibling, Already Invented Here. Every kludge makes sense in the context of the system state when it was implemented, but that leads to a system where /everything/ ends up a kludge. Management didn't want to invest any time in fixing a system that already "worked", so 30 years of temporary fixes turned it into an absolute monstrosity.
Both of these are important observations. And, as far as I can tell, both of them exist largely because of corporate politics.
If you could design a corporate culture that minimized the incentives towards both, I think you'd be a long way towards having a more sustainable company viz technology.
There was one time (2006 at a company of less than 50 people) where I absolutely refused to build the new pricing features in a billing system (which would have been a big hack on a long series of smaller hacks) unless they let me refactor it (meaning rebuild it from scratch). After waiting for a few months without their new pricing features they finally gave in. That's at a young, nimble company where the co-CEO had a programming background. I can only imagine how hard a fight that would be at a larger, slower company.
I've worked for a government data center where they ran instances of Ultrix in a commercial VM to run old data processing software that ingested and barfed out CSVs that contained UUencoded blobs of image data. All this was written in a mixture of very very old Perl, shell scripts in some gawd-awful Ultrix shell CSH script, and C code full of weird -isms that made it difficult to port to anything made after the rise of mammals. This was of course mission critical. At least they virtualized it instead of continuing to run physical Ultrix boxes, which was apparently a huge fight with management.
A side dish of LULZ was that they were experimenting with moving the VMs to EC2, so there is probably Ultrix in EC2 now.
This stuff is really, really common in large "mature" industries and government.
What fight? The way those battles are waged in large, slow companies is that the political decisions are made over your head, without your input, and decided before you're made aware of them. It's a lot harder to try and fight a decision when the decision-makers don't know or care what you have to say.
I think the most interesting lessons of this story have to do with the following expectations.
First, the expectation of support timelines. It seems in tech that support is promised for products far beyond when the vendor should reasonably be expected to support them. Companies make large investments in technology that ultimately aren't long-term enough.
Lesson: Any company doing a major project where hardware or software is a significant share of the overall cost, and where that hardware or software is not commodity-level or has low market penetration in the industry, should seriously reconsider its TCO projections, because long-term support will get increasingly expensive over the life of the project. (Contrived example: supporting Windows XP via virtualization vs. OS/2 Warp.)
A second expectation is the timeline of viability for technologies. Non-technical business people of the past generally seemed to expect a technology to stay viable for around 15-20 years. More recently, stated expectations seem shorter, but actual lifetimes are not.
Lesson: Businesses that make serious investments in technology need to consider the length of that investment, and technical people should probably take those estimates and double or triple them, because that's probably how long the hardware/software will actually be in use. Companies providing technology or support need to have realistic timeframes for support.
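As a back-of-the-envelope illustration of both lessons, here's a toy TCO model in Python. Every number in it is made up; the only point is the shape of the result when support costs escalate and the system outlives its planned retirement date:

    # Toy TCO model: up-front cost plus support that escalates each year
    # once the platform drops off the mainstream path.
    def total_cost(purchase, annual_support, escalation, years):
        support = sum(annual_support * (1 + escalation) ** y for y in range(years))
        return purchase + support

    purchase = 500_000        # up-front hardware/software (invented figure)
    annual_support = 50_000   # first-year support cost (invented figure)

    planned = total_cost(purchase, annual_support, escalation=0.03, years=7)
    actual = total_cost(purchase, annual_support, escalation=0.15, years=20)

    print(f"Planned 7-year TCO: ${planned:,.0f}")             # roughly $0.9M
    print(f"Double-or-triple-it 20-year TCO: ${actual:,.0f}")  # roughly $5.6M

The second number is the one that actually shows up in the budget when the replacement project keeps slipping.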
As a corollary to this lesson: I have spent some of my career thus far doing IT support-type roles for small organizations, and while I haven't encountered quite the same level of obsolescence as the author, I have seen people using NT 4 for critical server infrastructure in 2012, I've set up DOSBOX[1] in Windows 7 so clients could keep using the copy of Lotus 1-2-3 they paid for and have been using since 1987, and in 2011 I tried to figure out why Netscape Navigator, on what I think was System 7 (using dial-up!), wouldn't properly render a webpage. People keep tech a long time, and people in the tech world need to expect that and plan accordingly.
The final expectation is one that technology companies have: that tech should be replaced. While there are many cases where tech gets old, breaks, and has to be replaced, there are lots of cases where it can stay functional almost indefinitely, and there are industries that will expect exactly that.
Lesson: Tech companies should not neglect the future, but should also strive to design technology that can last a very, very long time. Companies that do so will find a niche and make money, because there will always[2] be enough customers to support them.
[1] It was going to get some use on an iPad via LogMeIn, and there was some reason, now forgotten, why DOSBOX was a better choice than the command prompt.
[2] Subject to prevailing economic winds, of course. You can still buy buggies and buggy whips, but it's not a growth business.
1. Your systems can last longer if they're based on free software, so you don't have to worry about a dependency going away and you not being able to fix it. (And of course effective copy protection here is the kiss of death.)
2. Your systems can last longer if they have fewer dependencies. (I once spent several days trying to compile a six-month-old Guix from source; it was difficult because it depends on so many things.)
3. Your systems can last longer if the dependencies they have are more popular. Installing DOSBOX today is a lot easier than installing an emulator that will emulate the Tandy 1000 version of MS-DOS. (I think Lotus 1-2-3 had a Tandy version, right?) This is synergistic with the previous two: your free-software dependencies are still going to be a pain to handle if nobody else uses them.
4. Your systems can last longer if they're internally simpler, so that you can fix them when they do fail.
What often happens is that a company no longer wants to support a product that's still in use running production stuff. So the company sells off the support accounts to some other company which makes money supporting that product. This is how dBASE, filePro, and Framework (Ashton-Tate software) are still around. It's how OS/2 morphed into eComStation.
I agree with you, but most software engineers seem hell-bent on ignoring the discipline that every other kind of engineer is required to learn in order to exercise their craft. I don't mean to say that software engineers are not engineers, but I do think we should put more emphasis on shipping correct programs rather than merely functional ones.
> I have seen people using NT 4 for critical server infrastructure in 2012
This is rather worrying given how Microsoft abruptly abandoned support for NT 4 when they spotted a security hole they couldn't fix without fundamental changes.
Reminds me of a contract gig I passed on: supporting Microsoft's creaky inheritance of Danger's Hiptop/Sidekick, stuck in cheapest-possible maintenance mode. Everyone associated with it was trying to rotate out, and morale seemed to be less than zero.
Also, never work for slave-driving shops like Taos (aka Tause Mountain) unless you really need the money, because it will crush your soul. (E.g., master cool techs and be awesomely personable to get cooler gigs/referrals.)
That transition was a real mess. We had some games in the T-Mobile Sidekick store, and getting paid the royalties by Microsoft was way more difficult than it should have been after the transition. It took months to get added as a vendor, and I even had to send them a "final invoice" using the numbers that they had provided. The next collection step was going to be at least small claims court. The Danger team did build a great system for its time, and by the time Microsoft took over, it went downhill fast. And then Microsoft's Kin flopped too.
The incident caused a public loss of confidence in the concept of cloud computing, which had been plagued by a series of outages and data losses in 2009. It also was problematic for Microsoft, which at the time was trying to convince corporate clients to use its cloud computing services, such as Azure and My Phone.
I've heard good things about Oracle's RAC, but it's understandably intolerant of your screwing up its disks (SAN mis/re-configuring) when you aren't properly maintaining backups. I also heard the consultants you have to hire after you manage such a feat are expensive.
> I've heard good things about Oracle's RAC, but it's understandably intolerant of your screwing up its disks (SAN mis/re-configuring) when you aren't properly maintaining backups
There are a number of problems with RAC, some of which are people using it wrong, and some of which are inherent to RAC. "Using it wrong" covers things like people not understanding that it sits on shared storage, so it provides compute-node resilience, not storage resilience; they should probably spend on some Data Guard (or equivalent) unless they want to be the DBA equivalent of the server admin who thinks you don't need backups because you've got RAID.
The built-in problems come from the fact that Oracle ASM doesn't check[1] the signatures on disks/LUNs presented to it. So if the SAN admin, I don't know, manages to somehow reverse the mappings for one LUN of 30 between the stress RAC and the dev RAC, Oracle will not refuse the disk and say "that ASM disk has the stress signature on it"; it will overwrite the stress LUN with dev data for a while, then go to read it, discover it doesn't have the on-disk structure it expects, and crash with a SEGV or some other entertaining but unhelpful error. But only after it has irretrievably corrupted the ASM group, of course.
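The guard being wished for here is simple to state, even if ASM doesn't do it: read an identifying label off the LUN and refuse to touch it if it belongs to a different cluster. A toy sketch in Python (the label format is invented for illustration and bears no relation to the real ASM disk header):

    LABEL_SIZE = 64  # first 64 bytes reserved for a cluster label (invented layout)

    def read_label(device):
        with open(device, "rb") as f:
            return f.read(LABEL_SIZE).rstrip(b"\x00").decode("ascii", "replace")

    def open_for_cluster(device, expected_cluster):
        """Refuse to use a LUN whose label names a different cluster,
        instead of silently scribbling over someone else's data."""
        label = read_label(device)
        if label != expected_cluster:
            raise RuntimeError(
                f"{device} is labelled {label!r}, expected {expected_cluster!r}; "
                "refusing to open it read-write"
            )
        return open(device, "r+b")

Failing loudly at open time is cheap; failing with a SEGV after the disk group is already corrupted is not.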
I wasn't totally skeptical - we did RAC on AWS in tests back in the day using a third node as an iSCSI target, but it was a) sketchy as hell, b) not at all redundant, c) not something I thought Palm would go for.
It might work on something like OEL with ZFSonLinux using zfs send/recv. Larger implementations might want to investigate DRBD or something like OCFS2, GPFS, AFS, or Lustre (none of which probably play well with cloud environments). Maybe Gluster, but with trepidation.
(It was an AWS consulting shop with banking/military chops, who could sell ice to enterprise Eskimos.)
What people don't always understand is that outside the tech world, there are people who still think computer upgrades are unnecessary expenditures. It's true that the theoretical lifetime of a computer is probably something like 50 years, aside from the hard drive and the power supply, which will generally go bad much sooner.
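For a rough sense of why the short-lived parts dominate: under a crude assumption of independent, exponentially distributed failures, the parts' failure rates add, so the box as a whole fails about as often as its weakest component. The MTBF figures below are made up purely for illustration:

    # Crude series-system model: the machine is down when any part fails.
    # With exponential failures, rates add: system MTBF = 1 / sum(1 / MTBF_i).
    mtbf_years = {
        "motherboard/CPU": 50,   # invented figures
        "power supply": 8,
        "hard drive": 5,
    }

    system_mtbf = 1 / sum(1 / m for m in mtbf_years.values())
    print(f"System MTBF: {system_mtbf:.1f} years")   # about 2.9 years

So the 50-year "theoretical" lifetime only shows up in practice if someone keeps swapping the drives and power supplies, which is exactly what these shops end up doing.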
The more difficult thing is that in a field like aeronautics, any minor change has the possibility of generating a catastrophic event. In these situations, the cost of replacement, testing, and phasing things in is enormously high, and the risk is obscene when a change does need to be made.
I once worked somewhere where we never switched an old machine off. It was 15 years old and would have cost £15K to replace. They had been advised never to switch it off, on the basis that it was most likely to fail on power-up.
wow, my worst job infrastructure experience ever was when we were all forced to run a windows virtual machine to run outlook as our mail client, even though we were developing on linux and there was no real reason to prefer outlook over a linux mail client. (our company was making software for the windows vm market, and the ceo got the bright idea that everyone should run a windows vm to dogfood our product.)
it was a pretty miserable experience, but this kind of puts it into perspective :)
I won't say it was my worst job ever, at all, but I can empathize. I joined a consulting company a few years back and ended up doing performance optimization on a legacy point of sale system written twenty years before, in C, for a large home improvement supplies company. Some of it actually proved interesting, but in terms of career moves it wasn't the best.
I love it :-) What a trip back in time! I've been reading a bit about computing history of late, and that article sounds like it's straight from the pages of some old Jargon File entry...
In 2007, for a big bank, I fixed an intranet site for a version of the Mozilla browser (pre-Firefox) that ran on an old version of Debian used in their branches.
MSG: APL 1
DISTRIB: *BBOARD
EXPIRES: 03/17/81 23:08:54
MINSKY@MIT-MC 03/11/81 23:08:54 Re: too-short programs
APL is compact, I suppose. So is TECO. When I wrote the following
Universal Turing Machine, which works, I actually understood it.
[ I've interpolated the non-printing characters as displayed by (Gnu) EMACS, escape is ^], ^^ is one character, as is \356: ]
i1Aul qq+^^0:iqm^[29iiq\356y0L1 00L1 11L2 A1L1
y0L1 0yR2 1AR2 AyR6 yyL3 00L0 1AL3 A1L4 yyL4 0yR5 11L7 A1L4
yyR5 0yL3 1AR5 A1R5 yyR6 0AL3 1AR6 A1R6 y0R7 0yR6 11R7 A0R2
^[j<sR^[;-d-2ciql-^^^[ci"ed^^^[cii^[ciuq'^[>
j<sL^[;-d-2ciql-^^^[ci"ed^^^[cii-2c^[ciuq'^[>jxblx1lx2lx3lx4lx5lx6lx7hk
iyyAyyAyy^[32<i0^[>ji110101110000010011011^[ 1uq<htmbqq=>
I do not advise attempting to understand this code, which is
almost as bad as that for the Universal Turing machine.
Please ack receipt of this and/or send me email (in my HN info); for others, note this is ITS TECO, which I was told was by far the most powerful version of it (fortunately, by the time I showed up, learning it was no longer really necessary).