
Most of the stuff people use Docker for, I was already doing in 2006 with WebSphere and EAR "images".



No, it was miserable as f and in no way comparable. Getting WAS to run locally could take a day or two of configuring, and since everything was done manually, it all had to be tweaked again whenever you loaded in new stuff. And if you broke something (or it broke by itself), it was easier to just wipe it and reinstall everything from scratch, again wasting a few days of work.

And if you tried running your own EAR files, it could take an hour to get from a code change to actually testing it: building the file, installing it, reloading the server, and then getting back to the correct state for testing. If you discovered a bug, you had to patch the code, build the file again, and start over, wasting another hour. Some of this could be avoided by using JRebel, but if the changes couldn't be hot-swapped you were SOL.

No way WAS+EAR can be compared to running stand-alone images. It was a clusterfuck of dependencies and dependent configuration.


Absolutely. It was hellish. Very few people understood how intricate it all was or how bad the logging and error messages were when you got something wrong, and little or no mind was paid, from the word go, to portability between application server implementations.


Just to be clear, I was talking about J2EE deployments of the time in general, not WebSphere in particular. I was mainly stuck on JBoss 2.x/3.x and I was keen on using Resin instead.

Yuck.


Without regard to the merit of each tech, I've made the J2EE to K8S comparison before and am glad to see here I am not the only one who saw the resemblance.

It mostly goes to show that containerizing application code for execution in a managed environment is nothing new. I'm sure this was also done somehow in the mainframe world.

As to whether K8S is a superior embodiment of the concept, I'd say that the problem is rather that a large majority of adopters are just cargo-culting without having a real need for what it actually provides, externalizing the costs of the complexity it brings. I have nothing against K8S itself, but its adoption curve is telling of CV-driven architecture at its worst. Mind you, the same might have been said of J2EE back in the day.


> I'm sure this was also done somehow in the mainframe world.

LPAR -- https://en.wikipedia.org/wiki/Logical_partition

Upon understanding this, I realised that there is / was nothing new under the sun.


Even safe systems programming languages, with unsafe code blocks.

https://en.wikipedia.org/wiki/Burroughs_large_systems

https://en.wikipedia.org/wiki/NEWP

If you read the NEWP manual from Unisys (which nowadays still sells Burroughs systems as ClearPath), you will see UNSAFE code blocks and how their use taints the resulting binaries, which then require admin permission to execute.


> It mostly goes to show that containerizing application code for execution in a managed environment is nothing new. I'm sure this was also done somehow in the mainframe world.

Yes, it's called an operating system :-)


My experience with running images on k8s is way worse than what you describe. It is a clusterfuck of dependencies and configuration when in reality people just want to run java -jar.


That java -jar usually requires a clusterfuck of dependencies and configuration.


Maven "shaded" jars seem to be something of a help here.
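For anyone unfamiliar: a shaded jar inlines all dependencies into one runnable artifact. A minimal sketch of a Maven shade-plugin configuration (plugin version and main class are illustrative):

```xml
<!-- pom.xml fragment: build a single "fat" jar with dependencies inlined -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.5.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- set Main-Class so the result runs with plain `java -jar` -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.Main</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

By default the shaded jar replaces the normal artifact after `mvn package`, so it can be started with plain `java -jar`.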


Personally I disagree, but even if that were true, why add more unnecessary complexity to the mix? Deploying a jar vs. deploying a jar packaged as a Docker image brings little benefit at a non-negligible cost.
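For concreteness, the extra layer being debated is typically no more than this (base image tag and jar path are illustrative):

```dockerfile
# Dockerfile: wrap an already-built fat jar in a container image
FROM eclipse-temurin:21-jre
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Whether `docker build` plus a registry push is worth it over copying the jar to a server and running `java -jar app.jar` is exactly the trade-off in question.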


Exactly! Sometimes WAS would fail to start a local instance in the IDE within 5 minutes and would time out. It was hilariously bad software.


I did not have any problems getting WAS to run locally, and I was a complete Java newbie at the time. I guess I was just lucky. It was literally the first thing I did at my first Java job: I installed WAS and started writing some servlets.


For just getting something simple/singular up and running it was passable.

The problem was when it was used for multiple services (as his comparison of EARs with Docker implies). Let's say you had a service for something packaged as an EAR. You would then have to log in to the dashboard, add the package, and configure it manually. If something changed in the package, you would probably have to reinstall it in WAS; you couldn't just pull the newest from git and be done. And if that needed any config changes, everyone on the team had to make the same changes locally and manually in the control panel. Once you had multiple moving parts, you would spend more time bringing them in and configuring everything correctly than actually developing whatever you were supposed to.

But what I wasted the most time on was certificates. Everything had to be signed and such, mostly self-signed by randoms and more theater than security. Sooo many times things stopped working because the certificate for some module had expired, or the CA was unknown and had to be manually loaded into everyone's local install, etc.


The certificate story is still quite current with Docker, k8s and other stuff nowadays.

Just last December I was having similar fun with certificates and "modern" approaches.


Sorry but I will take Websphere over Docker/Kubernetes any day of the week.



