zcdziura's comments

I don't actually believe you can patent game mechanics. Specifically for D&D, there's nothing novel about rolling a polyhedron and adding the resulting value to some other static value written on a piece of paper.


I was really lucky to find the 5800X for sale directly from AMD. Looks like it's out of stock right now, but was definitely available for purchase even like a week ago. Just poor luck! Buying PC components right now is an absolute chore.


Sort of off-topic: I love looking at images of how the Earth's continents have shifted and moved over time. They provide a lot of inspiration for me when making homemade maps for my D&D games. Whatever nature has done makes for much more compelling and believable maps than what I can make on my own!

My latest map is based on a rotated view of what Pangaea Proxima will (probably) look like in a couple hundred million years. Looks pretty neat and provides a lot of inspiration.


Aren't generators really good now? I was under the impression they really did improve over the last few years.


My first job out of college was working at a large financial services company, in one of their departments that, among other things, handles corporate actions on behalf of its customers. Much of that corporate actions processing is done as part of a nightly batch cycle that runs on an IBM mainframe. The system itself was first deployed before I was born and has been continuously maintained ever since.

Long before I joined the company, that department tried to rewrite the system to be "modern", porting everything over to Java and having it run across a couple of servers. This was in the late 00's if I remember correctly. Apparently this rewritten system couldn't handle the volume of data that the mainframe can easily process during its batch cycle. However, were they to rewrite the system now (having the tools available to better facilitate real-time processing, along with flexible resource scalability), I bet that a new system would be able to keep up with the demand at about the same running cost, if not a little more cost-effectively.

Does anyone else have any experience updating and rewriting old mainframe batch processes to newer systems and architectures?


Around 2000, I spent some time developing snazzy new Java e-commerce stuff at a large company that was mostly based on midrange AS/400s running RPG code. At the time, they announced a huge project to retire RPG and rewrite everything in Java.

I've stayed in touch with some friends there all this time, and apparently they still haven't finished this project as of 2020. Disaster after disaster.

The first executive couldn't execute, so he moved on and the project was slow-rolled for a while. Until enough time had passed for a subsequent executive to claim it as HIS "fresh new idea".

That executive came in very aggressively, actually pushing out all of the RPG employees and contractors well before the company was really ready. So that executive was pushed out, and his replacement had to hire IBM Global Services to come in and take over everything. IBM hired most of those employees and contractors who were let go. So the net result was the same people doing the same job, with the company paying 2-3x more than they had been paying.

It was around 2015 when they finally reached the tipping point, to where the company was "a Java shop with some legacy big iron" rather than "an RPG shop with some Java e-commerce pieces". But there's still a lot of AS/400 on the inventory side of things, and it will probably be another couple of decades before that finally goes away.

A lot of people on HN and Reddit, who are either students or startup employees, just have NO IDEA how things work for large businesses in the other 49 states. Stuff lasts forever, it's just a completely different world.


What people fail to understand is that the rules/logic of a particular business that has been around for a while can be absolutely enormous, complex, and poorly documented outside of the thousands of lines of code in production systems. A lot of this is due to historical choices that nobody remembers the reasons for, but another problem is that many employees hate documenting what they know, either because it is boring or because they think it makes them less valuable, and management usually doesn't push it because it isn't a big quantifiable win on their resume and most aren't going to stick around long enough to see the fallout from the loss of institutional knowledge over time and turnover. Rewriting can be very challenging even with the domain knowledge; onboarding people to be competent in the business can take close to a decade... etc. These systems are kind of like programming languages where there is no spec other than the interpreter.


> Does anyone else have any experience updating and rewriting old mainframe batch processes to newer systems and architectures?

A fair bit, and the key is that understanding the business process matters more to getting a good transition than anything else.

The other key is small chunks: identify aspects that can be shifted across bit by bit. Reports that use daily data can become a data-warehoused affair with your reporting system tapping that, which opens up tools for the various departments to do ad-hoc reports by themselves without suddenly spiking the key business loads.

But there are so many avenues and aspects that there is no simple list to go through, as each one depends upon the set-up and, more so, the business logic.

So for a migration project I'd take a COBOL programmer who knows the business over a top-end Java programmer who does not know the business, as it would be easier to teach the COBOL programmer Java than it would be to teach the Java programmer the business logic: learning all the nuances, what needs to happen when things go wrong, what the priorities are and the dependencies would take sooo long. Alas, many management and project types will see a glossy top-end Java programmer, pick them, and ignore their in-house talent all too often.

However, batch stuff is maybe the easiest area to work upon: fewer users in the equation and, with that, fewer variables to go wrong ;).

Mainframes are also not just large legacy number crunchers. The whole aspect of resilience and fault tolerance goes much deeper than ECC memory, RAID and dual NICs and PSUs, and that is why they have uptimes above and beyond your x86 server class. This comes with database and messaging systems that are equally tried, tested and robust, which gives many factors to work upon.

Though as always, business logic and process, along with legal requirements and other constraints on data flow, are key. Flowcharting all of that out, with the dependencies, would be worth having anyhow and is essential for any migration.


Spot on.


I worked at * bank [Australia's oldest bank!] within the last 3 years. The entire bank's CRM (~12 million active customers) is a z/OS CICS system which was first installed in about 1972-73.

The same goes for all wire transfers, credit card transactions, ATM infrastructure and so on.

The bank's solution was to simply write about 40 different JavaScript and Visual Basic interface apps for staff to use, which are poorly maintained failures, running inside on-premises VPSes. If you wanted to access the apps for one of the bank's subsidiary banks... you had to open a VPS via remote desktop INSIDE the remote desktop instance of the VPS you were already working on.

Where it got really stupid, though, is that they wrote a Windows program which emulates the 'green screen' environment. My coworkers in their 50s were using the same IBM commands to update client account details that they learned during training in the 1970s as branch staff.

>Does anyone else have any experience updating and rewriting old mainframe batch processes to newer systems and architectures?

What we did is... use AutoHotkey to record keystroke macros which are then executed into CICS via a Windows terminal emulator. It sounds stupid, but we managed to automate about 50,000 FTE (full-time equivalent) man-hours from our India-based back office teams. The MBAs loved that. Of course, this was achieved by using an AutoHotkey script on an IBM terminal emulator running CICS inside a Windows VPS running inside a second Windows VPS, running inside a desktop computer. All written in a mixture of VBS/JavaScript/AHK.


Your biggest mistake was failing to call this "robotic process automation" and collecting a VC cheque.


Not personally, but most every migration attempt I have heard about has gone similarly.

We run a mainframe shop and for us the cost is fairly reasonable, and the high bandwidth/reliability (even with the CPU pinned to 100%) is the most critical aspect for us. Ignoring the risk and cost of migration, I am not even sure that an alternative would be cheaper... but it is quite difficult to determine the cost, and my boss obviously factors in the cost/risk of migration anytime this comes up.

I'd be curious to hear of any successful total migration attempts. Most shops I know of keep their mainframe and build around it with modern tools, and we do as well.


That's been the experience I've had as well. One company I worked for around 2015 had 5 or 6 mainframes (which they kept upgraded to the latest hardware on an IBM lease program every few years), and it was well understood that the number of distributed computing servers required to keep up with the volume and resiliency of the mainframe would be too great to even consider.

Another company I interviewed with did just what you said, they built REST APIs and other "modern" interfaces to the mainframes so that 99% of app developers in the company don't have to learn anything about the mainframe.


In 2015 I took a contract job with a large team at Verizon because I am an old man and know both Java and COBOL. The idea was to replace COBOL code with Java. The Java ran on IBM iron that was similar to the existing COBOL system. The COBOL code consistently ran at least 3-4 times faster than the Java code, no matter what we did. I think that the bottleneck was access to the huge Verizon customer database, but even the straight Java ran slower than the equivalent COBOL. They moved the project to another data center and I lost my job (my last job before I retired). I don't think that they ever got things going. I'm not sure that they ever will.


COBOL is ridiculously efficient when it comes to doing bulk IO when coupled with the right hardware that is optimized for COBOL. Attempting to use some other language to do the same thing will have the same effect as inefficient memory access patterns have on a GPU.


Few people will realize that COBOL is actually a very low level language these days.


There seems to be a moral to every old-language replacement story.

I am not sure why languages are suddenly considered obsolete. I think it is that all the programmers suddenly run over to the next big thing, leaving massive gaps in staff that make recruitment a problem. If I was in the market today I would be running the other way and embracing what currently works.


I blame the Resume Driven Development mindset.

"Maintained a COBOL system that processes twenty zillion transactions per hour" just doesn't sparkle like "replaced 40-year-old monolith with a galaxy of JavaScript microsoervices that get winded at 100 requests per hour."


I work in a bank and we have a team of people writing microservices on top of mainframe "programs". I'm not sure how it works on the mainframe because I work mostly on the front-end, but apparently what we are used to as modern REST APIs are just called programs on the mainframe. Anyway, I digress: what we are experiencing is quite the opposite; operations written in the microservice run much faster and we are kind of bottlenecked by the mainframe.

I also recently came to learn that the mainframe runs jobs in batches, so I'm not sure how that affects performance, if it even does.


I think this hints at the best practice: establishing APIs around the edges of the legacy system, then sectioning off chunks of legacy functionality, building internal APIs inside the legacy code around those chunks, and then replacing those chunks with modern systems and making those APIs external (external as in legacy -> modern instead of legacy -> legacy). Some legacy core pieces might never be replaced, but those can be contained so the bottleneck is limited. (A good example is genuinely batch operations, like nightly email alert processes, even if it might be nice to modernize those too.)
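
To make that concrete, here's a rough TypeScript sketch of what sectioning one chunk off behind an API can look like. Every name here (AccountLookup, ACCTBAL, the internal URL, the injected legacy call) is hypothetical, and how you actually reach the legacy system (MQ, a CICS web service, screen scraping...) is hand-waved:

    // Callers depend on this interface only; they never know whether the
    // mainframe or the new service is answering.
    interface AccountLookup {
      getBalance(accountId: string): Promise<number>;
    }

    // Adapter over the legacy system. The transport is injected so this
    // sketch stays agnostic about how the mainframe is actually reached.
    class LegacyAccountLookup implements AccountLookup {
      constructor(
        private callLegacyProgram: (program: string, payload: string) => Promise<string>
      ) {}

      async getBalance(accountId: string): Promise<number> {
        // "ACCTBAL" is a made-up legacy program name, for illustration only.
        const reply = await this.callLegacyProgram("ACCTBAL", accountId);
        return Number(reply.trim());
      }
    }

    // The modern replacement, swapped in once this chunk has been rewritten.
    class ModernAccountLookup implements AccountLookup {
      async getBalance(accountId: string): Promise<number> {
        const res = await fetch(`https://accounts.internal/api/accounts/${accountId}/balance`);
        const body = await res.json();
        return body.balance as number;
      }
    }

    // Migration becomes a per-chunk routing decision instead of a big-bang cutover.
    function accountLookup(
      useModern: boolean,
      legacyCall: (program: string, payload: string) => Promise<string>
    ): AccountLookup {
      return useModern ? new ModernAccountLookup() : new LegacyAccountLookup(legacyCall);
    }

The point is just that the caller-facing interface stays stable while the implementation behind it flips from legacy to modern, one chunk at a time.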

My limited experience with this strategy is that it works, although it is not error-proof and can be messy. You also end up discovering that assumptions you made about the legacy software are not correct, and then you need to backpedal and potentially throw away previous work (ironically, your own assumptions about how to modernize the legacy code become their own legacy that you need to work around as the project ages).

It's good if you have some of the original developers still around and probably worthwhile to give them consulting $$ in order to bring them out of retirement.

Perhaps the best thing about this strategy is that if you conclude in the end that it isn't worthwhile, you will still have modernized some sections of the codebase. Of course, because of this you need to make sure that the individual section-by-section modernization efforts are themselves a net gain, instead of just hoping that the final end of the modernization process will be a benefit, because you might never get to that end point.

Also, you might be able to enhance the product early on by being able to iterate and improve the sections that have been modernized, instead of waiting till everything is done to flip a switch and get new features.

The downside is this is a slow and gradual process. In the meantime your developers are filling their heads with details of a unique system that is hopefully going to go away never to be seen again.

If you have specs and documentation and tests and all of that good stuff, it might be better to just start from scratch, and build things up according to requirements.


That's the way to go about it. It won't be cheap or fast, but you'll get the job done eventually.


This is what I see most shops doing, including ours. I think the cost of getting off a mainframe is too high, so this will likely continue to be the trend for the foreseeable future.

Also, mainframes can run both batch jobs and "online" (CICS) jobs. Our shop is about half and half with batch and online programs. The main way batch jobs can be annoying is that they run on schedules and may have to wait for other jobs to complete before they can kick off, causing delays. We just upgraded from a z13 to a z15, so our mainframe is no longer a concern, but we are also much smaller than a bank.


IBM's profit margins are so high they sell you a machine that's fully configured because they know you'll pay for the upgrade that's already delivered but deactivated in a year or so.

If moving off the mainframe becomes cheaper, they have a lot of flexibility.


You seem to be conflating "throughput" and "latency" here. Mainframes are very much focused on the former.


Mainframes (Z/OS) will mainly run batches (most often COBOL programs controlled by JCL jobs) but there is also CICS (https://en.wikipedia.org/wiki/CICS) which runs COBOL (and probably others) "on demand".

My experience with connecting UNIX to Z/OS was not with a real API (HTTPS) but UNIX > TIBCO EMS > something called TIBCO Substation > CICS. It was horrible, mainly because of Substation which needs to translate on the fly between two completely different platforms.


I can second this - our mainframe's average response time is 2-5 seconds depending on the call, with 99th-percentile responses in the minutes per transaction. This is after a fortune was spent moving us to a "faster" platform.

Also, at our bank, very few applications use the REST API. Since most applications are developed by contractors, most vendors don't want to use our custom API, and instead opt to connect to the mainframe directly, because then they can reuse the code at other banks after charging us to develop it.


Isn't this confusing latency with throughput? Mainframes are designed for throughput, raw TXNs per second, and a lot of the architecture is designed around batching. Which is great for throughput, but the antithesis of latency.


Last time I worked on mainframes in 2012 the price to have a legacy banking app running with around 300 transactions/sec on IMS or DB2 was just insane.

IMO any "modern" architecture can handle the load. After all Google, Apple, Facebook, Microsoft, Amazon, etc. are not running on mainframes (just taking mastodon examples).

The main difference between ARM, x86 or Power vs Z is that you can keep your CPU usage between 95-100% in production and not break everything.


Yep. We migrated from an IBM z13 running COBOL + DB2 to HP machines hosting RHEL VMs running TIBCO, Java and Cassandra. Similar load as you mention.

Performance is really not an issue, especially when you factor in the price (we could easily double our servers and still be cheaper than a pretty baseline Z13).

Main pro of the mainframe is: you can basically not touch your code for decades and it'll still run fine, every technology on it and the hardware itself is rock solid.

Main pro of leaving the mainframe is: you don't need to worry as much about expensive/hard technical debt (i.e. EBCDIC text encoding, special hardware for everything), pricing/billing is much simpler and cheaper, way easier to switch technologies.


Mainframes have live CPU and RAM swapping. You can't even compare.


Yep. That's true.


Same here, it's been a great asset to have available to document stuff in my homebrew world and have it reference other stuff. Being able to link my wiki out to my players for their own use is very handy.


While I still listen to and enjoy The Daily, I'm 100% with you on Barbaro's cadence and the way he reads from his script. Especially with how he reads off the ending "Here's what else you need to know today" line after the main story is over. It sounds like "Here's... What else... You need... Toknowthcirbfifhskd". I always laugh to myself every time I hear that.

I also understand that teasing someone about the way they speak (even if it's just to myself, hundreds of miles away, during my commute to work) is especially shitty, because how you speak is one of the most personal things about someone, but JEEEZE!


I don't think it's wrong to critique a radio/podcast personality on their cadence. It's a part of the act, and it's something most pros work on and do intentionally. A lot of sportscasters talk about this (on podcasts and TV, ironically) - they put on a voice like an actor puts on a character.

I just don't care for Michael Barbaro's chosen vocal persona and find it distracting. But he's doing a good job with his show, so some people must like it.

In a similar vein I really like Terry Gross and Joe Buck but a lot of people disagree with me on both those counts.


What have you noticed about mp4 video playback that is broken? I've used Firefox as my daily driver for many, many years (even before the Quantum updates) and have never noticed any playback errors.


It simply doesn't work for some non-trivial number of videos, e.g. some Instagram stories or Gfycat mp4s. If you load the link directly in the browser it will say the file is corrupt. Loading the same link in Chrome plays fine.


Would you mind filing a bug at [0] with the content of about:support and maybe a link or two where it doesn't work? Happy to have an initial look into this (I work on the media team at Mozilla).

[0]: https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&comp..., you can log in with a github account or create one


I suspect this user is on Linux and doesn't have h.264 codecs installed.

They should be able to do this fairly easily -- two distros here:

https://help.ubuntu.com/community/RestrictedFormats

https://docs.fedoraproject.org/en-US/quick-docs/assembly_ins...


Wow, it's a fun surprise to see Dana Ernst featured on Hacker News! Before he moved to Arizona, he was a mathematics professor at Plymouth State University. I had him as a Discrete Mathematics teacher, and he was awesome!


If you do find the size of MomentJS to be troublesome for your particular use case, and are in need of handling timezones, Luxon is a great alternative. It's from the authors of Moment, and I believe it uses the built-in i18n APIs found within modern browsers to handle timezones.
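
As a rough illustration (the zone names here are just examples, and I'm going from memory on the API, so double-check the Luxon docs):

    import { DateTime } from "luxon";

    // Parse an ISO timestamp in one zone and display it in another;
    // Luxon leans on the browser's built-in Intl APIs for the zone data.
    const meeting = DateTime.fromISO("2021-03-01T09:00:00", { zone: "America/New_York" });
    console.log(meeting.setZone("Asia/Tokyo").toFormat("yyyy-MM-dd HH:mm ZZZZ"));

    // "Now" in the local zone vs. an explicit zone.
    console.log(DateTime.now().zoneName);
    console.log(DateTime.now().setZone("Europe/London").toISO());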


+1 for Luxon.


This article talks primarily about people from Japan, whose script I imagine is much easier and faster to enter via a smartphone.

