I've never worked in payments, but I suspect you can't carry over many lessons from trading infrastructure to payments infra, because the two scale in very different ways.
In trading infra you are dealing with a fairly finite number (100s to 10,000s) of instruments trading very large volumes of transactions (thousands to billions), so you can shard by instrument, dedicate CPUs, keep everything in memory, and voilà. Cross-instrument consistency is generally not much of a problem, depending on which part of the stack you are working on (order book / matching engine / market data feed / order router / position calculations / etc.).
With payments I'd imagine you are dealing with millions of payers, millions of payees, and 10s - 1000s of payments each.
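The shard-by-instrument idea above can be sketched in a few lines. This is a hypothetical illustration, not code from any real trading system; the names (`shard_for`, `apply_fill`) and the shard count are made up for the example.

```python
# Hypothetical sketch: shard order flow by instrument so each shard's
# state fits in memory and can be owned by a dedicated core.
from collections import defaultdict

NUM_SHARDS = 16  # e.g. one per dedicated CPU core

def shard_for(instrument_id: str) -> int:
    # A deterministic hash keeps all events for one instrument on one
    # shard, so per-instrument state needs no cross-shard coordination.
    return hash(instrument_id) % NUM_SHARDS

# Each shard owns an in-memory position book for its instruments only.
shards = [defaultdict(int) for _ in range(NUM_SHARDS)]

def apply_fill(instrument_id: str, qty: int) -> None:
    shards[shard_for(instrument_id)][instrument_id] += qty

apply_fill("AAPL", 100)
apply_fill("AAPL", -40)
assert shards[shard_for("AAPL")]["AAPL"] == 60
```

The point of the sketch is the comment's claim: as long as consistency is only needed per instrument, each shard can run independently with everything in memory.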
Not sure why you're being downvoted; the point is quite valid. I've worked on both sides. They may look similar from afar at the big-picture level, but they are largely distinct in the details, and therefore in the scaling tricks.
Trading data on both the buy and sell sides is highly structured, and post-trade processing requires a high degree of interrelation among data entities.
Payment-processing data units are somewhat unstructured / document-based (payer/customer details, payment intent, plans, gateway info...), but each payment object is quite isolated from the others. Post-payment there is some interrelation if the system is dealing with payout account ledgers, but it is far from the level of trading risk management (particularly for derivatives).
I love finance because they take what is basically a log, name it the LMAX Disruptor, and rather than defining it, link to a Martin Fowler blog post that seems to have a unique domain name just for that post.
and all the technical sophistication is about getting around the fact that Java was a piece of garbage circa 2008
> We’re forced to deal with asynchronous programming because we believe databases are completely necessary.
No, payment systems still have third-party systems they rely on. Async programming is there to avoid blocking a thread on long-lived tasks whose results you depend on. Removing the database does not remove the data dependencies on long-lived operations.
The article then proposes that keeping things in memory, event queues, and hot backups solve all the problems, on top of this flawed assumption. Admittedly, keeping things in memory should reduce complexity, but it is not a silver bullet, and you lose robustness to certain failures.
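To make the point concrete: even with no database, a payment handler still awaits a slow external party. A minimal sketch, assuming a hypothetical `authorize_with_gateway` call standing in for a card network or bank API:

```python
import asyncio

# Hypothetical stand-in for a slow third-party call (card network, bank API).
async def authorize_with_gateway(payment_id: str) -> str:
    await asyncio.sleep(0.01)  # simulated network latency
    return f"{payment_id}:authorized"

async def handle_payments() -> list:
    # The event loop interleaves the waits. No database is involved,
    # yet the long-lived dependency on an external result remains,
    # which is exactly why async programming is still needed.
    return await asyncio.gather(*(authorize_with_gateway(p) for p in ("p1", "p2")))

results = asyncio.run(handle_payments())
assert results == ["p1:authorized", "p2:authorized"]
```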
That's a lot of ads for the book, but in between the author describes event sourcing. There is still state, but with event sourcing the state itself does not need to be persisted (only the events do), so it can live entirely in the process's memory; thus no database is needed.
Maybe I'm missing something but how does the system handle the growing list of payments that are stuck in limbo (i.e. memory) until the customer completes the payment (equivalent to abandoned carts)? Doesn't there need to be a mechanism for purging stale payment requests, lest memory eventually become completely consumed?
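The article does not describe such a mechanism; one common answer is a TTL sweep that expires abandoned payment intents. A hypothetical sketch (the TTL value and function names are invented for illustration):

```python
# Hypothetical TTL purge for in-memory pending payments, illustrating
# the kind of mechanism the question asks about.
TTL_SECONDS = 15 * 60  # e.g. abandon a payment intent after 15 minutes
pending = {}  # payment_id -> created_at timestamp (seconds)

def start_payment(payment_id: str, now: float) -> None:
    pending[payment_id] = now

def purge_stale(now: float) -> list:
    expired = [pid for pid, t in pending.items() if now - t > TTL_SECONDS]
    for pid in expired:
        # In an event-sourced system you would emit an "expired" event
        # here rather than silently dropping the entry.
        del pending[pid]
    return expired

start_payment("p1", now=0.0)
start_payment("p2", now=1000.0)
assert purge_stale(now=1000.0) == ["p1"]
assert "p2" in pending
```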
Unfortunately not. Every single payment has to be persisted to disk locally. Even worse, a single payment needs to be persisted 3-4 times during the payment exchange; otherwise there's counterparty risk of publishing an outdated state to the blockchain.
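The kind of local persistence this comment describes can be sketched as an append-and-fsync log, where each state transition of a payment is made durable before the system proceeds. This is an illustration of the pattern, not any particular system's code:

```python
import json
import os
import tempfile

# Each state change of a payment is appended and fsync'd before the
# system acknowledges it, so a crash cannot leave the process acting on
# a state that was never made durable.
log_path = os.path.join(tempfile.mkdtemp(), "payments.log")

def persist(payment_id: str, state: str) -> None:
    with open(log_path, "a") as f:
        f.write(json.dumps({"id": payment_id, "state": state}) + "\n")
        f.flush()
        os.fsync(f.fileno())  # the durability point: only now is it safe to proceed

# A single payment hits the disk several times as it moves through states,
# matching the "persisted 3-4 times" observation above.
for state in ("created", "authorized", "captured", "settled"):
    persist("p1", state)

with open(log_path) as f:
    assert len(f.readlines()) == 4
```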
TLDR: there's a database, but POS systems just write events to it, they don't read from it. The article doesn't discuss checking balances, or preventing overdraw when multiple purchases happen at the same time. It mentions event durability on the POS, but the solution is hot backups.
Edit: Maybe I got this wrong. Maybe the point is that each POS communicates directly with the payment processor, in which case the above applies but wrt inventory instead of balances, and the mention of event sourcing is non sequitur. Also there's still a database, it's just the payment processor's database.