Cross-border payments with stablecoins are way easier and faster than with USD. When crypto is in a bull cycle, demand for stablecoins raises short-term interest rates, sometimes up to 50%/year (for a few hours or a day). Stablecoins generate yield for their operators; they won't run off with your money for the same reasons a bank CEO won't.
> Cross-border payments with stablecoins are way easier and faster than with USD.
Only when the other methods are highly restricted.
I make cross border payments quite regularly, and it's cheaper, faster, and safer, using the regulated systems (denominated in fiat currency).
> When crypto is in a bull cycle, demand for stablecoins raises short-term interest rates, sometimes up to 50%/year (for a few hours or a day).
And, pray tell, what happens when the reverse happens, and a death spiral begins?
> Stablecoins generate yield for their operators; they won't run off with your money for the same reasons a bank CEO won't.
From Wikipedia:
Tether's USDT is currently the world's largest market capitalization stablecoin. Tether initially claimed their stablecoin is fully backed by fiat currency. However, in October 2021, it failed to produce audits for reserves used to collateralize the quantity of minted USDT stablecoin.[44] Tether were fined $41 million by the Commodity Futures Trading Commission (CFTC) for deceiving consumers.[45] The CFTC found that Tether only had enough fiat reserves to guarantee their stablecoin for 27.6% of the time during 2016 to 2018. Since then, Tether began issuing assurance reports on USDT backing, although some speculation persists regarding the use of Chinese commercial paper for reserves.[46] As at March 2025, Tether had never completed an audit by an accounting firm.
Edit:
The reason that crypto is most often presented as an alternative is that it's "not regulated".
The reason I have faith in a fiat currency, and not crypto (of any kind), is the regulation: the handlers are regulated, and the way the banks invest the money they hold is (supposed to be) regulated.
Relaxing the regulations on how banks can use the money they hold is what caused the last two DEPRESSIONS: the 1930s, and the 2010s (GFC).
There's zero advantage to using crypto except, as stated before, when the goods/services being exchanged are restricted, or the cross-border trading is restricted.
In those cross-border trades, you're dealing with countries where the banking system has failed (because the government has failed), or you are at risk of breaking sanctions or financing terrorism.
Edit: used "restricted" where I'd previously used the word "regulated" to try to make the point clearer.
> And, pray tell, what happens when the reverse happens, and a death spiral begins?
Interest drops to 1%, nothing else. We're talking about USDC on Kraken or Coinbase, both regulated by the SEC and FINRA and holding an ATS license (only ~50 such licenses granted in the US).
> I make cross border payments quite regularly
This is easy only between very few countries. Try Africa, India, or the former Soviet republics. You can send them money; they might not be able to receive it.
> Tether's USDT is currently the world's largest market capitalization stablecoin…
This is why I only talk about USDC and others, not Tether.
> The reason I have faith in a fiat currency, and not crypto (of any kind) is the regulation
A stablecoin is a different form of fiat operated by a regulated institution. They're actually much more regulated than banks: for instance, they cannot use fractional reserves; everything must be backed 1-to-1 by cash equivalents (bonds and the like).
> There's zero advantage to use crypto
A wire costs me $27; a USDC transfer costs $1. A wire takes 1-2 days; a USDC transfer, 15 seconds. I can get 2-10% interest on my crypto holdings without any commitments; I can make 0.5% with a savings account, or 3% if I commit to a one-year deposit. I cannot get a USD-denominated debit card, but I can from a crypto exchange. So on and so forth. Life is easy when you are a US citizen; it's much different if you come from Russia, Iran, India, most of Africa, most of the world, really.
This is correct, and it's addressed with diversification. If money is spread across multiple safe instruments, your chances of getting in trouble are minimal. If you do get in trouble, your exposure is small too.
Imagine you do it with PG: add a column "money", put some numbers into it, and issue a ToS guaranteeing 1-to-1 exchange of the money in your DB to USD. Because you now store money amounts in your DB and can manipulate them at will, you have to be a bank. Good luck with that.
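To make that concrete, here's a hypothetical sketch of such a "money column" ledger (using sqlite3 as a stand-in for PG; table and names are invented). Nothing in the schema stops the operator from minting balances at will, which is exactly why the regulators get involved:

```python
import sqlite3

# Hypothetical naive ledger: a single "money" column the operator fully controls.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (user TEXT PRIMARY KEY, money INTEGER)")
db.execute("INSERT INTO accounts VALUES ('alice', 100)")

# Nothing in the schema prevents the operator from conjuring balances:
db.execute("UPDATE accounts SET money = money + 1000000 WHERE user = 'alice'")

balance = db.execute("SELECT money FROM accounts WHERE user = 'alice'").fetchone()[0]
print(balance)  # 1000100
```

The point of the analogy: once the numbers live in a table only you can write to, the only thing standing between your users and arbitrary edits is regulation.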
Blockchain guarantees there is no double spend without having one controlling entity. Legal requirements exist to do exactly the same thing: keep managers from messing with other people's money.
But there are two separate controlling entities in this scenario: the hypothetical company that wants to issue the stablecoin, and Bridge. They have complete and full control over the money anyway, blockchain or not.
During a stop-the-world pause the GC has to walk the whole heap. Checking 50% of live objects tells you nothing about whether the memory at address X is in use; only after checking all objects do you know that X is not occupied.
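A minimal mark-phase sketch (hypothetical object graph, not a real collector) illustrates why: an object left unmarked halfway through the walk may still be reached from a root traced later, so nothing can be reclaimed until the walk completes.

```python
# Toy object graph: object -> objects it references.
heap = {
    "A": ["B"],
    "B": [],
    "C": ["D"],
    "D": [],
    "E": [],   # truly unreachable
}
roots = ["A", "C"]
marked = set()

def mark(obj):
    """Recursively mark everything reachable from obj."""
    if obj in marked:
        return
    marked.add(obj)
    for ref in heap[obj]:
        mark(ref)

# After tracing only the first root, "D" is unmarked -- but it is NOT garbage:
mark(roots[0])
assert "D" not in marked

# Only after the full walk can unmarked objects be reclaimed safely:
for r in roots[1:]:
    mark(r)
garbage = set(heap) - marked
print(sorted(garbage))  # ['E']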
Agree with others saying HN needs more content like this!
After reading, I don't get how locks held in memory affect WAL shipping. The WAL reader reads it in a single thread and updates in-memory data structures, periodically dumping them to disk. Perhaps you want to read one big instruction from the WAL and apply it to many buffers using multiple threads?
>Adapting algorithms to work atomically at the block level is table stakes for physical replication
Why? To me the only thing you have to do atomically is the WAL write. WAL readers can read and write however they want, given that they can detect partial writes and replay the WAL.
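For illustration, here is a toy record framing (length + CRC per record; my invention, not Postgres's actual WAL format) showing how a reader can detect a partial write and simply stop replay at the last intact record:

```python
import struct
import zlib

# Toy WAL framing: [length:u32][crc32:u32][payload].
def encode_record(payload: bytes) -> bytes:
    return struct.pack("<II", len(payload), zlib.crc32(payload)) + payload

def read_records(buf: bytes):
    """Yield complete, intact records; stop at the first torn/corrupt one."""
    off = 0
    while off + 8 <= len(buf):
        length, crc = struct.unpack_from("<II", buf, off)
        payload = buf[off + 8 : off + 8 + length]
        if len(payload) < length or zlib.crc32(payload) != crc:
            break  # partial or corrupt tail: replay ends here
        yield payload
        off += 8 + length

wal = encode_record(b"put k1=v1") + encode_record(b"put k2=v2")
torn = wal + encode_record(b"put k3=v3")[:-2]  # simulate a torn tail write

print([r.decode() for r in read_records(torn)])  # ['put k1=v1', 'put k2=v2']
```

The reader never needs the writer's locks; it only needs a way to tell a complete record from a torn one.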
>If a VACUUM is running on the primary at the same time that a query hits a read replica, it's possible for Postgres to abort the read.
The situation you're referring to is:
1. Record inserted
2. Standby long query started
3. Record removed
4. Primary vacuum started
5. Vacuum replicated
6. Vacuum on standby cannot remove record because it is being read by the long query.
7. PG cancels the query to let vacuum proceed.
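For reference, Postgres has standard standby-side settings to trade this query cancellation off against replay lag; `hot_standby_feedback` instead makes the primary's vacuum hold back the cleanup. The values below are illustrative, not recommendations:

```
# postgresql.conf on the standby
hot_standby_feedback = on            # primary's vacuum keeps rows that standby queries still need
max_standby_streaming_delay = 30s    # how long replay may wait before cancelling conflicting queries
```

Both have costs: feedback lets standby queries cause bloat on the primary, and a long delay lets the standby fall behind.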
I guess your implementation generates a lot of dead tuples during compaction. You're clearly fighting PG here. Could a custom storage engine be a better option?
> After reading, I don't get how locks held in memory affect WAL shipping. The WAL reader reads it in a single thread and updates in-memory data structures, periodically dumping them to disk. Perhaps you want to read one big instruction from the WAL and apply it to many buffers using multiple threads?
We currently use an unmodified/generic WAL entry and don't implement our own replay. That means we don't control the order of locks acquired/released during replay, and the default is to acquire exactly one lock to update a buffer.
But as far as I know, even with a custom WAL entry implementation, the maximum size of one entry would still be ~8 KB, which might not be sufficient for a multi-block atomic operation. So the data structure needs to support block-at-a-time atomic updates.
> I guess your implementation generates a lot of dead tuples during compaction. You're clearly fighting PG here. Could a custom storage engine be a better option?
`pg_search`'s LSM tree is effectively a custom storage engine, but it is an index (Index Access Method and Custom Scan) rather than a table. See more on it here: https://www.paradedb.com/blog/block_storage_part_one
LSM compaction does not generate any dead tuples on its own; what is "dead" is controlled by what is dead in the heap/table due to deletes/updates. Instead, the LSM cycles blocks into and out of a custom free space map (which we implemented to reduce WAL traffic).
I don't think an L7 can do 40 hours of L4 work in 10 hours. In my experience, an L7 can do things an L4 cannot do in principle, no matter how much time they have. That is what makes their time more valuable.
There is a lot more:
- Aurora to handle our spiky workload (can grow 100x from normal levels at times)
- Zero-ETL into Redshift.
- Slow query monitoring, not just metrics but actual query source.
- Snapshots to move production data into staging to test queries.
Besides this, we also use:
- ECS to autoscale app layer
- S3 + Athena to store and query logs
- Systems Manager to avoid managing SSH keys.
- IAM and SSO to control access to the cloud
- IoT to control our fleet of devices
I've never seen how people operate complex infrastructure outside of a cloud. I imagine that with a VPS I would either have a dedicated DevOps engineer acting as a gatekeeper to the infrastructure, or I'd get a poorly integrated and insecure mess. With the cloud I have teams rapidly iterating on the infrastructure without waiting on any approvals or reviews. A real-life scenario:
1. Let's use DMS + PG with partitioned tables + Athena
2. A few months later: let's just use Aurora read replicas
3. A few months later: let's use DMS + Redshift
4. A few months later: Zero-ETL + Redshift
I imagine a DevOps engineer would be quite annoyed by such back and forth. Plus, he'd be busy keeping all the software up to date.
> I’ve never seen how people operate complex infrastructures outside of a cloud
That’s your issue. If all you have is a hammer, everything looks like a nail.
I have the same issue with the juniors we hire nowadays. They have been so brainwashed by the idea that the cloud is the solution and that they can't manage without it that they have no idea what to do instead of reaching for it.
> I imagine that with a VPS I would either have a dedicated DevOps engineer acting as a gatekeeper to the infrastructure, or I'd get a poorly integrated and insecure mess.
You just described having a real mess after this.
> I imagine a dev. ops would be quite annoyed by such back and forth.
I would be quite annoyed by such back and forth even on the cloud. I don’t even want to think about the costs of changing so often.
>That’s your issue. If all you have is a hammer, everything looks like a nail.
While I admit a lack of experience at scale, I've had my share of Linux admin experience, enough to understand how it could be done. My point is that building a comparable environment without the cloud would take much more than just 500 LoC. If you have relevant experience, please share.
>I would be quite annoyed by such back and forth even on the cloud. I don’t even want to think about the costs of changing so often.
In the cloud it took 1-2 weeks per iteration, with several months in between while we used the solution. One person did it all; nobody on the team even noticed. Being able to iterate like this is valuable.
>What you see as “rapid iteration” looks a lot like redoing the same work every few months because of shifting cloud-native limitations.
This is not the case. The reason for the iteration is the search for a solution in a space we don't know well enough. In this particular case, the cloud made iteration cheap enough to be practical.
I asked you to think about what it would take to build a well-integrated suite of tools (PG + backups + snapshots + Prometheus + logs + autoscaling for the DB and API + SSH key management + SSO into everything). It is a good exercise; if you have ever built and maintained such a suite with uptime and ease of use comparable to AWS, I genuinely would like to hear about it.
Last time I checked, history books said Britain donated the land to Jews. At the time Britain took that land, there was no state and no nation called Palestinians, just tribes. Since then, the Palestinians have formed as a nation.
So what do you want Israel to do, disappear? Or negotiate, but with whom? The only power there is Hamas, which is non-negotiable. I'm really interested in seeing any realistic solution to the problem, however far-fetched it is.
So I'm not good enough for you to share your ideas with, did I get that right? You realize this is not how people reach consensus? If you cannot give me a compelling argument, what makes you think Jews and Arabs would be happy with your ideas?