The (cognitive) overhead of managing and deploying to multiple clouds usually isn't worth it. Hiring experts and maintaining knowledge about the ins and outs of two (or more) clouds is less feasible for small, fast-moving teams.
Simplicity is linked to uptime, and a single-cloud setup is the simpler solution.
For large companies, it's mostly cost savings. It's easier to negotiate a good discount at N million than at N/2 million.
Besides that, no one ever got fired for picking AWS ;)
Not a justifiable expense when no one else is resilient against their AWS region going down either. Also cross-cloud orchestration is quite dead because every provider is still 100% proprietary bullshit and the control plane is... kubernetes. We settled for kubernetes.
Cross-region isn't simple when you have terabytes of storage in buckets in a region. Building services in other regions without that data doesn't really do any good. Maintaining instances in various regions is easy; it's the data that complicates everything. If your main region is down and you need to use the instances in a different region, you still can't do anything, because those cross-region instances can't access the necessary data.
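For what it's worth, here's a rough sketch of what replicating a bucket to a second region even looks like (boto3; the bucket names, region, and IAM role ARN are made-up placeholders, and versioning has to be enabled on both buckets first). Note that every replicated object is billed again as a full second copy in the destination region, which is exactly the cost problem:

    # Hedged sketch of S3 cross-region replication; names/ARNs are hypothetical.
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    s3.put_bucket_replication(
        Bucket="primary-data-bucket",  # hypothetical source bucket (versioning enabled)
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # hypothetical role
            "Rules": [{
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::standby-data-bucket"},
            }],
        },
    )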
Such a sophomoric response. It does not matter how large your storage use is exactly. The point is that nobody is going to pay to replicate that data in multiple clouds or within multiple regions of the same cloud provider.
Btw, I'd love to have a link to where I could buy an SD card the size of a pinky nail that holds terabytes of data.
It absolutely matters how large your storage use is. Terabytes of storage is easily manageable on even basic consumer hardware. Terabytes of storage costs just hundreds of dollars if you are not paying the cloud tax.
If you got resiliency and uptime for an extra hundred dollars a year, that would be a no-brainer for any commercial operation. The byzantine, kafkaesque horror of the cloud results in trivial problems and costs ballooning into nearly insurmountable and cost-ineffective obstacles.
These are not hard or costly problems or difficult scales. They have been made hard and costly and difficult.
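Quick back-of-envelope on that claim, with list prices treated as rough assumptions and a made-up 10 TB working set:

    # Rough, publicly listed rates at time of writing; treat all numbers as assumptions.
    TB = 1_000  # GB per TB, decimal

    data_tb = 10                          # hypothetical working set: 10 TB
    s3_standard_per_gb_month = 0.023      # ~S3 Standard list price
    egress_per_gb = 0.09                  # ~typical internet egress rate

    duplicate_copy_per_year = data_tb * TB * s3_standard_per_gb_month * 12
    one_time_copy_out = data_tb * TB * egress_per_gb

    print(f"Second cloud copy, storage only: ~${duplicate_copy_per_year:,.0f}/year")
    print(f"One-time egress to seed that copy: ~${one_time_copy_out:,.0f}")
    # vs. a pair of ~10 TB consumer hard drives: a few hundred dollars, once.

Even at that small scale the recurring cloud bill dwarfs a one-time disk purchase, and the gap only widens as the data grows.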
Your pedantry is just boring. Yes, I used the word terabyte instead of, I guess, something more palatable to you as "large". Fine, s/terabyte/exabyte/.
I work with buckets where single files are >1 terabyte. There's more than one of these files, hence terabytes. I'm not going to run a human-readable summary listing of an entire bucket to get the full size; the exact size is beside the point. When people are spending 5-6 digits on cloud storage per month, they are not going to do it in multiple places, period. Maybe the new storage unit should just be monthly cloud spend, but then your pedantry will say nonsense like which cloud server, which storage solution type, blah blah blah.
Ah yes, let us just gloss over 6 orders of magnitude when we are discussing cost-effectiveness and feasibility. What is the difference between $100 and $100,000,000 of spend, really? Basically the same thing.
Exactly what tools help make your large volume of data, stored in a down region, available to other regions without duplicating the monthly storage fees?
I seem to recall it was fairly common to have read-only versions of sites when there was a major outage. We did that a lot with deviantART in the early 2000s. Did that fall out of favour, is it too complex with modern stacks, or?
If only everything was a simple website. You're totally ignoring other types of workflows for which a read-only fallback would be impossible. Not just impossible, but pointless.
I don't think storage cost is the reason; it's more that it's hard to design for regional failures. Take the DB by itself as one example: a cross-region read replica usually introduces eventual consistency into a system that would otherwise be immediately consistent.
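To make that concrete, a minimal sketch of the failure mode, assuming a Postgres primary in region A and an asynchronous read replica in region B (psycopg2; the connection strings and table are placeholders):

    # Hedged sketch: read-your-writes breaks across an async cross-region replica.
    import psycopg2

    primary = psycopg2.connect("host=primary.region-a.example dbname=app")  # placeholder DSN
    replica = psycopg2.connect("host=replica.region-b.example dbname=app")  # placeholder DSN

    # Write goes to the primary and commits on exiting the block.
    with primary, primary.cursor() as cur:
        cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (42,))

    # An immediate read through the replica may still return the old balance:
    # replication is asynchronous, so the system is only eventually consistent
    # across regions, and code that assumed read-your-writes now has a bug.
    with replica, replica.cursor() as cur:
        cur.execute("SELECT balance FROM accounts WHERE id = %s", (42,))
        print(cur.fetchone())  # possibly stale, for milliseconds up to much longer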
Thanks for the helpful reply! Do you think that would still be true if one accepted the constraint that the "down" version of the property served data that was stale, say 24 hours behind what the user would have seen had they been logged in?
Yeah, except it would probably be delayed way less than 24h. And then you have to figure out how to merge the data back in afterwards, unless you're OK with just losing it permanently. And you have to make sure things are handled correctly if other healthy DBs point to records in the failed-over DB that have disappeared.
All the cloud providers have cheap compute but ludicrously expensive network egress. Trying to multicloud will stick you with a massive traffic bill, which is probably not a coincidence.
It really depends on how you build it. You can architect for multi-cloud from the top down, where the client/browser talks to one region via DNS with health checks, and replication happens at the DB layer. Your services don't talk cross-region at the service level, which avoids a lot of cross-region/cross-cloud communication. Most use cases can be addressed this way.
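A rough sketch of the DNS-with-health-check piece, using Route 53 via boto3 as one example (the hosted zone ID, domain, IPs, and health-check endpoint are made-up placeholders; other DNS providers offer equivalent failover records):

    # Hedged sketch: failover routing between a primary and standby endpoint.
    import boto3, uuid

    r53 = boto3.client("route53")

    # Health check against the (hypothetical) primary endpoint.
    hc_id = r53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": "primary.example.com",
            "Port": 443,
            "ResourcePath": "/healthz",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )["HealthCheck"]["Id"]

    def failover_record(identifier, role, ip, health_check_id=None):
        # Build one half of a PRIMARY/SECONDARY failover record pair.
        rrset = {
            "Name": "app.example.com.",
            "Type": "A",
            "SetIdentifier": identifier,
            "Failover": role,  # "PRIMARY" or "SECONDARY"
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        }
        if health_check_id:
            rrset["HealthCheckId"] = health_check_id
        return {"Action": "UPSERT", "ResourceRecordSet": rrset}

    r53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",  # hypothetical hosted zone
        ChangeBatch={"Changes": [
            failover_record("primary", "PRIMARY", "203.0.113.10", hc_id),
            failover_record("secondary", "SECONDARY", "198.51.100.20"),
        ]},
    )

DNS then flips clients to the standby region when the health check fails, and the DB-layer replication is what keeps the standby useful when they arrive.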
It's a market regulation failure, which results in a failed market, with the cloud infra provider also providing the data services. 20 years ago there were 20+ widely used operational databases. Now it's DynamoDB with something like half the market.
How should this have played out in a regulated market? DynamoDB gets released, then what? Has limits on the market share it's allowed to steal?
Should we similarly cap, say, front-end frameworks on market penetration/growth? Is React too big to fail? Do we need to force some of its users to use something else?
It seems that clouds balance their budgets on egress charges... which makes cross-cloud communication too expensive to set up multi-cloud redundancy. Cross-region redundancy is often too expensive too. Even crossing availability zones is too expensive for some clouds and applications. (Cross-region redundancy in a single cloud doesn't always work out either, if the cloud has an outage in a global subsystem, or the broken subsystem gets pushed to multiple regions before exhibiting symptoms.)
Additionally, moving your load to a different cloud while one is down can be challenging. It ends up being a lot of work that pays off for a few hours a year. For a lot of applications, it's better to just suffer the downtime and spend the money on other things.
If you're a company providing services to people that already have data stored in VendorA's cloud, being on a different cloud would be expensive and prevent you from winning much work. If it turns out that VendorA happens to be the vendor for your clients, you build your services to run on VendorA's cloud too.
This is the situation for my company, which started with the intent of being platform agnostic, but the design quickly became much less complex once it was clear the entire potential client pool was using the same cloud. People with buckets holding large amounts of data are not going to be able to convince the bean counters that it would be worth it to have that storage bill from multiple vendors.
> are not going to be able to convince the bean counters that it would be worth it to have that storage bill from multiple vendors
Because it rarely is. Occasional downtime is just a cost of doing business. It is, or should be, rare enough that you just take it as it comes instead of trying to build in redundancy. We don't build tunnels everywhere as a backup for surface roads on snowy days; we just cancel school and work for the day and make up for it later. Do some important things get impacted? Sure, but most things are only as mission-critical as we make them out to be. The press coverage of an AWS outage makes it easy to shrug it off and point fingers.
Many companies' idea of a disaster plan is to make it after the disaster.
You have to build it in, and that takes time, money, and training. Do you do failovers? Do they work? What is your backup situation? What is your list of work items to do during the failover? How long does it take? Do you even HAVE a failover plan? Can your services handle being in "split brain"? Do you have specialty services that can only run in one place?
The unfortunate reality is that this planning often happens too late.
It feels like a hat on a hat: cloud systems are already designed for redundancy, and adding another redundant layer on top of that is like a double condom, or investing in multiple investment funds.
Did multi cloud redundancy end up being too expensive? Tech didn't line up enough? No good business case?
The elastic cloud story that never was? https://www.slideshare.net/slideshow/pets-vs-cattle-the-elas...
What happened?