
You duplicate the fees. But it's the same or worse trying to do multi-cloud.


Which is precisely why it's not done


I seem to recall it was fairly common to have a read-only version of a site during a major outage - we did that a lot with deviantart in the early 2000s. Did that fall out of favour, or is it too complex with modern stacks?


If only everything were a simple website. You're totally ignoring other types of workloads where a read-only fallback would be impossible to use. Not just impossible, but pointless.


HN does it too, but it's a simple site


I don't think storage cost is the reason; it's more that it's hard to design for regional failures. Take the DB by itself as one example: a cross-region read replica usually introduces eventual consistency into a system that would otherwise be immediately consistent.
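A rough sketch of what that replica lag looks like in practice, assuming a Postgres-style primary/replica pair with asynchronous replication (connection strings, table, and column names are made up for illustration):

    import psycopg2  # any SQL client would show the same behaviour

    # Hypothetical connections: writes go to the primary region,
    # reads go to a cross-region replica that applies changes asynchronously.
    primary = psycopg2.connect("host=db.us-east.example dbname=app")
    replica = psycopg2.connect("host=db.eu-west.example dbname=app")

    # Commit a write on the primary.
    with primary, primary.cursor() as cur:
        cur.execute(
            "UPDATE accounts SET balance = balance - 100 WHERE id = %s", (42,)
        )

    # Immediately afterwards the replica may still serve the old row,
    # because replication hasn't caught up yet.
    with replica.cursor() as cur:
        cur.execute("SELECT balance FROM accounts WHERE id = %s", (42,))
        print(cur.fetchone())  # can still be the pre-update balance

A system that was happily read-your-writes while single-region suddenly has to tolerate stale reads everywhere it points at the replica.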


Well yeah, but that's why we get paid the big bucks right?


We do; a non-tech company's IT dept doesn't so much.


Thanks for the helpful reply! Do you think that would still be true if one accepted the constraint that the "down" version of the property served data that was stale, say 24 hours behind what the user would have seen had they been logged in?


Yeah, except it would probably be delayed way less than 24h. And then you have to figure out how to merge the data back in afterwards, unless you're OK with just losing it permanently. And you have to make sure things are handled correctly if other healthy DBs point to rows in the failed-over DB that disappeared.
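A minimal sketch of that last problem: a reconciliation pass after failing back that looks for references into rows that never made it into the restored primary. Everything here (connection string, orders/customers schema) is hypothetical; the point is just that the dangling rows have to be found and handled by some policy, not merged automatically.

    import psycopg2

    primary = psycopg2.connect("host=db.us-east.example dbname=app")

    # Find orders whose customer row is missing after fail-back.
    with primary.cursor() as cur:
        cur.execute(
            """
            SELECT o.id
            FROM orders o
            LEFT JOIN customers c ON c.id = o.customer_id
            WHERE c.id IS NULL
            """
        )
        dangling = [row[0] for row in cur.fetchall()]

    # Each dangling order has to be repaired, re-created, or quarantined
    # according to some application-specific rule; there's no generic
    # "merge it back in" answer.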




