It looks like it is a central service at Google called Chemist that is down.
"Chemist checks the project status, activation status, abuse status, billing status, service status, location restrictions, VPC Service Controls, SuperQuota, and other policies."
-> This would totally explain the error messages "visibility check (of the API) failed" and "cannot load policy", and the wide range of services affected.
There are multiple internet services down, not just GCP. It's possible that this "Chemist" service is the one especially affected externally, which is why the failures are propagating to their internal GCP network services.
At Cloudflare it started with: "Investigating - Cloudflare engineering is investigating an issue causing Access authentication to fail.".
So this would somewhat validate the theory that auth/quotas started failing right after Google, but what happened after that?! Pure snowballing? That sounds a bit crazy.
> Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable [...]
Surprising, but not entirely implausible for a GCP outage to spread to CF.
Probably unintentional. "We just read this config from this URL at startup" can easily snowball into "if that URL is unavailable, this service will go down globally, and all running instances will fail to restart when the devops team tries to do a pre-emptive rollback".
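To make that concrete, here is a minimal sketch (Python, with a hypothetical config URL and cache path, not anyone's actual setup) of the difference between hard-failing on the config fetch and falling back to a locally cached copy:

    import json
    import urllib.request

    CONFIG_URL = "https://config.example.internal/service.json"  # assumed endpoint
    CACHE_PATH = "/var/cache/service/config.json"                # assumed local cache

    def load_config():
        """Fetch config at startup, but survive the config service being down."""
        try:
            with urllib.request.urlopen(CONFIG_URL, timeout=5) as resp:
                raw = resp.read()
            with open(CACHE_PATH, "wb") as f:  # refresh the local cache on success
                f.write(raw)
            return json.loads(raw)
        except OSError:
            # Remote config endpoint unreachable: start from the last good copy
            # instead of refusing to boot (the "fail to restart" scenario above).
            with open(CACHE_PATH, "rb") as f:
                return json.loads(f.read())

If the except branch simply re-raised, every instance restarted during the outage would die at startup, which is exactly the snowball described above.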
After reading about Cloudflare infra in post-mortems, it has always been surprising how immature their stack is. Like, they used to run their entire global control plane in a single failure domain.
I'm not sure who is running the show there, but the whole thing seems kinda shoddy given Cloudflare's position as the backbone of a large portion of the internet.
I personally work at a place with less market cap than Cloudflare, and we were hit by the exact same kind of incident (datacenter power went out) and had almost no downtime, whereas the entire Cloudflare API was down for nearly a day.
Nice job keeping your app up during the outage but I'm not sure you can say "the whole thing seems kinda shoddy" when they're handling the amount of traffic they are.
What's the alternative here? Do you want them to replicate their infrastructure across different cloud providers with automatic fail-over? That sounds -- heck -- I don't know if modern devops is really up to that. It would probably cause more problems than it would solve...
I was really surprised. Depending on another enterprise's cloud services is risky in general, I think, but pretty much everyone does it these days; I just didn't expect Cloudflare to be one of them.
AWS has Outpost racks that let you run AWS instances and services in your own datacenter managed like the ones running in AWS datacenters. Neat but incredibly expensive.
> What's the alternative here? Do you want them to replicate their infrastructure
Cloudflare advertises themselves as _the_ redundancy / CDN provider. Don't ask me for an "alternative" but tell them to get their backend infra shit in order.
There are roughly 20-25 major IaaS providers in the world that should have close to no dependency on each other. I'm almost certain that Cloudflare believed that was their posture, and that the action items coming out of this post-mortem will be to make sure that this is the case.
Cloudflare isn't a cloud in the traditional sense; it's a CDN with extra smarts in the CDN nodes. CF's comparative advantage is in doing clever things with just-big-enough shared-nothing clusters deployed at every edge POP imaginable; not in building f-off huge clusters out in the middle of nowhere that can host half the Internet, including all their own services.
As such, I wouldn't be overly surprised if all of CF's non-edge compute (including, for example, their control plane) is just tossed onto a "competitor" cloud like GCP. To CF, that infra is neither a revenue center, nor a huge cost center worth OpEx-optimizing through vertical integration.
But then you do expose yourself to huge issues like this if your control plane is dependent on a single cloud provider, especially for a company that wants to be THE reverse proxy and CDN for the internet, no?
Cloudflare does not actually want to reverse proxy and CDN the whole internet. Their business model is B2B; they make most of their revenue from a set of companies who buy at high price points and represent a tiny percentage of the total sites behind CF.
Scale is just a way to keep costs low. In addition to economies of scale, routing tons of traffic puts them in position to negotiate no-cost peering agreements with other bandwidth providers. Freemium scale is good marketing too.
So there is no strategic reason to avoid dependencies on Google or other clouds. If they can save costs that way, they will.
Well I mean most of the internet in terms of traffic, not in terms of the corpus of sites. I agree the long-tail of websites is probably not profitable for them.
True, but how often do outages like this happen? And when outages do happen, does Cloudflare have any more exposure than Google? I mean, if Google can’t handle it, why should Cloudflare be expected to? It also looks like the Cloudflare services have been somewhat restored, so whatever dependency there is looks like it’s able to be somewhat decoupled.
So long as the outages are rare, I don’t think there is much downside for Cloudflare to be tied to Google cloud. And if they can avoid the cost of a full cloud buildout (with multiple data centers and zones, etc…), even better.
Latest Cloudflare status update basically confirms that there is a dependency on GCP in their systems:
"Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable"
Definitely very surprised to see that so many of the CF products that are there to compete with the big cloud providers have such a dependence on GCP.
Down Detector has a problem when whole clouds go down: unexpected dependencies. You see an app on a non-problematic cloud having trouble and report it to Down Detector, but that cloud is actually fine; their own stuff is running fine. What is really happening is that the app you are using has a dependency on a different SaaS provider who runs on the problematic cloud, and that is what is killing it.
It's often things like "we got backpressure like we're supposed to, so we gave the end user an error because the processing queue had built up above threshold, but it was because waiting for the timeout from SaaS X slowed down the processing so much that the queue built up." (Have the scars from this more than once.)
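To put rough numbers on that queue build-up (all figures below are purely illustrative assumptions, not from anyone's real system):

    # Illustrative arithmetic only; every figure here is an assumption.
    normal_processing_s = 0.05    # 50 ms per message when the SaaS call is healthy
    connection_timeout_s = 10.0   # client connect timeout
    timeouts_per_message = 2      # e.g. one retry before giving up

    healthy_throughput = 1 / normal_processing_s                               # 20 msg/s
    degraded_throughput = 1 / (normal_processing_s
                               + timeouts_per_message * connection_timeout_s)  # ~0.05 msg/s

    incoming_rate = 5  # msg/s arriving from users
    # The queue grows at incoming_rate - degraded_throughput, about 4.95 msg/s,
    # so a backpressure threshold of 10,000 queued messages trips in roughly
    # half an hour, even though "your" cloud is perfectly healthy.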
Surely if you build a status detector you realize that colo or dedicated are your only options, no? Obviously you cannot host such a service in the cloud.
I'm not even talking about Down Detector's own infra being down. I'm talking about actual legitimate complaints from real users (which is the data that Down Detector collates and displays), because the app they are trying to use on an unaffected cloud is legitimately sending them an error. It's just that, because of SaaS dependencies and the nature of distributed systems, one cloud going down can have a blast radius such that even apps on unaffected clouds will have elevated error rates, and that can end up confusing the displays on Down Detector when a large enough thing goes down.
My apps run on AWS, but we use third parties for logging, auth support, billing, things like that. Some of those could well be on GCP, though we didn't see any elevated error rates. Our system is resilient against those being down: after a couple of failed tries to connect, it will dump what it was trying to send into a dump file for later re-sending. Most engineers will do that.

But I've learned after many bad experiences that, after a certain threshold of failures to connect to one of these outside systems, my system should just skip calling out except for once every retryCycleTime, because otherwise all it will do is add two connectionTimeouts to every processing loop, building up messages in the processing queue, which eventually creates backpressure up to the user. If you don't have that level of circuit breaker built, you can cause your own systems to give out higher error rates even if you are on an unaffected cloud.
So today a whole lot of systems that are not on GCP discovered the importance of the circuit breaker design pattern.
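For reference, a minimal sketch of that pattern (Python, hypothetical names; send_to_saas and spool_to_disk are placeholders, and retry_cycle_time mirrors the retryCycleTime idea above), assuming a simple consecutive-failure threshold with a cooldown:

    import time

    class CircuitBreaker:
        """Skip calls to a flaky dependency after repeated failures."""

        def __init__(self, failure_threshold=3, retry_cycle_time=60.0):
            self.failure_threshold = failure_threshold
            self.retry_cycle_time = retry_cycle_time
            self.consecutive_failures = 0
            self.opened_at = None  # None means closed: calls are allowed

        def allow_call(self):
            if self.opened_at is None:
                return True
            # Circuit is open: only let one probe through per retry cycle.
            return (time.monotonic() - self.opened_at) >= self.retry_cycle_time

        def record_success(self):
            self.consecutive_failures = 0
            self.opened_at = None

        def record_failure(self):
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.failure_threshold:
                self.opened_at = time.monotonic()

    # Usage in the processing loop (send_to_saas / spool_to_disk are placeholders):
    #
    #   if breaker.allow_call():
    #       try:
    #           send_to_saas(message)
    #           breaker.record_success()
    #       except OSError:
    #           breaker.record_failure()
    #           spool_to_disk(message)
    #   else:
    #       spool_to_disk(message)  # skip the call entirely, no timeout penalty

The important part is the else branch: while the circuit is open, the loop pays no connection timeouts at all, so the queue drains instead of backing up.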
Down Detector can have a poor signal-to-noise ratio, given that (I'm assuming) it's users submitting "this is broken" reports for any particular app. Probably compounded by many people hearing of a GCP issue, checking their own cloud service, and reporting a problem at the same time.
"Chemist checks the project status, activation status, abuse status, billing status, service status, location restrictions, VPC Service Controls, SuperQuota, and other policies."
-> This would totally explain the error messages "visibility check (of the API) failed" and "cannot load policy", and the wide range of services affected.
cf. https://cloud.google.com/service-infrastructure/docs/service...
EDIT: Google says "(Google Cloud) is down due to Identity and Access Management Service Issue"