If you use an old web browser, many sites are already unusable because Cloudflare's CAPTCHA will deny you entry.

New but non-standard niche browsers are also problematic.

I usually have the same (residential) IP for weeks on end and there's absolutely no malware or scraping or whatever the heck it is that Cloudflare thinks it's protecting against going on in my house. Yet I still get blocked or captcha'd.

Website owners may understandably be appreciative of CF. But as someone browsing the web, I think it's done a lot of irreversible* damage to the open internet.

* I say irreversible because I don't think they'll be looking to improve this anytime soon, but rather add more restrictions.


As a website owner who uses Cloudflare after having been DDoS'd, I agree wholeheartedly.

Cloudflare succeeded in doing what Google tried and failed to do with AMP, and we are all the worse off for it. [Though at least it is not Google; that would be worse.]

I cannot afford to be DDoS'ed, and there are bad actors who have already proven that they _will_ take me down if they can. So, I feel bad for the internet being walled up, and I feel bad for the users who will lose access. And I fret that one day CF may just decide to take all my content and somehow use it to shut me down.

Meanwhile though, I hold my nose, cry inwardly, and continue to use Cloudflare.


What was your infrastructure like? Were the DDoSes affecting you at the application or network layer? I wonder if there's a case to be made for something like CF but integrated into your L4 and L7 LB infrastructure.

CF's single biggest piece of leverage on L7 DDoS is that once a node in a botnet attacks one of their properties, it usually can't be used to attack any others for a substantial duration. Botnets rely on being retasked frequently, so this dramatically reduces their effectiveness. Volumetric DDoS is even worse: you need the peering relationships and hardware to handle Tbps of traffic to an IP you announce. Doing either of these in your own infra is not feasible if you're much smaller than a hyperscaler.
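
To make the retasking point concrete, here's a toy sketch of a shared IP-reputation cache; the names and TTL are invented for illustration, and this is in no way CF's actual implementation:

    import time

    # Toy model: one reputation cache consulted at the edge for *every*
    # fronted property. Flagging an IP on any one property penalizes it
    # everywhere for a cooldown window, so retasking that botnet node
    # against a different target buys the attacker nothing.
    BLOCK_TTL_SECONDS = 6 * 3600  # hypothetical cooldown duration

    _blocked_until: dict[str, float] = {}  # ip -> unix time the block expires

    def flag_attacker(ip: str) -> None:
        """Called by any property's detection layer when `ip` misbehaves."""
        _blocked_until[ip] = time.time() + BLOCK_TTL_SECONDS

    def is_blocked(ip: str) -> bool:
        """Checked for every incoming request, regardless of target property."""
        expiry = _blocked_until.get(ip)
        if expiry is None:
            return False
        if time.time() >= expiry:
            del _blocked_until[ip]  # cooldown over; forget the IP
            return False
        return True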

Right, CF (along with Google and Meta) is already serving double-digit percentages of the world's traffic, so it can absorb whatever packets you can toss at it. On the other hand, I suspect most services are going to fall over at L7 first due to common patterns: pre-forked Ruby/Python servers that struggle to process more than 1k qps per node, unauthenticated user actions putting load on hard-to-scale resources like an RDBMS, next to no load shedding designed into the system, etc.
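
As a sketch of what "designed-in load shedding" could look like for one of those pre-forked Python servers, here's a minimal WSGI middleware that fails fast under overload; the concurrency cap and names are assumptions, not anything from a real framework:

    import threading

    MAX_CONCURRENT = 64  # hypothetical cap; tune to what one node sustains

    class LoadShedMiddleware:
        """Reject requests beyond a concurrency cap with a 503 instead of
        letting them queue up and tip the whole node over."""

        def __init__(self, app, max_concurrent=MAX_CONCURRENT):
            self.app = app
            self._slots = threading.BoundedSemaphore(max_concurrent)

        def __call__(self, environ, start_response):
            # Non-blocking acquire: if every slot is busy, shed immediately.
            if not self._slots.acquire(blocking=False):
                start_response("503 Service Unavailable",
                               [("Content-Type", "text/plain"),
                                ("Retry-After", "1")])
                return [b"overloaded, retry shortly\n"]
            try:
                # Materialize the body so the slot is held for the whole request.
                return [chunk for chunk in self.app(environ, start_response)]
            finally:
                self._slots.release()

Wrapping any WSGI app (Flask, Django, etc.) in this keeps the node answering quickly under overload rather than collapsing once it's pushed past the qps it can actually sustain.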

That wouldn't be a DDoS, just flaky rate limiting.

And when you use decent protections against the worst bad actors on the internet (DNS blackholing, adblocking, cookie dropping/corrupting, VPNs), Cloudflare again causes problems.

Just be a good little consumer.