> When companies like Cloudflare mischaracterize user-driven AI assistants as malicious bots, they're arguing that any automated tool serving users should be suspect
Strawmen. They aren't arguing that any automated tool should be suspect. They are arguing that an automated tool with sufficient computing power should be suspect. By Perplexity's reasoning, I should be able to set up a huge server farm and hit any website with 1,000,000 requests per second because 1 request is not seen as harmful. In this case, of course, the danger with AI is not a DoS attack but an attack against the way the internet is structured and the way websites are supposed to work.
> This overblocking hurts everyone. Consider someone using AI to research medical conditions,
Of course you will put medical conditions in there: appeal to the hypothetical person with a medical problem, a rather contemptible and revolting argument.
> This undermines user choice
What happens to user choice when website designers stop making websites or writing for websites because the lack of direct interaction makes it no longer worthwhile?
> An AI assistant works just like a human assistant.
That's like saying a Ferrari works like someone walking. Yes, they both go from A to B, but the Ferrari can go 400 km down a highway much faster than a human. So, no, it has fundamental speed and power differences that change the way the ecosystem works, and you can't ignore the ecosystem.
> This controversy reveals that Cloudflare's systems are fundamentally inadequate for distinguishing between legitimate AI assistants and actual threats.
As a website designer and writer, I consider all AI assistants to be actual threats, along with the entirety of Perplexity and all AI companies. And I'm not the only one: many content creators feel the same and hope your AI assistants are neutralized with as much extreme prejudice as possible.
> By Perplexity's reasoning, I should be able to set up a huge server farm and hit any website with 1,000,000 requests per second because 1 request is not seen as harmful.
That's a slippery slope all the way to absurd. They're not talking about millions of requests a second. They're talking about a browsing session (a few page views) as a result of a user's action. It's not even additional traffic and there's no extra concurrency - it's likely the same requests the user would make, just with a shorter delay.
> That's a slippery slope all the way to absurd. They're not talking about millions of requests a second. They're talking about a browsing session (a few page views) [...]
My statement was meant as an analogy. I'm not saying the argument against Perplexity and agents is about requests per second. I'm saying there's an analogous argument: that the power of AI to transform the browsing experience is akin to the power of a server farm and is thus a net negative. Therefore, your interpretation of what I was saying is wrong.