Imagine for a moment what would happen if this rationale were extended to the criminal justice system. Due process is sanctified in law for a good reason. Incontestable assumptions of adversarial intent are the slow but sure path to the degradation of any community.
There will always be blind spots and malicious actors, no matter how you structure your policies on content moderation. Maintaining a thriving and productive community requires active, human effort. Automated systems can be used to counteract automated abuses, but at the end of the day, you need human discretion and judgement to fill those blind spots, adjust moderation policies, proactively identify troublemakers, and keep an eye on people toeing the line.
> Imagine for a moment what would happen if this rationale were extended to the criminal justice system.
It already is!
The criminal justice system is a perfect example of why total information transparency is a terrible idea: never talk to the cops, even if they just want to "get one thing cleared up." Your intentions don't matter; you're being given more rope to hang yourself with.
It's an adversarial system where transparency gains you little but gains your adversary a whole lot. You should never explain your every action and reasoning to the cops without a lawyer telling you when to STFU.
Due process is sanctified, but the criminal justice system is self-aware enough to recognize that self-incrimination is a hazard, and it rightly does not place that burden on the investigated or accused. Why should other adversarial systems do any less?
1. To keep people from cozying up to the electric fence. If you don't know where the fence is, you probably won't risk a shock trying to find it. There are other ways to accomplish this, like very publicly bringing the banhammer down on everyone near the fence every so often, but point 2 kinda makes that suck.
2. To not make every single ban a dog and pony show when it's circulated around the blogspam sites.
I'm not gonna pass judgement on whether it's a good thing or not, but it's not at all surprising that companies plead the 5th in the court of public opinion.
Sorta related to (1) but not really: there are also more "advanced" detection techniques that most sites use to identify things like ban evasion and harassment using multiple accounts. If they say "we identified that you are the same person using this other account and have reason to believe you've created this new account solely to evade that ban," then people will start to learn what techniques are being used to identify multiple accounts and get better at evading detection.
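To make that concrete, here's a minimal sketch of the kind of heuristic such a system might use: hash a few request attributes into a crude fingerprint and flag new accounts whose fingerprint collides with a banned one. The attributes, thresholds, and function names here are illustrative assumptions, not any particular site's actual method; real systems combine far more signals and fuzzier matching.

```python
from hashlib import sha256

def fingerprint(ip: str, user_agent: str, tz_offset: int) -> str:
    """Hash a few request attributes into a crude device fingerprint.
    (Illustrative only; real systems use many more signals.)"""
    # Use the /24 prefix so small IP churn doesn't break the match.
    ip_prefix = ".".join(ip.split(".")[:3])
    raw = f"{ip_prefix}|{user_agent}|{tz_offset}"
    return sha256(raw.encode()).hexdigest()

def looks_like_ban_evasion(new_account_fp: str, banned_fps: set) -> bool:
    """Flag a new account whose fingerprint matches a banned account's."""
    return new_account_fp in banned_fps

# Hypothetical usage: same subnet, browser, and timezone as a banned account.
banned = {fingerprint("203.0.113.7", "Mozilla/5.0 (X11; Linux)", -300)}
candidate = fingerprint("203.0.113.42", "Mozilla/5.0 (X11; Linux)", -300)
print(looks_like_ban_evasion(candidate, banned))  # True
```

The point of keeping this opaque is exactly what the parent says: if the site spells out that the /24 prefix, user agent, and timezone are what linked the accounts, evaders learn to vary precisely those attributes.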