> there will be NO relation between the topic of the content and whether you moderate it, because it's the specific posting behavior that's a problem
I get why Yishan wants to believe this, but the entire premise of this argument ends up aimed at a straw man version of the problem people are actually pointing to when they claim moderation is content-aware.
The issue, truly, isn't what the platform moderates so much as the bias in when it bothers to moderate and when it doesn't.
If you have a platform that bothers to filter messages that "hate on" famous people but doesn't even notice messages that "hate on" normal people--even if the reason is just that almost no one sees the latter messages, so they don't have much impact and your filters don't catch them--you have a (brutal) class bias.
If you have a platform that bothers to filter people who are "repetitively" against big, established tech companies for the evil things they do trying to amass money, and yet doesn't filter people who are "repetitively" against crypto companies for the evil things they do trying to amass money--even if it feels to you, as the moderator, that the person seems to have a point ;P--that is another bias.
The problem you see in moderation--and I've spent a LONG time both being a moderator myself and working with people who have spent their lives being moderators, for forums and for live chat--is that moderating and verifying everything not only feels awkward but simply doesn't scale, so you try to build mechanisms that moderate just enough that the forum keeps a high enough signal-to-noise ratio that people are happy and generally stay.
But the way you get that scale is by automating and triaging: you build mechanisms involving keyword filters and AI that try to find and flag low-signal comments, and you rely on user reports to direct later attention. The problem is that these mechanisms inherently have biases, and those biases absolutely end up including biases related to the content.
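To make that concrete, here is a minimal sketch of the kind of triage mechanism I'm describing--a keyword list plus a report-count threshold. The names and numbers are entirely made up (this is not any real platform's pipeline), but notice that the bias is baked in before a human moderator ever sees the queue: only topics someone thought to put on the keyword list get flagged, and report volume scales with audience size, so low-visibility posts rarely surface at all.

```python
# Hypothetical triage sketch: NOT any real platform's pipeline.
# Shows how keyword filters + report thresholds encode bias before
# a human moderator ever looks at anything.

from dataclasses import dataclass

# Someone had to write this list, and they only wrote down the topics
# they were already worried about -- that choice is itself content bias.
FLAGGED_KEYWORDS = {"celebrityname", "bigtechcorp"}  # note: no crypto terms here

REPORT_THRESHOLD = 5  # posts with fewer reports are never reviewed

@dataclass
class Post:
    text: str
    views: int
    reports: int = 0

def needs_review(post: Post) -> bool:
    """Return True if the post lands in the human review queue."""
    text = post.text.lower()

    # Keyword pass: only catches what the list's author anticipated.
    if any(kw in text for kw in FLAGGED_KEYWORDS):
        return True

    # Report pass: report volume roughly tracks audience size, so a
    # nasty post seen by a dozen people effectively never crosses the bar.
    return post.reports >= REPORT_THRESHOLD

# A post "hating on" a famous person gets reported by a large audience...
viral = Post("celebrityname is a fraud", views=500_000, reports=40)
# ...while the same sentiment aimed at a normal person is invisible.
obscure = Post("my coworker jane is a fraud", views=12, reports=1)

print(needs_review(viral))    # True  -- flagged by keywords AND reports
print(needs_review(obscure))  # False -- never reaches a moderator
```

The point isn't that anyone designed this to be classist; it's that the mechanism's blind spots line up with content and audience all the same.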
Yishan seems to be arguing that perfectly unbiased moderation might still seem biased to some people, but he doesn't bother to look at where or why moderation falls short of actually working the way he claims it should, and I'm telling you: it never does, because moderation isn't omnipresent and cannot be applied equally to every relevant circumstance. He pays lip service to this in one place (throwing Facebook under the bus near the end of the thread), and yet fails to realize that this is the argument.
At the end of the day, real-world moderation is certainly biased. And maybe that's OK! But we shouldn't pretend it isn't biased (as Yishan does here) or that the bias is always in the public interest (as many others do). That bias may, in fact, be an important part of moderating... and yet it can also be extremely evil and hard to distinguish from "I was busy" or "we all make mistakes", as it is often subconscious or made with the best of intentions.