Isn't it inconsistent to say both "moderation decisions are about behavior, not content" and "platforms can't justify moderation decisions for privacy reasons"?
It seems like you wouldn't need to reveal any details about the content of the behavior, but could just say "look, this person posted X times, or was reported Y times", etc. I find the author to be really hand-wavy about why this part is difficult.
I work with confidential data, and we track personal information through our system and scrub it at the boundaries (say, when porting it from our primary DB to our systems for monitoring or analysis). I know many other industries (healthcare, education, government, payments) face very similar issues...
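For concreteness, here's a minimal sketch of the boundary-scrubbing pattern I mean, in Python. The field names and the salted-hash scheme are purely illustrative, not from any real schema: PII fields get replaced with a one-way token at export time, so downstream systems can still count and join on them without ever seeing the raw values.

```python
import copy
import hashlib

# Fields we treat as PII; everything else passes through untouched.
# (Hypothetical field names for illustration.)
PII_FIELDS = {"email", "real_name", "ip_address"}

def scrub_record(record: dict, salt: str = "analytics-v1") -> dict:
    """Return a copy of `record` safe to export across the boundary:
    PII fields are replaced with a salted one-way hash, so counts and
    joins still work downstream but the original values don't leak."""
    scrubbed = copy.deepcopy(record)
    for field in PII_FIELDS & scrubbed.keys():
        digest = hashlib.sha256((salt + str(scrubbed[field])).encode()).hexdigest()
        scrubbed[field] = f"pii:{digest[:16]}"
    return scrubbed

if __name__ == "__main__":
    row = {
        "user_id": 42,
        "email": "alice@example.com",  # scrubbed on export
        "post_count": 17,              # behavioral data: kept as-is
        "report_count": 3,             # behavioral data: kept as-is
    }
    print(scrub_record(row))
```

The point being: the behavioral counts survive the boundary untouched, so "this person posted X times and was reported Y times" is exactly the kind of justification a platform could publish without exposing content.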
So why don't any social network companies already do this?
For one, giving specific examples gives censored users an opening for point-by-point rebuttals. In my experience, point-by-point rebuttals are themselves a behavior that should be considered bad and moderated against, because they encourage the participant to consider each point in isolation and ignore the superlinear effect of all the points taken together.

For another, the user can latch onto specific examples that seem innocuous out of context and complain that their censorship was obviously heavy-handed, and if the user is even remotely well known, it becomes the famous person's word against random other commenters trying to add context.

The net result is that people see supposed problems with moderation far more often than anyone ever says "man, I sure am glad that user's gone", so there's a general air of resentment against the moderation and belief in its ineffectiveness.
Point-by-point rebuttals are probably very annoying for the moderators, but knowing what actually gets moderated makes it easier to work within the rules. Imagine acting in a society where an often-invisible police force enforces secret laws: you can blindly appeal, but you don't get to know what they allege you did, you don't get to defend yourself, the court sits in secret, and it is made up entirely of members of the police. Your safe option is to predict what is desired and be maximally conformist; otherwise you roll the dice, act the way you want, and hope the secret police are similar enough to you that they'll tolerate it.