As I've said for a long time, I don't mind moderation, I just want to be in charge of what I see. Give me the tools that the moderators have, let me be able to filter out bots at some confidence level; let me see "removed" posts, banned accounts; don't mess with my searches unless I've asked for that explicitly.
I don't think that really deals with beheading videos, incitement to terrorism, campaigns to harass individuals and groups, child porn, and many cases where online communities document or facilitate crimes elsewhere.
Child porn is illegal. Are beheading videos illegal? Incitement to terrorism is probably a crime (though I'd argue that it should be looked at under the imminent lawless action test[1] as it's speech). So all of these would be removed and are not part of a moderation discussion.
As to "many cases where online communities document or facilitate crimes elsewhere", why criminalise the speech if the action is already criminalised?
That leaves only "Campaigns to harass individuals and groups". Why wouldn't moderation tools as powerful as the ones employed by Twitter's own moderators deal with that?
The problem here is that the default assumption is that everyone on the internet is under the jurisdiction of US law, when the majority in fact are not.
These are global platforms with global membership, simply stating that “if it is free speech in America it should be allowed” isn’t a workable concept.
How about saying that if it is free speech in America it should be allowed in America, but censored in countries where it is against the law? It seems very easy to say.
So different users aren’t able to see full threads based on their location? You’re seemingly randomly able to respond in some circumstances and not others?
When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads?
> "So different users aren’t able to see full threads based on their location? You’re seemingly randomly able to respond in some circumstances and not others?"
Of course. That is what they've demanded, so that is what they get.
> "When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. "
On the contrary: You must have this. As a matter of law. There is no alternative, other than withdrawing from those countries entirely and ignoring the issue of people accessing your site anyway (which is what happens in certain extreme situations, states under sanctions, etc)
> " It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads? "
Here are the options:
1) Do not do business in those countries.
2) Provide different services for those countries to reflect their legal requirements.
There is no way to provide a globally consistent experience because laws are often in mutual conflict (one state will for example prohibit discussion of homosexuality and another state will prohibit discriminating on the basis of sexual preference)
That's correct and that's actually how it works right now (Germany has different speech laws and Twitter attempts to comply with them[1]). However, it is an American company and it's not unreasonable to follow American law in America. I would also think it's quite possible to use the network effect of the service to bully places like Germany into allowing greater expression, or simply to provide it on the sly by making it easy for Germans to access what they want. Although, I do see the EU is trying to do the same in reverse, probably (as is its wont) to create a tech customs union that allows its own tech unicorns to appear (something it has failed miserably at, in part because of its restrictive laws).
If I had a tool that could (at least attempt to) filter out anti-semitism or Holocaust denial, then Germany could have that set to "on" to comply with the law. I'm all for democracies deciding what laws they want.
'x is illegal' is a cop-out (albeit often an unintentional one) and I wish people would stop using it. Anything can be made illegal; are you just going to roll over if expressing an unpopular idea becomes a crime? Conversely, illegality doesn't deter a lot of people, and many are skilled at playing with the envelope of legality, so absent any moderation you'll get lots of technically/arguably legal content that is designed to degrade or disrupt the operation of a target forum.
It's unhealthy to just throw every difficult problem at courts; the legal system is clumsy, unresponsive, and often tends to go to unwanted extremes due to a combination of technical ignorance, social frustration, and useless theatrics.
We're talking about a social media service adhering to one of the most liberal set of speech laws and norms in the world, not the imposition on the population of an unjust law. Tell me I can't say the word "gay" on threat of imprisonment and I'll say it more but that's not relevant to this discussion.
It's the "documentation of the crime" aspect of child pornography that makes it illegal. It is still technically illegal in parts of the US to possess, say, drawn illustrations of pornography featuring minors (what 日本人 call "lolicon") but the legal precedents are such that it can't really be prosecuted.
That is, it's not clear in the US you can ban something on the basis of it being immoral, you need to have the justification that it is "documentation of a crime".
What makes child porn illegal is the argument that anyone who views or distributes it is re-abusing the victim. Otherwise, it would be justifiably illegal to create but not to view, possess, or distribute. Yet all are illegal in the USA.
This does not stop the FBI from being a major child porn distributor, despite that meaning the FBI is re-abusing thousands of victims under this rubric.
> What makes child porn illegal is the argument that anyone who views or distributes it is re-abusing the victim.
That's what makes it illegal? What if it's done on a private forum that the victim never finds out about? What if the victim is, say, dead? I don't think those change the legality.
Here there's a major difference between US and EU law, and I daresay culture as well: how private information is viewed.
As far as I understand in the EU private information is part of the self. Thus, manipulating, exchanging, dealing with private information without the person's consent is by itself a kind of aggression or violation of their rights. Even if the person never finds out.
In the USA however private information is an asset. The aggression or violation of right only happens when it actually damages the victim's finances. So if the victim never finds out about discussions happening somewhere else in the world, well… no harm done I guess?
Both views are a little extreme in my opinion, but the correct view (rights are only violated once the victim's own life has been affected in any way) is next to impossible to establish: in many cases the chain of events that can eventually affect a person's life is impossible to trace. Because of that I tend to suggest caution, and lean towards the EU side of the issue.
Especially if it's the documentation of a crime as heinous as child abuse.
That's the rubric courts and legislatures in the USA have used.
It is, in general, really really difficult to pass speech laws in the USA because of that pesky First Amendment -- even when the material in question is documentation of a crime. Famously, Joshua Moon of Kiwi Farms gleefully hosted the footage from the Christchurch shooting even when the actual Kiwis demanded its removal.
But if you can argue that procurement or distribution of the original material perpetuates the original crime, that is, if it constitutes criminal activity beyond speech -- then you can justify criminalizing such procurement or distribution. It's flimsy (and that makes it prone to potentially being overturned by some madlad Supreme Court in the future with zero fucks to give about the social blowbacks), but it does the job.
In other countries it's easy to pass laws banning speech based on its potential for ill social effects. Nazi propaganda and lolicon manga are criminalized in other countries, but still legal in the USA because they're victimless.
If this makes you wonder whether it's time to re-evaluate the First Amendment -- yes. Yes, it is.
I'm in favor of the First Amendment remaining at least this strong. None of the above things strike me as nearly as dangerous as "the ruling party being able to suppress criticism and opposition by claiming that their opponents' words have potential for ill social effects".
Well, based on https://cbldf.org/criminal-prosecutions-of-manga/, it seems you probably can beat the charges, but it will take years and an expensive legal defense. People have been prosecuted and usually take plea bargains, so some amount of jail time can be expected. Simple cases of "manga is child porn! yadda yadda" can probably be overlooked, but if the police don't like you for some reason, getting arrested is definitely a risk. Although there is supposed to be "innocent until proven guilty", even getting arrested can disqualify you from many jobs.
> even getting arrested can disqualify you from many jobs.
That's something that I think is seriously wrong with the USA right now: the idea of an "arrest record", or at least the idea of it being accessible by anyone other than the police.
There are a number of situations where it is perfectly reasonable to arrest innocent people, then drop all charges. Let's say the cops arrive at a crime scene: there's a man on the ground lying in a pool of blood, and another man standing with a smoking gun holstered at their hip. Surely it would be reasonable to arrest the man that's still standing and confiscate their gun, at least for the time necessary to establish the facts?
But then once all charges have been dropped (say the dead guy had a knife and witnesses identify him as the aggressor), that arrest should be seen as nothing more than either a mistake or a necessary precaution. It's none of a potential employer's business. In fact, I'd go as far as to make it illegal to even ask for arrest records, or to discriminate on that basis.
That's genuinely interesting (have an upvote) but a social media site's responsibility in a situation such as this is to assess legality, not prosecutability, hence it would be removed.
Anime image boards are not in a hurry to expunge "lolicon" images because they don't face any consequence from having them.
I wouldn't blame Tumblr for banning ero images a few years back, because ero images of real people are a lot of trouble: you have child porn, revenge porn, etc. Pornography produced by professionals has documentation about provenance (every performer showed somebody their driver's license and birth certificate, and probably got issued a 1099); if this were applied to people posting images from the wild, they would say people's privacy is being violated.
I'm not here to debate the legality of child porn or lolicon images, and I fail to see the relevance of what you've written to the provision of moderation tools to the users of Twitter.
> Laws don't enforce themselves.
What has that got to do with Twitter? Please try to stay on track.
The vast majority of moderator removed comments and posts on Reddit have nothing to do with the illegal activities you mention.
The vast majority of removed comments are made to shape the conversations.
I think most people would be ok with letting admins remove illegal content, while allowing moderators to shape content, as long as users could opt in to seeing content the mods censored.
This is a win-win. If people don't want to see content they feel is offensive, they don't have to.
Legal vs illegal cannot be enforced on a private platform because the truth procedure for "legal vs illegal" involves a judge, lawyers, often waiting for years.
What you can enforce is "so and so says it is illegal" (accurate 90% or 99% or 99.9% of the time but not 100%) or some boundary that is so far away from illegal that you never have to use the ultimate truth procedure. The same approach works against civil lawsuits, boycotts and other pressure which can be brought to bear.
I think of a certain anime image board, which contains content so offensive it can't even host ads for porn, that stopped taking images of cosplayers or any real-life people because doing so eliminated moderation problems that would otherwise be difficult.
There is also spam (should spam filters for email be banned because they violate the free speech of spammers?) and other forms of disingenuous communication. When you confront a troll, inevitably they will make false comparisons (e.g. banning Kiwi Farms is like banning talk to the effect that trans women could damage the legitimacy of women's sports just when people are starting to watch women's sports).
On top of that there are other parties involved. That anime site I mention above has no ads and runs at very low cost but has sustainability problems because it used to sell memberships but got cut off by payment providers. You might be happy to read something many find offensive but an advertiser might not want to be seen next to it. The platform might want to do something charitable but hosting offensive talk isn't it.
> (should spam filters for email be banned because the violate the free speech of spammers?)
I submit that spam filters should be under the sole control of their end users. If I'm using a Yahoo or Gmail account (I'm not), I should have the option to disable the spam filter entirely, or to use only personal parameters trained on the mail only I received, and no email should ever be summarily blackholed without letting me know in some way. If an email bounces, the sender should know. If it's just filtered, it should be in the recipient's spam folder.
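A minimal sketch of that policy in Python, just to be concrete; the config fields and types are invented, not any real mail provider's API:

```python
# Hypothetical per-user spam policy: the user can switch filtering off or keep it
# personal, and filtered mail is never silently dropped.
from dataclasses import dataclass
from enum import Enum

class Delivery(Enum):
    INBOX = "inbox"            # delivered normally
    SPAM_FOLDER = "spam"       # filtered, but still visible to the recipient
    # deliberately no "blackhole" option

@dataclass
class UserFilterConfig:
    filtering_enabled: bool = True    # the user may disable the filter entirely
    personal_model_only: bool = True  # train only on mail this user received
    spam_threshold: float = 0.9

def route(cfg: UserFilterConfig, spam_score: float) -> Delivery:
    """Decide where a message lands; the recipient always retains access to it."""
    if not cfg.filtering_enabled or spam_score < cfg.spam_threshold:
        return Delivery.INBOX
    return Delivery.SPAM_FOLDER
```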
> because the truth procedure for "legal vs illegal" involves a judge
This part is not correct. Private companies block what they believe to be illegal activities in their systems constantly - in order to limit the legal liability of being an accomplice to a crime. This is the case in all industries - and is standard practice from banking, to travel, to hotels, to retail... it's commonplace for companies to block services.
For spam, I would recommend that it gets a separate filter-flag allowing users to toggle it and see spam content, separately toggled from moderated content.
Vouching for a comment makes it visible for everyone else. That means that dang reviews whether the comment breaks the rules and if it does he'll take away your ability to vouch in the future. Therefore, vouching is not a way to continue commenting on a sub-thread that breaks the rules.
There's "breaking the rules" and then there's "breaking the rules according to a certain interpretation" and then there's "I don't like what he said, so I'm going to interpret the rules in a way that justifies removing his post".
It all comes down to some guy telling me how to talk. I don't like it. Anybody who likes it has rocks in his head.
Since I was not born with a language, yes I've been told how to talk for a sizeable portion of my life.
In fact learning things like tact and politeness, especially as it relates to the society I live in, has been monumental in my success.
Do you go to your parents house and tell them to screw off? Do you go to work and open your mouth like a raging dumpster fire? Do you have no filter talking to your husband/wife/significant other? Simply put your addition to the discussion is that of an impudent child. "I want everything and I want to give up nothing" is how an individual becomes an outcast, and I severely doubt this is how you actually live outside the magical place known as the internet, though I may be surprised.
I think it's possible to know how to control one's tongue and also not like being told what to say or how to speak, quite possibly because one knows how to do that and because they have their own mind.
I mean, I also don't like that I'm not infinitely wealthy, have a limited lifespan, and am subject to the cruelties of entropy. My like or dislike of these has little to do with addressing the problems posed by all the above situations in a rational and realistic manner, while considering the outcomes of granting myself unlimited godlike power in doing so.
And by moving the goalposts you aren't addressing my comment, but still…
What "godlike power" are you referring to? The ability to moderate what turns up in your own social media feed? The ability to respond to comments someone else has deemed to break rules. I would hope for a bit more than that for godlike power.
I may be unique in this regard, but I am aware of the fact that sometimes I make mistakes, and I don’t highly value all of my conversation, sometimes I just rant, or enjoy engaging in more superfluous conversation. Sometimes the best conversations aren’t “highly valuable”!
> It all comes down to some guy telling me how to talk.
Nobody is "telling you how to talk."
People are free to choose their terms for voluntary social interactions. You don't have a right to inflict yourself on others who wish not to interact with you.
I like it even less when 100 IQ people are not told how to talk, repeatedly - they render coherent conversation and decision-making impossible.
I'll give up some of my freedom, to limit everyone else's freedom every day of the week - the only concern I have is roughly the IQ of the people doing the limiting.
You can actually email dang/the mods and make your case. Make sure to read up on the extensive documentation he shares for how he moderates and how he thinks about moderation right here on HN first, plus any discussion on past suggestions. Mastering the search box helps. A lot of modern HN policy and functionality started as recurring suggestions that became experiments.
If it's possible to train models that show people the exact ads they like, then I have absolutely no doubt one could do the same to only show the content you like. Learning from downvoted and reported posts, etc.
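A toy sketch of what that could look like, assuming a per-user model trained on that user's own votes and reports; the training examples, labels, and function names below are all made up:

```python
# Hypothetical personal feed filter: learn from what this user downvoted/reported.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = the user engaged positively, 0 = the user downvoted or reported it
history = [
    ("interesting thread about index funds", 1),
    ("you are an idiot and so is everyone here", 0),
    ("great write-up on spam filtering", 1),
    ("outrage-bait hot take, please retweet", 0),
]
texts, labels = zip(*history)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def keep_in_feed(post_text: str, threshold: float = 0.5) -> bool:
    """Hide posts the personal model predicts this user would downvote or report."""
    return model.predict_proba([post_text])[0][1] >= threshold
```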
So, basically, you want moderation as it's described in the article. So, arguably, you mind about moderation as much as its author, since that's their point.
I read the article and second it, but I want more than that, I want the tools the current moderators have, not just ones they provide me. If they give me a tool, I want it to be as powerful as their tool, not a compromise. If I'm to moderate my own feed then why do I not get the tools of a moderator?
So you want a moderator to moderate, but then you also want tools to see what has been moderated away and to unlock it? Right? So moderation, yes, but also un-moderation by the users.
I don't know if this is what OP meant, but I really like your interpretation
Mods exist and can ban/lock/block people and content but users can see everything that was banned, removed or locked, as well as the reason why; what policy did the user violate?
I think the only exception would be actually illegal content; that should be removed entirely, but maybe keep a note from the mods in its place stating "illegal content".
That way users can actually scrutinise what the mods do, and one doesn't wonder whether the mods removed a post because they are biased or for legit reasons. Opinions are not entirely removed, as they are still readable, but you can't respond to them.
In addition, what about crap floods? If I submit half a billion posts, do you really want that handled by moderation?
Being a server operator, I've seen how badly the internet actually sucks; this may be something the user base doesn't have to experience directly. When 99.9% of incoming attempts are dropped or banlisted, you start to learn how big the problem can be.
Spam can work the same way — that's how our email spam filters work. To use the OP's "censorship vs moderation" dichotomy, the current "censorship" regime would be like if your email filter marked an email as spam, and not only did not give you the option to disagree with the filter (i.e. mark as "Not Spam"), but didn't even give you the option to see the offending message to begin with.
Spam may still leak into our inboxes today, but the level of user control over email spam is generally a stable equilibrium, the level of outrage around spam filters — and to be clear, there are arguments to be made that spam filters are increasingly biased — is much MUCH lower than that around platform "censorship".
Spam is advertising, right? That doesn't need special protection. Flooding is like the heckler's veto so that could also be against the rules, it doesn't need special protection either.
As to moderation, why not be able to filter by several factors, like "confidence level this account is a spammer"? Or perhaps "limit tweets to X number per account", or "filter by chattiness". I have some accounts I follow (not on Twitter, I haven't used it logged in in years) that post a lot, I wish I could turn down the volume, so to speak.
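A rough sketch of the kind of user-side rules I mean; the field names and cutoffs are invented:

```python
# Hypothetical feed rules: drop high-confidence spammer accounts and cap
# how many posts any single account contributes ("turn down the volume").
from collections import defaultdict
from typing import Iterable

def filter_feed(posts: Iterable[dict],
                max_per_account: int = 5,
                spammer_confidence_cutoff: float = 0.8) -> list:
    """Each post is a dict like {'author': ..., 'text': ..., 'spammer_confidence': ...}."""
    per_author = defaultdict(int)
    kept = []
    for post in posts:
        if post.get("spammer_confidence", 0.0) >= spammer_confidence_cutoff:
            continue                       # skip accounts the platform thinks are spammers
        if per_author[post["author"]] >= max_per_account:
            continue                       # chattiness cap per account
        per_author[post["author"]] += 1
        kept.append(post)
    return kept
```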
What is spam... exactly? Especially when it comes to a 'generalized' forum. I mean would talking about Kanye be spam or not? It's this way with all celebrities, talking about them increases engagement and drives new business.
Are influencers advertising?
Confidence systems commonly fail across large generalized populations with focused subpopulations. Said subpopulations tend to be adversely affected by moderation because their use of communication differs from the generalized form.
That spam is advertising does not make all advertising spam.
We already have filters based on confidence for spam via email, with user feedback involvement too, so I don't need to define it, users of the service can define it for me.
Simply put, we keep putting more and more filtering on the user with complete disregard for physical reality, and ignoring the costs.
The company that provides the service defines the moderation because the company pays for the servers. If you start posting 'bullshit' that doesn't eventually pay for the servers and/or drives users away, money will be the moderator. There are no magic free servers out there in the world capable of unlimited space and processing power.
> Simply put we keep putting more and more and more filtering on the user
Who would be put upon by this? The average user doesn't have to be, they could use the default settings which are very anodyne. The rest of us get what we want, that's what the article stated. Who's finding this a burden?
As to the reality of things, Twitter's just been bought for billions and there's plenty of bullshit being posted there. That's the reality, and several people who've made a lot of money by working out how to balance value and costs think it can do better.
The old Something Awful forums did something similar. If someone posted something that was unacceptable, the comment would generally stay and get a tag saying "the user was banned for this post". They also had a moderation history so you could go back and see mod comments on why they gave bans/probations.
Here's the thing: this has been discussed on the fediverse before, and the general consensus is that if a post is deleted by a mod or something, it is "gone". They record a log of the deletion, but not what was deleted, because it should not exist anymore.
I would say something more akin to SPAM scoring would be good.
Contextual filters/scanners would give a piece of content a "score" based on whatever categorizations are being filtered (NSFW, non-inclusive language, slurs, disinfo, etc.).
Then both the creator and the consumer should be able to see the score in a transparent manner, with the consumer being able to set a threshold to filter out any post that scores higher than what they choose.
Free speech absolutists could set it to 0, the default could be 50, and go from there.
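Something like this, as a back-of-the-envelope sketch; the category names and numbers are illustrative, and under this reading the absolutist end of the dial is simply the maximum threshold, i.e. nothing filtered:

```python
# Hypothetical per-category content scores (0-100) and a per-user threshold.
DEFAULT_THRESHOLD = 50

def hidden_for(user_threshold: int, content_scores: dict) -> bool:
    """Hide content if any category score exceeds the user's chosen threshold."""
    return any(score > user_threshold for score in content_scores.values())

post = {"nsfw": 70, "slurs": 10, "disinfo": 5}
print(hidden_for(DEFAULT_THRESHOLD, post))  # True: the default user doesn't see this
print(hidden_for(100, post))                # False: a "show me everything" user does
```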
I agree. The only part I'm doubtful about is whether I should be able to see an individual's score, as it might create a prejudice against that individual that is felt by that individual ("I see you're a bot/troll/like hate speech…"). It also makes me wonder if individual centralised mods should be able to see more than that, but I digress.
Scores across a range of measures would be best, in my view.
Mastodon decided to go the guilt-by-association route instead:
Instead of just leaving it to users or admins to block users or servers they didn’t want to deal with, a large subset decided to block for anyone who didn’t block certain other servers.
I agree with the spirit, but I think we have to consider the structure.
>> Give me the tools that the moderators have
Whatever tools a site like twitter or youtube gives you, (A) most people will never use them and (B) they still control how the tools work. These two are enough to achieve any censorship goal you might have, and enough to make censorship inevitable.
I don't think we get power to the people while Alphabet/Elon/Whatnot own the platform. It's a shame that FOSS failed on these fronts. But, the internet has produced powerful proofs of concept. The WWW itself, for the first 20 years. Wikipedia. Linux/gnu. Those really did give power to the people, and you can see how much better they were at dealing with censorship, disinformation and other 2020s infopolitics.
Wikipedia is a terrible example for the "zero censorship" crowd because stuff gets deleted or locked all the time. It's an example of how you can produce something useful despite a raging ideological battleground over all sorts of topics.
I didn't say "zero censorship." I don't even know what that means for an encyclopedia.
Wikipedia has a model for user-generated content. It's much more resilient, open, unbiased and successful than social media. This isn't because they have some super-nuanced distinction between moderation and censorship. They never really needed to split that hair.
They have a model for collaboratively editing an encyclopedia, including lots of details and special cases that deal with disagreement, discontent and ideological battlegrounds.
They also have a different organisational and power structure. Wikipedia doesn't exist to sell ads, or some other purpose above the creation of an encyclopedia. Users/editors have a lot of power. Things happen in the open.
Between those two, they've done much better than Alphabet/FB/Twitter/etc. Wikipedia is the ultimate prize for censorship, narrative wars, disinformation, campaigning, activism and such. Despite this, and despite far fewer resources it outperforms the commercial platforms. I don't think it's a coincidence.
I can point to particular pages that have failed in providing an accurate representation of the subject and are under the control of activists or interested groups.
I also get a bit tired of looking someone up and it has "so and so says this person is <insert bad thing>", claims that usually stack up about as well as that SPLC claim against Maajid Nawaz[1] did.
Given this, I find it hard to see how they're doing better than the other companies you mention.
I agree. If Twitter reopened its API (properly) then "userland" moderation tools would (could) be easier to implement and that might tackle this problem.
Is your concern answered by ensuring that the default experience of a forum is moderated, until someone explicitly takes off the covers? Otherwise I can't fathom where you disagree with GP.
What does that mean? Comments with a certain tone set the mood anyway. That's how it works in general. It's just that you'd prefer certain moods over others.
Then pick a different medium. A single tabloid doesn't tarnish every single news industry. Rude mailing lists don't invalidate private email conversations. Also, conference calls are a thing.
Anyway, analogies are imperfect, please look in the direction where I am gesturing, not at my exact words.
The point here (and of the entire conversation) is that you shouldn't judge a medium by its worst imaginable actors as long as you're given the right tools that allow you to use that medium undisturbed, effectively putting them into a different silo.
Today twitter allows a very crude, imperfect approximation of this by following people that post decent content and setting the homepage to "latest posts" instead of "top tweets". Ideally we'd have better tools than that.
But the thing is, there's no "outrage Twitter" that's distinct from "calm Twitter." There's just Twitter. Since the value of a social network is in its population, the natural inclination will be towards a reduction of networks, not a proliferation of them.
I've been a reader and participant of the Bogleheads forum (on investing and personal finance) for well over a decade, and I think it's a shining example of good moderation in practice. My observation is that the key is preventing discussion from getting heated; really moderating the temperature (or noise intensity, excitement, etc.) of the place. Threads on topics that are known to get heated (including topics that can be relevant to investing and personal finance, such as politics and macroeconomics and cryptocurrencies in recent times) immediately get locked. Threads on topics that start out OK but end up getting heated also get locked, or the heated content gets surgically removed if main discussion can be salvaged. So the focus is entirely on high-quality information and thoughtful, low-emotion points and counter-points to think about. The merits of the discussions continue to attract and retain new fantastic participants who make them even better. Edited to add: The practice of moderation here at HN is similar, and has similar effects, which is why I'm here.
If they started removing low-emotion information and discussions that just didn't fit the Bogleheads philosophy, I think that would cross the line into censorship.
Anyway, it's clear that the Bogleheads forum model is the polar opposite of where Facebook, Twitter, and Reddit have gone to suck in the masses and increase their engagement by highlighting the most heated stuff and throwing gasoline onto the fire with likes, votes, and retweets. I think the mainstream social media companies have put themselves into a bind with this.
To be fair, the general civility of the members of Bogleheads is the polar opposite of that at FB/Twitter/Reddit.
Maybe moderation is part of that, but I’d argue the subject matter is already generally less polarizing/toxic than what’s on the other three platforms.
But your point is still valid re: how Bogleheads does moderation.
You're right, the people who are generally attracted to Bogleheads in the first place, and furthermore the people who are able to participate there without getting removed, are probably quite civil under their FB/Twitter/Reddit handles too. Or they're people who won't go anywhere near FB/Twitter/Reddit.
What's obviously very hard is allowing discussion of politics, macroeconomics, religion, race, etc. without it getting heated. Bogleheads doesn't even try.
> And it would make the avoid-harassment side happier, since they could set their filters to stronger than the default setting, and see even less harassment than they do now.
I highly doubt it.
I’m pretty sure typical harassment comes in the form of many similar messages by many different users joining a bandwagon. Moderation wouldn’t really be fast enough to stop that; indeed, Twitter’s current moderation scheme isn’t fast enough to stop it. But the current scheme is capable of penalizing people after the fact, particularly the organizer(s) of the bandwagon, and that creates some level of deterrence. An opt-out moderation scheme would be less effective as a deterrent, since the type of political influencers that tend to be involved in these things could likely easily convince their followers to opt out.
That may be a cost worth paying for the sake of free speech. But don’t expect it to make the anti-harassment side happy.
That said, it’s not like that side can only tolerate (what this post terms as) censorship. On the contrary, they seem to like Mastodon and its federated model. I do suspect that approach would not work as well at higher scale - not in a technical sense, but in terms of the ability to set and enforce norms across servers. But that’s total speculation, and I haven’t even used Mastodon myself…
Yep. I've got a small number of friends who are regularly harassed on Twitter (and elsewhere). One for their professional work on climate change. One for their professional work on racial history. And one for their transgender status while being professionally associated with a field that attracts an unusual number of alt-right people.
There are occasional repeat harassers. But the usual situation is "somebody posts about one of my friends to their circle and suddenly a gazillion hate messages arrive from a gazillion different people." The only option to prevent this would be "see zero dms or comments on your posts by people you haven't explicitly allowlisted," which works badly if communicating with an ever-shifting professional network is a part of your job.
Well but that would be great, I'm too young to have been part of that magical usenet era that people talk about with such reverence, and these days I have no idea how to get on there, if there's anyone on there I might care about, or whether it even still exists. So a new usenet with extra steps (and, uh, accessible through the http protocol) sounds great to me.
USENET never really solved spam. I remember the statistic that at one point one third of usenet traffic was spam and another third was spam cancels. Here's what people felt at the time: https://www.eyrie.org/~eagle/writing/rant.html
(the "cancel" message was hilarious since when invented it was unauthenticated, i.e. anyone could delete any post on any group in USENET! This had to be fixed:
> Moderation wouldn’t really be fast enough to stop that
Social media keep using this excuse for not trying. We can moderate spam in emails with a simple naive Bayes classifier, so why don't we just do that with comments? It could easily classify comments that are part of a bandwagon and flag them automatically, hiding them or queuing them for human review.
We are able to moderate email, but the concepts we use to do so are never applied to comments. I don't know why; this seems like a solved problem.
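For concreteness, here's a minimal sketch of that technique applied to comments; the training examples and labels are invented, and a real system would need far more data plus human review:

```python
# Naive Bayes comment classifier, same idea as an email spam filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training = [
    ("thanks for the detailed explanation", "ok"),
    ("great point about index funds", "ok"),
    ("everyone go tell this idiot what you think of them", "abusive"),
    ("you people deserve everything that's coming to you", "abusive"),
]
texts, labels = zip(*training)

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

def flag_for_review(comment: str) -> bool:
    """Flag (not delete) comments the classifier thinks are abusive."""
    return clf.predict([comment])[0] == "abusive"
```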
If you're trying to use the SPAM model as some kind of example of success I believe you may have already failed.
In SMTP servers I've managed for clients we typically block anywhere from 80 to 99.999% (yes 10000 blocked to one success) messages. I'd call that MegaModeration if there was such a term.
And if you think email spam is solved then I don't believe you read HN often as there is a common complaint of "Gmail is blocking anything I send, I'm a low volume non-commercial sender"
In addition email filtering is extremely slow to react to new methods, generally taking hours depending on the reporting system.
Lastly, you've not thought about the problem much. How are you going to rapidly detect the difference between a fun meme that spreads virally versus an attack against an individual. Far more often you're going to be blocking something that's not a bad thing.
Fair concerns, but I have trained a naive Bayes classifier on Twitter data in the past, using a social study of categorised tweets[1] to train the classifier, and got around 85% accuracy. It was able to detect and properly classify rape threats as abusive but conversations about rapeseed oil as non-abusive. Considering the small data set and how little entropy there is between samples, I consider it pretty useful.
I get that no machine learning is 100% perfect which is why it should be used as an indicator rather than the deciding factor.
I have had issues with Gmail blocking emails, but as you point out it was always because of IP reputation, not overzealous naive Bayes.
Training classifiers can also go off the rails under adversarial attack. This commonly showed up in our systems when people sent short emails that were more ambiguous. For example, it tends to cause problems when malevolent users adopt dogwhistles, co-opting the language of the attacked group; the attacked group commonly becomes the one getting banned/blocked in these cases.
Okay, I actually did laugh out loud a little at the ‘we are able to moderate email’, bit.
Spam filters are probably one of the single most consistently unreliable pieces of software I ever have to use; regardless of the email provider; or email client I use.
I have to check my junk folder like it’s my inbox.
On both Apple Mail and Outlook; with two different emails - email money transfers (EMTs) will get shoved in my junk box; despite the dozens of times I have marked said emails as not junk.
I’ll get spam emails, but I don’t get mail from newsletters I’ve actually signed up for.
Like…if you’re trying to use spam emails as an example of success; and even a model we should follow for…anything else; I’m going to laugh you out of the room and tell you to keep me the hell away from whatever tools you want to use with that technology.
Spam filtering software for email is at best useless and at its worst mind-numbingly frustrating. It's a tool I'll never trust.
That's the current system. ML plus humans to remove harmful content. And people like Elon are extremely upset about this. Heck, you even see the GOP complaining about spam filtering on gmail. Hard to say that this is a solved problem that everybody agrees works well.
A somewhat flawed solution to a harassment campaign is whitelists.
This is also the approach celebrities in general need to take, as they get drowned in messages (Elon Musk could spend 24/7 reading messages sent to him and still read only a tiny fraction), so harassment can probably be solved by whatever solution we come up with for celebs.
You can make some fairly elaborate "allow" rules depending on why you might want to read messages from non-contacts, like "this person is a contact of a contact", "this person has 'IT' in their Twitter bio", or "this person is on the 'good person' whitelist that Mr. Whitelist maintains for the community".
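As a sketch, such rules compose pretty naturally; the contact graph, bio check, and community whitelist below are all hypothetical stand-ins:

```python
# Hypothetical "allow" rules for messages from people you don't follow.
def allow_message(sender: dict, my_contacts: set, community_whitelist: set) -> bool:
    if sender["id"] in my_contacts:
        return True                                   # direct contact
    if my_contacts & set(sender.get("contacts", ())):
        return True                                   # contact of a contact
    if "IT" in sender.get("bio", ""):
        return True                                   # keyword rule on the sender's bio
    if sender["id"] in community_whitelist:
        return True                                   # Mr. Whitelist's curated list
    return False                                      # everyone else waits in a requests folder
```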
It seems to me this piece makes lots of assumptions that do not really apply to social media, where I'll use Twitter as the example, just like the author did.
People don't really have "conversations" on social media. Most activity is posting updates about oneself, but the far bigger part is screaming at each other and playing engagement games.
The goal of all this activity is not to debate, converse or exchange information. The goal is to win by being maximally controversial, as that's the behavior that is rewarded.
As such, Twitter is opposite to real life. If you'd talk and behave as people do on Twitter in the real world, you'd be ousted in a day or may even wake up in the hospital.
In a dynamic where bad faith is the default, you can't apply good faith principles.
It's massively complex to address. On the one hand, you have almost no accountability regarding your speech on Twitter, yet incidentally too much: mob attacks / cancel culture. Too free and too restricted at once.
Personally, I think what you can and cannot say is a massive distraction from the real issue: what gets amplified. Reasonable conversation is pointless and hot takes win. It should be the opposite, just like in real life.
> The goal of all this activity is not to debate, converse or exchange information. The goal is to win by being maximally controversial, as that's the behavior that is rewarded.
> the real issue: what gets amplified
But it can be (at least partially) fixed if you change the optimization function.
And Twitter's Birdwatch (that Elon recently got all excited about when it fact checked the White House: https://twitter.com/metaviv/status/1587884806020415491) actually does this "bridging-based ranking" for adding context on tweets.
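To make "change the optimization function" concrete, here's a deliberately crude caricature of bridging-based ranking (not Birdwatch's actual algorithm, just the core idea of rewarding approval from otherwise-disagreeing groups):

```python
# Toy bridging score: an item only ranks highly if users from *different*
# viewpoint clusters all rate it positively on average.
from collections import defaultdict

def bridging_score(ratings, user_cluster):
    """ratings: list of (user_id, +1/-1); user_cluster: user_id -> cluster label."""
    per_cluster = defaultdict(list)
    for user, vote in ratings:
        per_cluster[user_cluster.get(user, "unknown")].append(vote)
    if len(per_cluster) < 2:
        return 0.0                     # no cross-cluster support, no boost
    # take the worst cluster's average, so every side has to approve
    return min(sum(v) / len(v) for v in per_cluster.values())
```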
I think this view is hopelessly naive in the real world. You could easily make "moderation" (as defined by the article) the same as "censorship" (as defined by the article) by producing a lot of false, semi-false and nonsense content to drown out the signal in the noise.
Our concepts of free speech, censorship and moderation are simply outdated on modern social media - when you have systems designed to encourage and spread low-effort, novel, emotional and manipulative content (e.g. twitter), no amount of "tweaks" to such systems can fix the problem.
Instead of trying to fix systems originally designed for marketing, why not actually design systems meant for disseminating and checking information from the ground up? I bet that would look way different compared to twitter or facebook.
It doesn't have to involve moderation or censorship - it could just mean giving a disproportionately more powerful voice to experts willing to explain disinformation, for example (rather than having their voice drown in the retweet popularity contest).
> Our concepts of free speech, censorship and moderation are simply outdated on modern social media - when you have systems designed to encourage and spread low-effort, novel, emotional and manipulative content (e.g. twitter), no amount of "tweaks" to such systems can fix the problem.
Hilariously enough, this is what people said about the invention of both writing and the printing press.
And it was true. Media changed what free speech means; the "yellow press" almost caused wars. Both humans and media have had to learn how to do better with this, and we are struggling to get accurate, unbiased content that isn't controlled or at least partially coloured by special interests.
I have no doubt we can do better, if we actually tried to build social media with the right tools and incentives.
No, he isn't spot on. Printing presses spread ideas and misinformation at a much more human speed than today's internet-based platforms, so they're a poor analogy for the problem we're discussing.
(I didn't down vote him, BTW. His comment is relevant)
I dunno, Europe post the printing press was pretty crazy for a few decades. More generally I think that it's the rate of increase that causes the problems, not the baseline.
What does free speech mean if its drowned in novelty, half-truths and a popularity contest? Is it free speech if the network itself blocks people from elaborating by artificially limiting the number of characters, thereby causing only the most provocative and shallow takes to take hold?
I'd start with slapping Google and Apple really hard to stop them from even thinking to remove apps from their platforms that they don't like for one reason or another.
>Instead of trying to fix systems originally designed for marketing, why not actually design systems meant for disseminating and checking information from the ground up? I bet that would look way different compared to twitter or facebook.
There's a solution for this, based on prediction markets. Essentially experts make "bets" on various things and are rewarded for correct predictions. The more correct predictions they make, they more "points" they have to get their viewpoints broadcast. And conversely, quacks and charlatans that cannot model the world scientifically make few accurate predictions and get drowned out.
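A toy sketch of how such a reputation loop might work; the point values and payout rule are completely made up, just to show the mechanism:

```python
# Hypothetical prediction-market reputation: correct bets grow the points
# (and hence the "broadcast weight") of a forecaster; wrong bets shrink them.
from dataclasses import dataclass, field

@dataclass
class Forecaster:
    name: str
    points: float = 100.0
    history: list = field(default_factory=list)

def resolve_bet(f: Forecaster, stake: float, was_correct: bool, odds: float = 1.0) -> None:
    """Correct bets pay out stake * odds; incorrect bets lose the stake."""
    f.points += stake * odds if was_correct else -stake
    f.history.append(was_correct)

def broadcast_weight(f: Forecaster) -> float:
    """How loudly this person's takes get amplified: points scaled by accuracy."""
    accuracy = sum(f.history) / len(f.history) if f.history else 0.5
    return f.points * accuracy
```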
I'm not sure I buy that the MVP here is actually "viable". Suppose you're Reddit, and you have FatPeopleHate on your site, and you "ban" it, in the sense that you hide it from users who have not opted-in. Does that really provide the same level of enforcement as a true ban? It seems to me that the presence of that community on your site has effects that spread beyond the community itself, it shapes the way people interact even outside the soft-banned subreddit.
I'd be willing to bet that if you could somehow run an experiment in parallel where you had one Reddit with real bans, and one with soft bans, the quality and nature of interactions on the soft ban one would be much, much worse even outside of banned communities.
> It seems to me that the presence of that community on your site has effects that spread beyond the community itself, it shapes the way people interact even outside the soft-banned subreddit.
Reddit, itself, is, or at least used to be, a variety of diverse communities. I don't care about either /r/FatPeopleHate or /r/FatPeopleLove I don't consider myself a part of those communities. I subscribe to subreddits I want to track, and I am not a member of the ones that I don't.
Surely people on the disagreeable side of the psychological spectrum will gravitate towards some communities and people on the opposite side of that spectrum will gravitate towards other communities. Some communities are cross-cutting, and so have to be moderated in a different way altogether (which Reddit already accommodates). Other than that, communities have their own social protocols. Creating blanket rules / bans / restrictions across communities restricts the organic nature of human interaction and hamstrings it in a rather depressing way.
Many problems arise in the battle for "front page" or "trending" screens that try to blend content from multiple communities and invite competition or "raids" or what have you from opposing sides. Personally I hate such things, I have no desire to be manipulated by them and use browser extensions to block them. But given that they exist, it's again mainly a moderation / preferences problem. Give the control to the user over what they want to see.
> the presence of that community on your site has effects that spread beyond the community itself, it shapes the way people interact even outside the soft-banned subreddit
In my experience, this depends on the community size.
In a small community, "platforming" assholes (rather than "deplatforming" them) may act to retain the assholes as users (where they would otherwise leave for lack of platform); and then those asshole-users, since they're already there, may also interact in other, non-quarantined subforums on the site — to other users' detriment.
In a large community (society, really), e.g. Reddit or Twitter, the asshole-users are going to stick around either way, since people have multiple interests, and the site likely already gives them many other things they want besides just "a platform to talk about their asshole opinions on." They weren't there primarily to be assholes; they just are assholes, but are going to stick around either way.
So, for large sites, the only real decision you're making by quarantining vs banning a certain sub-community that's full of assholes (rather than doing active moderation of certain discussion topics, regardless of where they occur) is whether the asshole-users' conversations mostly end up occurring in the quarantined forum, or spread out across the rest of the site where they can't be hidden.
It's a bit like prostitution regulation. Prostitution is going to happen in a city no matter what; it's just a question of whether such activity is "legible" or "illegible" to city government. Some cities choose to have an explicitly-designated red-light district and licensing for sex workers; these cities at least ensure that any activity associated with prostitution — e.g. human trafficking, gang violence, etc — occurs mostly within that district, where police presence can be focused. Most cities, though, choose to "protect their image" by having no such district. This option does not result in less prostitution; it only hides it throughout the city, making police investigation of crimes related to sex work much less likely to be reported, and much more difficult to investigate.
You're not only correct, but Scott effectively acknowledges this. One of the comments on his own Substack is from a person pointing out that his own moderation policies are not in line with this proposed MVP. He has rules that more or less reflect his personal preferences and bans anyone who doesn't follow them (though he leaves up the offending comment).
When asked why, the first reason is he uses Substack and they don't offer this as a feature, and when he was on his own self-hosted site, he didn't have the technical skill to implement it himself. But then he says that even if he could, he wouldn't, because he wants his community to reflect a certain ethos and character that creates a community he actually wants to be a part of.
How he doesn't see the contradiction here, I don't know. But this gets at the core of the issue. Virtually nobody actually wants a free-for-all, even one that is opt-in. But whereas some blogger with a devoted following is allowed the freedom to cultivate his garden, as he has described it in the past, when you get as big as Twitter, the larger public starts to feel like it's their community and they should get to decide, not the owners, or that it's even so big as to be this "de facto public square" people keep calling it, and now it has to just follow the same rules as a government, even though it is a private platform owned by people with their own preferences for what they think the character of the platform should be.
The only fairly large platform I can think of that really did adopt a more or less "anything that won't get us shut down by the FBI is fair game" policy is 4chan. But if everyone who is so hung up on Twitter being too restrictive is mad about Twitter's policies, 4chan still exists. Why not just go there? You can't even say it doesn't have reach. There are plenty of users there. Meaningful real-world movements have started there. The only thing you lose is that the real news doesn't follow and write about 4chan nearly as much as they do Twitter.
> It seems to me that the presence of that community on your site has effects that spread beyond the community itself, it shapes the way people interact even outside the soft-banned subreddit.
This is sort of talking around an argument. You could say the same thing about a subreddit dedicated to re-electing a local alderman because of his policy on the maintenance of public parks. Speech is meant to inform, or to effect change.
The question is whether you're going to use an online annoyance argument to moderate controversy on a platform. If the justification for why you're going to moderate speech is that people who are not annoyed by that speech might react to it, you've moved squarely into making "genuine arguments for true censorship: that is, for blocking speech that both sides want to hear."
Right, so Scott presents this alternative MVP version of "moderation", and implies that it is sufficient to satisfy the goal of "ensuring that your customers like using your product". And that therefore efforts beyond that MVP land you into "censorship".
My point is that the actions he categorizes as "moderation" are in fact not sufficient to achieve the goal. Thus, even a platform who is purely concerned with providing a service will need to undertake actions he categorizes as "censorship" (or at least would have to come up with some unknown new system of moderation, since the one he proposes is insufficient).
Reddit does have a soft-ban process where they place subreddits in "quarantine". You have to click through a warning and/or subscribe to the subreddit to see its contents. AFAIK most quarantined subreddits get banned though. The real effect is to migrate content to other sites.
This very site is the closest implementation I’ve seen to what he described in the article. Moderation actions that are still visible for those who opt in.
I personally opt in to see all the flagged/dead comments. I would say 1-5% of them are mysteries as to why they are dead (meaning they have been automatically killed, not killed due to other users flagging).
But then the ones that deserve to be killed/banned are so, so egregious.
It’s just unfortunate that there will always be innocent people that get roped in with automatic moderation.
>I would say 1-5% of them are mysteries as to why they are dead (meaning they have been automatically killed, not killed due to other users flagging).
Some of these are people who have been shadowbanned, you can check by looking at all their comments in their profile and if they're shadowbanned (almost) all of them will be dead.
I agree with you, and that's why I'm very interested in a job opportunity that sells moderation as a service.
As a former die hard member of [R]age Board for the Elites, I remember the use of the N-word being so prolific a moderator changed it to display “ice cream” with no work around. Seeing a post suddenly pop up calling somebody a stupid ice cream was hilarious.
I was a teenager in the wild days and Chan doesn’t disturb me even, I just don’t go there. Borderland Beat is real enough.
As an adult I feel that the ban hammer is an absolute necessity.
> I would say 1-5% of them are mysteries as to why they are dead (meaning they have been automatically killed, not killed due to other users flagging).
More than likely shadowbanned accounts. Some green accounts also seem to be automatically [dead] until they get vouched for, but I'm not sure of the exact situations around that.
I don't want moderation OR censorship. I want proper labeling so that I can self-filter out things that I don't want to see. Just like the labels on cans of food tells you exactly what ingredients are found within the can so you can skip the ones with peanuts, gluten, or some other substance you don't want; I want labels (and levels) on data.
If I don't want to see profanity, I should be able to set my filter to exclude profane comments. If I don't want to see nudity, I can set that filter too. Just like movies get a certain rating (G, PG, R, etc.), we should be able to properly label data.
> I want proper labeling so that I can self-filter out things that I don't want to see.
This is basically what he describes in the article as a form of moderation:
"If you wanted to get fancy, you could have a bunch of filters - harassing content, sexually explicit content, conspiracy theories - and let people toggle which ones they wanted to see vs. avoid."
Who gets to choose what is labelled sexually explicit or a conspiracy? At the beginning of the war in Ukraine there were accusations of bot farms mass flagging people's streams as containing adult content to get them silenced.
Labels themselves would become an ideological battleground. Imagine what reddit would look like if anyone could freely and publicly tag a post with something like "Libtard" or "Chud".
Subjective labels being applied subjectively isn't as big of a problem though, since it'll probably be clear to most people just from the name that they're subjective. More objective labels will naturally be more useful, and therefore more popular. The only people who are going to use "Libtard" as a filter are those actively trying to create an echo chamber for themselves.
If your concern is with the labels themselves being used to convey a (possibly offensive) message, I think you could just have a way for people to hide specific labels and never see them again. Or maybe a way to label the labels as subjective, or just delete ones that are obvious flamebait.
I was thinking along the lines of the second scenario where the labels are both the target and vector of abuse. Moderating what labels are allowed opens you up to claims of censorship again.
People love to misuse tools meant for good, on Reddit I've been on the receiving end of the "reddit cares" self-harm notification because of some barely spicy comments.
I'm with you on this. It also occurs to me, though, that by rigorously labeling all offensive content, you give AI a huge boost in creating it with perfect fidelity.
Which inspires another really weird, super uncomfortable thought. If the CSAM producers had cheap, reliable methods of creating their awful content without the use of real people, would that reduce the harm done?
I can't remember the last time I felt so conflicted just asking a question, but there it is.
I hear you. But maybe AI shouldn't even be in the equation. It's expecting a robot to solve a people problem. We try to do this a lot in the technology industry. But no matter how great technology is, it's a mistake to think that it can fully solve social and communication problems.
I would settle for some simple categories like 'Profanity', 'Sex', 'Nudity', and 'Violence', with levels 0 to 9 and specific criteria for each level (zero meaning none, 9 meaning full of it). Let each content author 'self-report' initially, but let others vote it up or down if the author mislabels something.
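Roughly like this, as a sketch; the blend weights and the "readers outweigh the author" choice are arbitrary, just to show self-reporting plus voting:

```python
# Hypothetical self-reported levels (0-9 per category), corrected by reader votes.
CATEGORIES = ("profanity", "sex", "nudity", "violence")

def effective_level(self_reported: int, reader_votes: list) -> float:
    """Blend the author's self-reported level with the median reader vote."""
    if not reader_votes:
        return float(self_reported)
    median_vote = sorted(reader_votes)[len(reader_votes) // 2]
    return (self_reported + 2 * median_vote) / 3   # readers outweigh the author

def passes_filter(levels: dict, user_max: dict) -> bool:
    """Show content only if every category is at or below the user's chosen maximum."""
    return all(levels.get(c, 0) <= user_max.get(c, 9) for c in CATEGORIES)
```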
The article's distinction between moderation and censorship feels like the difference between a freedom fighter and a terrorist - i.e. if you are sympathetic to their cause you use the more positive euphemism, but there really isn't an objective difference.
At most the distinction the article seems to be making is that moderation should be optional and censorship forced - you should be able to choose to see the dead comments if you want (nevermind that that is hardly the norm for "moderation" on the internet).
All I can think is that, under that distinction, McCarthyism in the US would probably be considered "moderation", not "censorship", despite being one of the most egregious examples of censorship in the US in the modern era. So I have trouble accepting that definition.
His definition is tightly tied to digital messaging, where it's technologically trivial and cheap to undo the message-hiding, and scales badly if at all beyond that domain.
> At most the distinction the article seems to be making is that moderation should be optional and censorship forced
I don't think this is the correct distinction he is making. He defines moderation as a receiver being able to choose whether they want to see certain content. He defines censorship as a third-party deciding if a receiver can see certain content whether or not they want it. McCarthyism would be censorship under that definition.
The aspect I was thinking of was how many artists were blacklisted, but still able to publish in less reputable places like Playboy. You could say they were just blocked from high society, but anyone interested was still able to read them.
No it isn’t different. At least it is not different in its mechanics. The activities of a moderator and a censor are identical. The intent and objective contain nuanced differences, but only of degree, not kind.
Both moderation and censorship have the outcome of reducing what information parties are allowed to communicate, and a system is its inputs and outputs, thus they are the same thing.
This isn’t to say that it’s bad or good. The rhetoric of moderation is perceived better than censorship, but this is like the cops calling your interrogation an interview; it doesn’t actually change what’s happening.
That's a very thin line between moderation and censorship. When in doubt about which is which, consider whether it obeys freedom of speech or not.
Deleting a comment because it is insulting a person is moderation. Deleting a comment because you don't like it, it doesn't conform to your views or you find it outrageous, silly, inflammatory, false, fake is censorship.
It's maddeningly difficult to distinguish those thin lines in oneself. I've been working on that for years and it takes all the self-awareness I have. This makes me happy that HN is small (by internet standards) because how do you scale self-awareness? It seems almost an oxymoron.
Your comment brings this out because some subset of "outrageous, silly, inflammatory, false, fake" is right on the cusp, and making those calls (to moderate or not to moderate?) puts tremendous pressure on one's own feelings and beliefs. What helps one do it neutrally is self-awareness, but that is the scarcest thing there is. It takes a decade of hard work to distill a bit more. (Edit: and I'm not claiming to have much; just that it's needed.)
I'm uncomfortable with the "false" / "fake" end of your spectrum because we don't have a truth meter. Who am I to decide what's true or false or "mis" or "dis" for anybody else? I'm not taking on that karma.
"Inflammatory" is easier to work with because it's about predictable community effects and one can moderate for community cohesion. Moderating that way ends up excluding truths that the community can't withstand, but such truths probably exist for any community. Groups may even be defined by what they exclude. We can try to stretch those limits but there's only so much elasticity available.
I blanched when I read "Moderation is the normal business activity of ensuring that your customers like using your product" in the OP, but actually that's basically saying moderation is about community cohesion and I can't disagree. But the secret agenda, on HN at least, is to stretch it.
> ... because how do you scale self-awareness? It seems almost an oxymoron.
Well, in very small doses numbers help. It is easier for a small group to watch the blind spots of each member. As the numbers scale up to serious group sizes things seem to fall apart again as a hive-mind forms.
Which means that the sensible thing to do is to form a committee of intelligent people with good incentives, then go trustingly with what they suggest. Which is, coincidentally, a successful model that governments use. All the politics is generally a distraction from the real work being done by committees.
> It is easier for a small group to watch the blind spots of each member. As the numbers scale up to serious group sizes things seem to fall apart again as a hive-mind forms.
I agree, but that's not self-awareness—that's seeing other people's blind spots, which is much easier, in fact it happens automatically.
You're right that it falls apart at scale. Somehow mass blind spots take over. Can that be mitigated? That is the question.
Group dynamics seem to change qualitatively at each order of magnitude. Maybe the problem of "social media", i.e. internet group dynamics, is that we're dealing with orders of magnitude we've never seen before. That doesn't get worked out in just 10 or 20 years.
>As the numbers scale up to serious group sizes things seem to fall apart again as a hive-mind forms.
At the same time the hive mind is quite often a protective defense against insurgency in forums.
This seems to be a problem with the comments in this entire post. We're treating the community as individuals doing individual things, and in small forums this is commonly true. But when the group grows larger and money is on the line, that assumption should be discarded. In astroturfing, for example, a seemingly large group of 'users' will direct communication on your forums via somewhat 'rational' communication that is possibly disliked by a lot of members. Then you'll notice a group that seemingly counters the astroturf to the level of absurdity, turning more 'hearts and minds' towards the astroturfers (guess what: the counter-turfers were also the astroturfers).
You typically end up with one of two situations. The forum either takes on the ideas of the astroturf group and they become encoded in its ideals, or it fends them off, but in doing so embraces some of the extremism implanted by the astroturfing group in the first place.
Also, what happens to any group when 4chan decides to raid you for the lulz?
By internet standards. We're 2 or more orders of magnitude smaller than the marquee names. My guess (which I don't want to find out by experience) is that the pressure scales non-linearly, so a hundred times the users would mean who-knows-how-many-zeros more pressure.
HN feels mid-size in a good way. Most forums are a lot smaller, and then there are the few famous ones that are much much larger. There aren't that many in HN's order of magnitude. The mid range is a nice place to hang because although the problems are impossible, they're not utterly impossible. You can work with them around the margins.
There's no stats page but last I checked it was around 5M monthly unique users (depending on how you count them), perhaps 10M page views a day (including a guess at API traffic), and something like 1300 submissions (stories) and 13k comments a day.
The most interesting number is the 1300 submissions because that hasn't grown since 2011 - it just fluctuates. Everything else has been growing more or less linearly for a long time, which is how we like it.
Thanks dang. Submissions not growing could mean that the users are already finding and submitting most of the interesting stories out there, and there's not much more to find.
I'd be happy if folks just laid off their truth-o-meters as soon as breaking news erupts. Unambiguously inflammatory comments? Please moderate away. Fake news? Please slow right down until a single research cycle has possibly completed, and evidence can be provided based on rigorous studies (which take time).
> Deleting a comment because it is insulting a person is moderation.
Moderation is far more than that. Moderation depends on context - for example, deleting a comment like "$political-party is better than their opponents" from r/programming is "against freedom of speech", but is an example of good moderation, because political discourse is off topic on that forum.
Moderation is about setting the tone and scope of discussion. For many kinds of forums, this includes deleting comments that the moderators find outrageous, silly, inflammatory, and off-topic. Removing things that don't conform to their own views is however a faux pas; moderators are supposed to be neutral in the on-topic discussions, as their name suggests. False/fake are a more complex discussion, as there is no universal source of truth of course.
Now, for a completely open forum, such as Twitter or Facebook, moderation doesn't really make sense, since no discussion or tone is off-topic a priori (except of course for removing illegal speech).
But blocking political speech equally from one subreddit is different than, say, blocking an article about a presidential son's laptop from multiple entire platforms.
As I said, I don't think this concept of moderation works when applied to an open discussion forum such as Facebook or Twitter, where nothing is actually off-topic, and where people generally feel that they are communicating with friends and followers and can thus set any tone their own community accepts.
But still, if the Hunter Biden laptop story were removed from the Linux Kernel Mailing List, from StackOverflow and from LWN.net (entire platforms), I wouldn't accuse these particular platforms of censorship.
"Obeys freedom of speech"? What does that mean? Particularly, what does it mean outside the US? I don't think Scott's definitions are great, but they're at least cutting across the right axis.
> Deleting a comment because it is insulting a person is moderation. Deleting a comment because you don't like it, it doesn't conform to your views or you find it outrageous, silly, inflammatory, false, fake is censorship.
I can't help but think this comes down to: it's censorship if I disagree, moderation when I agree.
If someone posts something like "Elon Musk is an idiot", would deleting it be censorship or moderation? Musk is a person after all, but I suspect that many people would say deleting such a thing is censorship.
To give an example more realistic to Hacker News: if there were a story about some company reselling modified GPL software for profit without following the GPL license, I would probably call them "bad" people. This is clearly an insult. I still think it would be a reasonable comment to make (hopefully a bit more fleshed out than just calling them evil, of course).
Moderators will delete comments or ban people for insults against people they like/support, but then cheer on more aggressive insults against people they deem bad (it's suddenly become very trendy to lash out at Elon Musk on Reddit, for example - you'll score upvotes rather than subreddit bans for doing that)
I feel like something happened when the Internet became more "mainstream" that meant that I couldn't simply accept walking away from a platform.
Back in the day we might have a forum or a number of forums on a topic. Let's say it's Nascar Forums or whatever. I might not like the opinions of the moderators and that'd be that, I'd leave the site. I'd recognize that it's not reflective of the wider world.
Somehow Twitter and Reddit and various other social networks don't feel like that. I feel quite often that some subset people around me take opinions from Twitter etc. as being reflective of mainstream thought. When really it's still just a tiny microcosm of humanity.
I don't really use them any more, but I still have the sense that all sorts of social movements and bizarre (from my perspective) opinions and value frameworks are being born and spread there.
> Somehow Twitter and Reddit and various other social networks don't feel like that. I feel quite often that some subset people around me take opinions from Twitter etc. as being reflective of mainstream thought. When really it's still just a tiny microcosm of humanity.
More and more often I find myself in conversations with people and suddenly I feel like I'm reading a reddit or twitter thread, and I'll see the conversation follow an exact path that I've seen before, only now in real life.
It's a really strange feeling as someone who has been reading people's opinions on the internet for what feels like forever, and just recently seeing those opinions show up in everyday conversations with people I've known all this time.
Edit: It's especially jarring when you see these people say things like "in my opinion" or "i think", because now I start to wonder "do you really? Or did you just see somebody else say that?" Not that all my thoughts are original, but I don't take credit for things like that.
This also presumes the site being 'moderated' is operating in a vacuum.
The reason why moderation exists on sites like Twitter / Facebook is due to:
1. Laws (e.g. child porn, abuse, harassment, illegal speech like Tiananmen square in China).
2. Advertisers - the real customers.
3. Public opinion
In that order, with a pretty big gap between #2 and #3. Don't comply with laws? Out of business tomorrow. Don't do what advertisers want? Out of business this year. Don't do what the general public wants? Maybe out of business in a few years, maybe not, it depends.
The methods proposed in this post are great for dealing with issues around public opinion, but do very little to appease governments or advertisers.
If it is not on your server, then you don't get to complain.
Moderators were initially tasked with keeping threads on topic, enforcing predefined community standards, and parsing irrelevant detractors.
Dark patterns are now mostly just used to manipulate people and to bring conduct into line with groupthink biases. This policy drives up engagement, traffic, and profits.
Most truly smart people I've met were often rather prickly characters, more concerned with data than being popular. ;)
> If it is not on your server, then you don't get to complain.
That is a rather naive slogan to be repeating in 2022.
Once your servers become the de facto public square, we absolutely get to complain. Not even talking about how your server is running on top of a huge amount of infrastructure that was created by our society, enabled by principles and laws that have been discovered and refined across generations. Your server does not exist in a vacuum.
Democracy requires a healthy public square to survive and thrive, and that is more important than some overly simplistic notion of private property.
> Once your servers become the de facto public square
Twitter ? It's far from a public square, even in the US (outside of the US it barely exists)
Also, if an online public square is a prerequisite for democracy, it should be a public utility, not something owned by a company whose incentives are diametrically opposed to the interests of the users.
I live in Europe and I can tell you that Twitter absolutely became the public square for politicians at all levels of government, in the various countries and in the EU. It is also where all journalists hang out, and many other type of actors too (academic researchers for example, all sorts of activists, etc). And I am pretty sure it is the same in the US.
I'm sorry but you do not know what you are talking about.
> I'm sorry but you do not know what you are talking about.
Big claim...
Look at the stats: how can it be a prerequisite for democracy and a public square when not even 10% of your country's population is on it? (And probably 30% of those are bots, and another 30% are inactive.)
It's an online bubble of polarised people looking for attention, not a public square and not representative of anything
> It's an online bubble of polarised people looking for attention, not a public square and not representative of anything
I find that people who write things like this are expressing a distaste for politics taking place on Twitter. They tend to be expressing how they think things should be, and I can even sympathize, but that has no bearing on how things are.
How things actually are: the vast majority of politicians of any importance in the West (and probably not only there) have staff dedicated to maintaining their Twitter presence. Most institutions use Twitter as an important communication channel, including the various institutions that make up the EU, the USA and the UN. You can check this for yourself. If you actually talk with journalists, you will understand that Twitter is now central to everything they do, and that trickles down to everyone else.
10% is actually a huge number. The majority of people do not have a public voice nor an interest in actively participating in politics. Politics is made of "polarised people looking for attention". Always has been.
You have to be living under a rock to not be aware of all the major geopolitical incidents that take place on Twitter. It was the main communication channel for Donald Trump. We just recently witnessed the incident between Musk and Zelenskyy. A lot of interactions happened there during Brexit. I could go on, and on, and on.
You must confuse noise and mumbling rants with actual politics then
It's not a public square of constructive discussion, it's a public square in which everyone, most of them being absolutely incompetent/uneducated in the subject, has a megaphone and screams their version of reality
Not really: a bunch of politicians and journalists decided to go to some rich guy's private mansion, and now we get to see the fallout of using private property rather than ensuring actual publicly owned places exist.
So are we talking about US companies? Because the US has a very, very strong right to private property and to freedom of (and control over) speech on private property. I don't see this changing anytime soon, nor do I even have an idea of what a legal model of what you're suggesting would look like in the US.
Outside the US this problem typically gets even worse. For one, why is your country depending on a (generally) US company for its freedom of speech? And two, outside the US freedom of speech laws are typically significantly different than the US model.
I didn't claim that it was (or wasn't) a free speech issue. What I said is that "if it is not your server you don't get to complain" is an overly simplistic (and untrue) take on this issue.
In fact there is already some amount of legislation on this, for example in the EU.
There is a lot to unpack, but monopolistic competitive firms tend to consolidate markets. However, reducing all phenomena to a false-dichotomy is a common bias, as it ignores how some prefer an echo-chamber of sycophantic content.
Have a wonderful day, and here is an up-vote... lol... =)
You're posting this on a heavily-moderated forum. Which "Dark patterns" do you think are in use here, and why do you tolerate it vs. having the same discussion on 4chan or wherever?
This kind of comment always runs afoul of reality. Moderated discussion is the rule and not the exception because it works. It results in more desirable content that attracts more users. Parler and Gab and Truth[1] couldn't beat Twitter; 4chan couldn't beat reddit, 8kun couldn't even beat 4chan. Going back farther, USENET was fundamentally unmoderatable by design, and it drowned itself in a torrent of spam.
The less moderation, the less utility. Everywhere.
>If it is not on your server, then you don't get to complain.
Yeah this worked 20+ years ago but it doesn't work now.
There are a small handful of monopolies of places to go on the internet. You are basically suggesting "well make your own forum and then you can control it!". Come on, you know that doesn't make any sense in this day and age.
>Most truly smart people I've met were often rather prickly characters, more concerned with data than being popular. ;)
I've also found people who think they are smart but really don't know what they are talking about because they are living in a complete bubble and ignoring reality tend to end their posts with smiley faces.
First: the dictionary defines censorship differently. AstralCodexTen's definition even seems to ignore the fact that e.g. Zuckerberg and Musk are very much "people in power". And it adds "customers" to the definition. In what perverse mindset is a speaker a seller?
Second: is this about freedom of speech? If it is, say so, because neither moderation nor censorship exclusively defines that. Muddying the debate by giving some weird definition of two concepts isn't going to help.
I want to understand this but don't. Can you explain? Specifically I don't understand the bit about definitions and the bit about freedom of speech. Aren't all moderation and all censorship about freedom of speech, or the lack thereof?
I'll try to elaborate. The article distinguishes between moderation and censorship. It does so on the basis of two definitions the author made up. However, a discussion about either these definitions or the distinction between them doesn't make much sense per se, unless the elephant in the room is freedom of speech. So, I wondered: is this article really about that? The examples at the end would suggest it is.
But freedom of speech is not neatly defined by (the negation of) the definition of censorship or moderation, whether the one from the dictionary or the one from the blog post. It's a term that (in the USA) is defined by law and jurisprudence and is open to some debate, and in other places is simply absent and used loosely in debates about reform.
If the author wants to use his/her definitions to state a position in one of those debates, fine, but say so.
Everyone involved wants to characterise things their own way. For example this article says that it’s the “pro-censorship” side that conflates moderation and censorship. In my experience this also happens the other way around for similar but opposing reasons but this is ignored because the author has picked “a side”.
So we get lots of local definitions of censorship and moderation depending on the flavour of views the writer wants to present. They all tend to be reasonable in context but mean everyone is talking past one another.
Essentially everyone is trying to argue over the ground of what moderation should be so it doesn’t get lumped into the “evil” censorship. But because this is largely just opinion everyone tries to make theirs look more official and factual.
> Aren't all moderation and all censorship about freedom of speech, or the lack thereof?
Freedom of speech is a right that concerns you, a citizen, and public authorities. Censorship, in return, is when that right is interfered with by [the public authority] blocking your speech.
Moderation is when a [private entity] is blocking your speech. There is no public right that is interfered with in that case. You have the right to say what you want without public authorities interfering, but you don't have the right to say what you want in my house.
(Note: this definition is different from the one used in the article)
Whether or not Twitter is infrastructure, and therefore whether moderation equals censorship, would be a different debate.
This is a very limited, legalistic point of view, and only applies in the USA.
In reality, freedom of speech is a principle that is more or less well-defined, and that is more or less codified into law in certain countries (the First Amendment to the US Constitution being the most famous example, but many countries have similar, though more limited, rights).
Viewed as a principle, it not only applies to the relationship between the individual and the government, it can be applied to all human groups. We can say that WeChat is worse for freedom of speech than Facebook, even though both are private enterprises and are not within the scope of any freedom of speech laws in most jurisdictions.
The reason freedom of speech is viewed as a virtue, at least in European-inspired thought, is not exclusively the relationship between the citizen and the state - it is about ensuring good ideas are heard even if that means bad ones are heard as well, ensuring that unpopular bad ideas of powerful people (within some group) can be challenged by the majority of the group, ensuring that minorities who are harmed by some decision get a chance to let everyone else in the group know about the harm.
These apply just as much to you vs the state as they apply to you vs your local church, your village, your tribe, your gaming clan, your company etc. For various reasons, each of these groups may decide that these reasons are not as important as others, while still wishing for some amount of freedom of speech (for example, a church will often not tolerate obvious blasphemy, but may still tolerate criticism of the church leaders, or vigorous discussion of the implications of scripture).
So, I am well within my rights to complain that my state or my company or my church or HN doesn't encourage freedom of speech enough, even though none of these institutions is bound by the freedom of speech clause of the US constitution. Also, I can even claim that the US constitution itself, or the SC interpretation of it, doesn't respect freedom of speech enough if I were to disagree with any decision on the matter - the principle of freedom of speech is separate from the US law.
> This is a very limited, legalistic point of view, and only applies in the USA.
I'm not an US citizen, and it applies to where I live as well, so there's that.
Within your framework, I agree with your views and conclusion though. My post was intentionally targeted as legalistic point of view, but I agree that this can be generalized.
One thing I see missing from the conversation is the fact that large corporations are becoming agents of the state, via fear, regulation, and fear of regulation.
I can see a social media CEO saying "well, I believe in free expression and I would otherwise want to allow this message, but I'm worried what the government will do to me if I do allow it. So instead I will block it."
When a platform blocks some content, some call it "censorship", while others say "hey, it's a private company and they can block what they want to. It's called 'moderation'" - but this may in fact not be something the platform wants to block - it's indirect government censorship.
I wonder how Elon will handle this. Will he be cowed by the government into censoring content they don't like? Or will he ignore them and take his chances?
It's not just the government companies need to fear, it's also other companies. Kiwifarms wasn't taken down by government action, there's been a massive conspiracy between a combination of criminals and infrastructure providers to try to kick them off the internet on the orders of journalists.
I think this is different. I hadn't heard of Kiwifarms before, but I looked it up in 10sec. Looks like they were a forum for people to actively harass others - which has always been and should remain illegal.
"Specific Person is a real jerk! And they probably eat live frogs for fun too! What a loser!" - As long as Specific Person can effectively ignore this - meaning it's not forced into their feed, overwhelming their inbox, etc - then that's probably fine, even if untrue.
"Specific Person might live near XYZ. Definitely DON'T put any turds in their mailbox ;-) " - Pretty clearly an invitation to real-world harassment, which is not okay and should not be abetted by any platform.
In social media, moderation of this style is practically equivalent to censorship because the visibility of the content is determined algorithmically: if the algo decides to "hide" your posts they become invisible (or much less visible) in other people's feeds. The fact that they're still visible for the die hard followers that refresh your page to see them is mostly irrelevant.
Your speech is effectively censored by the moderators because you cannot use that tool as intended, to reach audiences with the same ease other types of speech can.
It's like requiring a newspaper to "moderate" all articles by a certain author by publishing them in the form of a short notice: "If you are interested in the writings of Mr. Smith, who may or may not have published an article for today's issue, then please send a self-addressed envelope to PO Box ...."
>Your speech is effectively censored by the moderators because you cannot use that tool as intended, to reach audiences with the same ease other types of speech can.
But you can still stand outside your home and say whatever you want. You can print up and distribute flyers. You can set up your own web site too.
While I despise the business models and actions of pretty much all the "social" media actors, they are not required to provide you with a platform.
You can still say whatever it is you want to say, but those private actors have no responsibility to act as a megaphone for you.
True in the absolute, but irrelevant. Censorship is always contextual; in practice, censors fall short of the 1984-ish "ideal" of total domination of the individual.
This dichotomy (free speech for all, but no one is required to offer a platform) works in liberal societies because you have a diversity of publishers.
You can't come to my house, sit on my couch, eat my food and watch my television unless I say you can. If you try to do so without my permission, I'm within my rights to throw you out and, if you resist, use force to remove you.
How is usage of a private company's private server resources any different?
> How is usage of a private company's private server resources any different?
If you push that argument to the extreme, why should anyone be allowed to publish anything in our county? They can always go someplace else and speak their mind, but we don't want it around here.
What I'm saying is that there exists a gradient of power between the dictatorial (state censorship) and the inconsequential (your couch), and we have a social contract that allows the same suppression of speech in the latter but disallows it in the former. We call the consequential type "censorship", but it's the same basic action, and there is a gray area where private agents working under the authority of the state can be just as powerful censors as the state itself.
For example, if private banks deny services to a newspaper hostile to the government, I could take your argument and spin it around: they are "free" to publish out of their own pocket, and private banks are not forced to "offer a platform". But we clearly understand that the publication will lose advertisers and fail commercially, and the interests behind the banking ban will have succeeded in suppressing free speech - suggesting they distribute leaflets will not help.
The cut-off point where private suppression becomes consequential censorship is, in my opinion, when the gatekeepers of speech are centralized and oligopolistic, like for example social media, and unlike, for example, traditional print media. A single publication denying publication is perfectly fine, as long as others exist and have reasonably similar access to distribution channels. With the death of traditional print media and the highly concentrated nature of the visual media and internet space, this is less and less the case.
You can of course publish on your blog with zero traffic, but you are effectively shut out of the relevant distribution channels, you are effectively distributing leaflets in your corner of the street.
>You can of course publish on your blog with zero traffic, but you are effectively shut out of the relevant distribution channels, you are effectively distributing leaflets in your corner of the street.
Exactly. And no one. Zero people. No person or entity is required to provide you with a platform, megaphone or audience.
That doesn't stop you from saying what you want. Freedom to express yourself does not entitle you to an audience. Full stop.
Edit: To clarify, your free speech rights do not trump my free speech (which includes not hosting your speech on my private property). Yes, today's social media has inordinate influence due to network effects. But those are for-profit corporations who owe you nothing.
Get that through your head. They owe you nothing.
Personally, I despise those corporations. And I voted with my feet and wallet and left nearly a decade ago. But my distaste for them and their business models doesn't trump their free speech and property rights. Nor should it.
Your rights do not supersede those of others, except on your private property. Facebook's (or Twitter or YouTube, etc., etc., etc,.) servers are their private property.
Want a public square? Then set up a public square. Those corporate, for-profit entities are not that.
> Get that through your head. They owe you nothing.
If you want to be abrasive on tangents, you can definitely be that guy, but that was not the subject; the subject was the nature of censorship. You completely disregard my example of illegitimate private censorship by the banks, intended to point out the problem, only to forcefully restate an extremist ideological position that doesn't really function in any true society. Ok...
>> Get that through your head. They owe you nothing.
>If you want to be abrasive on tangents, you can definitely be that guy, but that was not the subject; the subject was the nature of censorship. You completely disregard my example of illegitimate private censorship by the banks, intended to point out the problem, only to forcefully restate an extremist ideological position that doesn't really function in any true society. Ok...
Abrasive or not, it's not a tangent. It's the central point.
I didn't disregard anything -- rather, I didn't address the tangent you were off on.
Yes, censorship is bad. There. I addressed your tangent.
However, mine is not an extremist position at all. Freedom of expression and property rights are core elements of Western civilization.
Clearly, we're talking past each other. Which is too bad.
But I'll restate my main thesis once more: You can say whatever you want. But you are not entitled to an audience. And here's the proof.
I think something that really bothers me about this discussion about moderation is how many people approach this debate like a newborn baby. They have an idea and then speculate on how it fixes everything. There's never any discussion of what exists in the real world. ACX here is essentially describing some key attributes of Reddit. Each subreddit has its own moderation team that decides what's acceptable and then you opt in. This is pretty close to what ACX is proposing.
So let's look at what happened in reality. Almost immediately, subreddits popped up that were at the very least attempting to skirt the law, and often directly breaching it - popular topics on Reddit included creative interpretations of the age of consent, for example, or indeed of the requirement for consent at all. And because anyone can create one of these communities, the site turns into whack-a-mole.
The second thing that happened was that communities popped up pretty much for the sole purpose of harassing other communities. By enabling this sort of marketplace of moderation, you are providing a mechanism for a group of people to organize a way to attack your own platform. So now you have to step back in, and we're back to censorship.
I also think that this article completely mischaracterizes what the free speech side of the debate want.
>Each subreddit has its own moderation team that decides what's acceptable and then you opt in. This is pretty close to what ACX is proposing.
No, it really isn't.
Differences:
1) Reddit is super ban happy, and there is no way to view banned content. Ban reasons include slurs, political opinions, as well as no reasons at all.
2) Subreddits are not filters over the same content, they have (mostly) different content.
3) There is a fractal abundance of user-moderated subreddits; yes, there is some bad culture in some of them. This is not what ACX is proposing. He is proposing 2-20 filters, run by the company, not by volunteers, each with a specific and clearly defined purpose.
I really don't see how ACX's proposal can cause illegal behavior or harassment that is not already there.
You're making a false equivalence with reddit, then pointing out reddit has negative emergent properties.
On r/wallstreetbets there is an automod that proactively deletes people's comments to save them from Reddit's Orwellian "Anti-Evil" foot soldiers.
"Reddit has a paid team called Anti-Evil Operations (part of the "Trust" & "Safety" team) which goes around permanently banning accounts for saying bad words. We made automod block them so you don't lose your account for saying a word and getting reported. It's not our rule, it's the entire website now, we're just trying to look out for our people. Sorry."
Reddit is super ban happy today. That's because of the complete trash fire that resulted from their original policy. They literally just had straight child porn on the site for a long time before they finally had to bite the bullet and fix the platform.
Reddit didn't have child porn in the open at any time that I'm aware of (I joined in 2008). What it did have were subreddits catering to pedophiles with "barely legal" content, which repeatedly were found to contain child porn distribution rings operating via PMs.
I personally gave up on Reddit; they are too trigger happy. I gave up on anything political 9 years ago, but 3 years ago I gave up on practical things like credit card miles, traveling with a US phone, etc. It's at best read-only for me.
I have observed an awful lot of Eternal September effect in these debates. I suspect it might be easy for people who have been living on the Internet for a long time to miss the ways in which their intuitions don't mesh with somebody new to the space. Leads to a lot of two ships passing in the night debate.
Fresh ideas are always welcome, but the people who are trying to maintain working forums have been at the process for a long time now and can draw on experience all the way back to the BBS days.
>I have observed an awful lot of Eternal September effect in these debates. I suspect it might be easy for people who have been living on the Internet for a long time to miss the ways in which their intuitions don't mesh with somebody new to the space. Leads to a lot of two ships passing in the night debate.
I don't disagree with your point, there's quite a bit of knowledge around building communities and moderation that's been around and honed for at least a generation. And we should take that knowledge and build on and around it.
That said, folks have been going on about "Eternal September" for decades. Granted, people are born all the time, but they've grown up in the age of the Internet.
As such, it seems to me that at some point (if not now, when?) we need to get away from that particular excuse.
Anyone born before the Internet (myself included) has had a long time to figure things out, and anyone born in the Internet's wake is immersed in it from a fairly young age.
So why do we continue to use "Eternal September" as a foil?
It's entirely possible I'm missing something important, and if I am, please do enlighten me. Thanks!
You grew up in the age of elevators and have undoubtedly been completely immersed in them more or less your entire life. Do you think you know more or less about elevators than somebody who lived through the initial transition towards them?
It's a fun example because of how wrong Hollywood (and intuition) gets this one. You're on an elevator and an evil terrorist cuts the cables! Oh no! What happens next!? Not much, besides you being annoyed at probably being stuck somewhere in between floors. People had to be persuaded that the technology was safe and so Elisha Otis' [1] regular demonstrations of his safety stopping invention is a big part of the reason of why elevators were able to take off. It's practically impossible to make an elevator fall down a shaft.
Now us growing up with them simply take everything for granted to the point we have absolutely no clue at all about what we're using, but always have used it, so just assume it must be okay as is.
Because before the Eternal September, it was HARD to participate. So, virtually nobody did it, and those that did tended to all resemble each other. Post Eternal September, it's so easy little children are doing it before they can do basic math. So now the 'great unwashed masses' come in and, like any other commons, 'ruin it'.
> Because before the Eternal September, it was HARD to participate. So, virtually nobody did it
This is an important point, I think. There's a generational aspect to this. Those of us who came of age prior to the internet (and especially social media) being ubiquitous don't really have an expectation that we're owed a forum where we can just say anything that's on our mind. As one of those olds, whenever I hear people complaining about "censorship" on whatever social media platform it kind of sounds entitled to my ears. We didn't expect to have a platform prior to about 2005 or so. We didn't have 'followers'. We discussed politics with a few friends in a bar over drinks. But now so many people seem to expect these private companies to provide them with a platform where they should be able to say whatever they want. Freedom of speech doesn't guarantee you a platform for that speech.
> Now, like electricity and water, it's become so fundamentally entwined with modern living that folks see it (maybe rightfully) as a common right.
It doesn't feel like it's fundamentally entwined like electricity or water - It would be tough to live without electricity or water. But I live just fine without social media - in fact, I think my quality of life has gone up after deleting my twitter account back in May. And to a large degree, I think we're worse off as a society than we were prior to the emergence of social media.
I feel you are exactly missing the point of what Eternal September means.
Yes, there is some knowledge for some internet savvy types who grew up with the internet, but a lot of people are casual users. Many people still feel anonymity gives them carte blanche to be a jerk, or worse.
The amount of effort to be online is zero, but the amount of effort of people to behave is sometimes also zero (or low), of course depending on context. HN is a lot more civilized, but if it stopped being moderated it would in time be a nasty place as well.
> Many people still feel anonymity gives them carte blanche to be a jerk,
I don't think it's even anonymity, for some, indirect communication is enough: I once had a roommate who would leave unpleasant messages on the answering machine, but would be perfectly nice in person (on the same topic, even).
This is why I left Facebook a few years ago: people who, in person, were reasonable and nice friends were spewing hatred online. I decided I'd rather not hear from those people very often and keep good memories of them (and the occasional contact) than just turn my back on them.
That's all true, but it's not because of some "Eternal September"[0] effect.
It's because there are assholes everywhere. They are small in number, but they are pretty evenly spread throughout the population. Regardless of ethnicity, socio-economic status, age or any other demographic detail, they are everywhere.
And they always have been, and likely always will be.
I suppose that social media dynamic allows them to disproportionately visit their douchebaggery on the rest of us, but that's not "Eternal September." That's just humanity.
There has been a general coarsening of the culture which has gotten worse since the 2010s, Donald Trump was certainly a part of it.
I was talking about it with my wife this morning and she thinks that people have been getting more concerned about the homeless colony in a nearby city because the people who live there have been getting angrier and nastier. Other people down our road have put up signs that say "SLOW THE FUCK DOWN!"
There are the nihilistic forms of protest such as the people who are attacking paintings in museums to protest climate change. (Why don't they blow up a gas station?)
And of course there are the people on the right and left who believe they can "create their own reality" whether it is about the 2020 election or vaccines or about gender.
> There are the nihilistic forms of protest such as the people who are attacking paintings in museums to protest climate change.
So as somebody who noticed this bit of drama, and looked into it, I can explain. It's actually all very simple. Here goes:
It's a stunt!
Yup, they say that much. They tried protesting, they tried blocking roads, but were making page 10 of the newspaper. So they came out with some dramatic, outrageous plan that they knew wouldn't do harm (they planned this well in advance, and glued themselves to glass, not to the actual painting) but would be weird enough for people to talk about it. Plus there's a degree of symbolism in it.
> (Why don't they blow up a gas station?)
Because you can't protest oil infrastructure in any effective way. Blow up something? That's terrorism. Glue yourself to a gas pump? You'll get insulted and probably dragged off, plus gas stations are kind of meaningless and replaceable and often not anywhere very interesting. Protest at oil infrastructure? It's typically remotely located, and secured. You won't be noticed before you're removed. Block Shell's HQ? Good luck blocking a huge building with multiple entrances and security.
Point being there's nothing oil related I can think of where you could cause some sort of disturbance, quickly get attention, have the press get to you before you got forcefully removed from there, and have the story be interesting enough to have a prominent place in the news.
They also threw food on some of the paintings - whether or not they were aware of a protective glass pane beforehand is unknown - and at least in one case glued themselves to a 16th century picture frame, itself a priceless cultural artefact.
The people who think the Jan 6 attack was a good idea will add it to the list of other things leftists do that they think justify the Jan 6 attack.
For that matter I'd say that a lot of what "Black Lives Matter" does is also nihilistic. That is, there is not a lot of expectation that things will change because their ideology doesn't believe that things can change and because it won't look at the variables that could be changed to make a difference. What I do know is that some investigator will come around in 20 years and ask "why is this neighborhood a food desert?" but the odds are worse than 50% that they'll conclude that "it used to have a supermarket but it got burned down in a riot" is part of the answer. In the meantime conservatives will deny that the concept of a "food desert" is meaningful at all and also say that Jan 6 was OK because leftists are always burning down their neighborhoods and getting away with it -- except you (almost) never get away with burning down your neighborhood in terms of the lasting damage it does to your community unless your community is in the gentrification fast track, see
(It might be the sample I see, but I know a few right-wingers who admit that there is a lot of craziness on their side but it is justified by what the other side does whereas I never hear from leftists that it's justifiable to say that "A trans woman is indistinguishable from a natural woman" because of something stupid a conservative did.)
What do you mean? They got what they wanted, more or less. They're a group of people organized around an idea, figured they weren't getting attention, so they went to look for a way to get some. That's all there is to it.
I think you're expecting some sort of special significance here. No, it's not complicated or even special.
On its own it doesn't. If you need to recruit people to your cause though you need people to know you exist and there's somewhere they can join.
> Giving up saving the planet for the goal of getting attention is fundamentally nihilistic.
Er, how are they giving up?
What they're doing is regularly shouting "Save the planet!" at people. Only this time they picked a weirder way to do it, because nobody was paying attention to the more normal ways they had to say it.
"Propaganda of the deed" is as likely to make people think climate change protestors are crazy and just make them close their earflaps as it is to motivate more people to take desperate nihilistic actions. This spectacle at best convinces people to tune out.
It's got to be more like this.
You have to tell the ESG people that what matters about Exxon Mobil is (1) they have to stop investing in producing oil that other people burn, (2) it wouldn't matter if they became a "net zero" company by pumping CO₂ from their oil refineries into the ground and using synthetic fuels in their trucks, (3) it doesn't matter how many women they get on the board.
People who are concerned about climate change in the US should be concerned about institutional reform in the Democratic party. Namely, we shouldn't be in situations like
where a lunatic that could be beaten by a ham sandwich could win because the Democrats don't think that Pennsylvania deserves a senator who can verbally communicate effectively. (e.g. out of everybody in the state Philadelphia could get somebody in the top 1% of verbal communication skills as a Senator, why do they have to get somebody who is disabled?)
I haven't the faintest idea of what you're talking about.
Again, I think you're under the impression that this particular event was supposed to be in some way Meaningful. Part of some grand strategy or a big movement or something. I'm telling you it's not.
As far as I can tell, https://juststopoil.org came into existence around February this year. They're just a small, new group formed around opposition to Big Oil that's trying to make some noise. This paintings thing is attempt #25, and it just happens to be weird enough to make the news, but not fundamentally different to the 24 that came before it.
In fact, they previously tried gluing themselves to a microphone at a news agency:
I see no indication that this is part of some grand strategy from the Democrats or something. No, it's just a small group doing a weird thing and getting news coverage because weird thing is weird.
Edit: And in fact, Just Stop Oil is UK based, so they have nothing to do with the US Democrats or Pennsylvania.
If it's helpful, this organization has in fact actively sabotaged oil infrastructure in the past as a protest, and no one gave a single shit. They had a whole week where they decommissioned several pumps back in August. I think it's helpful, instead of asking "why don't they <obvious>", to assume someone has already tried it.
> Almost immediately, subreddits popped up that were at the very least attempting to skirt the law, and often directly breaching it - popular topics on Reddit included creative interpretations of the age of consent, for example, or indeed of the requirement for consent at all. And because anyone can create one of these communities, the site turns into whack-a-mole.
Twitter is already a whack-a-mole, but for a range of content that's much broader than just illegal content. A change like this would reduce their moderation burden.
> The second thing that happened was that communities popped up pretty much for the sole purpose of harassing other communities. By enabling this sort of marketplace of moderation, you are providing a mechanism for a group of people to organize a way to attack your own platform. So now you have to step back in, and we're back to censorship.
You can ban harassing behaviour without banning open discussions.
Finally, I don't think the ACX proposal is exactly like Reddit. Reddit still has moderation imposed by a third party; under this proposal, the moderation configuration is in your control.
> You can ban harassing behaviour without banning open discussions.
You can. But you'll still have people screaming about how they were actually silenced for their political views. Which is exactly the situation we have today.
Banning harassing behaviour doesn't necessarily entail banning people. You can also make the reasons for suppression publicly visible and so auditable to expose any such lies.
More transparent systems with less suppression or banning are clearly possible, but commercial entities don't want to hold themselves to strict rules which is why they keep the rules and processes opaque. This same trend is seen in both social media and app stores.
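For what it's worth, here is a sketch of what such an auditable record might look like (entirely hypothetical names and fields, not any platform's actual API). The point is simply that every suppression carries the rule invoked and stays publicly queryable:

    # Hypothetical sketch of a public, append-only moderation log with auditable reasons.
    import json
    import time

    PUBLIC_MOD_LOG = []  # a real system would use an append-only, publicly queryable store

    def suppress(post_id: str, moderator: str, rule: str, note: str = "") -> dict:
        # Hide a post and record the action, including which rule was invoked.
        entry = {
            "post_id": post_id,
            "moderator": moderator,
            "rule": rule,        # e.g. "harassment", "off-topic"
            "note": note,
            "timestamp": int(time.time()),
        }
        PUBLIC_MOD_LOG.append(entry)
        return entry

    def audit(post_id: str) -> list:
        # Anyone can ask why a given post was suppressed.
        return [e for e in PUBLIC_MOD_LOG if e["post_id"] == post_id]

    suppress("abc123", "mod_jane", "harassment", "targeted insults at another user")
    print(json.dumps(audit("abc123"), indent=2))

Whether a commercial platform would ever commit to exposing this is, of course, the open question raised above.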
> Each subreddit has its own moderation team that decides what's acceptable and then you opt in.
It's a great concept, though it's worth pointing out that there's considerable overlap of moderators between subreddits (a.k.a. powermods).
In effect, you end up with a single system applied across hundreds of subreddits which may-or-may-not be appropriate, and if you happen to earn the ire of a powermod you find yourself banned from all the subreddits they moderate.
There's way, way more censorship on Reddit than I think most people realize. Mods shadow-delete your post so you can still see it but no one else can. Unless you have a habit of logging out and checking, you won't notice when a post gets deleted.
Mostly quit Reddit when I realized about 5% of my posts were shadow deleted for holding the wrong opinion.
> Mods shadow-delete your post so you can still see it but no one else can.
Mass taggers have historically been abused to ban or shadow-ban users who've posted in "bad" subreddits.
If you argued with someone in /r/TheDonald, you'd magically be unable to participate in a large swath of unrelated communities. Trying to appeal the bans would often result in you being permanently muted or receiving a snarky response from the mods saying it's your fault for engaging in said 'bad' communities.
Nah, this was a local sub and I participated regularly. That's why it was so shocking. I had hundreds to thousands of posts over a few years there and there was no indication ever that I was doing anything problematic. It was specifically the posts about local politics (zoning and such) that went against the zeitgeist that were shadow-deleted.
There were some grassroots efforts around 2015 to make the mod log public and transparent (so it'd say what was removed, by who, and optionally why), but it was unfortunately opt-in and never gained large adoption.
They tried getting rid of that in Voat, and it was such a cesspool that nobody sane used it, and the owner couldn't keep it up and shut it down. /r/TheDonald at one point tried to migrate after whining about Reddit's moderation and came crawling back because they couldn't stomach it.
Yeah, Reddit's moderation system is far from ideal, but we've seen experimentally that it's definitely better than not having it.
Apparently /r/TheDonald was very used to being in a safe space. Voat didn't cater to that, and TheDonald couldn't take that so eventually they returned to Reddit.
I think a key problem with Reddit is that when you create a subreddit (and obtain the final say on all moderation decisions) you also get the name of the subreddit and URL forever. This means that early arrivers have the advantage of taking all the best subreddit names.
Well if you do leave it unmoderated for long, other users can appeal to admins to get control instead, assuming they have a plan on doing something with it. Not unlike a domain name in a way, just without the paywall.
> Well if you do leave it unmoderated for long, other users can appeal to admins to get control instead
Historically the r/RedditRequest process only considered whether the moderator was completely inactive on Reddit. There could be a dead subreddit that hadn't been touched in years or a flourishing subreddit whose top mod was completely MIA; either way, there was nothing you could do if the top mod was still active elsewhere on Reddit — even if you could prove they were just squatting.
A very funny example of this is that r/trees was long ago set up by marijuana users to discuss pot-related topics. So later r/marijuanaenthusiasts was set up for discussion of actual trees and dendrology.
There is nothing wrong with the Reddit approach of having different communities with different moderation policies (well, except for the many edge cases you point out.) But presumably the unspoken motivation behind TFA is moderation on Twitter which is a very unique site that does not have specific “walled garden communities” with their own moderation, and is uniquely porous. If you want to use a moderated (ad-supported) site that lets each user define their own community, then you either need some global base moderation policies or else you need aggressive client-side filtering (and advertisers had better trust that it works effortlessly, since it will be their ads showing up next to the genocide posts.)
I agree entirely. The newborn-baby approach is a bane of our time more broadly. A stupid descendant of an important enlightenment idea.
I think if you look at real-world examples with an actual history like reddit... you find that reality is complicated. All those problematic reddit dynamics that you describe exist. But, there were also some advantages/successes to their "moderation" approach.
Above all, these approaches aren't just good/bad or successful/failed. There's a ton of texture. The moderation approach dictates a lot about the platform's character, and that isn't captured by binaries or spectrums.
> ACX here is essentially describing some key attributes of reddit. Each sub-reddit has its own moderation team that decides what's acceptable and then you opt-in. This is pretty close to what ACX is proposing.
This is like saying "no moderation is _essentially_ the same as moderation because you can just choose not to read posts." I suppose it's simplistically true if you squint hard enough and actively ignore the issues people care about, but in that case you're not left with a particularly useful statement.
Let's look at the proposal vs. how Reddit currently works. Let's say you have a sub called /r/soda; there's a rule that you can't "promote sodas," and they'll ban you for rule violations if you say "Coke is my favorite" but not if you say "Pepsi is my favorite" (selective enforcement of rules, even by site administrators, is common on Reddit). 45% of the users love Coke, 30% love Pepsi, but 100% of the posts about what soda people love are about Pepsi.
So with the proposal you make a post about how much you love Coke, notice that the post is deleted, then choose to ignore moderation and see all the other posts by other Coke users of the sub that have had a similar journey. You continue to discuss things with many of the people on the sub like you did before.
With the current way Reddit works, you get banned and then start your own sub. But no one knows about your sub, the vast majority of new subs die, and even the ones that are moderately successful take years of work to gain a community. No one in /r/soda might even realize that "Coke is my favorite" posts are banned if they hadn't made such posts themselves, since there's no way to see what's banned and what isn't. The users there are kept completely ignorant of the need to create another sub.
So now you spend hours trying to promote your sub in various places and creating enough content for it that people who visit will actually use it and not just see a dead sub and move on. If you're lucky, and with a lot of work, in a year you might be able to reach a small fraction of the audience that was in /r/soda, and tell that small group of people "Coke is my favorite."
And even then, Reddit admins can look at you askance and decide to shut down your sub. I've seen multiple subs say "We can't even have a friendly discussion about [particular_topic] because Reddit admins have said they'll shut us down if we do." Even things that other subs are allowed to talk about (again, the rules are applied rather arbitrarily).
I can't see how the proposal is like Reddit in any meaningful way.
> So with the proposal you make a post about how much you love Coke, notice that the post is deleted, then choose to ignore moderation and see all the other posts by other Coke users of the sub that have had a similar journey. You continue to discuss things with many of the people on the sub like you did before.
That's a nice outcome, but it also leaves you vulnerable to outsiders deciding to ruin your sub by flooding it with discussions of table tennis or racism or arguments about moderation.
But you make a good point about the differences between the OP and Reddit.
I think Reddit is a terrible example. The moderators are volunteers, the rules and their application seem entirely arbitrary, and there is no way to opt out.
The key point the author of the article makes is the difference between moderation and censorship: you can opt-in to see moderated content, but you're unilaterally prevented from seeing censored content.
What Reddit does (removing posts, comments, banning accounts) falls under the definition of censorship here -- within the platform itself, obviously.
I don't believe that anybody who moderates content on reddit would actually say "it's fine for you to say _x_, but I just don't want you to say it here". EVERY reddit moderator wants _x_ not only not to be said, but not to be thought, either. They make do with the tools they have to eradicate _x_ as a concept and are disappointed that they can't go farther.
What frustrates me is when people argue about solutions before they clarify their language. The value of that ACX post is in the latter: it clarifies the language, showing that we can distinguish (at least in some fuzzy way) between moderation and censorship, and that is the very first step in analysing solutions.
Reddit is awful. The whole system is designed to create groupthink: downvoting of alternate opinions, post throttling, and overzealous moderators banning people for wrongthink. Actual discussion of unpopular opinions is impossible. This creates a userbase with a very similar mindset, and so the problem just compounds itself.
(This is for anything with a political slant to it, I still find it useful for niche subjects, say mycology)
That can happen, but the relative frequencies matter a lot. What I've seen at least an order of magnitude more frequently is that someone comes in with some tedious repeat of e.g. recent Fox News talking points, perhaps even literally copy-pasted, and then whines about downvoting because clearly the problem is that other people weren't taking seriously their regurgitation of something which has been debunked many times already. This is especially common in places like science or economics subreddits, where a hefty fraction of these aren't controversial takes but simply run afoul of measurable reality.
I’ve also seen a ton of cases where people expressed disagreement or contrarian positions but did so in a respectful and fact-aware manner and had positive interactions because they were respectful of the community.
I've found that when posting a popular opinion, you can have absolute minimum effort fluff like "racism is bad" and get plenty of upvotes, but for controversial opinions you need to tread extremely lightly. You need disclaimers and careful wording and references etc. to avoid being downvoted. In many cases that's not even enough.
Positive interactions are certainly possible and do happen, but the site is heavily heavily tilted towards groupthink. Fighting it is an uphill battle.
Users rarely deviate from the established upvote/downvote patterns. In fact, I'd go as far as saying many users don't even read the comments before voting.
When two users are having a heated argument, it's common for a third person to respond to the 'right' person with an innocuous comment and be heavily downvoted for it.
It’s definitely easier to go along with the status quo, but where in life is that not true? Things like academic debates have a lot of rules and structure trying to reduce that but even there it’s understood that certain positions are harder to argue than others.
But you get tedious, repeated things upvoted to the moon as long as they're the right kind of wrong, so that is often all you see in popular subreddits when sorting by popular.
Definitely - I’m not saying it’s perfect, just that it’s not as simple as portrayed in the comment I replied to. It’s just part of human nature that challenging the status quo is harder than stroking it: nobody changes the world by saying “<local sports team> is the best” but it’s easy to warm a crowd up that way, too.
A subreddit is a community - if you don’t like the norms, go to a different one. It’s like going to someone’s party and loudly asking why they all like such lousy music – nothing positive is going to come from it. In many cases, it’s not even entertainingly weird - more like “you should try something good. You probably haven’t heard of my favorite band before but look them up. Nickelback.”
Note also how I mentioned people repeating low-effort arguments. The tedium comes from the stream of people who come, repeat someone else’s idea, aren’t prepared or willing to engage intellectually, and whine about censorship when nobody finds that compelling. Anyone who spends much time in a particular forum can recognize that and see that there’ll be very little value from engaging. We see that a lot here where people complain that HN is biased against cryptocurrency because the response to “have you accepted our lord and savior bitcoin into your heart?” was not well received by people who remember the exact same claims being made a decade ago.
I remember when I started using Reddit, I read the instructions (as one does, right? RIGHT?), one of which was that you're not supposed to downvote something just because you don't like it; rather, downvoting is for content that doesn't add anything to the conversation (I think that may have been a tooltip for the downvote arrow). So I wrote some unpopular opinions, for example comparing adblock to piracy with some arguments for why they're similar, and… got downvoted to hell! :D
This experience, as well as the rather low level of discussion on Reddit, made me stop using it. It's hard to find a replacement, however; I like to use Stack Exchange, as a very dry form of communication that focuses on merit.
It wasn't supposed to be that way. Even the Reddiquette page told people not to downvote simply because they disagree. But nobody reads Reddiquette, and these days most redditors think disagreement is the purpose of downvotes.
That being said, you'd have to be naive to think downvoting for disagreement doesn't happen on HN.
> post throttling
This is only a thing for new accounts as an anti-spam measure.
> over zealous moderators banning people for wrongthink
I think it's wrong to blame reddit for this. This will be a problem on ANY site that allows users to create their own communities within it.
Downvoting used to not matter because ratios were clearly visible. Sorting by controversial still does this to an extent, but shadowbans and outright censorship have mostly removed those metrics.
Religion, politics, and discussion of the giant pumpkin create group think and mobs. It's been this way before the internet and will be that way after it's gone. Any group that contains more than one side that thinks they are right and that won't change their mind no matter the evidence will lead to this.
It didn't used to be. It used to be pretty good, but a handful of censorious mods insisted that they needed tools to fight exactly the same sorts of things that OP is insisting that moderation is for - illegal content, real harassment - and then immediately started using those tools to purge political enemies.
Moderation isn't supposed to prevent illegal content; law enforcement is. So that's out of scope. Moderation is supposed to prevent harassment, but that is just a failure of Reddit to provide the correct moderation tools to block organized harassment, not a failure of the concept.
Do you want cops filtering through all online sites looking for child porn? Do you think that's a good use for their time?
It's the threat of law enforcement that leads people who run websites to remove illegal content.
Generally (to, say, please advertisers) there is an expectation that sites are going to be proactive about removing offensive (or illegal) material. Simply responding on a "whack-a-mole" basis is not good enough. I ran a site that had something like 1-in-10,000 offensive (not illegal... but images of dead nazis, people with terrible tumors on their genitals, etc.) images and that was not clean enough for Adsense. From the viewpoint of quality control, particularly the Deming viewpoint of statistical quality control, it is an absolute bear of a problem to find offensive images at that level -- and look at how many people write papers about some A.I. program where 70% accuracy is state of the art.
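To see why, here's a rough back-of-the-envelope sketch in Python. The 1-in-10,000 prevalence comes from the situation described above; the classifier's sensitivity and specificity are made-up numbers for illustration, not measurements from any real system:

    # Base-rate problem: even a decent-looking classifier drowns reviewers
    # in false positives when only 1 in 10,000 images is actually offensive.
    def review_load(prevalence, sensitivity, specificity, total_images=1_000_000):
        """Return (offensive images flagged, clean images flagged, precision)."""
        bad = total_images * prevalence
        good = total_images - bad
        true_pos = bad * sensitivity           # offensive images correctly flagged
        false_pos = good * (1 - specificity)   # clean images wrongly flagged
        return true_pos, false_pos, true_pos / (true_pos + false_pos)

    tp, fp, precision = review_load(prevalence=1 / 10_000,
                                    sensitivity=0.95, specificity=0.99)
    print(f"flagged: {tp:.0f} offensive vs {fp:.0f} clean; precision ~{precision:.1%}")
    # ~95 real hits buried among ~10,000 false alarms: precision under 1%.

Out of a million images you'd flag roughly 95 genuinely offensive ones alongside about 10,000 clean ones, so nearly every flagged image a human reviewer looks at is a false alarm -- and that's with numbers far more generous than the 70% figure above.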
Precisely. It's like the author never understood the original definitions, but thinks their interpretation of the world creates them anew. It's a dictionary, not the bible.
Moderation as "we modulate other people's behaviors for you and your feelings" is justifying the act of censorship using other terms. These rationalists aren't half as smart as they think they are, or they wouldn't need so many words and novel interpretations.
>Almost immediately sub-reddits pop up that are at the very least attempting to skirt the law, and often directly breaching the law- popular topics on reddit included creative interpretations of the age of consent for example, or indeed the requirement for consent at all.
What do you mean? It's not in any way illegal to discuss such topics.
LOL, tell that to the NSA/FBI and see how they feel about those conversations when they need to dig up dirt on you. :P
It’s not technically illegal to have those conversations, but it’s in some kind of a grey area, because if you’re having conversations like those, the immediate question is of course why… it’s tough to find reasons to bring up that topic other than the obvious.
I always thought the original moderation method of Slashdot was better: a simple slider that adjusted the thread based on the quality of conversation you desired.
The quality metric is your own view of the user-provided reasons for moderation.
Posts on Slashdot have a score between -1 and +5. They can be modded up or down by randomly selected average users who get 5, 10 or 15 'karma' to apply. Karma is applied along with the set of negative or positive reasons (flamebait, troll, insightful, informative). You can tune the slider to show you posts rated in any range between -1 and +5, and apply special mods to certain reasons (e.g. make 'troll' mods drag things down more, or either get all the funny posts or push them down) to customize this further. All posts are visible if you browse at -1. Logged in users default to a score of 1, or 2 if they have a history of positive contributions and elect to check the box to add 1 to their score. Anonymous users (who are now required to be logged in as well, but who select 'post anonymously) start at 0 always. It's possible to have your account fall hard enough that your posts start at -1 as well.
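For what it's worth, the reader-side part of that is simple enough to sketch. This is a loose illustration of the threshold/slider idea in Python, not Slashdot's actual code: the field names, the exact clamping behaviour, and the per-reason weights are my own assumptions; only the -1..+5 range, the moderation reasons, and the slider come from the description above.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        text: str
        base_score: int   # 0 for anonymous, 1 for logged-in, 2 with the karma bonus
        mods: list = field(default_factory=list)  # e.g. [("insightful", +1), ("troll", -1)]

    def effective_score(post, reason_weights=None):
        """Apply moderations plus any per-reader reason weights, clamped to -1..+5."""
        reason_weights = reason_weights or {}
        score = post.base_score
        for reason, delta in post.mods:
            score += delta + reason_weights.get(reason, 0)
        return max(-1, min(5, score))

    def browse(posts, threshold, reason_weights=None):
        """The reader's slider: show only posts at or above the chosen threshold."""
        return [p for p in posts if effective_score(p, reason_weights) >= threshold]

    posts = [
        Post("helpful analysis", 1, [("insightful", +1), ("informative", +1)]),
        Post("drive-by flame", 0, [("troll", -1)]),
    ]
    # Browsing at -1 shows everything; this reader browses at +1 and pushes trolls down further.
    print([p.text for p in browse(posts, threshold=1, reason_weights={"troll": -2})])

The point is that the filtering decision lives entirely on the reader's side: nothing is removed, the slider just decides what you personally see.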
I believe they said they'd only ever actually removed a handful of posts due to some lawsuit or another (might've been that 09 F9 11 02 key? I don't remember any more).
There's also meta-moderation, where anyone can vote on whether the mods that were applied are fair or not.
I feel this piece, like a lot of moderation/censorship rhetoric, starts from a disingenuous place.
Free speech, moderation, editing, censorship, propaganda, and such do not have clear definitions. The terms have a history. Social media is new, and most of the nuance needs to be invented/debated. There aren't a priori definitions.
This article is defining censorship as X and moderation as Y... Actually, it provides 2 unrelated definitions.
Definition 1 seems to be that moderation is "normal business activity" and censorship is "abnormal, people-in-power activity" on behalf of "3rd parties," mostly governments.
Definition 2, the article's "moderation MVP" implies that opt-out filters represent "moderation" while outright content removal is, presumably, censorship.
IMO this is completely ridiculous, especially the China example. China's censorship already works like this article's "moderation MVP". Internet users can, with some additional effort, view "banned content" by using a VPN. In practice, most people use the default, firewalled internet most of the time.
YouTube's censorship is, similarly, built of the same stuff. Content can be age-gated, demonetized or buried. Sure, there is some space between banned and penalized... but no one is going to see it, and posting it is bad for your YouTuber career. This discourages most of it.
IMO, the difference between censorship and moderation is power, and power alone. A small web forum can do whatever it likes and it's moderation. If a government, media monopoly, cartel, cabal or whatnot does it... it is censorship. If a book is banned from a book stall, that's moderation. If it is banned from Amazon... that's censorship.
Whether Amazon has a settings toggle where you can unhide banned books does not change anything that matters. A book that Amazon won't sell is a book that probably won't be printed in the first place. That's how censorship actually works. It's not just about filtering bad content; it's about disincentivizing its existence entirely. Toggles work just fine for that.
I find it problematic when the prerogative to speak entails a necessary monetary cost to be paid by someone else for amplification and discovery, and that money is private money. Engineers aren’t plentiful or cheap.
If hypothetically every metaphorical YouTube should close for business because perhaps governments shut down ad funding, or if YouTube starts charging money, is my prerogative of speech in peril?
And if AT&T and all the other phone companies converge on the position that I must pay big bucks to talk to people, is that censorship? It’s not like I can easily find a free version of AT&T.
If I am so enormously underpowered that I cannot bid for speaking time on TV, is that censorship? I’m basically an incompetent David bidding against Goliaths.
“Right to Speech” does not equal “Right to Distribution”.
You are free to say what you want without going to jail, physical punishment, or fines (unless your speech is part of committing some other criminal behavior—such as fraud—or civil tort).
But nobody is obligated to provide you the means of distributing your speech.
That's not to say that there aren't asymmetric means of disseminating ideas or messages over third-party distribution channels. But you've got to be savvy enough to do that, or separately powerful enough to buy your own distribution.
Even owning distribution is pointless if you can’t communicate your ideas in a way that attracts listeners. “Right to Speak” doesn’t equal “Right to be heard”.
>If hypothetically every metaphorical YouTube should close for business because perhaps governments shut down ad funding, or if YouTube starts charging money, is my prerogative of speech in peril?
No because it's not related to your specific content.
Even if you were to make the argument that X content doesn't make money and costs too much, if someone pulls the trigger without giving you recourse to resolve issues, then it is a violation of free speech.
In your example, if AT&T tells you that you must pay more money to say certain topics, then it's a violation.
If it costs money, it costs money, nothing wrong with that. The issue is intent.
The counterpoint to this is that putting in a barrier that only some people can pass can work as censorship. If you raise the price very high, then only people with lots of money can use it. This means that all (or at least the vast majority) of the content is that which is desirable to the very rich. So by raising the price to a high level you are censoring content that is of interest to the poor but not the rich.
This is actually exactly how big media/big politics operates.
Why does specificity of intent matter? If the government on mere whim decided that I can't use the phone, is that somehow not a violation of free speech? In this scenario, is it any less of a violation just because the government acted on a whim as opposed to any specific intent to manipulate conversation?
So this is kind of what I mean by "no a priori definition."
Social media is new. The "right" to broadcast was almost theoretical before the internet. It wasn't what free speech was about.
IMO, we don't have free speech at all on fb/youtube/etc. currently. They can close your account and take away your right; they don't need a court, and it's all up to them. You have individual speech on those sites.
One day I'll have to write a thing about freedom of speech vs freedom of reach: what right do people have, if any, to machine distribution or algorithmic suggestion to other people?
Look... IMO, these tend to go the wrong way from the first sentence. Almost any polemic on this topic starts by assuming or implying that Freedom of Speech means X or that censorship means Y.
The reality is that Freedom of Reach means something totally different than it did 25 years ago. Freedom of the Press and Freedom of Speech didn't use to be the same thing.
We can't keep going to the past and pretend that early republican politicians, early liberal philosophers or early modern lawyers have the answer for everything rights-related. It's ridiculous to extrapolate what Free Speech means in the era of Twitter and Youtube from the early modern era's thoughts on pony mail and leafleting.
What Freedoms we have, or should have, now that technology enables them, is a question for people of now to decide.
> We can't keep going to the past and pretend that early republican politicians, early liberal philosophers or early modern lawyers have the answer for everything rights-related
Indeed. But people like those things, and use them as anchors for their own political views. Nineteenth-century views of freedom of speech excluded huge areas of material under "obscenity", much of which simply isn't obscene now in the west. Such as "information about contraception".
I think that approach overemphasises and embeds the power of these platforms.
Twitter (let’s face it we’re talking about Twitter) is not the world. It’s certainly a popular place for people to yell at each other and increase the general level of aggravation in the world. But it isn’t the world. If someone is moderated off twitter, their ability to speak is impacted, but only to one audience and in one way. Their ability to speak to me is unaffected entirely because I think Twitter is a giant waste of everyone’s time and energy. They can speak elsewhere, other platforms can serve their needs, and if they are popular enough then they’ll take the users and the attention from Twitter. Regulation here would only entrench the platform.
Twitter is not the public square, it’s some private company’s arguing arena.
>> Twitter (let’s face it we’re talking about Twitter) is not the world
Twitter is popular with journalists, politicians and such. Hence all the attention. For most people, facebook and youtube are the important part.
IMO, youtube is the most important medium today. It's effectively the free-to-air TV of the internet. It has a terrible, clunky, disrespectful and illiberal approach to content moderation. In fact, it's pretty similar to state censorship methods... ambiguous rules, selective enforcement, whipping boys. Makes Twitter look good.
>It's not a public commons if it's owned by a private individual or company...
>Yes it's popular and yes there's a lot of people on it and using it, but that doesn't make it a public commons, its ownership does.
Exactly. Folks who complain about the (lack of) moderation on some corporation's platform are, for the most part, certainly welcome to do so.
However, those corporate platforms (unlike public platforms) have no responsibility to host anything they don't want to host.
They are not your government. They are not your friends. They are not a public square. They are businesses whose goal is profit. And that goal isn't necessarily a bad goal either.
However, the business models of those corporate platforms are dependent on showing ads to those who use those platforms. That creates a variety of perverse incentives, including (but not limited to) boosting engagement by pushing outrage and fear buttons to keep folks on the platform, watching the ads.
And so I ask, does the above sound like a public square? It certainly doesn't to me. Rather, it sounds like a bunch of corporate actors taking whatever steps (regardless of impact on discourse) to maximize profit.
Again, that's not inherently a bad thing. But it doesn't (and never will) fit the bill for a "public square."
I genuinely don't think it is. At the risk of paraphrasing one of the replies below, and probably to repeat myself - it's more of an arena showing ads, in which the contestants are encouraged by onlookers and the organiser to argue and aggravate, to express thoughts in such short-form that nuance and understanding are lost, to create spectacle. For profit.
Regardless of moderation or censorship, a public square would/should operate quite differently.
Moderation is a special case/form of censorship. In many cases, it's a desired or willful filter as the article suggests, but it is censorship of information.
Censorship doesn't have to be forced, it can be agreed to but it's still censorship. Rebranding things to look fuzzy and give positive perception doesn't change the underlying principle.
Manipulation of information, be it omission, selective picking, or burying it in piles of noise, is a set of manipulative tactics, most of which are used to serve the spirit/intent of censorship. It happens in restricted environments like China, but it also happens in less restrictive environments like the US; the method of approach simply changes around what's legal and possible. One could argue censorship approaches in free-speech environments are the most resilient, because they rely less on the difficult, tight controls of information flow that nation states like China leverage.
If Random House declines to publish your book because they don’t think it will sell, is that censorship?
If I run a sci-fi bookstore, and I choose not to stock your book about political philosophy, is that censorship?
If I write an article that reviews your book (wherein, necessarily, I pick and choose what parts of your book I talk about, and also paraphrase [is that the same as “manipulating information”]), is that censorship?
When there is simply too much information for any person to consume, and even too much to be able to _evaluate whether to consume_, what does _not having censorship_ look like?
>If Random House declines to publish your book because they don’t think it will sell, is that censorship?
Yes. You're being censored in this case by the will of markets or perception of the will of markets (consumers at large), less so by the store owner due to systemic constraints they must operate in. Markets indirectly represent the will of mass consumers. There's a reason we have minority protections in government and chose a republic structure over pure democracy, to prevent oppression of the voices of the few by the masses.
>If I run a sci-fi bookstore, and I choose not to stock your book about political philosophy, is that censorship?
If you intentionally chose to omit the book and it wasn't due to a chance omission, perhaps because you hate the author, then yes, it's censorship.
>If I write an article that reviews your book (wherein, necessarily, I pick and choose what parts of your book I talk about, and also paraphrase [is that the same as “manipulating information”]), is that censorship?
It depends on how you choose that information and present it. Is it a representative sample of the book, or are you intentionally cherry-picking pieces of information, especially out of context, to represent a preconceived opinion you want to portray rather than an actual summary? If so, then yes, it's censorship. If not, then no, it's not censorship.
I agree there are logistical constraints that makes reductionism a requirement. The key differences in all of these cases is intent. It's difficult to prove but the question isn't if you had to reduce information for logistic purposes but how and why you chose what to reduce. Did you reduce information for your advantage? Then chances are, it's censorship.
>>If I run a sci-fi bookstore, and I choose not to stock your book about political philosophy, is that censorship?
>If you intentionally chose to omit the book and it wasn't due to a chance omission, perhaps because you hate the author, then yes, it's censorship.
They already told you, the reason the book is not published is because it is off-topic. The bookstore sells Sci-Fi books, and they choose not to sell other genres.
If that is censorship then this definition of censorship isn't useful for any discussion we're having right now.
The way you're defining censorship, it's literally impossible to run a bookstore without censoring. You can't carry every book, you have to pick and choose which ones are worth promoting to your customers. If you carry a random selection of books, then your store will be filled with dreck that no one wants to read and you will go out of business.
Well, that's sort of an underlying problem with markets in general, though, isn't it? The mass of consumers creates momentum that shifts what is and isn't viable. It happens with all sorts of products; phones, for example, shifted to non-removable batteries.
Regardless of your personal stance on this issue, the mass of consumers were fine with it, and so demand has largely dictated what's reasonably available to your average citizen: phones with difficult-to-swap batteries. There are still some options, but they're scarce and require tradeoffs compared to most flagship phones.
And that's with something less (yet increasingly) significant than information: a smartphone. When your product is information, such as books, or education, as in universities, suddenly you have markets and the masses dictating what's available and indirectly censoring content. Heck, it happens in science all the time these days. There are dominant groups and names in fields who hold significant sway; they often influence funding agencies and ultimately influence where scientists can viably perform research (unless they can self-fund their work).
But yes, markets can and do censor based on demand. I don't blame a small bookstore owner in this context for censoring, it's a systemic issue they have little power over. As you point out they have to pick and choose based on demand signal, they're running a business after all, yet if the only source of the information can be obtained through bookstores, suddenly markets are indirectly dictating to bookstores who indirectly dictate to consumers what information is available.
Sometimes we want this effect, we want markets to help us pressure and bubble up certain products or solutions to the top, other times we might not want this to be the case (as in free speech and flow of information).
You're welcome to define censorship that broadly, if you wish, but if you do that it loses most of its negative connotation and becomes no big deal. If I take your definition of censorship, then censorship is fine and I have no issue with it. Which is a problem if that attitude meanders back to the actually bad kinds.
I am very bad at content classification on an open platform. When I don't know the sender, I can barely tell what his or her intentions are. Being bad at working out the intentions behind a news article probably applies to you and everybody else as well. This is a big problem with social media, one that governments and companies are using to their advantage.
With classical media agencies you have a better chance of guessing their political views and intentions. Whether you like them or not is up to you. Most of these media agencies also take responsibility for the release of a piece of news.
This responsibility is missing with "individuals" who post on social media: they vouch for nothing and can only lose their account.
The right to post to an unlimited amount of people should be earned in my opinion. Right now the algorithms of the social media platforms even encourage controversial or aggressive posts which is the worst that can happen.
You can list 1000 reasons why moderation is different from censorship, but people in power can achieve the same censorship goal with tools provided by moderation.
What you're talking about is clearly censorship and not moderation by the OP's definition - "blocking speech that both sides want to hear".
Your link is actually great for his point: people in power (OnlyFans and Meta) blocked something that both sides (their competitors, and their competitors' users) wanted to hear - otherwise they wouldn't have needed to block it. (For the sake of argument, I'm assuming that the lawsuit's claims are true - I have no idea if they are but it's not pertinent to this point.)
That's why the distinction made by the dictionary definition AFAIK is not made by form, but by actor. Censorship is done by a public authority, moderation is done by a private entity.
I've said elsewhere that Twitter might be public infrastructure. The distinction is still clear though.
(Me personally, I think it should not be allowed for individuals to own the wealth of small nations, much less command it. But that's a different topic)
Well, I didn't say 'government censorship', and I think it's redundant. Which might be just me, but you can Google "difference moderation censorship" and see that this is a generally accepted view.
The author of the article tried hard to distinguish moderation from censorship; here is something I want to point out:
When you talk about censorship, you must also talk about freedom of speech, which is usually a well-defined legal term in countries that support it. On the other hand, moderation is a form of management. Depending on why and how the action is carried out, moderation could be better or worse than censorship.
And,
> If the Chinese government couldn’t censor - only moderate
This is exactly how most censorship is implemented in China, through moderation. See the funny part there?
The point is, I don't think the word "moderation" is inherently better than "censorship"; it all depends on the whys and hows.
Huh? China abducts and disappears human rights lawyers, activists, and sellers of banned books, to ensure that ideas are disappeared from the public sphere. You seem to be suggesting that that is one and the same as moderation and that there's no distinction to be drawn.
If there's no distinction to be drawn because you're asserting that moderation means something different in China, I think it is you rather than the author who is using terms in non-standard ways.
There is another issue here which is the external pressures platforms face.
Platforms will always be judged by journalists based on their lowest common denominator of users. Journalists will purposefully turn off all filters, find the worst comments, and claim that platform X is full of people like that.
This behaviour is then happily used by politicians to push platform owners in certain directions, possibly with the threat of regulation.
It also might mean coordinated leaving of advertisers, killing the platform.
So while in theory this article is right, there’s external power games that really play into the free speech issue.
One aspect of the "moderation is censorship" debate that I haven't seen discussed is that a lot of the time, what we're looking at is a campaign of targeted harassment against somebody. And then if they are blocked, the person doing the harassment will complain about censorship or sometimes "cancel culture".
But, the whole point of a harassment campaign is to silence someone- to intimidate or bully them until they shut up. What is that if not censorship by other means?
What I'm saying is that sometimes one person's freedom of expression has to be limited in order to protect the freedom of expression of someone else. And there's a balance to be struck there.
But the point is: there is no neutral position here. If you refuse to make a decision about when free expression become unacceptable harassment, you are still making a decision: you are saying that the person with the worst behavior will be the only one whose voice can be heard.
The same thing happens in the outside world, of course: deregulation, for example, does not generally bring freedom, it only shifts the power to whoever has the most money. The principle is the same. You either collectively make a decision about what behavior is tolerable, or you allow the person with the biggest stick to make all the rules. There is no opting out of the decision.
His argument that we should have opt in moderation rather than censorship is largely based on consent, but it misses a third category of communications which is where two parties are engaged in a consensual communication about a third party who does not consent. Basically the CSAM scenario, but also non consensual adult porn, snuff, libel etc.
There needs to be some way of dealing with this that respects the rights of the person who is being talked about, and that has to involve some censorship.
>His argument that we should have opt in moderation rather than censorship is largely based on consent, but it misses a third category of communications which is where two parties are engaged in a consensual communication about a third party who does not consent. Basically the CSAM scenario, but also non consensual adult porn, snuff, libel etc.
>There needs to be some way of dealing with this that respects the rights of the person who is being talked about, and that has to involve some censorship.
IANAL, but isn't exchanging CSAM, non-consensual adult porn and snuff videos, in fact criminal action in most jurisdictions?
If that's true, legal action can be taken against those involved.
As for libel (let's call it defamation to make it more inclusive), there are legal avenues (civil litigation and, in very rare cases, criminal charges) which can be pursued there too.
Are those avenues insufficient in your view? If so, what would you suggest, other than the current legal regime, in such situations?
We demand reasonable levels of due diligence from owners of private businesses where criminal activity is concerned. If you run a business that sells stolen goods, someone runs a drug ring out of your restaurant, or you serve alcohol to minors, you have a big problem.
This is so because law enforcement can only ever act after the fact and would of course be completely overburdened if every private actor was willfully ignorant of what goes on in their establishments. Not to mention that this is also to our benefit, because without that level of civic involvement as a first line of defense, the logical conclusion is a police/legal state involved in every transaction. Which is literally what you see in countries with weak civil societies but big tech firms. If neither the people nor business owners take responsibility, who is left?
>We demand reasonable levels of due diligence from owners of private businesses where criminal activity is concerned. If you run a business that sells stolen goods, someone runs a drug ring out of your restaurant, or you serve alcohol to minors, you have a big problem.
Absolutely.
And as I understand it, many of those social media companies do a piss poor job in policing the kinds of criminal activity mentioned by GP.
That might be an area where targeted regulation could be useful.
But the larger discourse around moderation tends to be focused on political actors (both legitimate and otherwise -- I'm not going to get into a political discussion here, as it's tangential to my point and not likely to spark worthwhile interactions) and the slights they claim are disadvantaging them.
In my view, that's the wrong discussion. We should be much more focused on the very real criminal and tortious conduct that pretty much runs rampant on those platforms.
I voted with my feet a long time ago and don't give my attention to those sites, but that only helps me and doesn't address the larger issues.
As I mentioned in another (tangentially related) discussion[0]:
The best-case scenario in my mind would be more decentralization of discussion forums. That gives us both the best and worst of both worlds: Folks can express themselves freely in forums that are accepting of those types of expression, while limiting the impact of mis/dis-information to those who actively seek it out.
Which may well be a good idea in this domain as well. Smaller, more focused, decentralized forums are more likely to have decent moderation regimes (as those involved actually have some interest in the topic(s) at hand), and those that cater to criminal activity are isolated from the majority of folks (and are both more difficult to find and more vulnerable to being taken down).
It's not a good solution, but it's becoming clear that moderation of huge forums like Facebook/Instagram/Twitter/etc. isn't really practical.
If you accept that premise, what options (other than decentralization) could address these issues effectively?
I'm hoping we'll get to a point where not making a good faith effort to be factually correct will be seen as equally bad as censorship.
They're both just tactics used to deceive.
The internet is this amazing tool for building knowledge, and yet we seem to be arguing about who is allowed to tell lies rather than collaborating on how to discover truth.
It's simple basic things like citing sources that need to become norms.
Citing sources doesn't solve the problem on its own, but we're in such a poor state at the moment that it (and other very basic changes) would still be a massive improvement.
You already get people citing things they clearly haven't read, but again, that's still better than not even citing something as it gives a basis to work towards the truth.
I've been in plenty of discussions where a cited source exists as a basis to work toward nothing.
It's a common tactic to use citations to get the person you are arguing with to walk in circles. It's a war of attrition: eventually the other party gives up on deconstructing and criticizing your citations, and you claim victory. This is closely related to the "ball is in your court" fallacy.
But if both parties are actually invested in critical thought, citations can be an opportunity instead of a roadblock. That still requires the effort of everyone involved.
I love showdead on HN, but I'd like to see this taken a step further on big networks like twitter, with user moderated blocklists where I could opt in or out, and also a way to see a list of blocklists I'm on.
Of course the real problem is that advertisers would probably bail if twitter had showdead, but maybe twitter can solve that problem with the new twitter $8 thing.
Here's a moderation idea: treat people as grownups and allow everything (everything permitted by the law, that is).
Just let individuals ban whoever they want from THEIR view.
If you want to be super-fancy, you could then see if some account X has been banned by many individual users from appearing on their feeds, and give individuals an option to have such accounts automatically banned from their own feeds after some threshold percentage.
So, if X is a jerk/spammer and many individual discussion group users have banned them (from their own view), give users the option to automagically have X banned from their own feed too once they hit, say, 10% of other members banning them.
This off-loads banning a little, and as long as individual users have the option to check who those "auto-banned" accounts are and e.g. exempt them from being auto-banned for them, it still maintains freedom.
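A minimal sketch of how that could work, assuming made-up names and the 10% threshold mentioned above (nothing here is a real platform API):

    # Decide whether `author` is hidden from `viewer`'s feed, based only on
    # individual blocks plus an opt-in community threshold.
    def hidden_for(viewer, author, personal_blocks, member_count,
                   threshold=0.10, exemptions=frozenset()):
        blockers = personal_blocks.get(author, set())
        if viewer in blockers:
            return True                    # the viewer blocked them directly
        if author in exemptions:
            return False                   # the viewer chose to keep seeing them anyway
        # Community-driven auto-hide, which the viewer opted into.
        return len(blockers) / member_count >= threshold

    personal_blocks = {"spammer42": {"alice", "bob", "carol"}}
    print(hidden_for("dave", "spammer42", personal_blocks, member_count=20))   # True: 15% >= 10%
    print(hidden_for("dave", "spammer42", personal_blocks, member_count=100))  # False: only 3%

Everything stays visible to anyone who hasn't blocked the account or opted into the threshold, so nothing is actually removed from the platform.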
In HN with showdead etc, I've never seen any "dead" comments that I couldn't just have as regular comments and just ignore on my own...
Well, it turns out people aren't grownups. So your idea pretty much fails there. Wait, you think I'm wrong? Well, prepare for thousands of posts and flamewars on this discussion! Oh, it's also going to spill over into the gardening discussion that generally gets 5 posts a day, but now will see 600 because of our firestorm.
https://en.wikipedia.org/wiki/Brandolini%27s_law demands moderation. Community and society demand moderation. Hell, I'd even go as far as to say physics demands it. The internet breaks our ideas of social norms on moderation by taking distance and anonymity and shoving them in the same place all at once. And much like what happens if you put conflicting fundamentalist religious groups together, the inevitable outbreak of violence affects everyone around them.
I'm not sure where exactly you think my idea fails.
I didn't say to expect that they'll behave like grownups in the sense that they won't post anything immature, bad, etc. I said "treat people as grownups", that is, as capable of seeing something they don't like or find offensive or whatever. And if they're not capable, that's on them.
So, if a discussion becomes a flamewar with "thousands of posts", so be it. Members can always ignore it.
So, if the thousands of posts are from the same small number of people (over-posting) and others find those annoying, they can choose individually to ban them, or snooze them, or not.
But if the thousands of posts are by thousands of members (and not bots), then why shouldn't they be left to continue to post and discuss this way, even if it's a flame war? They're having fun, and others can ignore or ban them.
Now, if they verbally abuse someone though (e.g. threaten their life, dox them, and such), well, that could be moderated and members who do that could be banned. The rest of opinion, whether deemed controversial, unpopular, misinformation, or bullshit, can stay.
I don't care much about "Brandolini's law". Who is the arbiter of what's bullshit and why are they? The moderator? Well, that's tautological (they're arbiter of non-bullshit merely because they have the power to moderate).
The problem you have here is one of physics. You as a human exist only because of a staggeringly massive number of filters that have allowed you to pass (at least from a non-theist view). Brandolini's law applied to evolution is Darwinism. Simply put, if you focused on bullshit rather than survival, you were dead.
Coming back to computer physics: simply put, we don't have access to unlimited energy and storage space. I can generate trash faster than you can install servers to keep it, and much faster than anyone can afford to pay for the space. Companies that do not control spam simply go out of business: industrial Darwinism.
You can ignore physics as much as you want, but it's not ignoring you.
>Simply put, if you focused on bullshit rather than survival you were dead.
Which is neither here, nor there, as the stakes in a discussion forum or media are not "survival". Nor is the danger from something you don't like (or tons of them) life threatening.
>Coming back to computer physics, simply put we don't have access to unlimited energy and storage space. I can generate trash faster than you can install servers to keep it,
Again, neither here nor there. That is about spam, our subject is moderation. Gmail, for example, also has spam filters, but we don't consider it moderation...
>as the stakes in a discussion forum or media are not "survival".
I mean, as discussion forums commonly dox people, or brigade and convince members to go kill people IRL, I really think maybe you're incorrect.
>Gmail, for example, also has spam filters, but we don't consider it moderation...
We whom? This has been debated on HN for as long as HN existed. Most would consider it moderation, but seemingly as a whole we have given up the battle as spammers are a plague of locust that will consume all.
>Again, neither here nor there.
Handwaving away physics: a good way to avoid accepting the technical reality of the situation here.
>I mean, as discussion forms commonly dox people, or brigade and convince members to go kill people IRL I really think maybe you're incorrect.
Yeah, but I covered that: "Now, if they verbally abuse someone though (e.g. threaten their life, dox them, and such), well, that could be moderated and members who do that could be banned. The rest of opinion, whether deemed controversial, unpopular, misinformation, or bullshit, can stay."
>We whom? This has been debated on HN for as long as HN existed. Most would consider it moderation
Has it? I've been here for almost as long as HN has existed, and I don't remember this being debated. It might have been debated a couple of times in 15 or more years, but it's not like it's some common HN discussion.
I also doubt "most" would consider spam the same issue as the kind of moderation we're talking about, or that enough people even think it's the same kind of thing as moderation of ideas and opinions. In fact, I'd go on to say that people who care about free speech still want spam filters - and don't view this as contradictory, nor do they much care about the latter.
1. Blocking people is reactive. It means that everybody still sees the first time somebody DMs them calling them a slur. If you instead take the approach of "block everybody that the ML system thinks is alt-right" or "block every post that the ML system thinks is spam" then you are right back at the fun problems of false positives and defaults.
2. People aren't just concerned about their personal experiences on these services. Advertisers are concerned about their ads showing right next to posts calling jewish people evil. Citizens are concerned about the radicalization effect such that even if I don't see conspiracy posts about liberals eating babies, those vortexes still lead to social harm.
>1. Blocking people is reactive. It means that everybody still sees the first time somebody DMs them calling them a slur.
Well, that's the "treat people as grown ups" part. In that: treat them as if they can read something they disagree with the "first time" and they won't melt.
Calling people slurs or violent threats etc could always still be banned - first time you do it, you're out, or three strikes, or similar.
That's unrelated to content (whether the content is controversial or some disagrees with the view, etc), and easy to implement and check.
>Advertisers are concerned about their ads
Sucks to be them, then! Advertisers shouldn't stifle speech.
Disney also didn't like to be associated with gay content, not that long ago. And all kind of partisan political views could be pushed for or against by advertisers. They should not have such a say.
In fact, I think they should not be allowed by law to be picky about placement in any forum of speech (magazines, social media, etc.) where they'd like to have their ads.
Either they shun the medium altogether, or they buy slots that can appear whenever, alongside whatever. This way also people know it's not the advertisers choice or responsibility of being alongside X post, as they can only buy slots on the whole medium wholesale.
> Well, that's the "treat people as grown ups part". In that: treat them as if they can read something they disagree with the "first time" and they wont melt.
It isn't one time. You get a "first time" with each new harasser. It becomes a regular occurrence that when you open your inbox somebody is there shitting on you.
> Calling people slurs or violent threats etc could always still be banned - first time you do it, you're out, or three strikes, or similar.
Why? The whole point of the idea is that people don't get banned. Returning to "well, sufficiently bad users will be banned" is just returning to the state today with people completely disagreeing about what "sufficiently bad users" means.
> Sucks to be them then! Advertisers shouldn't stiffle speech.
The "public forums" (twitter, youtube, facebook) are all ad supported. Without advertisers those products simply die.
>Why? The whole point of the idea is that people don't get banned.
The whole point of whose idea? I'm discussing the subject of moderation, as in, not being moderated or banned for content.
Not the subject of not being banned for anything, ever. That is, spam, bots, personal threats, cp, could always be banned, and I'd be fine with it.
>is just returning to the state today with people completely disagreeing about what "sufficiently bad users" means.
The disagreement occurs because this is based on beliefs and ideas: this idea or that idea, judged on ideology, partisanship, etc.
If instead the banning was based solely on the type of content (e.g. no spam, threats, cp, automated mass posting), then there's infinitely less room for disagreement. Something either is spam or is not. Either it's a threat or it's not. CP or not, and most people can agree on that.
Even if not everybody agrees on whether X is spam ("I think it's good, because it informs us about a product we didn't know about"), it's much much less than people disagreeing on what's a bad take on politics, or "disinformation", or such, and much freer speech.
>The "public forums" (twitter, youtube, facebook) are all ad supported. Without advertisers those products simply die.
> Not the subject of not being banned for anything, ever. That is, spam, bots, personal threats, cp, could always be banned, and I'd be fine with it.
When the things you think are banworthy get banned then you are fine with it, yes. Upthread you listed slurs as one of these reasons. A large number of people complaining about "censorship" do not think that using slurs or even calling people slurs is banworthy. So you'll run into that problem.
We already see people complaining about bans "based on the type of content." The idea that somehow other kinds of moderation are the problem and that if we only stop that kind then everybody will be happy is simply not based in fact.
>When the things you think are banworthy get banned then you are fine with it, yes
You make it sound like I'd just said something is "fine" because it's to my taste - no matter how bad it might be otherwise. I think the snark is a little misplaced, though, as one could say exactly the same if they proposed an (objectively) good or perfect or best-compromise solution.
So what matters is whether it's actually that: a good solution. Not whether it's to the taste of the person proposing it (which, any solution would always be). So at best the snark above is based on a truism/tautology.
>A large number of people complaining about "censorship" do not think that using slurs or even calling people slurs is banworthy. So you'll run into that problem.
Here's the thing: I'm not sure it's that big of a number of people. I'm also pretty sure "a number of people" also think spam, cp, violent threats are not banworthy, but I don't think it's "a large number" either.
Which is why I think banning slurs, spam, and other such things is OK, and doesn't have to do with freedom of speech - you can still express the same ideas, even the most unpopular and controversial ones, without slurs, spam, cp, and so on.
>We already see people complaining about bans "based on the type of content."
Some people will complain about anything and everything - I'm sure that some are even against the invention of fire or in favor of farting in elevators. Satisfying everyone can't ever be the measure of a good proposal.
The best solution is a good compromise that doesn't hurt the core issue of free speech, and that not only doesn't stifle discussion but even helps it (e.g. you can't have free speech if you get death threats for speaking, as people will be afraid to speak, so banning "violent threats" content makes sense. Similarly, you can't have free speech if the forum is filled with advertising spam and penis enlargement and "get rich quick" ads, so banning spam will help the discussion, not stifle it).
You have the theoretical and the applied. Moderators are people. Algorithms are obtuse, and bias can be built in. The solution? Various different sites and outlets with different approaches to moderation (or lack thereof), allowing for freedom of expression as well as ideologically "safe" areas in the aggregate.
The real threat, and the real conflation of moderation with censorship, is when centralized sites like Reddit or Facebook put a standardized layer that is enforced across topics and domains. When it is taken to the point of infrastructure removing a site's ability to exist (such as Stormfront having their domain suspended), we've clearly veered into censorship. People can complain about moderation on a site or forum (I mean, I have), but when the moderation is not contained to the forum, and the site seeks to supplant the distributed world wide web itself, then the line is an arbitrary one.
We aren’t even having a candid, good faith public conversation about all this.
Nothing in content moderation would necessitate banning public figures with heterodox views from your platform. Hell, that’s not even a requirement for censorship.
Banning is for trolls and bots and bad faith actors, which is definitionally never someone with >1million followers.
That choice is about publicly punishing someone and most importantly, distancing yourself personally from that person and whatever they said so you don’t get confronted at a cocktail party.
So yea, when you open up a site to all legal content, you immediately are flooded with people at the very edge or just over every law.
Similarly, when you ban Alex Jones, you pretty soon end up banning everyone who you disagree with.
Most of what is going on right now in social media isn’t moderation or censorship. It’s just being lame and awful and lacking principles and self awareness.
I'm not saying that every time I hear about someone being banned it's for these reasons, but it's often enough that I become suspicious it may be someone trying to argue for "The Bell Curve" or that slaves were happy or some other ridiculous racist idea.
Tons of people with >1Million views are trolls and extremely bad faith actors. That's often how they became so popular. Look at Tucker Carlson. If you don't ban them you may be encouraging others to act in bad faith to gain popularity.
If you don't ban Alex Jones you are letting him spread his conspiratorial racist views and harassment campaigns on your platform.
This is a very shallow take on the subject, that glosses over some of the most contentious issues related to freedom of speech on private platforms and only presents the most clear cut cases.
There are many situations where a post being visible to anyone is harmful to someone else. We can rationally weigh the value of freedom of speech versus expected harm to that individual and come up on both ends, but we can't ignore this simple fact when discussing this issue.
The only example mentioned in the post is child pornography, but there are others: revenge porn, doxxing, and smear campaigns, to name a few. If my ex wants to send an explicit video of me, and someone else wants to view it, I am still harmed by the fact that the platform makes this possible, even if my moderation filters don't show it to me. Similarly, if someone is spreading my address or saying I eat children, the fact that I can choose not to see this doesn't protect me from the consequences of others reading these messages.
Again, not saying that I believe it obvious that such messages must be removed. My only point is that the "solution" that Scott proposes for avoiding harassment is a partial solution at best, but more realistically, entirely useless. It basically only helps for very low level harassment, such as not wanting to see someone cuss.
The proposed solution also doesn't do much for free speech. It's still censorship, IMO, for any definition of censorship that matters.
Content A is filtered-by-default, demonetized, or otherwise discouraged and distribution-suppressed. Content B is not. That's effectively how all propaganda works. Even a totalitarian state can't really prevent access to content. They make it inconvenient to access, and unwise to produce. That's enough.
In fact, modern propagandists intentionally leave a "steam valve." China isn't that worried about shutting down VPNs or whatnot. Firewalled-by-default is enough and overdoing it can be counterproductive.
I think the solution is awful. It doesn't work at all for the most important cases. If someone is being abused, libeled & harassed by a belligerent ex... hiding revenge porn behind an "adult content" filter isn't good enough.
If a platform is filtering political content, the fact that it can, technically, be accessed by enabling "harmful content" does not make it less censorious.
YouTube is in fact a great example of this being a real problem - they infamously chose to reduce the visibility of non-mainstream news channels, drastically cutting their viewership while not removing a single video from their platform. They also often demonetize videos for even mentioning certain words or subjects, greatly disincentivizing anyone from discussing them (e.g. rape can't be discussed on YouTube if you want to make any money from your video).
...and just like in a government censorship context, ambiguity and fear do a lot of the work. Rules are not clearly defined or consistently enforced. You don't necessarily even know that you are being disciplined. It's best to just stay far away from controversial material entirely.
On youtube, it's had the curious side-effect of specialization.
It's not worth occasionally discussing political content, social issues or controversial content. A youtuber risks harming their income/success by taking a wrong step. Meanwhile, you kind of need to be specialized in order to know where the lines are.
For example, the Ukraine/war content youtube allows, bans or demonetizes currently is not the same as it was 3 or 8 months ago. The rules aren't written, and you need to be immersed and current to even guess what they are.
Same for sexual violence, Trump or any other highly contested moderation issue. You really need to be a specialist to (a) stay within moderation lines and (b) be worth the risk.
The irony of posters on here —a website that’s a pretty decent example of moderation leading to a better user experience — posting about how we must abolish the mods is never lost on me.
You do know that they actually permanently remove content on here sometimes, right? I love reading banned users posts! It is not the only tier of moderation at play though.
No, we don't. We only delete things outright when the author asks us to (and typically, when it didn't get any replies—otherwise we try redacting, reassigning to a different username, and/or other things).
I stand corrected! For whatever reason I remember seeing a lot of blanked-out flagged posts when kiwifarms went down, but I could be remembering things wrong.
Showdead shows algorithmically shadowbanned accounts and domains and [most] flagged comments. It doesn't let accounts that Dang has withdrawn posting privileges from continue to post.
Banned users can still post. Their posts are autokilled.
There have been a handful (fewer than 10) extreme cases over the years where we temporarily blocked someone from posting, but that almost never happens—it's an emergency measure.
For anyone wondering why we allow certain banned accounts to keep posting, even though what they post is so dreadful, the answer is that if we didn't, they would just create a new account and that new account would start off unbanned, which would be a step backwards.
The point of debate here is how to divide moderation from censorship.
I'd argue that size and power matter most. How you moderate is a technicality. It makes the difference between good and bad moderation, but it doesn't make the difference between moderation and censorship. This article's tips might make your moderation better. They will not make censorship into moderation.
HN's moderation is moderation because HN isn't a medium monopoly like meta, twitter or alphabet. If HN's moderation, intentionally or incidentally, suppresses negative opinions about tensorflow... that's still not censorship. It might be biased moderation, but the web is big and local biases are OK.
It's OK to have a newspaper, webforum or whatnot that supports the Christian Democrats and ridicules socialists. It's not OK if all the newspapers must do this. That's Twitter's problem. "Moderation" applies to the medium as a whole.
Anarchy does not want "no rules"; it wants "no rulers."
I agree that moderation is necessary. That does not mean that "moderation" on youtube is not censorship. Both can be true. Maybe we can't have free speech, medium monopolies and a pleasant user experience. One has to give.
HN works because it is a tech forum and can ban religion/politics as it sees fit. We get lots of signal and filter out what we'd otherwise consider noise.
The issue is this doesn't work in generalist situations. Where my signal is your noise, or vice versa, people tend to do one of two things. Filter your noise, or increase their signal.
And this goes back to the problem of giants. The noise battles we see will use every tool available to attempt to win: legal, political, or illegal. This is where splitting up the giants into smaller control zones with varied views tends to help with moderation.
Which is why I’ve taken the view that the actual solution is in the antitrust space, and not the moderation regulation space.
The problem isn’t that Twitter, Facebook, etc moderate in a way that’s biased, it’s that no entity should be so powerful that their biased moderation becomes a problem for society as a whole.
I don't understand the distinction. If you are on a tech-oriented forum and post some celebrity news with no tech angle and the mods remove it, is that censorship?
After reading Yishan's post this morning, I've been convinced to believe the same as GP. Censorship is about content; moderation is about behavior. These are orthogonal concepts and, for reasons related to mod/user framing, not always distinguishable.
The hypothetical is too reductive to be helpful in making that decision. There are other datapoints and social framing that would be needed to answer your question. As it stands, it's like having one equation and ten unknowns and asking what the solution is - it depends.
There are some interesting ideas here, but it leaves the actual problem unaddressed. Lots of people are instinctually drawn toward the "edgy" because of a very human instinct to seek secret knowledge. This completely overrides the part of the brain that asks, "Is it good for me or for society that I immerse myself in this environment?". We as a society have yet to find a good way to mitigate this "consumer of secret knowledge" cachet, which on the internet has created a conspiracy theory flywheel.
> Spam is nothing close to either of those, yet everyone agrees: yes, it's okay to moderate (censor) spam. Why is this? Because it has no value? Because it's sometimes false? Certainly it's not causing offline harm. No, no, and no.
Yes, because it has no value, and because spam is basically financially motivated harassment, and harassment shouldn't be allowed.
Moderation on internet services is just to remove spam. People who think it's for something else are brain dead. Yes, 99% of internet services do think moderation is for whatever little idea they have, but 99% of internet services are poor quality services in the first place. We have never left the dot com bubble.
The problem with the big platforms applying platform-wide moderation is exactly that they are too damn big. Platform-wide moderation could as well be censorship in disguise, but hey it’s apparently ok, since it’s a private company (albeit an oligopoly) doing it. Censorship is only when government is doing it, right? Right?
> Proponents of censorship have decided it’s easier to conflate censorship and moderation, and then argue for moderation.
You know it’s a weak article from this strawman. The author could have addressed the ‘digital town square’ that was directly listed as the reason for Elon intervening in Twitter but has deliberately chosen not to.
But I'm starting to get his point about moderation being based on repetitive behaviour, and not content. I wonder if that's why he keeps interrupting himself about his trees.
(For the record, I appreciate his trees project; don't see this as an attack on that.)
The problem with Twitter and Youtube, at least for me, is that they apply double standards so blatantly. I mean, how can you keep for years Khamenei's tweets that called for "eradicating" a people, while banning an account just because "language is violence"?
Even if there ends up being no difference between the two, being able to opt in to see more of the deleted material makes the moderators much more accountable.
I have no idea if this community would be improved with less moderation because I'm blind to most of its results.
You can actually choose to see dead comments from your profile, which shows the results of some moderation decisions (I believe, but may be wrong, that some comments are also completely deleted, such as outright spam).
Nice idea, having the opt-in option to see the content that goes against the provider policy. The issue with China, however, is that in that scenario they would probably track who has this option enabled and then persecute them.
Did you read the article? Removing illegal content (like how to make a bomb) is censorship, because sender and receiver are happy with it. We need to bring in arguments like "the good of society" to justify removing illegal content.
But no one wants to get harassment, threats and suchlike. The sender may be happy to send them, while the receiver is not happy to receive them. We don't need to resort to "the good of society" to explain why these should be banned.
> But no one wants to get harassment, threats and suchlike. The sender may be happy to send them, while the receiver is not happy to receive them. We don't need to resort to "the good of society" to explain why these should be banned.
How many people like getting criticism of any form? Should we ban all criticism?
I don't think the difference is all that clear cut, or at least there is a lot of behaviour that has overlap where it is unclear which side things fall.
Like take a protest (for whatever issue you want). Is that criticism or harassment? Or both?
And that's how you end up with a Nazi bar. Sure, a swastika tattoo isn't illegal; nor is saying death to Jews. But most people don't want to drink in that environment, so if you don't kick (censor) the Nazis out then your business will not get non-Nazi patronage (and maybe go under). The "free market" has determined that censorship is desired.
Eh, kind of. The distinction between moderation and censorship he uses doesn't really exist in a physical space. You can't set a flag in your profile to not see Nazis at the next table in the bar. Also, he explicitly avoids the topic of whether Nazis should be censored so they can't spread harmful ideas rather than just because they're unpleasant to be around, which is also applicable to the bar situation even if GP didn't explicitly cite it.
Nazi is such an overused label that it is bordering on hilarious. My personal experience shows that people wearing swastika tattoos are no more dangerous than those dressed openly LGBTQ+.
The difference is that a bar is a physical place: you cannot isolate yourself from the Nazi on the next stool or shut your ears against his tirades.
In the digital space, filters work just fine. Your messages will live in the same database as the Nazi's, but you can be completely isolated from seeing them if you set up your filters accordingly.
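To make that concrete, here is a minimal sketch of the kind of per-user filter being described. The names (Post, UserFilter, blocked_tags) are invented for illustration; this is not any platform's real implementation, just the shape of the idea:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    tags: set = field(default_factory=set)   # e.g. {"flagged-hate", "nsfw"}

@dataclass
class UserFilter:
    muted_authors: set = field(default_factory=set)
    blocked_tags: set = field(default_factory=set)

    def visible(self, post: Post) -> bool:
        # Every post stays in the shared database; visibility is a per-user decision.
        if post.author in self.muted_authors:
            return False
        return not (post.tags & self.blocked_tags)

posts = [
    Post("alice", "New keyboard build!"),
    Post("troll42", "hateful tirade...", {"flagged-hate"}),
]

strict = UserFilter(blocked_tags={"flagged-hate"})
permissive = UserFilter()

print([p.text for p in posts if strict.visible(p)])      # ['New keyboard build!']
print([p.text for p in posts if permissive.visible(p)])  # sees both posts
```

Two users reading the same database end up with different timelines, which is the whole point: nothing has to be deleted for one of them to be isolated from it.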
This would work if Nazis did not go out of their way to harass target groups. However, they do it constantly until those groups leave. And then they spread lies about those who left, or attempt to move it into IRL.
These groups really like to spread lies about people and events. They love to manipulate third parties or institutions into attacking you too.
Active harassment should probably be a bannable offense, few people will dispute that. What they will dispute is the mechanism used for banning; AI is notoriously bad at discerning human intentions.
> Sure, a swastika tattoo isn't illegal; nor is saying death to Jews.
Idk where you're from, but here in Russia it's not only socially unacceptable but also a literal crime, which falls under incitement of violence and rehabilitation of Nazi ideology.
There are plenty of neo-Nazis in Russia. White supremacy is not exactly uncommon there. (And I am not even speaking about what is done by that state abroad here. Just internal groups.)
This is laughably incorrect. White supremacy is a Western concept.
Russians looking down on Caucasians (from the Caucasus, not in the American sense of the word) or Central Asians is a thing, but there are nuances, e.g. the relationship with Chechens.
Or gypsies, or blacks (the few they have met), or non-Russian whites... especially Ukrainians. And the straight-up hate toward anything LGBT, including the whole imported culture-war rhetoric. There is quite an overlap in ideology, except the whole thing is more aggressive in Russia. I mean, I am talking about people who self-define as such.
Like, very seriously, I am not saying Russia has Aryan supremacists. But it has strong enough and big enough groups that are so similar in everything except who gets to be on top that "white supremacy" is a perfectly good descriptor on an English-speaking forum. And "neo-Nazi" fits their group behavior perfectly, down to similar symbolism and preferred music.
Story time: the Chinese intranet has changed gradually from "hard" censorship to "soft" methods. Posts still get deleted from time to time, but for a broader range of topics government agencies choose to flood social media with spam posts, meddle with recommenders, and mess with sorting options. Technically the government didn't delete a thing, but it will be a PITA to find useful information and people cannot form discussions.
>>avoid-harassment side happier, since they could set their filters to stronger than the default setting,
I don't think it actually would make them happy or happier
This group has a problem with the content existing at all. Self-moderation tools have been suggested and implemented in other contexts, and to a limited degree on Twitter today (mute and block), and that is not seen as "good enough".
The group that opposes free speech does not just want to be in a self-guided safe space; no, they want to ensure no one can say things they have deemed hate speech or misinformation. Many in this group also want to go further and punish people outside of the platform on which they spoke incorrectly.
So to imply that self-moderation tools are the solution completely misunderstands the goal of the "avoid-harassment side", which is to control and narrow the Overton window.
Absolutely, and equally on the other side of things the free speech side don't just want to be able to say whatever they want - they want the right to speech in front of people who don't want to hear it. People who don't like moderation on Twitter can go off to gab, gettr, telegram, Truth, 4chan or a tonne of other venues. But they don't just want to say what they want, they want the people they hate to hear it. The person shouting slurs at AOC on twitter isn't satisfied with calling AOC names if they don't think she will see it.
>>equally on the other side of things the free speech side don't just want to be able to say whatever they want - they want the right to speech in front of people who don't want to hear it.
While I am sure that is true in some circumstances, I believe that is less common than my original statement
>People who don't like moderation on Twitter can go off to gab, gettr, telegram, Truth, 4chan or a tonne of other venues.
All of those sites have various moderation rules, the key difference here is the people that control the moderation are likely of a different political leaning to you so you view them as being "unmoderated" because you do not like the content that is allowed there.
Gab, as an example, started out as a "free speech" platform, but now has pretty intensive moderation rules, especially around adult content. This cost them a lot of goodwill from free speech absolutists and libertarians.
>>The person shouting slurs at AOC on twitter isn't satisfied with calling AOC names if they don't think she will see it.
AOC is somewhat of a different case, without addressing your red herring of slurs. AOC is an elected official, and as such the bar should be set higher for elected officials in that they have an ethical obligation to hear from the people they represent.
You went from asserting it's uncommon for people to be advocating for a right to be heard at the beginning of your reply, to actually asserting that is your position at the end of it.
I just don't think it's a reasonable position, no one has an ethical obligation to make themselves endure racist and misogynist abuse. And you might call it a red herring, but there's overwhelming evidence that that's what AOC is exposed to under the system you advocate.
Free speech is a right to speak, not a right to insist other people listen.
>>but there's overwhelming evidence that that's what AOC is exposed
No, there really is not, because she and many others have the habit of calling all criticism "harassment", and then posting a couple of examples, many of which are not even harassment.
>no one has an ethical obligation to make themselves endure racist and misogynist abuse.
Sure, but we would first need to settle on a definition of what is "racist" and "misogynist", because if you use AOC's definition of those words I can assure you we do not agree on what counts - AOC thinks someone saying "We need strong border security" is racist.
I mean... if you want to try and claim that AOC hasn't been harassed, that's fine. But I don't think anyone is going to take you seriously, no matter your definition of racism or misogyny. I decided to go to AOC's Twitter page and view the replies to the first tweet I saw; I scrolled past 8 replies before I found the following:
>How does it feel to know that in 2 weeks you will be voted out? What a loser you are. The People of NY hate you. After you lose the election, you should disappear forever. Go to Puerto Rico fix you abuelas roof and stay living there
Now, I personally think the racist trope of "go back to where you came from" is pretty obvious. I also think that the fact that I can find such comments so easily is fairly telling. But let's ignore that and just point out that people have literally been jailed for harassing AOC and sending her death threats.
So let's back up, you can make a judgement about the extent to which you value free speech, but you have to do that grounded in reality.
Yeah see I think you've sort of painted yourself into a corner with this, where the only way of defending your position is to pretend something obviously true isn't true. If you're going to define racism in a way that excludes comments that are so well established that they have wikipedia pages about them[1] and are literally listed in the category "Racism in the United States", then I think you've effectively said there's nothing that could possibly be said that is racist.
>Go to Puerto Rico fix you abuelas roof and stay living there
I don't think it's a leap to relate that to "go back where you came from", and I don't think other people think that either - since the wikipedia page literally cites, as an example of this racist trope, Donald Trump saying
>Why don't they go back and help fix the totally broken and crime infested places from which they came...
Who did he say that in reference to? AOC.
So it's not really a leap is it. But what I'm interested in, is that you claim to read a different meaning into this, so be specific, what do you think the person posting in reply to AOC meant by that?
I find the distinction between moderation and censorship helpful.
I'm more sympathetic to censorship than the author--I've seen what e.g. vaccine misinformation can do to radicalize the average person.
One valuable tool the author doesn't spend much time on is defaults. We can have a heavily moderated, uncensored platform which still prevents mis/disinformation for the vast majority of users. E.g. HN hides all sorts of nonsense by default, but still allows you to reveal it with showdead. The same folks who are prone to disinformation are often too unsophisticated to dig into their settings, and having only a single "showdead" filter means you not only see your favorite Lizard People posts, you also get inundated with all the other nonsense.
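As a rough illustration of the defaults argument (this is not HN's actual code; the setting name and structure are invented), the whole mechanism is just a default-off flag that gates what the timeline returns:

```python
DEFAULT_SETTINGS = {"showdead": False}  # the moderated view is what everyone gets by default

def visible_posts(posts, user_settings=None):
    """Hide 'dead' (moderated) posts unless the user has explicitly opted in."""
    settings = {**DEFAULT_SETTINGS, **(user_settings or {})}
    return [p for p in posts if not p.get("dead") or settings["showdead"]]

posts = [
    {"text": "useful comment"},
    {"text": "lizard people exposé", "dead": True},
]

print(visible_posts(posts))                      # default: the nonsense stays hidden
print(visible_posts(posts, {"showdead": True}))  # opt-in: everything is visible
```

Nothing is deleted, only default-hidden, so the majority never sees it while the curious still can dig it out.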
"Censorship is the abnormal activity ensuring that people in power approve of the information on your platform, regardless of what your customers want. If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information, that’s censorship."
Meh. It's full of loaded terms. "Abnormal": if you engage in it regularly, then it's normal, not abnormal. "People in power": you mean like moderators?
Moderation is a form of censorship. Is moderation good? Well, it's a question of degree. Some moderators become gatekeeping Nazis, just as some posters become raving lunatics. So it's a question of finding a BALANCE between freedom of expression and letting the clowns run amok.
The article completely misses the point. Some middlemen like Twitter or Facebook do not want to be associated with certain images or points of view, perhaps because of advertising or user demographics or whatever reason, and they decide which content is OK or not. Both sender and receiver can want to have Trump's tweets, but Twitter does not want that, so they block them.
I don’t think Twitter is wrong, and it is not really different from Apple not letting pornography into the App Store, but it is still deeply troubling to me at some level. And it is neither moderation as discussed in the article nor censorship as discussed in the article.
Right. Multi-billion dollar companies need to stop pretending they care about freedom of speech or user comfort. They care about money. What threatens their bottom line isn't allowed on their platform, fullstop. If that's ethnic slurs in the EU, they'll remove that. If it's pictures of a gay wedding in Saudi Arabia, they'll remove that.
I mean, kind of, depending a bit on what the mob gets enraged over; they will do whatever it takes to make the issue go away ASAP.
More importantly, I guess your main point is that multi-billion dollar companies mostly care about money. To which I would say: sure, what else is new? But maybe this is news to some people.
If they instituted the suggestions in the article and some journalist subsequently wrote a piece with "Anti-semitic tweets found next to Coke ads" or whatnot, wouldn't we all just shrug and say "then change your filters"? It becomes as much a non-story as "Children watching telly at midnight exposed to sexual content paid for by BigCo ads".
> Some middlemen like Twitter or Facebook do not want to be associated with certain images or points of view, perhaps because of advertising or user demographics
And that's the precise point at which they cease to be a "middleman" and become a publisher.
Apparently you've actually published work yourself? Did your publisher print, bind and release every single word submitted to it without any human intervention at all, and then ad hoc remove a small proportion of the content after objections and bar a small proportion of users from further submissions? If not, why are you pretending that two completely different processes are the same?
Nobody argues that owners of physical premises should choose between accepting full responsibility for every action on their premises or being utterly powerless to eject any person for any reason.
Hi, I’m a published author and I can assure you my publisher did not attempt to change opinions or other ideas, whether subjective or objective, in the resulting work. Rather they were concerned with brevity and storytelling.
So there was an actual editorial process before publication, not publication of everything submitted to it by default and subsequent ad-hoc removal of a few things that most offended its customers?
I'll take your word for it that none of your opinions or ideas were controversial enough to upset your publisher, but do you feel the editorial process would have been completely uninterested in removing opinions or other ideas before publication if your draft contained frequent asides praising the Third Reich or suggesting that the practice of software development would be greatly improved by only allowing men to participate in it?
Well, assuming we're still discussing the issue of whether removing a subset of user-generated content makes website owners publishers, then I think it's quite reasonable to ask whether your publisher would remain solely concerned with brevity and style if they received submissions incorporating the same range of opinions and ideas as social media.
The tendency of open internet discussion to inexorably tend towards talking about Nazis (and the Godwin corollary that when open internet discussion reaches Twitter-scale, it inevitably attracts participants wanting to defend the Nazis) was the point. Publishers don't have to deal with that bullshit, which is one of the reasons why publishing isn't remotely similar to forum moderation. If they did, they'd be a lot more censorious on the ideas and opinions side.
Sure, there are absolutely no neo-Nazis or kids cosplaying being neo-Nazis or else remotely analogous to neo-Nazism on social media. How silly of me to imagine that social media moderators were deciding if and how to deal with ideas and opinions more likely to be deemed hateful or objectionable by their customers and illegal by certain jurisdictions than the manuscripts your publisher solicits.
That’s maybe true, but also beside the point of the article. I was actually thinking of another case, the Hunter Biden laptop. All major news publishers decided not to report on it just before the election, even though the story was obviously true and newsworthy. This was actually done by publishers, but still, the important point here is that a story did not reach audiences even though at least some of them wanted to see it and some people, like journalists at nytimes, wanted to tell it.
The blockade was so effective that I thought it was a hoax until recently.
I took the opposite from the article. Twitter wants censorship but calls it moderation and never offers a choice. They put spam, harassment and Hunter Biden in the same category.
We are well beyond just Twitter moderation now. Currently Kiwi Farms has several ISPs black-holing traffic to the site. When the lowest levels of internet infrastructure are getting into censorship, we have a major problem.
> A minimum viable product for moderation without censorship is for a platform to do exactly the same thing they’re doing now...but have an opt-in setting
There's one problem with that. Often times, the product itself is the moderated version.
Letting users use a product with moderation turned off would be giving them what they want, but it would not be giving them "the product".
Some people want to sell their custom mechanical keyboards. They think, "I know where a bunch of potential customers are, /r/mk/!"
Cue the sub getting flooded with sales posts, and the moderators banning such posts. Now they have rule #2 and moderate accordingly.
OP's solution is to castrate the moderators: instead of allowing them to remove posts, only allow them to hide posts. Then users who want to break the rules can simply toggle them visible.
But we already have a better solution: Just go somewhere else! Right there in the text of rule #2, there is a link to /r/mechmarket/, which is its own subreddit, moderated for the buying and selling of mechanical keyboards.
---
But, of course, that doesn't work with social media. There is only one Facebook. Only one Twitter. They are global namespaces. There is no room for traditional moderation. And that is its own problem.
It's hard to have meaningful conversation when every participant in your social circle, or even in the world is standing at their own soapbox. It's as useless as a daily company-wide meeting.
---
We don't just need to fight disinformation. We need to fight for information, for discussion. We need to show people who are busily engaged in identity politics that there are more interesting conversations to be had. Social media is a really poorly equipped space for those conversations, because it isn't moderated.
IMO being informed is about your own research. Either making sure you follow trustworthy people and/or checking their sources.
Wondering if the account is real or fake? Check the person's website, wikipedia page, look if there are more accounts with this name.
People are willfully disinformed. Do you trust information just because it's being liked many times? Well, being popular doesn't mean it's right or healthy.
In Russia many people don't think their government is doing anything wrong. Why? TV says so. Their coworkers do. But you have YouTube, Telegram. Most media whose websites are blocked have YT channels, or their articles can be read on Telegram with quick view. No VPN needed.
From my POV, many moderation/censorship arguments boil down to the desire to be walled off from bad info vs being able to make individual choices. The Russian government is doing the former by blocking independent media.
Moderation is when the user can select what he wishes to see; everything else is censorship. You could see it among ops on IRC, reddit mods, and platform algos.
Elon is clear that moderation will continue. It's quite obvious and simple to provide examples where Twitter did censor subjects. The Hunter Biden laptop, the freedom convoy, or the Babylon Bee?
This censorship is what forced conservatives to build new platforms. In so doing they discovered far greater censorship. Google, Apple, and Amazon all responding within the day to deplatform Parler? The only entity large enough to make those 3 entities jump within a day is the US government.
The "hell", "nightmare", or "disaster" everyone is complaining about is that the US government is the one that has been censoring speech. The reason there's a large delay on unbanning obvious accounts like the Babylon Bee is that Twitter/Elon can't do it.
Whatever happened to the standard definition of censorship?
Which is that it's decreed by a government or similar institution, and that it is enforced by law? That it's about suppressing ideas/material in a whole society?
Whatever privately run sites/newspapers/organizations do, it's other things -- editorial policy, moderation, curation. But by definition it's not censorship. In the same way that having a crappy job isn't slavery, and a serial killer isn't committing genocide. Privately banning certain content on a single site, or newspaper, is editorial policy, end of story.
Words mean things, and a lot of people have fought long and hard against actual censorship, such as the Comstock laws. Let's not cheapen freedom from censorship by turning it into "but I want to say whatever I want anywhere to anyone who wants to listen!", which is what OP is proposing. Freedom from censorship is about having the right to speak -- but it's not, and never has been, about making anyone else give you a platform for it. And this distinction is vitally important.
That’s not the standard definition of censorship. From Wikipedia:
> Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient".[2][3][4] Censorship can be conducted by governments,[5] private institutions and other controlling bodies
I can paste more, but if you just google “define censorship” I don’t think there’s a result on the first page that supports your claim.
XKCD 1357 is how old? And yet people continue to get it wrong.
It’s not censorship unless the government itself is doing the censorship and making people face criminal consequences for disobeying.
If a private entity is doing it, even at the request of a government, it’s not censorship or a violation of free speech unless they were going to face legal consequences for ignoring the government’s request.
The discussion here is about censorship. Infringing the first amendment is censorship, but the converse isn't true. Plenty of things are censorship without infringing on the first amendment.
As an example, Bezos hypothetically preventing his newspaper from publishing negative stories about him is censorship. He is censoring his editors in this hypothetical.
As another example, if Bezos says "everyone who calls me a stupid-head will get kicked off AWS", that would be an attack on free speech.
I’m not very concerned about the semantics of the word censorship. The definition of a word in the dictionary doesn’t also define the boundaries of what behaviors are acceptable.
My point is that unless something violates the first amendment, I’m ok with it. If Bezos kicks people he doesn’t like off of AWS, I’m ok with it. It’s not a public space. The owner of the private space makes the rules. Just like I can kick people out of my house for similar reasons.
Don’t like it? Host it yourself. If the government tries to censor your self hosted content, then I’ll get up in arms.
> My point is that unless something violates the first amendment, I’m ok with it.
Fair enough, not sure that is the point of the XKCD comic though. I believe the point of that comic was "banning racists from twitter is not a first-amendment issue".
I believe the public discourse is affected by much more than the government. To keep that public discourse free enough is important for democracy to function. Hence I fear more than just government trying to repress certain forms of speech.
That doesn't mean I want to legally ban all such repression. I also don't believe democracy would be better off if actual neo-nazi's were unbanned on twitter.
Instead, I think it's important we keep track of repression of free speech, discuss what people consider acceptable, and reach some rough consensus. Based on this, we can then develop either alternative platforms or make some well-thought-out new laws.
What you said, plus xkcd is a webcomic. I happen to enjoy it as much as the next nerd, but can you imagine saying "batman issue xyz exists, and yet we still have petty theft in the streets!"
That’s just changing the definition of censorship to muddle the argument. The idea that censorship can only be done by governments (not even by private entities on behest of government) is to make the definition so narrow as to be effectively useless. Even in China most of the censorship is done by “private” companies.
While the XKCD explains that your "United States 1st amendment rights" are not violated by non-governmental censorship, it doesn't change the fact that it is censorship. Everyone is in favor of censorship (like Randall) so long as only things they disagree with are censored.
I moderated r/electricvehicles for a year as a sort of benevolent dictator while all the other mods were more or less checked out.
Automoderator rules took care of 90% of the spammy issues. Some things were obvious and chronic, and like porn vs. art, "I know it when I see it." Incivility was pretty easy to identify and call out, but there were a few people that toed the line and would seep toxicity into the sub rather than dump toxicity flamewar-style. They'd never do any one thing to get themselves banned, as they'd very carefully adhere to the letter of the law (sub rules). For a while there was a lot of SPAC hyping, to the point that I had to create a rule just for that. More on that later.
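To give a sense of what that kind of automated rule looks like, here is an illustrative sketch in plain Python. It is not reddit's actual AutoModerator config syntax; the rule names, patterns, and actions are made up:

```python
import re

RULES = [
    {"name": "no-sales-posts", "pattern": r"\b(for sale|selling|wts)\b", "action": "remove"},
    {"name": "spac-hype",      "pattern": r"\bspac\b",                   "action": "flag"},
]

def triggered_actions(title: str):
    """Return (rule, action) pairs for every rule whose pattern matches the title."""
    return [(rule["name"], rule["action"])
            for rule in RULES
            if re.search(rule["pattern"], title, re.IGNORECASE)]

print(triggered_actions("WTS: custom mechanical keyboard"))   # [('no-sales-posts', 'remove')]
print(triggered_actions("Which SPAC will win the EV race?"))  # [('spac-hype', 'flag')]
```

Rules like these handle the obvious, chronic stuff; the grey-area calls described below are what still needed a human.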
Topic-wise, most of what people viewed as toxic was the drama around Tesla vs. the rest of the industry. There would be people wrapping around the pole on EPA range, charging infrastructure, fit-and-finish, software features, sound isolation, handling characteristics, straight-line 0-60, buying experience, their opinion of Elon Musk, FSD, the existence of a steering wheel, etc. etc. People would often appeal to the mods to try to either take sides or tone down the heat.
Occasionally people would pop in with an opinion on hydrogen fuel cell vehicles, which to me seemed like a reasonable topic to discuss in a forum about alternative-energy propulsion systems for vehicles, and they'd get shouted out of the forum as not "EV" enough. (Or worse, "Hydrogen is a pipe dream that will never happen, so shut up about it already!") Gatekeepers would insist that the only thing anyone could talk about was passenger vehicles with large battery packs, or maybe a picture of an electric bus every once in a while. Posts about electric bikes or boats would get ignored or called out as, "Not the right kind of vehicle for this sub."
Inevitably the mod queue would get filled up with reports for topics the gatekeepers didn't think were "on-topic" enough. This was the grey area where I had to make a call as a moderator. If a SPAC hype post got downvoted about as much as an electric bicycle post, by what criteria could I justify removing the SPAC post and letting votes decide what happens with the e-bike post?
My solution was to pop it up to a meta-conversation in the forum. "Let's talk about the rules. I've noticed posts with this characteristic or that. What would we think of disallowing posts like this and allowing posts like that?" There would be opinions on both sides, but ultimately I had to rely on my own judgment of "What's reasonable?" when making the final call on the rules.
Moderation of a public forum is very much a human problem, and there will always be corner cases. It reminded me a lot of what I learned in a graduate class I took on intellectual property law back when I was in school. There will always be a contour, and there will always be "test" cases that push and pull on the boundaries. Having rules ("laws") that set the groundwork for decisions is important. The process for establishing (and changing) those rules should be transparent and inclusive. No rule is going to have 100% support from all sides, but to build a system that works, we need to be able to agree on the process, respect the rules that we converge upon, and civilly challenge rules that become obsolete as the world moves forward.
Glad to see someone suggesting, or at least hinting at, competitive moderation over the same message stream. That's probably part of the future. However,
> preferably by offering moderation too cheap to meter
this is not happening. Moderation, at least good moderation, is inherently labor intensive. It requires judgment, especially when people start specifically trying to find your corner cases, which is basically as soon as you start trying to scale. So good moderation will always be expensive (for someone, if not directly to the bulk of users, thanks HN mods).
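For what it's worth, here is a rough sketch of what "competitive moderation over the same message stream" could look like. The provider names and labels are hypothetical, and the point above still stands: applying labels is cheap, producing good ones is the expensive, judgment-heavy part:

```python
# One shared message stream, several independent label providers.
MESSAGES = [
    {"id": 1, "text": "interesting link"},
    {"id": 2, "text": "borderline rant"},
]

# Each provider publishes its own verdicts over the same message ids;
# users subscribe to whichever provider they trust.
PROVIDER_LABELS = {
    "strict-mods": {2: "hide"},
    "hands-off":   {},
}

def view(messages, provider):
    """Return the stream as filtered by the provider the user subscribes to."""
    labels = PROVIDER_LABELS.get(provider, {})
    return [m for m in messages if labels.get(m["id"]) != "hide"]

print(view(MESSAGES, "strict-mods"))  # rant hidden
print(view(MESSAGES, "hands-off"))    # full stream
```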
There's a good article from the early internet, when China's firewall was very weak.
Even though it was easy to get around the firewall, and perhaps even legal at the time, barely anyone did. People weren't using proxies to learn about Tank Man when China was arguably only "moderating". People called it censorship.
This is just to point out the naivety of saying "it's possible, just not the default, so it's not censorship" when talking about China. Viewing shadow-banned comments is an OK middle ground on HN; the obvious hole is that you have to have a login. I don't want gore on TikTok, but I do want guns. It's very complex.
So much cope; I'd rather this political faction was just honest and just admitted they'd want power over Twitter to ban Trump supporters because that is the sacrament of Democracy. It comes to that in final analysis, friend-enemy distinction.
What this whole debate totally misses, in the USA at least, is that it only exists because of a bigger violation and distortion of the Constitution: the right to free association, a freedom so implicit that it crossed the minds of the founders of America even less than having to protect the right to keep and bear arms. Being allowed and able to choose who one wants to associate with and who one does not want to associate with. We are not able to do such a fundamental thing in America today, by law (de jure, Sotomayor), as it has been prohibited through perversion and in violation of the Constitution, through the amendment of that very Constitution the prohibition is a violation of.
Almost all other arguments over moderation v. censorship are a derivative of the most fundamental freedom: freedom to choose, control over one’s own life. That natural human right simply does not exist in America (and most Western places) anymore. Effectively all the “civil rights amendments” have done the opposite of their stated objectives; they have institutionalized federal government enslavement, total domination of federal government control over your life. You no longer get to choose who you associate with; your slave-master federal government decides who you are allowed to associate with. You are as free as you are permitted; an inherent contradiction.
All this other debate of moderation and censorship is meaningless noise, merely beating around the bush to discuss rearranging the deck chairs on the titanic.
And for all our foreign friends, whether they are in America acting as if they are American or outside of America, all of these matters related to the US Constitution are very relevant to you too, whether you understand it or not, because all the freedom you have and think you have is a direct derivative of the founders of the USA creating the Constitution and declaring themselves free of the slavery of monarchy. Most people have just taken things so for granted, or it is all so abstract, that they do not understand any of it, because not even a person of European background can be American without understanding these things properly, let alone someone without European ethnic, historical, and cultural background.
That may offend people hearing it, but it does not make any of it less true. In fact it is the “moderation” that is inherent censorship, which even prevents the system from self-correcting, i.e., moderating, because it is really just perversion, i.e., distortion, being called moderation.
There would be nothing to discuss about this topic if the world-dominating tyrannical evil of forced association did not exist. So I propose we address that instead, unless the point is just making noises as the Titanic sinks and we head back towards what is effectively neo-monarchy.
We tried it the other way for a while and found that if people are completely free to choose who they associate with (specifically in the business space), it compromises other national ideals, such as meritocracy.
The curtailments on freedom of association are very narrow and focused on specific constraints.
But this country was founded on an understanding that there are some fundamental freedoms of choice you can't have if you want a functioning society to work together. And some of the ones that the founders believed would be acceptable, we fought a bloody war to remove. The freedom to choose is inalienable, but (much as we still have a right to liberty and jails at the same time) inalienable rights can be curtailed in the name of having a functioning society.
I believe they are referring to the Civil Rights Act, and specifically its restrictions preventing businesses and hiring practices from denying service or opportunity based on the color of one's skin, one's religion, one's sex, etc.
I did. The reference was to the “civil rights amendments”. It's not as narrow as the “Civil Rights Act”, as someone else commented, but that is also included as a consequence of being inferior law.
To simplify it for you, you inherently cannot have the right of free association if there is no means to freely associate, because the ability to do so has been taken from you by force of perverse law.
I find it quite curious to live in a world where people do not understand that they are slaves, probably due to the fact that they have been conditioned to understand slavery as only being possible when there are chains and “black” skin involved, i.e., conditioning. I have never met a single other person who really understands that slavery is a mostly mental conditioning, chains and related iconography are merely just that, icons or a symbolic representation that abstracts away what slavery really is.
Most slaves, even on the deepest South American jungle plantation, never wore chains. Chains are unnecessary once you have properly trained your slaves to their condition. That applies to the Western slaves that make up the majority of Western nations, likely including you too, as well as the slaves all over the world producing things in farms and mines so higher-level slaves (you?) can, e.g., feel virtuous and privileged by being obedient and rewarded and, e.g., by driving electric vehicles.
Slavery never ended, folks, it just pivoted the business model and if you are reading this here, you are just a more privileged slave, maybe even a slave master.
Power to the people.