All: this is an interesting submission—it contains some of the most interesting writing about moderation that I've seen in a long time*. If you're going to comment, please make sure you've read and understand his argument and are engaging with it.
If you dislike long-form Twitter, here you go: https://threadreaderapp.com/thread/1586955288061452289.html - and please don't comment about that here. I know it can be annoying, but so is having the same offtopic complaints upvoted to the top of every such thread. This is why we added the site guideline: "Please don't complain about tangential annoyances—e.g. article or website formats" (and yes, this comment is also doing this. Sorry.)
Similarly, please resist being baited by the sales interludes in the OP. They're also offtopic and, yes, annoying, but this is why we added the site guideline "Please don't pick the most provocative thing in an article to complain about—find something interesting to respond to instead."
In the US, where Twitter & Facebook are dominant, the current consensus in the public mind is that political polarization and radicalization are driven by the social media algorithms. However, I have always felt that this explanation was lacking. Here in Brazil we have many of the same problems but the dominant social media are Whatsapp group chats, which have no algorithms whatsoever (other than invisible spam filters). I think Yishan is hitting the nail on the head by focusing the discussion on user behavior instead of on the content itself.
One of the things rarely touched on about Twitter / FB et al is that they are transmission platforms with a discovery / recommendation layer on top.
The "algorithm" is this layer on top and it is assumed that this actively sorts people into their bubbles and people passively follow - there is much discussion about splitting the companies AT&T style to improve matters.
But countries where much of the discourse is on WhatsApp do not have WhatsApp to do this recommendation - it is done IRL (organically) - and people actively sort themselves.
The problem is not (just) the social media companies. It lies in us.
The solution, if we are all mired in the gutter of social media, is to look up and reach for the stars.
I think this gets almost all the way there but not quite — there is one more vital point:
How we act depends on our environment and incentives.
It is possible to build environments and incentives that make us better versions of ourselves. Just like GPT-3, we can all be primed (and we all are primed all the time, by every system we use).
The way we got from small tribes to huge civilizations is by figuring out how to create those systems and environments.
So it's not about "reaching for the stars" or complaining about how humanity is too flawed. It's about carefully building the systems that take us to those stars!
But there are communities that make it work, and I believe these are negatively affected by the general rules we try to establish for social media through such systems.
I don't believe any single system can be the solution, and for a lot of communities it isn't a requirement either. I don't know what differentiates these groups from others; probably more detachment from content and statements. There is also simply a difference between people who embraced social media to put themselves out there and ghosts who use multiple pseudonyms. Content creators are a different beast: they have to be more public on the net, but that comes with different problems again.
I believe it is behavior and education that would make social media work, but not with the usual approaches. I don't think crass expressions with forbidden words or topics are a problem; on the contrary, they can be therapeutic. I'm saying this because it will be the first thing some people try to change. Ban some language, ban some content, the usual stuff.
- By “failure of the algorithm”, the vocal minority actually mean “lack of algorithmic oppression and treatment according to how well a piece of speech aligns with academic methodologies and values”.
- Average people are not “good”; many are collectivist, with varying capacity for understanding individualism and logic. They cannot function normally where constant virtue signaling, prominent display of self-established identities, and the alignments above are required, such as on Twitter. In such environments, people feel and express pain, and make an effort to recreate their default operating environments, overcoming the system if need be.
- Introducing such normal but “incapable” people - in fact honest and naive, just not post-grad types - into social media caused the current mess, described by the vocal minority as algorithm failures and echo-chamber effects, and by the mainstream public as elitism and sometimes conspiracy.
Algorithmically oppressing and brainwashing users into alignment with such values would be possible, I think (and sometimes I consider trying it for my own interests; imagine a world where every pixel seems to have had 0x008000 subtracted - it’s my weird personal preference that I don’t like high saturations of green), but an important question of ethics has to be discussed before we push for it, especially with respect to political speech.
How do you go about determining what is collaborative or "bridging" discourse, though? That seems like a tricky task. You have to first identify the topic being discussed and then make assumptions based on past user metrics about what their biases are. Seems like you would have to have a lot of pre-existing data specific to each user before you could proceed. Nascent social networks couldn't pull this off.
This also seems to be gameable. Suppose you have blue and green camps as described in the linked paper. And if content gets ranked high when it gets approval from both blue and green users then one of the camps may decide to promote their opinion by purposefully negatively engaging with the opposite content in order to bury it.
This seems no different from "popularity based" ranking mechanisms (e.g. Reddit) where the downvote functionality can be used to suppress other content.
Maybe the assumption is that both camps will be abusing the negative interactions? But you can always abuse more.
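As a rough illustration of the difference between a popularity-based ranker and a "bridging" ranker (and of the gaming concern above), here is a minimal sketch; the formulas are illustrative assumptions of mine, not the actual method from the linked paper or any real platform:

    # Illustrative comparison of a popularity score vs. a "bridging" score.
    # The formulas are assumptions for the sake of example only.

    def popularity_score(up: int, down: int) -> int:
        """Reddit-style: net approval, regardless of who it comes from."""
        return up - down

    def net_rate(up: int, down: int) -> float:
        total = up + down
        return (up - down) / total if total else 0.0

    def bridging_score(blue_up, blue_down, green_up, green_down) -> float:
        """Reward content both camps approve of: take the weaker camp's net
        approval rate, so one camp's enthusiasm alone can't carry a post."""
        return min(net_rate(blue_up, blue_down), net_rate(green_up, green_down))

    # A genuinely cross-camp post scores well:
    print(bridging_score(blue_up=80, blue_down=20, green_up=70, green_down=30))   # 0.4

    # The gaming scenario above: green users mass-downvote a post blue users
    # like, dragging its bridging score far below its raw popularity.
    print(popularity_score(up=95, down=105))                                      # -10
    print(bridging_score(blue_up=90, blue_down=10, green_up=5, green_down=95))    # -0.9

Under a formula like this, a single coordinated camp can still bury a post, which is exactly the gaming concern raised above; the bridging criterion changes what gets boosted, not what can be suppressed.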
As despicable as Facebook is, I wish someone there would just come out and say “have you looked at yourself in the mirror?”
It’s mindboggling to see them passively accept the questioning without mentioning that people are wolves and sheeples, and you don’t really need Facebook to join the two.
Platforms like Facebook only bring us closer but the rotten core is within us.
If there is a lot of food, people will lean towards sharing their food, and helping others - even anonymously in a truly "altruistic" sense.
If everyone is starving, people will lean towards violence and stealing.
Does this "expose the rotten core within us?", or is it just saying we have the capacity for both? If we were truly completely rotten we wouldn't share in either case.
The fact is that environment and circumstance are inextricably tied to our behaviour - a fact that we seem to want to see negatively, our ideal being a perfect person who operates with true altruism in all circumstances regardless of personal cost.
Throwing our hands up and saying "go look at yourself in the mirror" misses the big picture in my view, which is that if you want a good behavioural outcome, the environment and context of the behaviour is one of the biggest factors, and is a big target to attempt to improve. If more people are operating in a positive environment, you get more positivity.
Personal accountability is a thing, yes, but it is quite a lot more difficult to instil and improve from within a smaller slice of the world, like an app; it is more of a broader societal concern, or would otherwise require propagating an ideal with a reach that is quite hard to achieve through your app.
I agree people are not rotten to the core, but we keep blaming the system and thus absolve ourselves of any personal accountability. Both are important. But so far it seems people have forgotten "looking in the mirror". Everyone keeps blaming everyone else. So many conflicts would automatically be defused if people looked within. Nobody wants to do that because it's hard. Blaming others and the system is easy. Platforms providing a healthy environment is just as important as personal accountability.
Also comparing our social media squabbles to food security is not right imho.
People haven't forgotten anything. People are generally the same as they have always been, except perhaps on average they are more educated at a global level.
> If there is a lot of food, people will lean towards sharing their food, and helping others - even anonymously in a truly "altruistic" sense.
> If everyone is starving, people will lean towards violence and stealing.
This seems like game theory, and there are historical incidents, like WWII, where people were put into forced famines and didn't act this way. People aren't purely self-interested or "logical"; there is something else there besides self-interest.
>As despicable as Facebook is, I wish someone there would just come out and say “have you looked at yourself in the mirror?”
That's orthogonal. Human nature is what it is, we have to work with what we have, and at best change that slowly within a culture/society.
Facebook on the other hand, and how it operates, is almost infinitely malleable, and can be changed with just programmers working on the change.
And it's not some neutral playground that's "only bring us closer" but an active agent, which has policies to create echo bubbles, stir up engagement, and use distraction and partisanship for maximum profit, and whole teams dedicated to it.
So, no it's not like a knife that can be used to cut a cake or kill a person, and it's "just up to us how we use it".
It's more like an automatic rifle with a laser sight carefully designed for maximum hurt, and promoted as such to customers (in this case, advertisers), with product teams devoted to stirring up conflict to improve the rifle-selling business...
It would be, if the Green Lantern ring came with teams doing psychological studies on how to fuck with your mind for profit and increase the time you spend with it, and if its business model depended on framing how you discuss and what news and stories you see...
> The problem is not (just) the social media companies. It lies in us.
my take is that social networks opened a new 'space', and just like every new space, old rules / regulations / institutions got thrown out to enjoy the newfound freedom, until people experience the same issues - aka the need for organization - and thus reinvent the same kind of rules they walked away from. It's a kind of cycle.
such freedom itself is not a benefit, it's more like removing brakes from your car to save weight and maintenance cost.
It's not like friendster and myspace caused these issues. It's the social media platforms that are absolutely designed to cause polarization to drive up platform metrics.
It's the difference between a carefully tended forest and a forest where the "tending" made sure there was flammable underbrush spread everywhere. Sure it is the match that started the forest fire, but there is a world of difference between the two.
I wouldn't be surprised if future generations looked at these few decades and wondered "crazy times". You are right this is new and it would take decades before we master this new "space".
This explanation would also indicate why we had less of this issue in the pre-Internet era of broadcast media.
Broadcast is centralizing the conversation, and to maximize viewership / listenership they are encouraged to talk about a broad variety of things... But they're the ones doing the talking.
Apart from choosing to tune into specialized shows, there's less self-sorting possible when the channels and content are finite. You can't easily just tune into the stories you want to hear on the six o'clock news; someone else is deciding relevance and the topics chosen to be relevant are seen by everyone.
But either introduce bidirectional communication or the modern cable dynamic of a thousand channels, and that homogenization goes away.
Did we have fewer problems in the pre-internet era?
We had WW1 and 2, Vietnam. Civil wars, internment camps, genocide.
Lots of these were approved and even celebrated by the majority of the population when all they had was broadcast media telling them what to think about.
I think it's wild how much people romanticize "traditional media." Broadcast and print media is and always has been awful.
I'm not talking about fewer issues in general, I'm talking about less of this issue: the issue that people are self-radicalizing by sorting into echo chambers where their priors are reinforced, instead of being compelled to hear someone else's worldview or a broadcaster-controlled consensus worldview.
Broadcaster-controlled consensus worldviews can be faulty. But replacing those faults with the self-sorting of the listener is no improvement, because listener worldviews are also faulty.
Sorry, but this isn't "organic". There is a lot of money being spent to produce viral content, and there are a lot of mobile phone farms to spread it. "Organic" is just the last mile.
Brazilian here. This explanation is indeed lacking. You cannot look at social media platforms in an isolated manner: there is overlap between users, and it's not like all users are equal - a small minority (which accepts new entrants through viral tickets) is the one with more weight in creating/disseminating content. Saying whatsapp is dominant is a little lacking: it has the greatest number of users, DAU, etc., yes... But whatsapp is more for social bubbles: it's hard to interact with people you don't know (unless you join a group, but then again you will only be there if it's accessible through your bubble); and there are no paid ads - whereas we know there is an industry of fake-news advertisement where people PAY to establish top-down content by force.
So it goes something like this for a user, using a real anecdote as an example:
1) Get exposed to content they've never seen before, like let's say "The UN in reality is run by satanist pedophiles". This is more likely to happen outside of one's bubble - therefore not on whatsapp.
2) Get exposed again and again, either to the content, or high-reach influencers, or ads, or a mixture of the 3.
3) Search google for "UN satanist pedophile". Now you can read "real theory" by "very legitimate people" in long form explaining the proofs, history, motivation, etc for the conspiracy.
4) Radicalization
5) Now you need validation for your behavior/beliefs.
6) You identify and approach, or get approached by, like-minded lunatics.
7) They invite you to the whatsapp group "only for believers".
8) Now you feel part of a group and ready to "act".
The same kind of polarizing, brinkmanship-like content that people complain comes from social media algorithms is present in spades on every non-algorithmic community I've spent time on: from HN, to Usenet, to Matrix/Discord rooms, to 4chan. I used to agree with this consensus, but even after concertedly trying to avoid these platforms I find this kind of content throughout every English-speaking community, even on topics that aren't politics. I've changed my viewpoint; while algorithms don't help, I think getting angry at them is just shooting the messenger.
The problem is not the algorithms. It is the internet itself.
The internet enables multicast communication, whereas before we could only afford broadcast communication. Broadcast requires the same message to be sent to all, so it forces that message to be moderate and non-polarizing.
We always harbored the ability to be polarized according to our beliefs, but the technology didn't yet exist to allow that.
I believe the underlying problem is the actual human programming deep in our genes. People tend to get organized in clans (i.e. football teams in Europe, political parties etc) and to host polarized opinions such as conspiracy theories that tend to question the status quo.
There are two reasons for that:
1. Psychological Reason: It is tempting for an uneducated man to adopt an opinion that derides existing knowledge, because he then feels taller and not at a disadvantage when confronted with the more elaborate and complex nature of the truth, or with educated people.
2. Biological Reason: People polarize against commonly accepted truths (i.e. Covid) as a safety precaution against humanity's annihilation due to lemming behavior. So in case the majority belief ends up in a catastrophic event, the 10%, 5%, 1%, 0.001% minority will survive.
This seems to imply that the educated man is always right and should not be questioned by others. Regardless of the education level of the questioner (or of the educated man), I can't see that being a good thing.
Educated people can be wrong alright. However really educated people are rarely absolute in their positions because they know that the world is not black and white and almost always the answer is "it depends" and "let's see it from the other side".
However, polarizing views are attractive to uneducated minds because they often give their believers simplistic views, potential enemies and pseudo-arguments with which to nail the "educated" or even just common-sense people. It's frightening, really, to see in action the amount of flattery that goes on in such cases.
I'm afraid to say that I disagree with you wholeheartedly. I have watched, several times over the past few years, experts and non-expert well-educated people actively deride scepticism of any nature. That is, it doesn't matter what the expertise of the person or group is, what their experience is, whether their concerns have been used elsewhere by those they criticise, or whether their concerns were once seen as completely reasonable and thus natural questions to ask - they should not be asking them, hence, they are not an expert or they are stupid.
I see this from utopianists, who believe that "if only these people would get out of the way we could fix this problem and reach utopia! If they are against us they must either be evil, stupid, or both, as our position is good (quality), good (morally), and reasonable."
The next step is cancellation. I suppose we're lucky that cancellation is as far as society is willing to go at this point but we know that these things can get much worse.
As such, I must side with strong scepticism of any view regardless of expertise (i.e. argument by authority as a fallacy is the default position) and any question, by anyone, of any standing, must be taken seriously. As J. S. Mill put it:
> If all mankind minus one, were of one opinion, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind.
Do you think you (and your ingroup members) do not suffer from any of these sorts of problems, or suffer from them substantially less on an absolute scale?
I have caught myself many times craving the simplistic, easy answer and the ready-made explanation. The one that tingles your ego, suggesting: "How has nobody thought of that? Wow! I am really smart, or everyone else is dumb, which amounts to the same thing."
In order to escape its allure it takes one or more of the below:
a) self discipline which is not possible to have for all matters and domains
b) external factors that force you to delve in to the thing at hand (a job)
c) in specific instances you make it a hobby and you gain more intimate knowledge that can't be simplistic black and white.
i.e. - id est - "that is" (i.e. a clarifying restatement of what was just said)
e.g. - exempli gratia - "for example" ("I like to post comments about language and usage of English, e.g. clarifying the use of certain Latin abbreviations")
The algorithms are tuned to vastly extend the time users spend on a network. They are going to moderate stuff if it improves the time spent on that network.
Of course there are political reasons for which algorithms also do content filtering and censorship, but the money reason trumps all: the more time you spend on a network, the more ads you see; the more time you spend on a network, the more content you produce and the more you interact with others, and that produces a smell that attracts other small insects, which will in turn be served ads and produce more content and interaction.
> I think Yishan is hitting the nail on the head by focusing the discussion on user behavior instead of on the content itself.
But the user-behavior problem can be solved cheaply, easily and in a scalable way:
Give each user the ability to form a personal filter. Basically, all I need is:
1. I want to read what person A writes - always.
2. I want to read what person B writes - always, except when talking to person C.
3. I want to peek through the filter of a person I like - to discover more people who are interesting to me.
4. Show me random people's posts, say 3-4 (configurable) times per day.
This is basically how my spinal brain worked in unmoderated Fido-over-Usenet groups (Upd: yes, this “oh how everyone will die without moderation” problem is that old). Some server help will be great, sure, but there is nothing here that is expensive or not scalable.
PS: centralized filtering is needed only when you are going after some content, not noise.
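For what it's worth, a minimal sketch of those four rules as a purely client-side filter might look something like this (all names and fields are hypothetical, not any platform's actual API; rule 4's "3-4 times per day" is approximated here as a sampling rate):

    import random
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Post:
        author: str
        text: str
        reply_to: Optional[str] = None   # author being replied to, if any

    class PersonalFilter:
        """Client-side filter implementing the four rules above."""

        def __init__(self, always_read, mute_pairs, discovery_rate=0.01):
            self.always_read = set(always_read)    # rule 1: always read these authors
            self.mute_pairs = set(mute_pairs)      # rule 2: (author, reply_to) to skip
            self.discovery_rate = discovery_rate   # rule 4: occasional random posts

        def wants(self, post: Post) -> bool:
            if (post.author, post.reply_to) in self.mute_pairs:
                return False                                   # rule 2: B talking to C
            if post.author in self.always_read:
                return True                                    # rule 1
            return random.random() < self.discovery_rate       # rule 4: discovery

        def peek_through(self, other: "PersonalFilter") -> None:
            """Rule 3: borrow a liked person's filter to discover new authors."""
            self.always_read |= other.always_read

    # Always read A and B, except when B is talking to C:
    mine = PersonalFilter(always_read={"A", "B"}, mute_pairs={("B", "C")})
    print(mine.wants(Post(author="B", text="hi")))                # True
    print(mine.wants(Post(author="B", text="hi", reply_to="C")))  # False

The "help from the server" mentioned above would mostly amount to delivering enough metadata (author, reply target) for the client to apply rules like these.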
I disagree, we can't frame this discussion around only the content. My whatsapp "feed" is doing just fine. The problem is all the other whatsapp groups that I'm not in, which are filled with hateful politics. It hurts when I meet in real life a friend that I haven't seen in a while, and then find out that they've been radicalized.
The radical Bolsonaro whatsapp groups are a mix of top-down and grassroots content. On one end there is the central "propaganda office", or other top political articulators. At the bottom are the grassroots group chats: neighborhoods, churches, biker communities, office mates, etc. Memes and viral content flow in both directions. The content and messages that resonate at the lower levels are actively distributed by the central articulators, which have a hierarchy of group chats to circulate new propaganda as widely as possible. You can see this happen in real time when a political conundrum arises, e.g. a corruption scandal. The central office will A/B test various messages in their group chats, and then the one that resonates best with their base gets amplified, and suddenly they manage to "change the topic" on the news. The end result is that we just had 4 years of political chaos, where the modus operandi of the government was to put out fires by deflecting the public discourse whenever a new crisis emerged. It's not merely a content problem that could be solved by a better filtration algorithm. It's a deeper issue of how social media can affect human behavior: how quickly ideas can spread, how easy it became for people to self-organize into "bubbles", and so on.
It’s a hurtful topic and has complicated ramifications, but…
> other whatsapp groups that I'm not in, which are filled with hateful politics.
That brings the discussion from you to other users, and your goal seems to be to change how other users are affected by their time on a platform.
> The radical Bolsonaro whatsapp groups are a mix of top down and grass roots content.
You seem to be talking about motivated individuals who formed groups with a specific purpose and express their views and propositions in these specific groups (which you don’t belong to, as mentioned first)
Whichever way we slice it, short of the platform banning their related topics entirely, I don’t see how a neutral service could act against these people/ideas spreading, from a mere platform POV. You’re talking about thought-policing private groups, basically.
For sure, it would be foolish to assume that the problems could be solved by changing the software, or passing a new law. Policing private groups is impossible. We're living in a new world of social media and we still need to figure out how to deal with that. I hope that in the long term there is some kind of culture shift with how people interact with the internet, because whatever we're doing today isn't working.
By the way, I mentioned Bolsonaro explicitly because those groups are currently the better organized ones in the Brazilian social media space, masters at exploiting the Whatsapp dynamics. But the fundamental problem is not exclusive to the extreme right.
The extreme right is literally the fundamental problem. People keep saying "both sides" in this thread, and honestly in most places in the world (especially the US, Europe and, as mentioned often on this thread, Brazil) the extreme right IS the problem. We don't need to invent scary leftist groups that threaten to overthrow the US government by storming the US capitol with nooses; we can just admit that it is the far right doing this en masse, not the left, and drop this stupid need to portray these global changes as political radicalization on both sides rather than white nationalism becoming more openly violent and extreme.
Far right extremists are the problem, keep "both sides" out of this.
How precisely does that protect you from the people who are seeking those out? Filtering your feed doesn’t cause them not to vote against their interests, firebomb your store, or attempt to assassinate leaders.
People are not seeking those groups out of nowhere.
Most of them are getting pushed there from automated platforms bombarding them with things they can't stand. The automated outrage machines are what radicalize people, not the private message groups.
By the time you have private message groups propagating the same messaging the damage has already been done. To get people out, you need to cut off the source of the outrage and soothe whatever fear is being ramped up.
Censoring the outrage and dismissing/patronizing people from a position of authority doesn't work, it just acts as an accelerant.
I think the crazy groups will sputter out if there's enough space for venting, free association, and the ability for the average person to turn off things they don't want to see. I think that last point is the key. Millions of people are bombarded with things they really don't like in systems they have no way to really manage themselves. The legitimate complaints will filter up and out of the crazy circles if people are allowed to form their own groups and don't feel forced to pack themselves together around the loudest people who can give them a voice on the big automated feeds they don't really understand.
People have a much better chance at effectively communicating with their neighbor or family member or old friend with a different perspective than a stranger online with a different perspective, and they are much more likely to engage when in a frame of mind that will actually accept differences of opinion if they are managing their own connections and exposure themselves.
Automated platforms such as twitter certainly play a role in the social media landscape. But in Brazil they are not dominant. Whatsapp rules supreme and is the primary source for news for over 75% of Brazilians. Viral content here starts on Whatsapp and flows out to twitter, instead of the other way around. Some viral content does get bombarded, but it's actively done so by humans instead of by recommendation algorithms.
Interesting, that violates some of my assumptions. I think the basic model I'm suggesting might still be at play, though. I'm guessing most of those whatsapp groups are really big, so they effectively prevent people from insulating themselves from outrage if they just want to know what's happening. They can't just form a group with their friends and find other channels to stay informed with minimal outrage, and can't find news that's just straight news; the main source of news is the big viral firehose where outrage dominates. Does that seem accurate?
This conversation also speaks to the social complexity that gets obfuscated by some of these big platforms and the fact that things are different in all kinds of ways in different parts of the world. I don't know really anything about Brazil besides some fairly superficial basic history/culture and have pretty much zero contextual understanding of what it's like to live/communicate there. The fact that the same thing seems to be happening all over the world is really weird.
That's why I think it has a tight relationship to scale of communication channels, as that seems to be the main thing that's changed recently and is consistent regardless of specific platform/means of communication.
People flock to other people that make them feel the way they want to feel, and tell them what they want to do is okay.
People find these groups organically too, or form them themselves, through friend associations, not just by having them pushed.
A lot of people like to feel angry against someone (the other), as it gives them a sense of control and power. Joining others who are similarly aligned, also gives them the sense of belonging.
People who are in these states aren’t going to filter out their friends, or filter out the people telling them things they want to hear.
They’re going to filter out things that make them feel uncomfortable, or that they don’t agree with - like anyone who would stop them from going further down the rabbit hole!
If the assumption is that as a consequence of allowing people to filter things themselves more, everyone will just get along, understand each other, and all political problems will disappear/people will become enlightened and cease to be ignorant, then yes, it's incredibly naive. This was the case made by people setting up these platforms without filters.
That's not my assumption. I think echo chambers are impossible to prevent and ignorance is something most people gravitate towards. I think many people are likely to find reasons to vilify and misunderstand each other until the end of time.
What I think is possible to curtail, and is being ramped up now, is viral conflict escalation. If active conflict is curtailed, that expands the amount of peace time available for actually effective communication, if and when the opportunity arises over long stretches of time with a lot of effort. If you have an irrational hatred towards people with X quality/opinion because of bad information, are you more likely to become agitated and attack people if a) people with X quality/opinion aren't visible, or b) people with X quality/opinion are unavoidable?
The ideal is obviously to get people to understand each other better, but I think we're seeing you can't just do that by superficially exposing everyone to everyone else. It requires a more sophisticated approach that respects people enough to have their own autonomy and seek out understanding and better information voluntarily through more personal interaction and influence.
There are natural forms of competency and organizational filters that allowed us to progress to where we are today and nudge people towards greater understanding of each other and cooperation, which punish bigotry and unintelligent assumptions through natural rather than artificial consequences. Increased social cohesion and progress was not a top-down process in the past and does not need to be a top-down process now, and I think an attempt to manage global conversation is just as naive as the naive open-global-village kumbaya assumptions that got us into this mess. People's autonomy needs to be respected if you want them to consider a more enlightened perspective.
Near as I can tell, these ‘natural forms of competency and organizational filters’ are not natural, and never have been.
If they were, Nazi Germany would not have been what it was.
Also, baked into what you’re saying seems to be an assumption that if no one was prompted to start a fight, no one would fight. And that is true sometimes, with many people. But that is definitely not true of everyone, let alone everyone all the time!
The forms of social organization which are natural to us are easily directed towards hate of ‘outside’ groups, which can be easily constructed from any easily identifiable group of people with the right set of conditions. Even without prompting, it naturally arises based on visual identifiers and in many common environmental triggers.
In fact, near as I can tell, the only thing stopping larger scale movements of exactly that type is cultural memory of the death, destruction, and terribleness that results from it, producing active pressure against anyone trying to form a similar group. The ‘my grandfather fought a war against Nazis, you’re not going to be a Nazi’ type of memory.
Which fades with time, and isn’t natural to anyone?
> Filtering your feed doesn’t cause them not to vote against their interests, firebomb your store, or attempt to assassinate leaders.
Silencing people so that they will vote against their own interests - it’s the last thing I ever can wish. Ends badly.
As to firebombing and assassination - wouldn’t you rather have them discuss and plan such things openly, so that the police can easily apprehend them?
Or, as I’ve already described here - in 1990, during Gorbachev’s glasnost (kinda sorta freer speech), our county seat newspaper, with the proud name Kommunist, published my article about a successful economic experiment we’d done in our town. In the next issue the same paper published letters from angry “workers” naming me “an enemy of the people”. Which is a pretty serious accusation in the Soviet Union, historically speaking. It basically meant I should be killed if Gorbachev ever lost power to the conservatives. A death threat, basically. And that threat was THE most important information I’ve ever gotten in my life. Why on earth would I wish it silenced?
> wouldn’t you rather have them discuss and plan such things openly, so that police can easily apprehend them?
That’s not the threat here: the problem is the people constantly promoting messages which stay on the legal side but encourage others to act. People like the woman behind LibsOfTikTok are careful not to incriminate themselves, but they know there are people who will hear what they say and act on it.
That’s why moderation can’t be individual: the people most susceptible to lies and propaganda are willingly seeking it out. That’s always been a problem but social media has made it orders of magnitude easier to discover.
> People like the woman behind LibsOfTikTok are careful not to incriminate themselves
From what I know, this person literally just reposts crazy liberal videos verbatim. The same thing has and will continue to be done to crazy conservative ideas.
> but they know there are people who will hear what they say and act on it.
What is the account saying and hoping people will act on? It's bringing crazy ideas to the forefront to expose their ridiculousness. It's no different than people bringing the "lizard people" conspiracy theories to the forefront. It exposes how ridiculous these ideas are by taking them out of the deeply nested echo chambers they started in.
> From what I know, this person literally just reposts crazy liberal videos verbatim. The same thing has and will continue to be done to crazy conservative ideas.
That’s how this works: Chaya Raichik claims to just be sharing real things but in reality is often providing heavy spin or outright misrepresenting what she shares, encourages hate (e.g. she’ll routinely label LGBTQ people as child groomers), and takes no responsibility for her role in inciting violence.
This culminated in children’s hospitals having to react to bomb threats over the summer over false claims she popularized:
> the people most susceptible to lies and propaganda are willingly seeking it out. That’s always been a problem but social media has made it orders of magnitude easier to discover.
What you are basically saying is: democracy (which is impossible without free speech) is unable to work once social media is invented and deployed, because some people are susceptible to lies and propaganda. I kinda doubt the truth of such a radical statement. Democracy is not an ideal thing, but look at what happens with the alternatives - when some wise men decide what is better for those who are susceptible (and for everyone else too).
No, what I’m saying is that democracy requires constant reinforcement - people have to stay committed to the ideas around sharing power, respecting other citizens they disagree with, and some basic shared reality.
That doesn’t mean we can’t have social media but it means that we need to have some basic constraints, and that companies need regulation, especially for what they promote.
Also, to be clear I’m not saying that this is unique to social media - for example, the current wave of anti-democratic sentiment in the U.S. has been promoted on cable TV as well - but that we can’t continue to ignore social media at a time when it’s shaping so much popular opinion.
That's not a solvable problem. People are going to try to kill their leaders if they get upset enough. And given how high the stakes are in the modern state, which controls every aspect of our lives, it's going to happen. Whether that's Ortega-Hernandez trying to kill Obama or Hodgkinson trying to kill Republican senators...
The best you can do is try and convince them otherwise, which requires talking to them rather than ejecting them from your community and having them form little communities out of the rejects that then radicalize.
> The IYI [Intellectual Yet Idiot] pathologizes others for doing things he doesn’t understand without ever realizing it is his understanding that may be limited. He thinks people should act according to their best interests and he knows their interests, particularly if they are “red necks” or English non-crisp-vowel class who voted for Brexit.
> When plebeians do something that makes sense to them, but not to him, the IYI uses the term “uneducated”. What we generally call participation in the political process, he calls by two distinct designations: “democracy” when it fits the IYI, and “populism” when the plebeians dare voting in a way that contradicts his preferences.
Great solution!
What will you do if your users then filter out everyone who disagrees with them and end up forming self-radicalizing echo chambers which are highly polarized?
Echo chambers - in my experience, starting from Fido - are usually created by moderators. Left unmoderated, people like arguing no less than they like having others around who share their views.
My feed is already fine; no one is posting hateful politics on the group chats that I am in. The problem is that even if the hateful content doesn't reach me, it's still out there reaching other people. In the US, they ended up with stuff like the Capitol Insurrection. Here in Brazil, as we speak there are disgruntled "stop the steal" extremists blocking roads and calling for a military coup.
It's a social problem that's largely been created by technical solutions. To put it simply, radicalization is a lot easier when you give every loon with an axe to grind both a megaphone and an echo chamber to invite new recruits into.
Now, there are also problems with a world that doesn't give everyone access to 1:N broadcast media, but they are different problems.
Well, noise/signal ratio is rather a technical problem, which can be solved by technical means.
Wrong signal - is not a problem for society, in my always humble opinion. It’s a problem only for those who want bad things. Silence helps them, not the society.
because HN users (the royal we) strongly believe in an automated technocracy by means of evolutionarily derived information technology techniques. This is a pseudo-religion without a sound scientific basis. Which I have my faith in, by the way.
Personal user filtering can't be the only form of moderation. There is some content that I don't care if it exists as long as I don't see it. Spam is one example of this. However there is some content that its existence on the platform makes me less likely to use the platform even if I never see it.
I'm sure Yishan knows this from his time at Reddit. They seem to go through a controversy about this every year or two[1] with most of the offenders focussed on either hate or porn. /r/jailbait is one of the early examples that comes to mind. It hosted risqué, but apparently legal photos of underage girls. Reddit couldn't just tell people "Don't visit those subreddits if you don't want to see that content". Most people found the content objectionable enough to want it removed completely. Its mere existence made normal users less likely to use Reddit despite users needing to go out of their way to see that objectionable content.
> Its mere existence made normal users less likely to use Reddit despite users needing to go out of their way to see that objectionable content.
To me it’s a kind of perverse logic. If a platform hosts people that some powerful group wants removed - then I can trust that platform to be less likely to remove me, when and if I cross some powerful, or just loud and angry, people.
Like: there is a lot of people in the US that I do not like, but I rather live in a country with 14th Amendment than in a country without one. I’ve tried both :)
It’s funny you mention the 14th Amendment. It is a little over 150 years old. For the first century interracial marriage wasn't on the list of protections provided by the 14th.
When enough people consistently say you shouldn't be allowed to do something, we won't allow you to do it. When enough people say you should be allowed, you will be. Maybe that is cynical, but that is life in a democracy. Churchill was right about it being the worst form of government.
>Exactly. You guys can lose your 1st amendment pretty quickly.
The example of the 14th Amendment was meant to show that these amendments don't have some innate meaning or special true power. The 1st Amendment only ever means what the majority wants it to mean (a loose definition of majority, the US is a republic not a democracy, etc.).
We will never lose the 1st Amendment, however we are constantly having a debate over what the 1st Amendment actually means. From a practical perspective, your rights are not innate or god given, they are majority given. That is how these systems work and no one has yet found a system that can improve on that approach.
> However there is some content that its existence on the platform makes me less likely to use the platform even if I never see it.
This really confuses me. Does the fact that people send racist insults at each other in Call of Duty matches via the Internet make you less likely to use the internet?
I think the idea is that the platform has the power to prevent people from saying these things you find (extremely) objectionable, and so you feel compelled not to use said platform if they refuse to do so (and thereby appear to be passively endorsing the behavior).
> and thereby appear to be passively endorsing the behavior).
This “passive endorsing” is a great social construct! It allows anybody to go after TMobile for passively endorsing all the lies I said to pretty women over the phone.
That seems like a bit of a strawman. There's a pretty significant difference between public and private speech. Regardless, I'm not arguing what or when these platforms should censor. Just trying to explain why certain content on a platform could cause someone to choose to stop using that platform, even if they're not personally exposed to that content.
> Just trying to explain why certain content on a platform could cause someone to choose to stop using that platform, even if they're not personally exposed to that content.
This explanation seems pretty strange to me.
1. If I am not exposed to some content - how do I know that it exist?
2. If I see that the platform does not delete objectionable content and does not force me into contact with it - I would feel much safer that my non-objectionable content would not be deleted either.
I'm not sure why you would be concerned that your non-objectionable content would be deleted regardless. Either way, it sounds like you put greater value in freedom of speech, whereas others put greater value in condemning speech they find harmful. I can see how reasonable people could have differing priorities there.
> However there is some content that its existence on the platform makes me less likely to use the platform even if I never see it.
> ...
> /r/jailbait is one of the early examples that comes to mind. It hosted risqué, but apparently legal photos of underage girls. Reddit couldn't just tell people "Don't visit those subreddits if you don't want to see that content". Most people found the content objectionable enough to want it removed completely. Its mere existence made normal users less likely to use Reddit despite users needing to go out of their way to see that objectionable content.
Are there metrics showing that reddit usage decreased because of subreddits like that? It's possible that most normal users who aren't actively searching for such content may not even be aware of that subreddit's existence, and the number who are aware of it and who would choose not to use reddit is not significant enough to affect overall usage patterns.
On the usenet side, Andrew Cuomo (back when he was the attorney general of New York) made a deal with a number of large ISPs in terms of dealing with CP getting posted to usenet. It's almost certain that those who were just using usenet for their discussions were not searching for CP on usenet, so their usage patterns would not have been affected.
But what did affect it was ISPs dropping usenet service, which led to people no longer posting and groups dying off because no one was posting. This isn't something that would happen to reddit unless all major ISPs started blocking access to it.
1(b) I want to read what person A writes about #hashtag
1(c) I never want to see person A's posts about #hashtag
The latter can be bluntly accomplished by hiding everything about #hashtag, but sometimes you don't want to do that (you still want to see stuff on the topic, just not written by that person).
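Continuing the hypothetical sketch in the parent comment, per-(author, hashtag) rules could be layered in front of the general per-author rules (again purely illustrative, with made-up names):

    # Hypothetical extension for rules 1(b)/1(c): (author, hashtag) allow/deny
    # entries are consulted before falling back to the general per-author rules.
    hashtag_allow = {("alice", "#rust")}       # 1(b): always read alice on #rust
    hashtag_deny = {("bob", "#politics")}      # 1(c): never read bob on #politics

    def wants_with_hashtags(author: str, hashtags: set, base_wants: bool) -> bool:
        if any((author, h) in hashtag_deny for h in hashtags):
            return False
        if any((author, h) in hashtag_allow for h in hashtags):
            return True
        return base_wants  # fall back to the general per-author rules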
This doesn't work and it seems like you ignored his entire point. If moderation is being done to reduce the tone of angry conversations, then you have to actually effectively reduce it. If every person out there is running a personal filter list and ignoring your mods' decisions then there is no way to do that and the emotional state is completely uncontrollable.
Which you'll find you won't enjoy. You'll also find you can't keep up with these filters of yours; surely you have better things to do than personally moderate every forum you use?
Thank you for your thoughtful reply. I’ve participated in such kind of discussions in 1990s when we were moving our Fido group to Usenet. Fido was very strictly moderated, since data was carried over people’s personal phone lines and nobody wanted to spend money on carrying garbage. But Usenet has no such restrictions and eventually we decided to go unmoderated, because no matter whom we elected - and we knew these people rather personally over the years - moderator’s powers were always abused. Going unmoderated was scary to me at first - I was for keeping the moderation, yes, but the experience turned out to be quite enjoyable - I just needed to teach my spinal brain those simple rules I’ve outlined. Now to your arguments.
Why on earth would I want to control your (for example) emotional state? You are a grown-up person; your medical decisions are yours to make, not mine. Yes, angry conversations are long and stupid and always degenerate into “inventive” cursing - but a lot of people seem to enjoy this style of “argument” - who am I to stop them? Why spoil others’ fun? All I want is not to read, and preferably not even see, that kind of “discussion”.
> You'll also find you can't keep up with these filters of yours
But I can. All I ask is just some help from the server.
> surely you have better things to do than personally moderate every forum you use?
It’s much easier than you think. I do not object to people hiring cleaning companies to clean their houses, but I prefer to clean my house myself. And this Dyson handheld helps very much. Same goes for moderation - why delegate this dirty work to others, especially when I’ve observed how even reasonable people I personally know tend to abuse this power?
Yes, I do remember Usenet - not very well! That was a while ago.
I wasn't a heavy user, but it was a bit of a smaller world back then - that tends to make people behave a bit better (obviously not always…), but there were also fewer other places to go if you got tired of it, it was mainly for personal use, and there wasn't a feeling that someone "owned" it.
These days if a forum has bad vibes people will want to leave it, and there's more of a customer service attitude because they know someone runs it. (Or for a professional/project mailing list, a non-abusive-workplace attitude maybe.)
Similarly the people running it feel more responsible so they want to mod it more. It's mostly just annoying if there's a lot of intense arguments on it, not necessarily bad for business, but if they get really bad there's the worry it drives away new users and eventually gets you legal attention over users, say, stalking each other, or planning crimes, or posting terrorism threats.
> It’s much easier than you think.
It was in the 90s, I just don't think killfile technology has kept up with the rate of modern posting on something like Twitter. There is block and mute but there's not, say, OCR to stop people from screenshotting you and attracting new harassers for years after the fact.
Though I think Twitter just laid off all their mods so we'll see how it goes.
This is the amazing thing Yishan misses. He claims there's no difference between spam filtering, moderation and censorship.
How come he's been in charge at reddit all these years and never looked at it through the lens of "consent"?
Ask, can you consent to this? For spam filtering, the answer is obviously hell yes. That's pretty much how we define spam: stuff we don't want, and which essentially no one wants.
Moderation is when it's still about you and what you want for yourself, but where you concede that some people may want it. And it may be more or less OK that some people want it, but this is not the place for it. If you don't want religion or politics in your game discussion group, that's moderation.
Censorship is about controlling what other people want, in order to shape their opinions and beliefs. You can't coherently consent to it. You're not worried that there's some magically seductive piece of propaganda that will turn you into a Nazi, or a furry for that matter. No, you're worried that there's a seductive piece of propaganda that will work on other people. That's the defining aspect of censorship, that it's for other people's sake, not your own.
Now, censors will often claim that they're only doing it for themselves, that they just get so disgusted by furries (or whatever) that they just don't want to deal with it. A way to test that, is to see how transparent they are about their supposed moderation. If it's moderation, being transparent about it is perfectly fine. But for the censor, transparency is not possible, because even letting the poor flock know that the seductive information exists, could tempt them to go look for it.
Another way, of course, is whether they are content with moderating their own spaces. People that try their damndest to eliminate the thing they hate from existence with DDoS attacks, targeted harassment, or preventing communication they're not even a party to, are obviously censorious, not merely "moderating".
Now, there's a separate discussion of when, if ever, it's appropriate to censor. Maybe if we're talking about people who can't meaningfully consent and are easily manipulated (i.e. children). Maybe if there's some sort of existential issue at stake (e.g. national security).
But let's not pretend, like Yishan Wong does, that consent doesn't matter.
And in WhatsApp groups there is no moderation to prevent the obnoxious behavior described in the thread. The far right fabricates militants to reproduce these bad behaviors and pollute every discussion group.
Everything in that thread seems correct, but it doesn't sufficiently address why things are so bad on social media specifically, since those philosophical problems of language and principle apply to all forms of written communication that have managed to avoid the problems of social media, including wide-open written communication (remember actual bulletin boards instead of virtual bulletin boards? Wide-open text forums are not a new thing).
I'm very confident at this point that the problem is the enormous size of the rooms where everyone is talking.
I don't think user behavior will ever be better in a sufficiently large room. Effective communication requires shared context which is impossible to establish when a room is sufficiently large and sufficiently diverse.
What we're seeing with increased political polarization is I think an attempt by most people to differentiate themselves into groups of like minded people not just so they can form echo chambers, but so they can actually talk and be heard, and hear what other people are saying. You don't get that when the room becomes global in scale, you just get a bunch of noise, as is mentioned.
I think we're in a particularly nasty situation in tech because most of the people involved in creating and administering these platforms have a huge blindspot when it comes to the lack of context hopping ability in the average person, because the people that tend to get into tech tend to be very open to global connectivity and actually do have somewhat of a capacity to hop around and absorb a huge range of different contexts (although much more superficially than I think they realize/that's the price of extreme openness).
I am also fairly confident the solution is to create user-managed and federated platforms. The problem would, I think, solve itself if people were better able to control which online social groups they expose themselves to. We've been organizing ourselves for thousands of years into hierarchical networks of different families/towns/cities/states because those are the only long-term stable systems that can deal with the huge variations in contextual difference between people. You can still have egalitarian cosmopolitan global villages for people who want that, and you can still have emissaries and exchange programs and global shared-interest channels, and encourage cooperation amongst different groups and avoid conflict, but I think the idea of a singular all-encompassing global village was always a terribly thought out, completely untenable idea that would never work at scale.
I was going to write an article on this a long time ago after a similar thread came up here, but I don't have any clout and ended up getting swamped in other things, so I never did. Properly making this case would also require a lot of research and historical comparison to really evaluate, but the more time passes, the stronger my hunch gets that we need to do almost exactly the opposite of what most people seem to think we should, and allow people to differentiate themselves instead of forcing them to be together and all play by the same rules. The echo chambers will become less echoey if you allow people to build their own, which they feel safe enough to poke their heads out of when they feel sufficiently understood in their corner. The echo chambers are the result of gamed automated feeds where people are trying to signal "I want my space" by pulling harder and harder in a direction that makes them less and less likely to be bombarded with stuff they don't want entering their space involuntarily.
All of our brains have limited bandwidth. I can engage with someone very different from me, but not many of them at the same time, and not many who are different along different axes. So if I want to talk - genuinely talk - to someone who's politically, religiously, or philosophically very different from me, one at a time works best.
So, I agree. The enormous size of the room is a problem. If you're X, you can't have a one-on-one conversation with someone who's Y without having a bunch of other Ys jump in, and then a bunch of Xs in response. And then you get a bunch of Zs piling on, and the whole thing becomes a shouting match, not a calm, reasonable conversation.
I echo what others have said. That article sounds important, and useful. Please write it.
If that helps, you can email me a shitty first draft and ask for feedback, if writing the whole thing with all the historical research sounds like a daunting task.
I blame both, but I think the audience/reach factor is more ubiquitous. Specifically, I think the problem is an inability for most people to properly filter in sufficiently large rooms, not the audience/reach factor in and of itself.
This commenter is talking about WhatsApp behavior in Brazil and gives an example of a seemingly similar situation where there doesn't seem to be as much engagement-focused algorithmic influence at play: https://news.ycombinator.com/item?id=33459878
I actually did a deep dive into the statistical studies of Twitter bans and American political leanings.
In both studies I found, the most statistically significant predictor of a ban was whether the user had a tendency to post articles from low-quality online "news" sites.
So essentially, even the political controversy around moderation boils down to the fact that one side, the right, happily posts low-quality news/fake news and gets banned either for disinformation or for other rule-breaking behavior.
One of the biggest political news stories of 2020, Hunter Biden's laptop, was falsely declared misinformation, and the NY Post was accused of being a low quality site. Now we know it's true.
On the other hand, the Steele Dossier was considered legitimate news at the time and "many of the dossier’s most explosive claims...have never materialized or have been proved false."[1].
So I'd like to know exactly what the study's authors considered low-quality news, but unfortunately I couldn't find a list in the paper you linked. In my experience, most people tend to declare sources "high-quality" or "low-quality" based on whether they share the same worldview.
I had the same skepticism as you, but the study authors did attempt to be fair by letting a politically balanced set of laypeople (equal numbers of Democrats and Republicans) adjudicate the trustworthiness of sites. They also had journalists rate the sites, and they present both sets of results (layperson-rated and journalist-rated).
I wish they had included the list so we could see for ourselves. It's still possible that there are flaws in the study. But it appears to take the concern of fairness more seriously than most.
This is a tangent. Twitter declared this story a story about hacked material which they treated as rule-breaking at the time. They changed their rule afterwards, but it is a legitimate rule to have.
The quality was measured by a group that was selected due to their broad range of leanings. It should be in the final paper.
Hunter Biden's "laptop" is a misdirection campaign.
The contents appear to be from Biden but whether or not it was actually sourced as claimed will likely never be verified (the recipient is conveniently effectively blind).
But those that care about this story only seem to care about the story about the story, i.e., that there was effort to prevent propagating it.
I find it fascinating that every person who is hung up on this story doesn't care about the substance or relevance of it.
In regards to substance, it's a story about corruption. The corruption on its face is irrefutable (i.e., Hunter Biden was paid money in an attempt to purchase the influence of his father). That's genteel, old school corruption.
But the relevance of it is that Hunter himself is a private citizen and his (mis)conduct is not really newsworthy. But the implication of it is that his father is equally corrupt (which has not been shown), and that is the entire point of the story.
That charge of corruption is important as a way to deflect any reports of corruption by the political rival of Hunter's father (of which there is significant reporting).
My commenting here is pointless yet necessary: pointless because I've yet to see a cogent rebuttal, and necessary because this topic is toxic and should be countered as best possible.
It wasn't my intention to litigate this story here. I was simply citing it as an example of the difficulty of identifying misinformation.
Also, I think a fair discussion of Twitter moderation should include a post-mortem of one of their most famous mistakes, discussing what Twitter did and what should they have done differently, and not so much the story itself.
(Of course, the substance is also important, but this isn't the right place for that discussion.)
That's just it: I don't think it was a mistake (if we're talking "censorship" about the laptop story).
The story is radioactive. The veracity of it means nothing -- just the phrase is now a talking point, a campaign slogan.
If my points are correct per the prior comment, what good does it serve the people to be amplified in a short attention span economy? Why shine light on dialog that sucks oxygen out of the room?
HN is not the place to come to blows over tribal politics, but I think that discussing policies, strategies, and other political hacking is worthy of being explored if there's willingness to examine the impacts that these actions have.
This is an extraordinary technological measure. Plenty of things posted on Twitter turn out to be false, misleading, or not worth shining light on. This one turned out to be true, and yet earned the most extreme sanction Twitter is capable of: prior restraint on speech.
My original comment in this thread never contested that. There's worthy conjecture of the provenance of same, but that's a different discussion.
What was found to be true was that Hunter Biden was open to profiting from his association with his father (among other distasteful things).
What has not been found to be true (read the report and tell me otherwise) is that Joe Biden himself was corrupt and sold favors. If you cite the "10% for the big guy" quote, you also have to acknowledge that it was ready to be offered, but never shown to be accepted.
Don't you find it odd that, with all the investigation by his political enemies, Joe Biden was never charged with anything? Do you think they would pounce on any opportunity to do so?
But it's been shown by people such as yourself that actual guilt doesn't matter -- just the accusation is good enough.
Also note that I offer no defense to Hunter because what he did was sleazy. I also note that those obsessed with the affaire d'laptop don't care about the corruption of his predecessor and family (specifically his son in law), which was far more vast in scope and in venality.
I am against corruption regardless of which party engages in it. Would you say the same is true for you, and if so, demonstrate that in some way?
Yes I am against corruption regardless of which party engages in it. Trump is as corrupt as they come, the only way he knows how to run anything is like a mob boss who expects loyalty above all.
But we're commenting on a story about moderation of online forums, not corruption of politicians. I don't find the emails especially incriminating, and I honestly find Joe Biden to be a pretty admirable person, not especially corrupt as politicians go. But I will never forget the way that the media and platforms like Twitter used their power to bury a story that turned out to be true. I believe they did so because it was inconvenient to their own personal biases and their investment in seeing Biden elected. I was invested in seeing Biden elected too! But our highest commitment must be to the truth. I have a deep distrust of people who would suppress true facts, especially when so many false or misleading narratives are circulated all the time.
As a great example of this, my comment that you replied to is currently rated at -1. It directly answers your question, it is not rude or impolite, and it is unquestionably true. And yet at least two people disliked it enough that they wanted to bury it. I don't know why those two people had that reaction, but that is exactly the behavior that I feel so distrustful of.
Contrast the Biden Laptop story with the Steele Dossier. That story was repeated endlessly in the media, with far more numerous and salacious accusations, which turned out to be almost totally discredited. Now I find Trump to be a repugnant figure, but the double standard here is infuriating to me. The truth should matter above all.
I didn't downvote you earlier (and upvoted you now), I save downvotes for special occasions (and I think HN rations them, which is an interesting angle).
I share your opinion of Biden (not my first choice but a breath of fresh air after his predecessor).
I'd really like to come to a true understanding with you, because I want the truth to be promoted and lies and misinformation to be unamplified.
But I still don't understand what "true story" you are concerned about. The facts about Hunter himself seem to be true (corrupt and sleazy). But the whole point of the story wasn't about Hunter, it was an implied corruption of Joe.
So squelching the laptop story was about silencing effectively baseless conjecture that impugned the integrity of Joe Biden. That was the only point of the story -- to smear Joe.
Just like with Hillary's emails -- all that mattered was "she did bad" (which she did do, but in context it was quite insignificant). The amazing thing is these stories only seem to work one way: against the Dems. There doesn't seem to be a single accusation against Trump that would dissuade his followers (his "shooting someone on 5th Ave" quote was frighteningly prescient).
The Steele Dossier was lurid but there's so much shady shit about Trump that it did raise credible concerns: is there kompromat on Trump? His behavior suggests so (absolute deference to Putin, money laundering accusations, etc.)
The essence of my concern is that we're flooded by misinformation and it's horribly effective, and more horribly so in the direction it seems to be effective (against the Left).
The laptop story is a lose/lose scenario. We lose if the story continues to echo and destabilize trust, and we lose when we try to prevent that act of erosion.
I'm not a fan of censorship but I also see the results of unfettered hate speech and lies that have literally demonized the Left and divided this country with no sign of letting up.
I think the leak of thousands of authentic emails from a direct family member of a presidential candidate is newsworthy, regardless of what is in the emails. People should get to decide for themselves whether "10% for the big guy" is incriminating or not.
If the story had gotten to run its normal course, my overall opinion would be that it actually makes Biden look pretty good (20k emails and this is the most incriminating line that they could find?)
But the moment that Twitter took unprecedented measures to prevent the story from even being discussed, it became something that bothered me a lot.
The way to gain credibility in an age of misinformation is to make the truth your north star. To have credibility, you have to be willing to call out misinformation on your own side, like the Steele Dossier. To say that it "raises credible concerns" is to excuse misinformation. You asked me to demonstrate that I am against corruption regardless of which side does it. Can you demonstrate that you are against misinformation regardless of which side does it?
> People should get to decide for themselves whether "10% for the big guy" is incriminating or not.
I applaud your faith in humanity but the same population isn't likely going to read the exonerating report that I cited elsewhere and will note here. Read the conclusion at the end and then tell me again why the laptop story still needs to be promoted:
> If the story had gotten to run its normal course
Nobody pushing that story cares about anything other than the story sticking around to be a campaign talking point.
The Steele Dossier is far more complicated because of its origins and handling (yes Clinton helped fund some of it but McCain and Comey were alarmed by it). It was also always framed as raw, unverified intel that, once public, compelled verification of it. At that time it was also paired with the Buttery Mails investigation which likely tipped the scales away from Clinton.
Read the wiki and you can see that it's not cut and dried. Meanwhile that whole thing has been turned into RussiaGate and categorically denied. MSM has dropped that line.
Nobody on the left is defending Hunter Biden. But again, he's not the candidate, so why is there such concern? Because it's ButteryMails v2. The laptop story is the story.
I'm not sure what I'm supposed to condemn about the Dossier affair -- it had its moment in the sun and then was squashed, with the bonus that it's now projected that Trump is 100% innocent in regards to dealing with Russia and that just ain't the case.
As mentioned elsewhere, the laptop story was not restricted for being misinformation, it was restricted due to Twitter's policy (at the time) on hacked materials.
And, following the laptop story, Twitter changed their policy on hacked materials.
And that's the thing with content moderation - specific examples of content moderation often themselves go viral, but often also shed the actual context of said moderation (as was the case in your example), and thus feed into existing narratives around bias in content moderation. With full context, one might still disagree with the policy, but the narrative falls away.
As a European I don't know much about the Hunter Biden story, but it wasn't a political news story because it doesn't even cover a politically relevant person. It is a celebrity story and even two years later no allegation of substance seems to have come from it (skimming wikipedia).
From a European perspective it was just outlandish hearsay and speculation. Hard to understand why Americans get so invested when no criminal case materializes.
This is one of the most polarized topics of conversation in a minefield of extremely polarized topics, and I greatly value this forum as a place for intelligent conversation. Please don't jump to any assumptions about my presentation of this perspective, or take it as endorsement of crazy people who say similar-sounding things; this is simply a summary of why this story is important to the right.
The Hunter Biden story is significant because there are emails which suggest Hunter Biden had ties to foreign companies that wanted access to his father. This could be grounds for a criminal case implicating his father.
Now, in a sane society during a less polarized election, this would have been discussed in the open, and people would make cases as to the relevance or irrelevance of the evidence, relevance to Joe Biden specifically, and whether the evidence merits investigation through proper channels in an open and transparent way.
Since Trump was viewed as even more corrupt and more of a threat by enough people capable of suppressing the story, the story was suppressed, and this conversation didn't happen until after the election. That does not mean the entire FBI and judicial process is broken, as some people on the right claim - an extremely dangerous accusation - and the lack of a proper case means we do not know the extent, if any, of the corruption. But the suppression of the story was a violation of public trust by those who suppressed it, and it poured a lot of fuel on an already raging fire.
> The Hunter Biden story is significant because there are emails which suggest Hunter Biden had ties to foreign companies that wanted access to his father. This could be grounds for a criminal case implicating his father. Now, in a sane society during a less polarized election, this would have been discussed in the open, and people would make cases as to the relevance or irrelevance of the evidence, relevance to Joe Biden specifically, and whether the evidence merits investigation through proper channels in an open and transparent way.
I agree with that in principle. The idea that a presidential candidate's son has obviously nepotistic business appointments in industries and countries that he has no business being in presents a facial conflict of interest and should invite a debate on quid pro quo corruption. I would say these kinds of stories are a feature of both political parties in the U.S. and across a wide swath of time -- this is effectively the same role as the Clinton Foundation played in the previous election, for instance. And I think it makes sense to debate or investigate this kind of thing in public, including during the election. In fact, Hunter Biden's role in these companies was scrutinized to some degree earlier in the presidential election and also in the primary and also indeed over the last four years. So I reject the notion on the right that this is an issue that "could not" be discussed because "the elites" suppressed the discussion. My sense is that the eventual conclusion was what it seemed from the very beginning: Hunter Biden is a colossal fuckup, everyone around him tried to help him not be a fuckup, he kept being a fuckup, and it should be to no one's surprise that he tried to leverage his father, even though there's little to no evidence any such quid pro quo occurred.
The laptop, though, is a pretty complicated thing. It feels more like a direct parallel to the Podesta email hacking in the 2016 election (which, among other things, spawned Pizzagate). Bad actors (in 2016, foreign intelligence; in 2020, a domestic political adversary) took steps to secure a large cache of material, potentially by committing crimes to do so, and released it to the public on the very eve of a presidential election as an October surprise. The framing of the stories on the right was that the _existence_ of the material (emails in 2016, laptop in 2020) was proof that the allegations against the candidate were true, even though basically none of the material seems at all connected to the allegations.
Furthermore, the nature of the release was that the people who secured the damaging material obfuscated and laundered it. The press had neither the time nor the resources necessary to verify the contents of the material. Finally, the biggest risk was that large amounts of true material would be combined with knowingly fabricated material in a way that laundered the reputation of the false material. This is hardly an empty false flag thing, it's literally exactly what a nefarious actor _should_ do if they come into possession of huge amounts of stolen material to make the greatest impact.
Although the reactions to the Hunter Biden story were overreactions, that's the framework that I think people had adopted. I think people were thinking back to the Podesta email hack and to the FBI confirmation of the Weiner stuff causing a significant late disruption in 2016 election and wanting to avoid the possibility of a similar disruption here. And I think in principle there is a responsibility to avoid this kind of chaos: for instance, if a candidate is accused of a heinous crime (say rape or murder) the day before an election, it is possible both that the accusation would have a serious impact on the election and that not enough time exists to validate, refute, or contextualize it.
I can't say the exact length of time before an election that should be a sundown period, informally or formally, but I recognize that you can imagine some combination of late-breaking, serious, and difficult-to-verify that would merit a deliberate choice not to interfere with the election. The same thing occurred in France about 36 hours before the 2017 presidential election: a huge cache of Macron emails was hacked and distributed (probably by foreign intelligence). Opponents of Macron said, without evidence, that this was proof of Macron's perfidy and would sink him. The media generally refused to engage with the story because it was the fruit of a poisonous tree and 36 hours is not enough time to work it out. The far-right party claimed that this was censorship, the end of free speech, and the death of investigative journalism.
The only winners in a situation like this are the agents of chaos.
The only notable thing about the Biden laptop story is that normally cynical people believe all of it; they should be able to recognize an intelligence op when they see one.
(Some of the emails are validated through DKIM. This does not actually prove anything in general.)
Also, they should be happy reading a story that he took bribes and then didn't do the thing he was bribed for. That means the criminals have less money now! It's clearly a virtuous action.
The big tell that it's not all social media's fault is that the narrative changes with the decade on the exact cause, even though the outcome of radicalization is the same. 25 years ago people were blaming AM radio hosts for political radicalization. The only truth, imo, is that radicalization is highly profitable, so media will favor it on whatever platform is in vogue.
If I say something, you have an emotional and/or behavioral response, and that response contributes toward either negative or positive emotions and behavior.
Beyond the obvious, let's just suspend what we think is common sense.
"I love you" vs "I hate you"
There is a charge associated with both of those sentences, and probably at the level of the discrete unit of the word, which may be associated with particular vowel structures/sounds. I've vaguely looked into a "universal language" of phonemes/sounds/grammars. I believe it identified roughly 16 base sounds, associated them with 16 emotions, and may even have found some evidence for cross-species use.
Polarization, partisanship, and radicalization receive a great deal of academic attention. (I can't speak to the public mind, or political hobbyists.)
The book Why We Are Polarized [2020] by Ezra Klein surveys what was known at the time. Nationalization of politics, consolidation of media, death of journalism, election reform, campaign spending. TLDR: Politics used to have weird coalitions, where party was not strongly associated with positions. Then people sorted. Then people's identities (positions) fused into a kind of super identity. Now party is a strong predictor of position.
Rachel Bitecofer https://twitter.com/RachelBitecofer researched (negative) partisanship and used to talk about it. I don't know of an otherwise good intro.
People have always self-radicalized. Pamphlets and zines. Social media made it a self renewing pandemic. Root cause is probably a combo of algorithmic hate machines and social isolation. (People are increasingly anxious and isolated. These para-social relationships fill the void.)
This point he made aligns with non-specific features of user behavior:
> "In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didnʻt understand the language they were being spoken in?"
However, I think discussions of controversial subjects often fall under 'this content is a problem' type of thinking. I was banned from all the major Reddit Covid subreddits for politely noting that a lab leak was entirely plausible, even likely. I was banned from their major news subreddits for regularly discussing the economic agendas behind the vast majority of military invasions in both the present day and historically - for example, I'd note that the USA-Russia split seems to have got started in 2003 when Putin rejected an Exxon bid for majority control of Russian oil production, and then started jailing the Wall Street-linked oligarchs like Khodorkovsky. I was also banned from the Ukraine subreddit for noting that a negotiated settlement to the regional war was inevitable, even if Ukraine had to relinquish some territory inhabited mainly by Russian-speaking citizens. Banned from Neoliberal, for pointing out failures of neoliberalism. Etc.
In none of those cases did I start hurling nasty insults, spam-posting, etc. Clearly those subreddits and their mods have a particular 'keep the conversation on our preferred message' agenda, i.e. 'align with the hive mind or be excommunicated'. (Eventually I tired of this and deleted my account at Reddit).
Now, if Elon can change Twitter such that polite, even-tempered but off-desired-message content isn't actively suppressed, it will be a much better platform. However, will advertisers and other sources of revenue be pleased with such changes? That may be a major factor in how it all plays out.
You're surprised by the fact that subreddit moderators abuse their power to remove content and eventually ban users who post content they disagree with?
Dozens if not hundreds of popular subreddits even have bots in place that will automatically ban you from subreddit (A) if you ever engage on any "wrong-think" subreddit (B) in any capacity, even if you're not a member of subreddit (A).
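To make concrete how little machinery this takes: here's a minimal sketch of that auto-ban pattern using PRAW, the Python Reddit API wrapper. The subreddit names and credentials are placeholders, and real bots of this kind layer on allowlists, appeal handling, and mod-log notes; this is just the core loop, not any particular bot's code.

    # Minimal sketch of the auto-ban pattern, using PRAW (Python Reddit API
    # wrapper). Subreddit names and credentials are placeholders.
    import praw

    WATCHED = "wrongthink_example"  # hypothetical "subreddit B" being monitored
    HOME = "home_example"           # hypothetical "subreddit A" issuing the bans

    reddit = praw.Reddit(
        client_id="...", client_secret="...",
        username="autoban-bot", password="...",
        user_agent="autoban-sketch/0.1",
    )
    home = reddit.subreddit(HOME)

    # Stream every new comment in the watched subreddit and ban its author
    # from the home subreddit, whether or not they have ever posted there.
    for comment in reddit.subreddit(WATCHED).stream.comments(skip_existing=True):
        if comment.author is None:  # author deleted their account
            continue
        home.banned.add(comment.author, ban_reason=f"participated in r/{WATCHED}")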
USians are heavily propagandized to believe that everyone else is too stupid to form their own opinions and is being manipulated by malicious media and elite cabals to think the wrong things. So we see demands for censorship, meddling with school curricula, etc. Isolation from opposing views due to pandemic lockdowns + remote work boom + network effect helps to keep the divide from healing.
Every single social media platform that has ever existed makes the same fundamental mistake. They believe that they just have to remove or block the bad actors and bad content and that will make the platform good.
The reality is everyone, myself included, can be and will be a bad actor.
How do you build and run a "social media" product when the very act of letting anyone respond to anyone with anything is itself the fundamental problem?
You're confusing bad actors with bad behavior. Bad behavior is something good people do from time to time because they get really worked up about a specific topic or two. Bad actors are people who act badly all the time. There may be some of those, but they're not the majority by far (and yes, sometimes normal people turn into bad actors because they get so upset about a given thing that they can't talk about anything else anymore).
OP's argument is that you can moderate content based on behavior, in order to bring the heat down, and the signal to noise ratio up. I think it's an interesting point: it's neither the tools that need moderating, nor the people, but conversations (one by one).
I think that's right. One benefit this has: if you can make the moderation about behavior (I prefer the word effects [1]) rather than about the person, then you have a chance to persuade them to behave differently. Some people, maybe even most, adjust their behavior in response to feedback. Over time, this can compound into community-level effects (culture etc.) - that's the hope, anyhow. I think I've seen such changes on HN but the community/culture changes so slowly that one can easily deceive oneself. There's no question it happens at the individual user level, at least some of the time.
Conversely, if you make the moderation about the person (being a bad actor etc.) then the only way they can agree with you is by regarding themselves badly. That's a weak position for persuasion! It almost compels them to resist you.
I try to use depersonalized language for this reason. Instead of saying "you" did this (yeah that's right, YOU), I'll tell someone that their account is doing something, or that their comment is a certain way. This creates distance between their account or their comment and them, which leaves them freer to be receptive and to change.
Someone will point out or link to cases where I did the exact opposite of this, and they'll be right. It's hard to do consistently. Our emotional programming points the other way, which is what makes this stuff hard and so dependent on self-awareness, which is the scarcest thing and not easily added to [2].
If someone points out a specific action I did that can/should be improved upon (and especially if they can tell me why it was "bad" in the first place), I'm far more likely to accept that, attempt to learn from it, and move on. As in real life, I might still be heated in the moment, but I'll usually remember that when similar cues strike again.
But if moderation hints at something being wrong with my identity or just me fundamentally, then that points to something that _can't be changed_. If that's the case, I _know they are wrong_ and simply won't respect that they know how to moderate anything at all, because their judgment is objectively incorrect.
Practically, at work, the policy you described has been a good one when I think about bugs and code reviews.
> "@ar_lan broke `main` with this CLN. Reverting."
is a pretty sure-fire way to make me defend my change and believe you are wrong. My inclination, for better or worse, will be to dispute the accusation directly and clear my name (probably some irrational fear that creating a bug will go on a list of reasons to fire me).
But when I'm approached with:
> "Hey, @ar_lan. It looks like pipeline X failed this test after this CLN. We've automatically reverted the commit. Could you please take a second look and re-submit with a verification of the test passing?"
I'm almost never defensive about it, and I almost always go right ahead to reproducing the failure and working on the fix.
The first message conveys to me that I (personally) am the reason `main` is broken. The second conveys that it was my CLN that was problematic, but fixable.
Both messages are taken directly from my company's Slack (omitting some minor details, of course), for reference.
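For what it's worth, the second style is easy to automate. Here's a rough sketch of a bot formatting that kind of depersonalized notice from a CI event; the field names (pipeline, test, change id) are made up for illustration and aren't my company's actual tooling.

    # Rough sketch: formatting the depersonalized style of notice from a CI
    # event. Field names are hypothetical, not any company's real schema.
    from dataclasses import dataclass

    @dataclass
    class RevertEvent:
        pipeline: str   # e.g. "X"
        test: str       # the failing test
        change_id: str  # e.g. "CLN 12345"
        author: str     # chat handle of the change author

    def revert_notice(e: RevertEvent) -> str:
        # Center the change and the failing test, not the person.
        return (
            f"Hey, @{e.author}. It looks like pipeline {e.pipeline} failed "
            f"{e.test} after {e.change_id}. We've automatically reverted the "
            f"commit. Could you please take a second look and re-submit with "
            f"a verification of the test passing?"
        )

    print(revert_notice(RevertEvent("X", "test_checkout_flow", "CLN 12345", "ar_lan")))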
> I try to use depersonalized language for this reason. Instead of saying "you" did this (yeah that's right, YOU), I'll tell someone that their account is doing something, or that their comment is a certain way. This creates distance between their account or their comment and them, which leaves them freer to be receptive and to change.
I feel quite excited to read that you, dang, moderating HN, use a similar technique that I use for myself and try to teach others. Someone told my good friend the other day that he wasn't being a very good friend to me, and I told him that he may do things that piss me off, annoy me, confuse me, or whatever, but he will always be a good friend to me. I once told an Uber driver who told me he just got out of jail and was a bad man, I said, "No, you're a good man who probably did a bad thing."
I think your moderation has made me better at HN, and consequently I'm better in real life. Actively thinking about how to better communicate and create environments where everyone is getting something positive out of the interaction is something I maybe started at HN, and then took into the real world. I think community has a lot to do with it, like "be the change you want to see".
But to your point, yeah my current company has feedback guidelines that are pretty similar: criticize the work, not the worker, and it super works. You realize that action isn't aligned with who you want to be or think you are, and you stop behaving that way. I mean, it's worked on me and I've seen it work on others, for sure.
> I try to use depersonalized language for this reason. Instead of saying "you" did this (yeah that's right, YOU), I'll tell someone that their account is doing something, or that their comment is a certain way. This creates distance between their account or their comment and them, which leaves them freer to be receptive and to change.
I use this tactic with my kids when they do something wrong. Occasionally I slip up and really lay into them, but almost all of the time these days I tell them that I love them, I think they are capable of doing the right thing, but I didn't love some action they did or didn't do and I explain why. They may not be happy with this always, or with the natural (& parent-imposed) consequences of their actions, but it reinforces that they have a choice to do good in the future even if they slip up from time to time. If all of us were immutably identified by the worst thing we ever did, no one would have any incentive to change.
Thanks for the thoughtful & insightful comment, dang.
I think you do a good job on HN and I appreciate, as someone who moderated a similarly large forum for a long time, how candid you are in your communications on and off site. You're also a very quick email responder!
> I try to use depersonalized language for this reason. Instead of saying "you" did this (yeah that's right, YOU), I'll tell someone that their account is doing something, or that their comment is a certain way. This creates distance between their account or their comment and them, which leaves them freer to be receptive and to change.
My sense is that this is a worthy thing to do (first of all because it's intellectually correct to blame actions rather than people, and second of all because if you're right about the effect it's all upside). But I suspect this will produce very little introspection, maybe a tiny bit on the margins.
It's pretty normal in an argument between two people IRL that one will say something like "That was a stupid comment" or "Stop acting like an asshole" -- both uses of distancing language -- and the other person will respond "Don't call me stupid" or "Don't call me an asshole". I think most people who are on the receiving end of even polite correction are going to elide the distancing step.
On the social psych side, I have no idea whether there's any validated way to encourage someone to be more introspective, take a breath, try to switch down into type-II processing, etc.
Yes. But in our experience to date, this is less common than people say it is, and there are strategies for dealing with it. One such strategy is https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme... (sorry I don't have time to explain this, as I'm about to go offline - but the key word is 'primarily'.) No strategy works in all cases though.
... kinda wondering if this is the sort of OT post we're supposed to avoid, it would be class if you chastised me for it. But anyway, glad you're here to keep us in check and steer the community so well.
Empty comments can be ok if they're positive. There's nothing wrong with submitting a comment saying just "Thanks." What we especially discourage are comments that are empty and negative—comments that are mere name-calling.
It's true that empty positive comments don't add much information but they have a different healthy role (assuming they aren't promotional)
This "impersonal" approach to also works in the other direction. Someone who said something objectively bad once doesn't have to be a "known bad person" forever.
That scares me. Today's norms are tomorrow's taboos. The dangers of conforming and shaping everyone into the least controversial opinions and topics are self evident. It's an issue on this very forum. "Go elsewhere" doesn't solve the problem because that policy still contributes to a self-perpetuating feedback loop that amplifies norms, which often happen to be corrupt and related to the interests of big (corrupt) commercial and political powers.
I don't mean persuade them out of their opinions on $topic! I mean persuade them to express their opinions in a thoughtful, curious way that doesn't break the site guidelines - https://news.ycombinator.com/newsguidelines.html.
Sufficiently controversial opinions are flagged, downvoted til dead/hidden, or associated users shadow banned. HN's policies and voting system, both de facto and de jure, discourage controversial opinions and reward popular, conformist opinions.
That's not to pick on HN, since this is a common problem. Neither do I have a silver bullet solution, but the issue remains, and it's a huge issue. Evolution of norms, for better or worse, is suppressed to the extent that big communication platforms suppress controversy. The whole concept of post and comment votes does this by definition.
There are a few sacred cows here (I won't mention them by name, though they do exist), but I have earned my rep by posting mostly contrarian opinions, and I almost always have quite a few net upvotes - sometimes dozens. It's not too difficult:

First, I cite facts that back up my claims from sources whose narratives would typically go against my argument. I cite the New York Times, Washington Post, the Atlantic, NPR, CNN, etc.; I only rarely cite Fox News, and never cite anything to the right of Fox.

Second, I really internalize the rules about good faith, not attacking the weakest form of an argument, not cross-examining, etc. Sometimes I have a draft that carries my emotions, and I'll edit it to make it more rational before posting.

Third, I ask open-ended questions to allow myself to be wrong in the minds of other commenters. Instead of just asserting that some of my ultra-contrarian opinions are the only way anyone can see an issue, I may propose a question. By doing that, I have at times seen some excluded middle I hadn't considered, and my opinion has become more nuanced.

Fourth, I often will begin replying and then delete my reply because I know it won't add anything. This is the hardest one to do, but sometimes it's just the way you have to go. Some differences are merely tastes and preferences, and I'm not going to change the dominant tastes and preferences of the Valley on HN. I can only point out some of the consequences.
The content moderation rules and system here have encouraged me to write better and more clearly about my contrarian opinions, and have made me more persuasive. HN can be a crap-show at times, but in my experience, it's often some of the best commentary on the Internet.
Completely disagree about HN. Controversial topics that are thought out, well formed, and argued with good intent are generally good sources of discussion.
Most of the time though, people arguing controversial topics phrase them so poorly or include heavy handed emotions so that their arguments have no shot of being fairly interpreted by anyone else.
That's true to an extent (and so is what ativzzz says, so you're both right). But the reasons for what you're talking about are much misunderstood. Yishan does a good job of going into some of them in the OP, by the way.
People always reach immediately for the conclusion that their controversial comments are getting moderated because people dislike their opinion—either because of groupthink in the community or because the admins are hostile to their views. Most of the time, though, they've larded their comments pre-emptively with some sort of hostility, snark, name-calling, or other aggression—no doubt because they expect to be opposed and want to make it clear they already know that, don't care what the sheeple think, and so on.
The way the group and/or the admins respond to those comments is often a product of those secondary mixins. Forgive the gross analogy, but it's as if someone serves a shit milkshake and, when it's rejected, says "you just hate dairy products" or "this community is so biased against milkshakes".
If you start instead from the principle that the value of a comment is the expected value of the subthread it forms the root of [1], then a commenter is responsible for the effects of their comments [2] – at least the predictable ones. From that it follows that there's a greater burden on the commenter who's expressing a contrarian view [3]. The more contrarian the view—the further it falls outside the community's tolerance—the more responsibility that commenter has for not triggering degenerative effects like flamewars.
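One way to write that model down, as my own formalization rather than anything from the footnote:

    V(c) = \mathbb{E}\Big[ \sum_{c' \in \mathrm{subthread}(c)} v(c') \Big]

where v(c') is the value of an individual comment and subthread(c) is the set of comments in the subtree rooted at c. The expectation is over how the thread is likely to unfold, which is exactly what puts the burden on the root commenter.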
This may be counterintuitive, because we're used to thinking in terms of atomic individual responsibility, but it's a model that actually works. Threads are molecules, not atoms—they're a cocreation, like one of those drawing games where each person fills in part of a shared picture [4], or like a dance—people respond to the other's movements. A good dancer takes the others into account.
It may be unfair that the one with a contrarian view is more responsible for what happens—especially because they're already under greater pressure than the one whose views agree with the surround. But fair or not, it's the way communication works. If you're trying to deliver challenging information to someone, you have to take that person into account—you have to regulate what you say by what the listener is capable of hearing and tolerating. Otherwise you're predictably going to dysregulate them and ruin the conversation.
Contrarian commenters usually do the opposite of this—they express their contrarian opinion in a deliberately aggressive and uncompromising way, probably because (I'm repeating myself sorry) they expect to be rejected anyhow, and it's safer to be inside the armor of "you people can't handle the truth!" than it is to really communicate, i.e. to connect and relate.
This model is the last thing that most contrarian-opinion commenters want to adopt, because it's hard and risky, and because usually they have pre-existing hurt feelings from being battered repeatedly with majoritarian opinions already (especially the case when identity is at issue, such as being from a minority population along some axis). But it's the one that actually has a hope of working, and is by far the best solution I know of to the problem of unconventional opinions in groups.
Are there some views which are so far beyond the community's tolerance that any mention in any form will immediately blow up the thread, making the above model impossible? Yes, but they're rare and extreme and not usually the thing people have in mind. I think it's better to stick to the 95% or 99% case when having this discussion.
Just wanted to say that it's great to have you posting your thoughts/experience on this topic. I've run a forum for almost 19 years as a near-lone moderator and so have a lot of thoughts, experience and interest in the topic. It's been frustrating when Yishan's posted (IMO, solid) thoughts on social networks and moderation and the bulk of HN's discussion can be too simple to be useful ("Reddit is trash", etc).
I particularly liked his tweet about how site/network owners just wish everyone would be friendly and have great discussions.
> The more contrarian the view—the further it falls outside the community's tolerance—the more responsibility that commenter has for not triggering degenerative effects like flamewars.
This sounds similar to the “yelling fire” censorship test
it’s not that we censor discussing combustion methods,
there would be no effect if everyone else was also yelling fire
But people were watching a movie and now the community’s experience has been ruined (with potential for harm), in exchange for nothing of value
And bad behavior gets rewarded with engagement. We learned this from "reality television" where the more conflict there was among a group of people the more popular that show was. (Leading to producers abandoning the purity of being unscripted in the pursuit of better ratings.) A popular pastime on Reddit is posting someone behaving badly (whether on another site, a subreddit, or in a live video) for the purpose of mocking them.
When the organizational goal is to increase engagement, which will be the case wherever there are advertisers, inevitably bad behavior will grow more frequent than good behavior. Attempts to moderate toward good behavior will be abandoned in favor of better metrics. Or the site will stagnate under the weight of the new rules.
In this I'm in disagreement with Yishan because in those posts I read that engagement feedback is a characteristic of old media (newspapers, television) and social media tries to avoid that. The OP seems to be saying that online moderation is an attempt to minimize controversial engagement because platforms don't like that. I don't believe it. I think social media loves controversial engagement just as much as the old-school "if it bleeds, it leads" journalists from television and newspapers. What they don't want is the (quote/unquote) wrong kind of controversies. Which is to say, what defines bad behavior is not universally agreed upon. The threshold for what constitutes bad behavior will be different depending on who's doing the moderating. As a result the content seen will be influenced by the moderation, even if said moderation is being done in a content-neutral way.
And I just now realize that I've taken a long trip around to come to the conclusion that the medium is the message. I guess we can now say the moderation is the message.
I'd argue that bad actors are people that behave badly "on purpose". Their goals are different than the normal actor. Bad actors want to upset or scare people. Normal actors want to connect with, learn from, or persuade others.
I can "behave well" and still be a bad actor in that I'm constantly spreading dangerous disinformation. That disinformation looks like signal by any metadata analysis.
Yes, that's probably the limit of the pure behavioral analysis, esp. if one is sincere. If they're insincere it will probably look like spam; but if somebody truly believes crazy theories and is casually pushing them (vs promoting them aggressively and exclusively), that's probably harder to spot.
It's not a mistake. It's a PR strategy. Social media companies are training people to blame content and each other for the effects that are produced by design, algorithms and moderation. This reassigns blame away from things that those companies control (but don't want to change) to things that aren't considered "their fault".
It's really all media, not just social media, that profits from propaganda. Turn on CNN. You might agree with what they are saying versus what the talking heads on Fox News are saying, but they use the same state-of-constant-panic style of reporting, because that works really well to fix eyeballs on advertisements - both the overt ones and the more subtle ones that happen during the programming.
That is very much a problem in the US (AFAIK), where news and entertainment are merged. Other countries have laws to ensure that news is presented factually and without emotional framing.
This is something Riot Games has spoken on, the observation that ordinary participants can have a bad day here or there, and that forgiving corrections can preserve their participation while reducing future incidents.
Did Riot eventually sort out the toxic community? If so that would be amazing, and definitely relevant. I stopped playing when it was still there, and it was a big part of the reason I stopped.
I played very actively from 2010 to 2014 and since then on and off, sometimes skipping a season.
I recently picked it up again and noticed that I didn't have to use /mute all anymore. I have all-chat disabled by default, so I have no experience there, but overall I'd say it has come a long way.
But I'd also say it depends which mode and MMR you are in. I mostly play draft-pick normals or ARAMs, both of which I have a lot of games in - I heard from a mate that chat is unbearable in low-level games.
The only success I've seen in sorting out random vitriol is cutting chat off entirely and minimizing methods of passive aggressive communication. But Nintendo's online services haven't exactly scaled to the typical MOBA size to see how it actually works out
It sounds like an insurmountable problem. What makes this even more interesting to me is that HN seems to have this working pretty well. I wonder how much of it has to do with clear guidelines of what should be valued and what shouldn't and having a community that buys in to that. For example one learns quickly that Reddit-style humor comments are frowned upon because the community enforces it with downvotes and frequently explanations of etiquette.
If we follow the logic of Yishan's thread, HN frowns upon and largely doesn't allow discussion that would fall into group 3 which removes most of the grounds for accusations of political and other biases in the moderation. As Yishan says, no one really cares about banning groups 1 and 2, so no one objects to when that is done here.
Plus scale is a huge factor. Automated moderation can have its problems. Human moderation is expensive and hard to keep consistent if there are large teams of individuals that can't coordinate on everything. HN's size and its lack of desire for profit allow for a very small human moderation team that leads to consistency because it is always the same people making the decisions.
Nope. There's been abuse in text-only environments online since forever. And lots of people have left (or rarely post on) HN because of complaints about the environment here.
This is essentially moderation rule #0. It is unwritten, enforced before a violation can occur, and generates zero complaints because it filters complainers out of the user pool from the start.
The no-avatars rule also takes away some of the personalization aspect. If you set your account up with your nickname, your fancy unique profile picture and your favorite quote in the signature, and someone says you're wrong, you're much more invested because you've tied some of your identity to the account.
If you've just arrived on the site, have been given a random name and someone says you're wrong, what do you care? You're not attached to that account at all, it's not "you", it's just a random account on a random website.
I thought that was an interesting point on 4chan (and probably other sites before them), that your identity was set per thread (iirc they only later introduced the ability to have permanent accounts). That removes the possibility of you becoming attached to the random name.
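If it helps to see how cheap that mechanism is: here's a sketch of one common way to derive per-thread IDs, via a keyed hash of a per-user token and the thread ID. I'm not claiming this is 4chan's actual implementation; the inputs and the secret here are assumptions.

    # Sketch: stable per-thread pseudonymous IDs via a keyed hash. The same
    # user gets one ID within a thread but unlinkable IDs across threads.
    # Not a claim about 4chan's real implementation; inputs are assumptions.
    import hashlib, hmac

    SITE_SECRET = b"rotate-me-periodically"  # server-side secret

    def per_thread_id(user_token: str, thread_id: str, length: int = 8) -> str:
        mac = hmac.new(SITE_SECRET, f"{user_token}|{thread_id}".encode(), hashlib.sha256)
        return mac.hexdigest()[:length]

    print(per_thread_id("user-or-ip-123", "thread-987"))  # stable within this thread
    print(per_thread_id("user-or-ip-123", "thread-654"))  # different ID elsewhere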
Why would one be concerned with being wrong at all? Being wrong, thus being able to learn, is the whole reason for having discussions with others.
Once you’re confident that you can’t be wrong, you’re not going to care about the topic anymore. There is good reason why we don’t sit around talking about how 1+1=2 all day.
Some areas of reddit do similar things with similar results. AskHistorians and AskScience are the first two to come to mind.
This may be a lot easier in places where there's an explicit point to discussion beyond the discussion itself - StackOverflow is another non-Reddit example. It's easier to tell people their behavior is unconstructive when it's clearly not contributing to the goal. HN's thing may just be to declare a particular type of conversation to be the goal.
I think that has far more to do with this site being relatively low-traffic. Double the traffic, while keeping the exact same rules and topic, and it would become unreadable. It's easy to "moderate" when people clearly break the rules; but "moderation" can't do anything if the only problem is that most comments are uninsightful. Large numbers always ruin things, in real life or online. You can see that on this very website on Musk-related stories, with a terrible heat-to-light ratio in the comments.
It's controversial, but if the average IQ was 120 rather than 100, I doubt you'd have 1/10th as many issues on massively popular social media; most of the moderation issues would go away. The problem comes from the bottom-up, and can't be fixed from the top down.
The only thing HN has going for it, imo, is its size. Once it becomes a larger and therefore more attractive market for media, the propaganda will be a lot more heavy-handed, like what happened to Reddit as it grew from something no one used into a mainstream platform. You definitely see propaganda posted on here already from time to time.
I think most posts are short lived so they drop off quickly and people move on to new content. I think a lot of folks miss a lot of activity that way. I know I miss a bunch. And if you miss the zeitgeist it doesn’t matter what you say cause nobody will reply.
The Twitter retweet constantly amplifies, and tweets are centered around an account rather than a post.
Reddit should behave similarly but I think subreddit topics stick longer.
There's also the fact that there are no alerts when people reply to you or comment on your posts. You have to explicitly go into your profile, click comments, and then you can see if anyone has said anything to you.
This drastically increases time between messages on a topic, lets people cool off, and lets a topic naturally die down.
Very good point about the "fog of war". If HN had a reply-notification feature, it would probably look different. Every now and then someone builds a notification feature as an external service. I wonder if you could measure the change in people's behavior before and after they start using it.
Of course, that also soft-forces everyone to move on. Once a thread is a day or two old, you can still reply, but the person you've replied to will probably not read it.
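For the curious, the external notification services mentioned above can be built on the public HN Firebase API (https://github.com/HackerNews/API). A rough sketch, with an arbitrary polling interval and a placeholder username - not how any particular existing service works:

    # Rough sketch of an external reply notifier on the public HN API.
    # Polling interval and username are placeholders.
    import time
    import requests

    API = "https://hacker-news.firebaseio.com/v0"
    USER = "some_username"

    def item(item_id):
        return requests.get(f"{API}/item/{item_id}.json").json() or {}

    def recent_comment_ids(limit=30):
        submitted = requests.get(f"{API}/user/{USER}.json").json()["submitted"]
        return [i for i in submitted[:limit] if item(i).get("type") == "comment"]

    seen = set()
    while True:
        for cid in recent_comment_ids():
            for kid in item(cid).get("kids", []):
                if kid not in seen:
                    seen.add(kid)
                    reply = item(kid)
                    print(f"Reply from {reply.get('by')}: {reply.get('text', '')[:80]}")
        time.sleep(300)  # the slow cadence is arguably a feature, per the thread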
Category 1 from Yishan's thread, spam, obviously isn't allowed. But also, thinking about his general framework of it all coming down to signal vs noise, most "noise" gets heavily punished on here. Reddit-style jokes frequently end up in the light greys or even dead. I had my account shadow-banned over a decade ago because I made a penis joke - at the time I just thought people didn't get the joke.
Free speech doesn't mean you can say whatever, wherever, without any repercussions. It solely means the government can't restrict your expression. On a private platform you abide their rules.
Well, no. Free Speech is an idea, far more expansive than the law as written in the US constitution, or many other countries and their respective law of the land/documents.
Free Speech does mean what you describe it as not meaning. But there is no legal body to punish you for violating the principle. It is similar to 'primum non nocere', translated roughly as 'do no harm', extremely common in medicine and something you may see alongside the Hippocratic Oath. You can be in violation of that principle or the oath at any time; some people even get fired for violating either as a pretext to malpractice. Some even argue that it is quite impossible to abide by this principle, and yet it is something many people take on as a responsibility every day, all across the globe.
I won't argue about websites and their rules, they have their own set of principles and violate plenty of others, sometimes they even violate their own principles. But Free Speech is not just the law and interactions one may have with their government.
What now? You’re suggesting that the removal of the word “general” turns it into a concept that can exist and be disagreed upon? There can’t be conflicting general principles of free speech over which people consistently disagree? What a bizarre correction.
Where are the people arguing about Donald Trump? Where are the people promoting dodgy cryptocurrencies? Where are the people arguing about fighting duck-sized horses? Where's the Ask HN asking for TV show recommendations?
> The reality is everyone, myself included, can be and will be a bad actor.
But you likely aren't, and most people likely aren't either. That's the entire premise behind removing bad actors and spaces that allow bad actors to grow.
> But you likely aren't, and most people likely aren't either.
Is there any evidence of this? 1% bad content can mean that 1% of your users are bad actors, or it can mean that 100% of your users are bad actors 1% of the time (or anything in between.)
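A toy simulation makes the point: the two extremes produce the same aggregate rate of bad content, so the aggregate alone can't tell you which world you're in. (The numbers below are made up for illustration.)

    # Toy simulation: two populations with the same ~1% aggregate rate of bad
    # posts but very different structure. Numbers are made up for illustration.
    import random
    random.seed(0)

    N_USERS, POSTS_EACH = 10_000, 100

    def bad_post_rate(p_bad_user, p_bad_post_if_bad_user):
        bad = 0
        for _ in range(N_USERS):
            p = p_bad_post_if_bad_user if random.random() < p_bad_user else 0.0
            bad += sum(random.random() < p for _ in range(POSTS_EACH))
        return bad / (N_USERS * POSTS_EACH)

    # 1% of users who post bad content every single time...
    print(bad_post_rate(p_bad_user=0.01, p_bad_post_if_bad_user=1.0))   # ~0.01
    # ...or 100% of users who are bad 1% of the time. Same aggregate rate.
    print(bad_post_rate(p_bad_user=1.00, p_bad_post_if_bad_user=0.01))  # ~0.01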
I assume all of us have evidence of this in our daily lives.
Even the best people we know have bad days. But you have probably also encountered people in your life who have consistent patterns of being selfish, destructive, toxic, or harmful.
> you have probably also encountered people in your life who have consistent patterns of being selfish, destructive, toxic, or harmful.
This is not evidence that most bad acts are done by bad people. This is evidence that I've met people who've annoyed or harmed me at one or more points, and projected my personal annoyance into my fantasies of their internal states or of their essence. Their "badness" could literally have only consisted of the things that bothered me, and during the remaining 80% of the time (that I wasn't concerned with) they were tutoring orphans in math.
Somebody who is "bad" 100% of the time on twitter could be bad 0% of the time off twitter, and vice-versa. Other people's personalities aren't reactions to our values and feelings; they're as complex as you are.
As the OP says: our definitions of "badness" in this context are of commercial badness. Are they annoying our profitable users?
edit: and to add a bit - if you have a diverse userbase, you should expect them to annoy each other at a pretty high rate with absolutely no malice.
Your logic makes sense but is not how these moderation services actually work. When I used my own phone number to create a Twitter, I was immediately banned. So instead I purchased an account from a service with no issues. It’s become impossible for me at least to use large platforms without assistance from an expert who runs bot farms to build accounts that navigate the secret rules that govern bans.
I believe that's GP's point! Any of us has the potential to be the bad actor in some discussion that gets us irrationally worked up. Maybe that chance is low for you or I, but it's never totally zero.
And even if the chance is zero for you or I specifically, there's no way for the site operators to a priori know that fact or to be able to predict which users will suddenly become bad actors and which discussions will trigger it.
I know sales bros who live their lives by their ABCs - always be closing - but that's beside the point. If the person behind the spam bot one day wakes up and decides to turn over a new leaf and do something else with their life, they're not going to use the buyC1alis@vixagra.com email address they use for sending spam as the basis for their new persona. Thus sending spam is inherent to the buyC1alis@vixagra.com identity that we see - of course there's a human being behind it, but as we'll never know them in any other context, that is who they are to us.
We have laws around mobs and peaceful protest for a reason. Even the best people can become irrational as a group. The groupmind is what we need controls for: not good and bad people.
The original post is paradoxical in the very way it talks about social media being paradoxical.
He observes that social media moderation is about signal to noise. Then he goes on about introducing off-topic noise. Then, he comes to conclusions that seem to ignore his original conclusion about it being a S/N problem.
Chiefly, he doesn't show how a "council of elders" is necessary to solve S/N problems.
Strangely enough, Slashdot seems to have a system which worked pretty well back in the day.
I think the key is that no moderation can withstand outside pressure. A community can be entirely consistent and happy but the moment outside pressure is applied it folds or falls.
Slashdot moderation is largely done by the users themselves, acting anonymously as "meta-moderators." I think they were inspired by Plato's ideas around partially amnesiac legislators who forget who they are while legislating.
What about Fox News? AM radio? These are bastions of radicalization, but they don't let just anyone come on and say anything. At the end of the day, the sort of rhetoric played by these groups is taught in university communications classes as a way to exert influence. It's all just propaganda at the end of the day, and that can come in the form of a pamphlet, or a meeting in a town hall, or from some talking head on TV, or a tweet. Social media is just another avenue for propaganda to manifest, just like the printing press was.
Some people are much more likely to engage in bad behavior than others. The thing is, people who engage in bad behavior are also much more likely to be "whales," excessive turboposters who have no life and spend all day on these sites.
Someone who has a balanced life, who spends time at work, with family, in nature, who only occasionally goes online, uses most of their online time for edification, and spends 30 minutes writing a reply if they decide one is warranted - that type of person is going to have a minuscule output compared to the whales. The whales are always online, thoughtlessly writing responses and upvoting without reading articles or comments. They have a constant firehose of output that dwarfs other users.
Worth reading "Most of What You Read on the Internet is Written by Insane People"[1].
If you actually saw these people in real life, chances are you'd avoid interacting with them. People seeing a short interview with the top mod of antiwork almost destroyed that sub (and led to the mod stepping down). People say the internet is a bad place because people act badly when they're not face to face. That might be true to some extent, but we're given online spaces where it's hard to avoid "bad actors" (or people who engage in excessive bad behavior) the same way we would in person.
And these sites need the whales, because they rely on a constant stream of low quality content to keep people engaged. There are simple fixes that could be done, like post limits and vote limits, but sites aren't going to implement them. It's easier to try to convince people that humanity is naturally terrible than to admit they've created an environment that enables - and even relies on - some of the most unbalanced individuals.
How do you build and run a "social media" product when the very act of letting anyone respond to anyone with anything is itself the fundamental problem?
This isn't the problem as much as giving bad actors tools to enhance their reach. Bad actors can pay to get a wider reach or get/abuse a mark of authority, like a special tag on their handle, getting highlighted in a special place within the app, gaming the algorithm that promotes some content, etc. Most of these tools are built into the platform. Some though, like sock puppets, can be detected but aren't necessarily built in functionality.
At the very least you could be susceptible to overreacting because of an emotionally charged issue. E.g. Reddit's Boston Marathon bomber disaster, when users started trying to round up brown people (the actual perp "looked white").
Maybe that wouldn't be your crusade and maybe you would think you were standing up for an oppressed minority. You get overly emotional, and you could be prone to making some bad decisions.
People act substantially differently on reddit vs. hackernews; honestly I have to admit to being guilty of it. Some of the cool heads here are probably simultaneously engaged in flamewars on reddit/twitter.
The business plan of massive user scale, user-generated content, and user "engagement" with ad-driven revenue leads to the perceived issues about polarization and content moderation. That and the company structure are the fundamental problems that attract what we see on social media. The data about users is the product sold to advertisers. The platform only cares about moderation insofar as it supports the goal of more ad revenue; that is why Yishan said spam moderation was job #1 - it's more harmful to ad revenue than users with poor behavior.
If a social media company’s mission is to have no barrier, anyone and everyone to share ideas, information and “all are welcome” then maybe a company structure like a worker cooperative [0] would be a better match to that mission statement. No CEO that gets massive pay/stock, instead employees are owners. All employees. They decide what features/projects the company does, how to allocate resources, how to moderate content, etc.
> The reality is everyone, myself included, can be and will be a bad actor.
Customised filters for anyone, but I am talking about filters completely under the control of the user. Maybe running locally. We can wrap ourselves in a bubble but better that than having a bubble designed by others.
I think AI will make spam irrelevant over the next decade by switching from searching and reading to prompting the bot. You don't ever need to interface with the filth, you can have your polite bot present the results however you please. It can be your conversation partner and you get to control its biases as well.
Internet <-> AI agent <-> Human
(the web browser of the future, the actual web browser runs in a sandbox under the AI)
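To make that concrete, here is a minimal sketch in Python of what a user-side agent filter could look like. Everything here (classify, the banned-topics list, the example posts) is hypothetical and stands in for a locally running model prompted with the user's own criteria; the point is only that the filtering biases live with the human, not the platform.

    # Hypothetical user-side filter: the "AI agent" sits between the raw
    # internet feed and the human, with biases chosen by the user.
    def classify(text, banned_topics):
        # Stand-in for a local model call; a real agent would be prompted
        # with the user's own criteria instead of keyword matching.
        return "hide" if any(t in text.lower() for t in banned_topics) else "show"

    def agent_filter(posts, banned_topics):
        return [p for p in posts if classify(p, banned_topics) == "show"]

    # The user, not the platform, decides the bubble:
    feed = ["great birdwatching spots", "buy c1alis now!!!", "new compiler release"]
    print(agent_filter(feed, banned_topics=["c1alis", "birdwatching"]))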
>How do you build and run a "social media" product when the very act of letting anyone respond to anyone with anything is itself the fundamental problem?
Not true at all - everyone has the capacity for bad behaviour in the right circumstances but most people are not, in my opinion, there intentionally to be trolls.
There are the minority who love to be trolls and to get any big reaction out of people (positive or negative). Those people are the problem. But they are also often very good at evading moderation, or lying in wait and toeing the line between bannable offences and just ever so slightly controversial comments.
To be honest, and maybe this will be panned, but the real answer is for people to grow thicker skin and stop putting one's feelings on a pedestal above all.
I don't block people because they hurt my feelings, I block people because I'm just not interested in seeing bird watching content on my timeline. No one deserves my eyeballs.
Look - I don't even particularly disagree with you, but I want to point out a problem with this approach.
I'm 33. I grew up playing multiplayer video games (including having to run a db9 COM cable across the house from one machine to another to play warcraft 2 multiplayer, back when you had to explicitly pick the protocol for the networking in the game menu)
My family worked with computers, so I've had DSL for as long as I can remember. I played a ton of online games. The communities are BRUTAL. They are insulting, abusive, misogynistic, racist, etc... the spectrum of unmonitored teenage angst, in all its ugly forms (and to be fair, some truly awesome folks and places).
As a result - I have a really thick skin about basically everything said online. But a key difference between the late 90s and today, is that if I wanted it to stop, all I had to do was close the game I was playing. Done.
Most social activities were in person, not online. I could walk to my friends' houses. I could essentially tune out all the bullshit by turning off my computer, and there was plenty of other stuff to go do where the computer wasn't involved at all.
I'm not convinced that's enough anymore. The computer is in your pocket. It's always on. Your social life is probably half online, half in person. Your school work is online. Your family is online. your reputation is online (as evidenced by those fucking blue checkmarks). The abuse is now on a highway into your life, even if you want to turn it off.
It's like the school bully is now waiting for you everywhere. He's not waiting at school - he's stepping into the private conversations you're having online. He's talking to your friends. He's hurling abuse at you when you look at your family photos. He's in your life in a way that just wasn't possible before.
I don't think it's fair to say "Just grow a thicker skin" in response to that. I think growing a thicker skin is desperately needed, but I don't think it's sufficient. The problem is deeper.
We have a concept for the things these twitter users are doing when they're done in person - they're called fighting words, and most times, legally (even in the US), there is zero presumption of protected speech here. You say bad shit about someone with the goal of riling them up and no other value? You have no right of free speech, because you aren't "speaking" - you're trying to start a fight.
I'm not protecting your ability to bully someone. Full stop. If you want to do that, do it with the clear understanding that you're on your own, and regardless of how thick my skin is - I think you need a good slap upside the head. I'd cheer it on.
In person - this resolves itself because the fuckwads who do this literally get physically beaten. Not always - but often enough we have a modicum of civil discussion we accept, and a point where no one is going to defend you because you were a right little cunt, and the beating was well deserved.
I don't know how you simulate the same constraint online. I'm not entirely sure you can, but I think the answer isn't to just stop trying.
> The computer is in your pocket. It's always on. Your social life is probably half online, half in person. Your school work is online. Your family is online. your reputation is online (as evidenced by those fucking blue checkmarks). The abuse is now on a highway into your life, even if you want to turn it off.
It is still a choice to participate online. I'm not on Twitter or Facebook or anything like that. It doesn't affect my life in the slightest. Someone could be on there right this minute calling me names, and it can't bother me because I don't see it, and I don't let it into my life. This is not a superpower, it's a choice to not engage with social media and all the ills it brings.
Have I occasionally gotten hate mail from an HN post? Sure. I even got a physical threat over E-mail (LOL good luck, guy). If HN ever became as toxic as social media can be, I could just stop posting and reading. Problem solved. Online is not real if you just ignore it.
The attitude of "If you don't like it, leave!" is allowing the bullies to win.
Minorities, both racial and gender, should be able to use social media without having vitriol spewed at them because they're guilty of being a minority.
This is yet another antitrust issue; regulation should be put in place so that a private company cannot own a platform with a market share large enough to become "the" public square.
That's asking human nature to change, or at least asking almost everyone to work on their trauma until they don't get so activated. Neither will happen soon, so this can't be the real answer.
Interesting, that wasn’t my interpretation of the twitter thread, it was more that spam and not hurtful content was the real tricky thing about moderating social media.
Spam was more of an example than the point, I think -- the argument Yishan is making is that moderation isn't for content, it's for behavior. The problem is that if bad behavior is tied to partisan and/or controversial content, which it often is, people react as if the moderation is about the content.
I respectfully disagree, if only because there is no way you can be 100% certain 'unmoderated media' was the primary motivator. Nobody can presume to know his motivations or inner dialogue. A look at that man's history shows clear mental health issues and self-destructive behavior, so we can infer some things but never truly know.
Violence exists outside of mean tweets and political rhetoric. People, even crazy ones, almost always have their own agency, even if it runs contrary to what most consider to be normal thoughts and behavior. They choose to act, regardless of others and mostly without concern or conscience. There are crazy people out there, and censoring others won't ever stop bad people from doing bad things. If it could, how do we account for the evils done by those who came before our interconnected world?
You hit the nail on the head, but maybe the other way around.
"Block" and "Mute" are the Twitter user's best friends. They keep the timeline free of spam, be it advertisers, or the growth hackers creating useless threads of Beginner 101 info and racking up thousands of likes.
After using several communications tools over the past couple of decades (BBSes, IRC, Usenet, AIM, plus the ones kids these days like), I'm convinced blocking and/or muting is required for any digital mass communication tool anyone other than sociopaths would use.
Charge them $10 to create an account (anonymous, real, parody, whatever). Then, if they break a rule, give them a warning; two rule breaks, a 24-hour posting suspension; three strikes and permanently ban the account.
Let them reregister for $10.
Congrats, I just solved spam, bots, assholes and permanent line steppers.
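If it helps, here's a rough sketch of that scheme in Python; the names and structure are invented for illustration, not a description of any real platform's system.

    REGISTRATION_FEE = 10  # dollars, forfeited on a permanent ban

    class Account:
        def __init__(self, name):
            self.name, self.strikes, self.banned = name, 0, False

        def record_violation(self):
            self.strikes += 1
            if self.strikes == 1:
                return "warning"
            if self.strikes == 2:
                return "24 hour posting suspension"
            self.banned = True  # re-registering costs another $10
            return "permanent ban"

    a = Account("spammer42")
    print([a.record_violation() for _ in range(3)])
    # ['warning', '24 hour posting suspension', 'permanent ban']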
This is how the SomethingAwful forums operated when they started charging for accounts. Unfortunately it probably wouldn't be useful as a test case because it was/is, at its core, a shitposting site.
Unless you generate more than $10 from the account. For example in presidential election years in the US billions is spent in advertising the elections. A few PACs would gladly throw cash at astroturf movements on social media even at the risk of being banned.
Sounds good to me. That would mean that your energy in moderation would directly result in income. If superpacs are willing to pay $3.33 a message, that's a money-spinner.
Having posted there in its heyday, it made for an interesting self-moderation dynamic for sure. Before I posted something totally offbase that I knew I'd be punished for, I had to think "is saying this stupid shit really worth $10 to me?". Many times that was enough to get me to pause (but sometimes you also can't help yourself and it's well worth the price).
I think the idea is that it shifts the incentives. Sure, a rich nation state could buy tons of bot accounts at $10 a pop. But is that still the most rational path to their goal? Probably not, because there are lots of other things you can do for $100M.
Give the user exclusive control over what content they can see. The platform should enforce legal actions against users only, as far as bans are concerned.
Everything else, like being allowed to spam or post too quickly, is a bug, and bugs should be addressed in the open.
I've had to give this some thought for other reasons, and after a couple decades solving analogous problems to moderation in security, I agree with yishan about signal to noise over the specific content, but what I have effectively spent a career studying and detecting with data is a single factor: malice.
It's something every person is capable of, and it takes a lot of exercise and practice with higher values to reach for something else when your expectations are challenged, and often it's an active choice to recognize the urge and act differently. If there were a rule or razor I would make on a forum or platform, it's that all content has to pass the bar of being without malice. It's not "assume good intent," it's recognizing that there are ways of having very difficult opinions without malice, and one can have conventional views that are malicious, and unconventional ones that are not. If you have ever dealt with a prosecutor or been on the wrong side of a legal dispute, these are people fundamentally actuated by malice, and the similar prosecution of ideas and opinions (and ultimately people) is what wrecks a forum.
It's not about being polite or civil, avoiding conflict, or even avoiding mockery and some very funny and unexpected smackdowns either. It's a quality that in being universally capable of it, I think we're also able to know it when we see it. "Hate," is a weak substitute because it is so vague we can apply it to anything, but malice is ancient and essential. Of course someone malicious can just redefine malice the way they have done other things and use it as an accusation because words have no meaning other than as a means in struggle, but really, you can see when someone is actuated by it.
I think there is a point where a person decides, consciously or not, that they will relate to the world around them with malice, and the first casualty of that is an alignment to honesty and truth. What makes it useful is that you can address malice directly and restore an equilibrium in the discourse, whereas accusations of hate and the like are irrevocable judgments. I'd wonder whether, given its applicability, this may be the tool.
For me, malice relates to intent. Intent isn't observable. When person X makes a claim about Y's intent, they're almost always filling in invisible gaps using their imagination. You can't moderate on that basis. We have to go by effects, not intent (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
It took me a long time to learn that if I tell a user "you were being $FOO" where $FOO relates to their intent, they can simply say "no I wasn't" and no one can prove otherwise, making the moderation position a weak one. Mostly they will deny it sincerely because they never had such an intent, at least not consciously. If you do that as a mod, you've just given them a reason to feel entirely in the right, and if you proceed to moderate them anyway, they will feel treated unjustly. This is a way to generate bad blood, make enemies, and lose the high ground.
The reverse strategy is better: describe the effects of someone's posts and explain why they are bad. When inevitably they respond with "but my intent was $BAR", the answer is "I believe you [what else can you say about something only that person could know?], but nonetheless the effects were $BAZ and we have to moderate based on effects. Intent doesn't communicate itself—the burden is on the commenter to disambiguate it." (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...)
Often when people get moderated in this way, they respond by writing the comment they originally had in their head, as a defense of what they actually meant. It's often astonishing what a gap there is between the two. Then you can respond that if they had posted that in the first place, it would have been fine, and that while they know what they have in their head while posting, the rest of us have no access to that—it needs to be spelled out explicitly.
Being able to tell someone "if you had posted that in the first place, it would have been fine" is a strong moderation position, because it takes off the table the idea "you're only moderating me because you dislike my opinions", which is otherwise ubiquitous.
This exchange ought to be a post in its own right. It seems to me that malice, hate, Warp contamination, whatever you want to call it, is very much a large part of the modern problem; and also it's a true and deep statement that you should moderate based on effects and not tell anyone what their inner intentions were, because you aren't sure of those and most users won't know them either.
Kate Manne, in Down Girl, writes about the problems with using intent as the basis for measuring misogyny. Intent is almost always internal; if we focus on something internal, we can rarely positively identify it. The only real way to identify it is capturing external expressions of intent: manifestos, public statements, postings, and sometimes things that were said to others.
If you instead focus on external effects, you can start to enforce policies. It doesn't matter about a person's intent if their words and actions disproportionately impact women. The same goes for many -isms and prejudice-based issues.
A moderator who understands this will almost certainly be more effective than one who gets mired in back-and-forths about intent.
Hi, dang! I wonder if it makes sense to add a summarized version of your critical point on "effects, not intent" to the HN guidelines. Though, I fear there might be undesirable ill effects of spelling it out that way.
Thanks (an understatement!) for this enlightening explanation.
There are so many heuristics like that, and I fear making the guidelines so long that no one will read them.
I want to compound a bunch of those explanations into a sort of concordance, or whatever the right bibliographic word is for explaining and adding to what's written elsewhere (so, not concordance!)
Fair enough. Yeah your plans of "compounding" and "bibliographic concordance" (thanks for the new word) sound good.
I was going to suggest this (but scratch it, your above idea is better): a small section called "a note on moderation" (or whatever) with hyperlinks to "some examples that give a concrete sense of how moderation happens here". There are many excellent explanations buried deep in the search links that you post here. Many of them are valuable riffing on [internet] human nature.
As a quick example, I love your lively analogy[1] of a "boxer showing up at a dance/concert/lecture" for resisting flammable language here. It's funny and a cutting example that is impossible to misunderstand. It (and your other comment[2] from the same thread) makes so many valuable reminders (it's easy to forget!). An incomplete list for others reading:
- how to avoid the "scorched earth" fate here;
- how "raw self-interest is fine" (if it gets you to curiosity);
- why you can't "flamebait others into curiosity";
- why the "medium" [of the "optionally anonymous internet forum"] matters;
- why it's not practical to replicate the psychology of "small, cohesive groups" here;
- how the "burden is on the commenter";
- "expected value of a comment" on HN; and much more
It's a real shame that these useful heuristics are buried so deep in the comment history. Sure, you do link to them via searches whenever you can; that's how I discovered 'em. But it's hard to stumble upon otherwise. Making a sampling of these easily accessible can be valuable.
But the difference between the original post and the revised post often is malice (or so I suspect). The ideas are the same, though they may be developed a bit more in the second post. The difference is the anger/hostility/bitterness coloring the first post, that got filtered out to make the second post.
I think that maybe the observable "bad effects" and the unobservable "malice" may be almost exactly the same thing.
I would attribute to malice things like active attempts to destroy the very forum - spamming is a form of "malice of the commons".
You will know when you encounter malice because nothing will de-malice the poster.
But if it is not malice; you can even take what they said and rewrite it for them in a way that would pass muster. In debate this is called steelmanning - and it's a very powerful persuasion method.
Spamming is an attempt to promote something. Destroying the forum is a side effect.
It's fair to describe indifference to negative effects of one's behavior as malicious, and it is, indeed almost never possible to transform a spammer into a productive member of a community.
Yeah, most people take promotional spamming as the main kind, but you can also refer to some forms of shitposting as spamming (join any twitch chat and watch whatever the current spam emoji is flood by) - though the second is almost more a form of cheering, perhaps.
If you wanted to divide it further I guess you could discuss "in-group spamming" and "out-group spamming" where almost all of the promotional stuff falls in the second but there are still some in the first group.
I guess I'd describe repeatedly posting the same emoji to a chat as flooding rather than spamming. Even then, your mention of cheering further divides it into two categories of behavior:
1. Cheering. That's as good a description as any. This is intended to express excitement or approval and rally the in-group. It temporarily makes the chat useless for anything else, but that isn't its purpose.
2. Flooding. This is an intentional denial of service attack intended to make the chat useless for as long as possible, or until some demand made by the attacker is met.
Yeah - one thing I've noticed with some forums is that the addition of "like/dislike" buttons (some have even more reactions available) greatly INCREASES the signal-to-noise ratio (more signal relative to noise), because the "me too" posts and the "fuck off" posts are reduced; you can just hit the button instead.
Some streaming platforms have a button you can hit that makes party emoji or heart emoji or whatever appear in a stream from the lower right, that's a similar thing which helps with cheering so you can then combat flooding.
I've observed the same with Matrix and Discord. It reduces noise to the point that while "fuck off" would call for moderation in a lot of contexts, reacting with the middle finger emoji usually doesn't even though it has the same meaning.
This aspect of people writing what they meant again after being challenged and it being different - I'd assert that when there is malice (or another intent) present, they double down or use other tactics toward a specific end other than improving the forum or relationship they are contributing to. When there is none, you get that different or broader answer, which is really often worth it. However, yes it is intent, as you identify.
I have heard the view that intent is not observable, and I agree with the link examples that the effect of a comment is the best available heuristic. It is also consistent with a lot of other necessary and altruistic principles to say it's not knowable. On detecting malice from data, however, the security business is predicated on detecting intent from network data, so while it's not perfect, there are precedents for (more-) structured data.
I might refine it to say that intent is not passively observable in a reliable way, as if you interrogate the source, we get revealed intent. On the intent taking place in the imagination of the observer, that's a deep question.
I think I have reasonably been called out on some of my views being the artifacts of the logic of underlying ideas that may not have been apparent to me. I've also challenged authors with the same criticism, where I think there are ideas that are sincere, and ones that are artifacts of exogenous intent and the logic of other ideas, and that there is a way of telling the difference by interrogating the idea (via the person.)
I even agree with the principle of not assuming malice, but professionally, my job has been to assess it from indirect structured data (a hawkish, is this an attack?) - whereas I interpret the moderator role as assessing intent directly by its effects, but from unstructured data (is this comment/person causing harm?).
Malice is the example I used because I think it has persisted in roughly its same meaning since the earliest writing, and if that subset of effectively 'evil' intent only existed in the imaginations of its observers, there's a continuity of imagination and false consciousness about their relationship to the world that would be pretty radical. I think it's right to not assume malice, but fatal to deny it.
Perhaps there is a more concrete path to take than my conflating it with the problem of evil, even if on these discussions of global platform rules, it seems like a useful source of prior art?
> "Hate," is a weak substitute because it is so vague we can apply it to anything
That is a big stretch. Hate can't be applied to many things, including disagreements like this comment.
But it can be pretty clearly applied to statements that, if carried out in life, would deny another person's or people's human rights. Another is denigrating or mocking someone on the basis of things they can't or shouldn't have to change about themselves, like their race or religion. There is a pretty bright line there.
Malice (per the conventional meaning of something bad intended, but not necessarily revealed or acted out) is a much lower bar that includes outright hate speech.
> but really, you can see when someone is actuated by it.
How can you identify this systematically (vs it being just your opinion), but not identify hate speech?
Hate absolutely can be, and is, applied to disagreements: plenty of people consider disagreement around allowing natal males in women's sport hateful. Plenty of people consider opposition to the abolition of police hateful. Plenty of people consider immigration enforcement hateful. I could go on...
> Plenty of people consider disagreement around allowing natal males in women's sport hateful. Plenty of people consider opposition to the abolition of police hateful. Plenty of people consider immigration enforcement hateful.
Those things aren't deemed hate speech, but they might be disagreed with and downvoted on some forums (e.g. HN) and championed on others (e.g. Parler); that has nothing to do with them being hate speech. They are just unpopular opinions in some places, and I can understand how it might bother you if those are your beliefs and you get downvoted.
Actual hate speech based on your examples is: promoting violence/harassment against non-cisgender people, promoting violence/harassment by police, and promoting violence/harassment by immigration authorities against migrants.
Promoting violence and harassment is a fundamentally different type of speech than disagreeing with the prevailing local opinion on a controversial subject that has many shades of gray (that your examples intentionally lack).
> Promoting violence and harassment is a fundamentally different type of speech than disagreeing with the prevailing local opinion on a controversial subject that has many shades of gray (that your examples intentionally lack).
Plenty of people disagree, and do indeed claim that not letting a transgender woman compete against natal females is harassment towards transgender people. Heck, I've even seen people claim that this is genocide.
I don't really care about these topics, but the point is that many people do not, or perhaps cannot, distinguish between "hate speech" and opinions they disagree with. Contrary to your claim that "hate can't be applied to many things, including disagreements like this comment", hate speech is often applied to dissenting opinions.
> Plenty of people disagree, and do indeed claim that not letting a transgender woman compete against natal females is harassment towards transgender people. Heck, I've even seen people claim that this is genocide.
It's a ridiculous claim but so what? It has no teeth anyways.
Nobody gets suspended from mainstream social media for simply expressing opposition to transgender gender athletics.
They get suspended for actually harassing transgender athletes.
You might get demonetized for that opinion, but getting paid to express an unpopular opinion isn't a right.
From that same article it says that he was banned for "Promoting, encouraging, or facilitating the discrimination or denigration of a group of people based on their protected characteristics". That is harassment, not expressing an opinion.
All you did was repeat the banned individual's own opinion about why they were banned. Of course, you are free to think Twitch is lying about its reasons for the ban, but you've offered no evidence of that.
According to that article, the individual also had a history of encouraging the murder of protestors by white militias on Twitch, so sounds like they had plenty of grounds on which to ban him already, and perhaps they finally just got around to it.
It undermines your argument to use someone like that as an example.
> From that same article it says that he was banned for "Promoting, encouraging, or facilitating the discrimination or denigration of a group of people based on their protected characteristics". That is harassment, not expressing an opinion. All you did was repeat the banned individual's own opinion about why they were banned. Of course, you are free to think Twitch is lying about its reasons for the ban, but you've offered no evidence of that.
And what was the nature of that "harassment"? The fact that he didn't agree with the orthodoxy around natal males in women's sports. You're acting out the exact dynamic I'm talking about: dissenting opinions are labeled harassment or denigration and become bannable offenses. Then people get banned for the "harassment" of holding a verboten opinion, with no actual harassment taking place. And this is far from the only example [1].
You're asking me to prove a negative. Who was harassed? Twitch didn't say, and no one can seem to identify a harassment target. When did this harassment occur? Again, nothing is specified. If you can identify a harassment victim it'd be good of you to do so.
> According to that article, the individual also had a history of encouraging the murder of protestors by white militias on Twitch, so sounds like they had plenty of grounds on which to ban him already, and perhaps they finally just got around to it.
If this were the case, the ban would have been for incitement to violence, a separate ToU clause Twitch uses to ban people who call for violence, not for discrimination or denigration of a group on the basis of protected characteristics. Furthermore, those comments occurred two years before the ban - your comment makes it sound like this happened the week before he commented about Thomas. In case you're wondering, they were in reference to people defending themselves from arsonists (the exact words were "dipshit protesters that think that they can torch buildings at 10 p.m."), not to shooting actual protestors. It continues to amaze me how 10 seconds can be edited to portray someone in a light completely opposite to reality. If you're interested in this creator's leanings, just take a look at a recent video [2].
Espousing dissenting opinions absolutely does get people banned. The reasons cited are harassment, but no harassment victim is identified because holding the prohibited opinion is now considered harassment even if no individual is actually harassed.
> How can you identify this systematically (vs it being just your opinion), but not identify hate speech?
What I think distinguishes malice is the prosecution example: if someone makes an argument about you personally, and they are unable to abstract the idea from you as an individual, that is acting with malicious intent toward you.
Antagonizing someone personally by trying to scandalize them in the eyes of others (performatively, as though in front of a judge or jury, or public opinion) is not discourse, it's just persecution and animus. The modern internet version in the form of, "hey everybody, this person is an X!" is probably malice in its more pure form, with a spectrum of dilutions that are variations of, "surely you aren't tainting yourself with this taboo!" after that. It's a purity game, but really a proxy for the 'deadly sin' of wrath and the secular concept of animus. In this sense, it's something that with reflection we recognize ourselves as capable of and learn to moderate it within ourselves.
Hate is something we only recently started accusing others of. I do think it's part of a substitute, secular belief that persuades people they cannot ever be good, so instead we can repent of our received worthlessness by giving up our moral agency and becoming _anti-bad_, all while accepting poor treatment and even demanding it for others, because the people we have animus for are also equally bad. The only hope is that by redemption through criticism, some are less worthless than others, but that's the upside. It's like a profoundly false religion whose core tenet is that you believe in the universality of hate, and then be against it. Things are shaped by the forces they oppose, and I think indexing on hate has had the effect of moralizing malice. When we start with the premise that we are all interchangeable group elements without moral agency, and that we must redeem ourselves by becoming above criticism, that is a system of slavery with the reins in the hands of the most zealous critics - who happen to be other living people, and not a relationship with a self or an ideal. This whole cycle is based on this _anti-bad_ negative definition that mostly seems to moralize malice in its followers. Back in the day they had to call them sins because they felt good, but really weren't.
Anyway this isn't your outgroup either, it's us. In terms of detecting it, I think ML is just on the cusp of doing it as well or better than most people and that's the future of forum moderation. This question of whether we can observe intent is going to be a big one. It's a huge topic.
What was normal was not hate, and hate - as an extreme emotion - was never normal, but if belief in hate is the root of one's entire ontology, there isn't common ground with someone who doesn't have that principle. Acts of hate absolutely occurred, especially systemic ones that resulted in genocides, but what was considered normal was not hate. We have adapted the word to mean something it did not previously. Just as we don't attribute to malice what can be attributed to stupidity, I'd say the same thing about modern hate and self-interest.
The instances of the other examples have only barely been reduced by identifying them, and in many cases, it has licensed behaviours that don't fit the definition, and it has empowered a layer of people who have jobs effectively managing and extracting value from them now, imo. Like malice, I reject hate, and in doing so, I also reject indexing on it as the axiom for a positive morality.
I think it's two things: the power of malice, and popularity measurement. Malice and fame.
Social networks are devices for measuring popularity; if you took the up/down arrows off, no one would be interested in playing. And we have proven once again that nothing gets up arrows like being mean.
HN has the unusual property that you can't (readily) see others' score, just your own. That doesn't really make it any less about fame, but maybe it helps.
When advertising can fund these devices to scale to billions, it's tough to be optimistic about how it reflects human nature.
I find most forums that advocate against behavior they view as malicious wind up becoming hugboxes as people skirt this arbitrary boundary. I will never, never come back to platforms or groups after I get this feeling.
Hugbox environments wind up having a loose relationship with the truth and a strong emphasis on emotional well-being.
Setting your moderation boundaries determines the values of your platform. I'd much rather talk to someone who wants to hurt my feelings than someone who is detached from reality or isn't saying what they actually think.
I don't think it's as simple as malice either because the people who get worked up about online debate often believe they are doing the right thing, they believe they are trying to save someone else from thinking the wrong thing. There's no malice there.
This is a very good way to pitch your afforestation startup accelerator in the guise of a talk on platform moderation. /s
I'm pretty sure I've got some bones to pick with yishan from his tenure on Reddit, but everything he's said here is pretty understandable.
Actually, I would like to develop his point about "censoring spam" a bit further. It's often said that the Internet "detects censorship as damage and routes around it". This is propaganda, of course; a fully censorship-resistant Internet is entirely unusable. In fact, the easiest way to censor someone online is through harassment, or DDoS attacks - i.e. have a bunch of people shout at you until you shut up. Second easiest is through doxing - i.e. make the user feel unsafe until they jump off platform and stop speaking. Neither of these require content removal capability, but they still achieve the goal of censorship.
The point about old media demonizing moderation is something I didn't expect, but it makes sense. This is the same old media that gave us cable news, after all. Their goal is not to inform, but to allure. In fact, I kinda wish we had a platform that explicitly refused to give them the time of day, but I'm pretty sure it's illegal to do that now[0], and even back a decade ago it would be financial suicide to make a platform only catering to individual creators.
[0] For various reasons:
- The EU Copyright Directive imposes an upload filtering requirement on video platforms that needs cooperation with old media companies in order to implement. The US is also threatening similar requirements.
- Canada Bill C-11 makes Canadian content (CanCon) must-carry for all Internet platforms, including ones that take user-generated content. In practice, it is easier for old media to qualify as CanCon than actual Canadian individuals.
I've often pointed out that the concept of censorship as being only or primarily through removal of speech is an antiquated concept from a time before pervasive communications networks had almost effortlessly connected most of the world.
Censorship in the traditional sense is close to impossible online today.
Today censorship is often, and most effectively, about suppressing your ability to be heard: flooding out the good communications with nonsense, spam, or abuse, or discrediting them by association (e.g. filling a political opponent's forums with apparent racists). This turns the nigh-uncensorability of modern communications methods on its head and makes it into a censorship tool.
And, ironically, anyone trying to use moderation to curb this sort of censorious abuse is easily accused of 'censorship' themselves.
I remain convinced that the best tool we have is topicality: When a venue has a defined topic you can moderate just to stay onto the topic without a lot of debatable value judgements (or bruised egos-- no reason to feel too bad about a post being moved or removed for being offtopic). Unfortunately, the structure of twitter pretty much abandons this critical tool.
(and with reddit increasingly usurping moderation from subreddit moderators, it's been diminished there)
Topicality doesn't solve all moderation issues, but once an issue has become too acrimonious it will inherently go off-topic: e.g. if your topic is some video game, then participants calling each other nasty names is clearly off-topic. Topicality also reduces the incidence of trouble coming in from divisive issues that some participants just aren't interested in discussing -- if I'm on a forum for a video game I probably don't really want to debate abortion with people.
In this thread we see good use of topicality at the top with Dang explicitly marking complaints about long form twitter offtopic.
When it comes to spam, scaling considerations mean that you need to be able to deal with much of it without necessarily understanding the content. I don't think this should be confused with content blindness being desirable in and of itself. Abusive/unwelcoming interactions can occur both in the form (e.g. someone stalking someone else from thread to thread, or repeating an argument endlessly) and in the content (continually re-litigating divisive/flamebait issues that no one else wants to talk about, vile threatening messages, etc.).
Related to topicality is that some users just don't want to interact with each other. We don't have to make a value judgement about one vs the other if we can provide space so that they don't need to interact. Twitter's structure isn't great for this either, but it's more that the nature of near-monopoly mega platforms isn't great for it. Worse, twitter actively makes it hard -- e.g. if you've not followed someone who is network-connected to other people you follow, twitter continually recommends their tweets (as a friend said: "No twitter, there is a reason I'm not following them"), and because blocking is visible, using it often creates drama.
There are some subjects on HN where I might otherwise comment but I don't because I'd prefer to avoid interacting with a Top Poster who will inevitably be active in those subjects. Fortunately, there are plenty of other places where I can discuss those things where that poster isn't active.
Even a relatively 'small' forum can easily have as many users as many US states' populations at the founding of the country. I don't think we really need mega platforms with literally everyone on them, and I see a fair amount of harm from them (including the effects of monoculture moderation gone wrong).
In general, I think the less topic constrained you can make a venue the smaller it needs to be-- a completely topic-less social venue probably should have no more than a few dozen people. Twitter is both mega-topicless and ultra-massive-- an explosive mixture which will inevitably disappoint.
Another tool I think many people have missed the value of is procedural norms, including decorum. I don't believe that using polite language actually makes someone polite (in fact, the nastiest and most threatening remarks I've ever received were made with perfectly polite language) -- but some people are just unable to regulate their own behavior. When there is an easily followed set of standards for conduct, you gain a bright-line criterion that makes it easier to eject people who are too unable to control themselves. Unfortunately, I think the value of an otherwise pointless procedural conformity test is often lost on people today, though such tests appear common in historical institutions. (Maybe a sign of the ages of the creators of these things: as a younger person I certainly grated against 'pointless' conformity requirements; as an older person I see more ways that their value can pay for their costs. I'd rather not waste my time on someone who can't even manage to go through the motions to meet the venue's standards.)
Early on in Wikipedia I think we got a lot of mileage out of this: the nature of the site essentially hands every user a loaded gun (the ability to edit almost everything, including elements of the site UI) and then tells them not to use it abusively, rather than trying to technically prevent them from doing so. Some people can't resist and are quickly kicked out without too much drama. Had those same people been technically prevented, they would have hung around longer and created trouble that was harder to kick them out over (and I think as the site added more restrictions on new/casual users, the number of issues from poorly behaved users increased).
I can speak only at a Star Trek technobabble level on this, but I'd like it if I could mark other random accounts as "friends" or "trusted". Anything they upvote or downvote becomes a factor in whether I see a post or not. I'd also be upvoting/downvoting things, and being a possible friend/trusted.
I'd like a little metadata with my posts, such as how controversially my network voted on them. The ones that are out of calibration I can view, see the responses, and then judge whether my network has changed. It would be nice to click on a friend and get a report across months of how similarly we vote. If we start to drift, I can easily cull them and get my feed cleaned up.
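A hedged sketch of how that "friends/trusted" weighting could work, in Python; the data shapes (a votes dict per post, a trusted set) are made up purely for illustration:

    def personal_score(post, trusted, trust_weight=2.0):
        # post["votes"] maps voter -> +1 or -1; trusted voters count extra.
        return sum((trust_weight if user in trusted else 1.0) * vote
                   for user, vote in post["votes"].items())

    def my_feed(posts, trusted):
        # Rank the feed by how my own trust network voted.
        return sorted(posts, key=lambda p: personal_score(p, trusted), reverse=True)

    posts = [
        {"id": 1, "votes": {"alice": 1, "bob": -1, "carol": 1}},
        {"id": 2, "votes": {"dave": 1, "eve": 1}},
    ]
    print([p["id"] for p in my_feed(posts, trusted={"alice", "carol"})])  # [1, 2]

The same per-friend comparison would also give you the "how similarly do we vote" report: tally your own votes against each trusted account's over a window of months.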
Slashdot started introducing features like this 20 years ago. We thought "web of trust" would be the future of content moderation, but subsequent forums have moved further and further away from empowering users to choose what they want to see and towards simple top-down censorship.
Web of trust works too well and unfortunate ideas can be entertained by someone you trust, which is a no-no once you have top-down ideas.
Almost everyone has someone they trust so much that even the most insane conspiracy theory would be entertained at least for awhile if that trusted person brought it up.
>Almost everyone has someone they trust so much that even the most insane conspiracy theory would be entertained at least for awhile if that trusted person brought it up.
What's wrong with that?
If someone you trust brings up a crazy idea, it should be considered. Maybe for a day, maybe for an hour, maybe for a half second, but it shouldn't be dismissed immediately no matter what.
I mean, remember when the Wuhan lab thing was a total conspiracy theory? Or when masks were supposedly not needed (it supposedly wasn't airborne) and you just had to wash your hands more? All sorts of fringe stuff sometimes turns out to be true. But you know, sometimes you get pizzagate; it's the price we pay.
It's not censorship, it's about optimizing for advertisers instead of users. Which means users can't have too much control. But since users won't pay advertising is the only business model that works.
Unfortunately this is antithetical to the advertising-based revenue model these sites operate on. There's no incentive for the site to relinquish their control over what you see and return it to the user.
On an anecdotal level, the fraction of tweets in my feed that are "recommended topics" or "liked by people I follow" or simply "promoted" has risen astronomically over the past few months. I have a pretty curated list of follows (~100), I had the "show me tweets in chronological order" box checked back when that was an option, and the signal to noise ratio has still become overwhelmingly unusable.
Amusingly, a subreddit I frequent has one notorious user who has abused the 'block' feature to ensure their frequent posts on a particular topic are free of dissenting commenters. They block anyone who even slightly disagrees with their position, this prevents them from seeing his threads, and the comments on each thread become entirely one-sided as a result - and the growing list of blocked people have no idea that those threads even exist.
Edit: From your comment history you sound like an Aussie, so I reckon you are.
Dude's been suspended from reddit now - as in by the admins. The moderation on /r/AusFinance itself is simultaneously extreme and very hands-off. It's basically AutoModerator set to kill (no naughty words, minimum karma requirements, minimum comment length, auto-deletes posts that have a few reports, likely with no human confirmation). Has a lot of false positives and yet allows very disruptive behaviour to carry on just fine. Not ideal but presumably the way it is because it's a growing sub and the moderators are unpaid volunteers that lack time.
I'm pretty happy with it now that you-know-who isn't around. But he'll be back as soon as reddit's presumably IP-based suspension-evasion-detection's time limit runs out.
I would frame it more the opposite: I can easily see comments that are worthless to me, and I want to automatically downvote any comments by the same user (perhaps needs tags: there are users whose technical opinions are great, but I’m not interested at all in their jokes or politics).
One problem with your idea is that moderation votes are anonymous, but ranking up positively or negatively depending on another users votes would allow their votes to be deanonymised.
Perhaps adding in deterministic noise would be enough to offset that? Or, to prevent deanonymisation, you'd need a minimum number of posting friends?
In fact I would love to see more noise added to the HN comments and a factor to offset early voting. Late comments get few votes because they have low visibility, which means many great comments don’t bubble to the top.
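Purely as an illustration of those two suggestions (not anything HN actually does), here is one way deterministic noise plus a lateness factor could be folded into a ranking score; the field names and constants are invented:

    import hashlib

    def jitter(viewer_id, comment_id, scale=0.5):
        # Deterministic per-(viewer, comment) noise: stable for the viewer,
        # but hard to invert to recover any individual vote.
        h = hashlib.sha256(f"{viewer_id}:{comment_id}".encode()).hexdigest()
        return (int(h[:8], 16) / 0xFFFFFFFF - 0.5) * scale

    def rank_score(comment, thread_start, viewer_id):
        # Comments posted late in a thread's life get a small boost to offset
        # the visibility advantage early comments enjoy.
        lateness_hours = (comment["created_at"] - thread_start) / 3600
        late_bonus = min(lateness_hours / 24, 1.0)
        return comment["points"] + late_bonus + jitter(viewer_id, comment["id"])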
Honestly, the voting sorts on Reddit are an underrated feature. Being able to sort comments not by total number of upvotes (which I feel panders to low-effort content) but by highest ratio of upvotes to downvotes produces an extremely different experience. I also enjoy being able to see "controversial" comments because sometimes those can be quite insightful.
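For anyone who hasn't compared them, the difference is easy to see in a toy version (this is not Reddit's actual implementation, just an illustration of how the sorts diverge on the same votes):

    def by_total(comments):
        # Low-effort crowd-pleasers with huge exposure win here.
        return sorted(comments, key=lambda c: c["ups"] - c["downs"], reverse=True)

    def by_ratio(comments):
        # Laplace smoothing (+1/+2) so one lucky upvote doesn't outrank
        # comments that have actually been tested by many voters.
        return sorted(comments,
                      key=lambda c: (c["ups"] + 1) / (c["ups"] + c["downs"] + 2),
                      reverse=True)

    def controversial(comments):
        # Heavily voted, nearly even splits float to the top.
        return sorted(comments,
                      key=lambda c: min(c["ups"], c["downs"]) * (c["ups"] + c["downs"]),
                      reverse=True)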
If you use Twitter long enough you'll find that some people you thought were interesting will either take an interest in a new topic ("yeah, I liked your discussions of music, but I actually don't ever wish to hear your political opinions") or just have some kind of epiphany that alienates you from them.
This would become hard to manage as you expand to users you “trust” but don’t agree with. Or users that have extremely good views in one topic they spend most of their time talking about, and the few rare posts about other topics that express extremely bad views. Users with high trust networks also become a target for spam and low brow accounts: anything they get to engage with will be boosted exponentially.
I think this comes back to the “focus on bad behaviors, not bad actors” discussion, but in reverse: focusing on “good actors” (“trusted” users) assumes people have fixed patterns, when there’s a lot of nuance and variability to be taken into account.
The solution is simple - only show users tweets from people they follow. People may say twitter can't make money this way, but with this model you don't need much money. You don't need moderation, or AI, or a massive infrastructure, or tracking, etc. You don't need managers or KPIs or HR, or anything beyond an engineer or two and a server or two. Musk could pay for this forever and it would never be more than a rounding error in his budget.
But this isn't what twitter is for. Twitter is for advertising.
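For what it's worth, the whole product described above fits in a few lines; a hedged sketch (posts_by is a hypothetical author-to-posts lookup, not any real API):

    from itertools import chain

    def timeline(follows, posts_by):
        # Only accounts the user follows, newest first; no ranking,
        # recommendations, or ads.
        posts = chain.from_iterable(posts_by(author) for author in follows)
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)

Of course, as the reply below points out, the complications start once you decide what to do about retweets, replies, and discovery.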
As with any simple solution, it becomes extremely complicated as you get into the details.
I follow official gov accounts notifying of policy changes, deadlines, etc.
- Should I see their retweets? Yes, they retweet relevant info from other gov accounts.
- Should I see replies to these tweets? Probably, there's useful info coming in the comments from time to time, in particular about situations similar to mine.
So as a user, I have valid reasons to want these two mechanisms. But then, applying them to shitposting accounts, it becomes a hellscape. And with users who bring in valuable info but sometimes shitpost, we're starting to need nuance. And so on.
We’re back to square one. The “simple” solution expanded to its useful form brings back the moderation issues we’re trying to flee.
This makes me wonder if part of the problem is that it's a bit like the psychology of road rage. It isn't so much the trigger as that it happened in a perceived personal safe space. Maybe it's not that there's someone wrong on the internet that makes people angry, but that the comment was fed to them while they were reading things they like. A blended approach that makes it clear when you're leaving the bubble might be more tolerable. Like turning on PvP in an MMORPG, you expect to encounter opposition.
Yes, I think there's part of that. In the same direction, compartmentalizing so that some content arrives within the frame you expect it might help a lot.
Google's defunct social network tried to embrace that concept to a point, and Facebook also has some ways to create explicit bubbles, but I think your vision of visually separate modes when moving between contexts could be the best approach.
This is such a gross simplification. First, even people who you follow might post both wanted and unwanted content, and the platform will be more useful if it can somehow show me the things I want to see. Second, it overlooks the content creator side of things. How does a new person without any followers start gaining them, or, vice versa, how does a new person who doesn't know whom to follow yet find anyone? People keep saying this is what they want from Twitter. But Twitter is not only for them; it is valuable for these other use cases.
You can choose to unfollow someone if their noise outweighs their signal, otherwise you just put up with it. Eliminate inline "cards" so they can't force 3rd party content into your stream. If they post a bunch of stuff you don't want to see you can unfollow - there is no way for anyone to force stuff on you.
You discover content by people you follow posting links to content, and by search. Good content will gain a following, as it always has even before feed algorithms.
I only see tweets from people I follow. There is zero chance I'd use the twitter app/site default timeline, look at "trending" topics etc. I just follow a hundred or so people and I see their tweets in alphabetical order because I use a sane client (In my case Tweetbot, but there are others).
Content/people discovery is not a problem because of retweets. Almost all the people I follow I follow because people I already followed retweeted or quoted them. Then I look at that profile, the content they write, and if they are interesting, I follow them too.
If someone produces content I don't like for whatever reason, I unfollow them. That includes content they quote or retweet too obviously.
This won’t work because it’s not good enough to just not see something you don’t like. You have to ensure that no one else can see the thing you don’t like as well. It should be deplatformed at the IP level.
Something like this, where the feed is ordered by time, has the added advantage of having a clear cue for when to stop scrolling. When you reach a post from yesterday, you know it's time to stop.
Any client that doesn't work exactly like this (doesn't let me resume from where I last read, lets me know when I have read everything I follow, and shows only what they write in alphabetical order) is a broken Twitter client.
The default behavior of both the Twitter apps and Twitter site feed is exactly like this. Plus it inserts ads into the feed, and tries to bait you with "trending" topics.
I normally use the realtwitter.com redirect to get just the tweets from people I follow. Occasionally I look at the regular feed and the other stuff can be interesting, but I’m also reminded of why I don’t use it by default.
You can still have search. And the people you follow can still post links (without those "cards") to content they find interesting - but in those cases you are in control. Nothing is forced into your feed.
One is 'put up or shut up' for appeals of moderator decisions.
That is, anyone who wishes to appeal needs to also consent to have all their activities on the platform, relevant to the decision, revealed publicly.
It definitely could prevent later accusations of secretiveness or arbitrariness. And it probably would also make users think more in marginal cases before submitting.
This is something that occurs on Twitch streams sometimes. While it can be educational for users to see why they were banned, some appeals are just attention seeking. Occasionally, though, it exposes the banned user's, or worse, a victim's personal information (e.g. mental health issues, age, location) and can lead to both users being targeted and bad behaviour by the audience. For example, Bob is banned for bad behaviour towards Alice (threats, doxxing); by making that public you are not just impacting Bob, but could also put Alice at risk.
This also used to be relatively popular in the early days of League of Legends, people requesting a "Lyte Smite". Players would make inflammatory posts on the forums saying they were banned wrongly, and Lyte would come in with the chatlogs, sometimes escalating to perma-ban. I did always admire this system and thought it could be improved.
There's also a lot of drama around Lyte in his personal life, should you choose to go looking into that.
But those users would be left alone in their pride in the put-up-or-shut-up model, because everybody else would see the mistakes of that user and abandon them. So the shame doesn't have to be effective for the individual, it just has to convince the majority that the user is in the wrong.
Right. To put it another way, this "put up or shut up" system, in my mind, isn't even really there to convince the person who got moderated that they were in the wrong. It's to convince the rest of the community that the moderation decision was unbiased and correct.
These news articles about "platform X censors people with political views Y" are about generating mass outrage from a comparatively small number of moderation decisions. While sure, it would be good for the people who are targeted by those moderation decisions to realize "yeah, ok, you're right, I was being a butthole", I think it's much more important to try to show the reactionary angry mob that things are aboveboard.
The most high profile, and controversial, "moderation" decisions made by large platforms recently have generally been for obvious, and very public, reasons.
It is expensive to do, because you have to ensure the content being made public doesn't dox / hurt someone other than the poster. But I think you could add two things to the recipe. 1 - real user validation. So the banned user can't easily make another account. Obviously not easy and perhaps not even possible, but essential. 2 - increased stake. Protest a short ban, and if you lose, you get an even longer ban.
I've never understood the idea that PMs on a platform must be held purely private by the platform even in cases where:
* There's some moderation dispute that involves the PMs
* At least one of the parties involved consents to release the PMs
The latter is the critical bit, to me. When you send someone a chat message, or an email, obviously there's nothing actually stopping them from sharing the content of the message with others if they feel that way, either legally or technically. If an aggrieved party wants to share a PM, everyone knows they can do so -- the only question mark is that they may have faked it.
To me the answer here seems obvious: allow users to mark a PM/thread as publicly visible. This doesn't make it more public than it otherwise could be, it just lets other people verify the authenticity, that they're not making shit up.
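One way to do the verification piece without the platform republishing anything itself: publish only a fingerprint of the thread when a participant opts in, so anyone can check a shared transcript against it. A minimal sketch, assuming a hypothetical message store, and just one possible mechanism rather than the only one:

    import hashlib
    import json

    # Hypothetical stored thread; in reality this lives server-side.
    thread = [
        {"from": "alice", "to": "bob", "body": "original message"},
        {"from": "bob", "to": "alice", "body": "reply"},
    ]

    def thread_fingerprint(messages):
        # Stable hash of the canonical transcript.
        canonical = json.dumps(messages, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

    # One participant opts in: the platform publishes only the fingerprint.
    published_fingerprint = thread_fingerprint(thread)

    def verify_shared_transcript(shared_messages):
        # Anyone can check a transcript the participant shares against it.
        return thread_fingerprint(shared_messages) == published_fingerprint

    print(verify_shared_transcript(thread))      # True
    print(verify_shared_transcript(thread[:1]))  # False: edited or partial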
Here's a radical idea: let me moderate my own shit!
Twitter is a subscription-based system (by this, I mean that I have to subscribe to someone's content) so if I subscribe to someone and don't like what they say then buh-bye!
Let me right click on a comment/tweet (I don't use social media so not sure of the exact terminology the kids use these days) with options like the following (a rough sketch of what this amounts to follows the list):
- Hide this comment
- Hide all comments in this thread from <name>
- Block all comments in future from <name> (you can undo this in settings).
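As referenced above, here's a toy client-side sketch of those three options; the structures and names are invented, not any real Twitter API:

    hidden_ids = set()        # "Hide this comment"
    muted_in_thread = set()   # (thread_id, author) pairs: hide from <name> in this thread
    blocked_authors = set()   # "Block all comments in future from <name>" (undoable)

    def visible(post):
        # post is a dict like {"id": ..., "thread": ..., "author": ..., "text": ...}
        if post["id"] in hidden_ids:
            return False
        if (post["thread"], post["author"]) in muted_in_thread:
            return False
        if post["author"] in blocked_authors:
            return False
        return True

    def render(feed):
        return [p["text"] for p in feed if visible(p)]

    feed = [
        {"id": 1, "thread": "t1", "author": "troll", "text": "bait"},
        {"id": 2, "thread": "t1", "author": "friend", "text": "useful"},
    ]
    blocked_authors.add("troll")
    print(render(feed))  # ['useful']: only the post from "friend" remains

All three controls are purely local state, so none of this needs a central moderation team.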
More like "let us moderate it ourselves". Reddit users already do this - there are extensions you can install that allow you to subscribe to another group of user's ban list. So you find a "hivemind" that you mostly agree with, join their collective moderation, and allow that to customize the content you like. The beauty is that you get to pick the group you find most reasonable.
Like, I can't believe that this reasoning doesn't resonate with people even outside of advertisers. Who wants to be on a social network where, if one of your posts breaks containment, you spend the next few weeks getting harassed by people who just hurl slurs and insults at you? This is already a problem on Twitter right now, and opening the floodgates is the opposite of helping.
I feel like you don't understand the issue here at all.
Blocking them requires first engaging with their content. This is what people always miss in the discussion. To know if you need to block someone or not involves parsing their comment and then throwing it in the bin.
The same goes for ignoring it. And eventually people get tired of the barrage of slurs and just leave because the brainpower required to sift through garbage isn't worth it anymore. That's how you end up with places like Voat.
Most customers don't care. The only reason it's a real issue is that Twitter users run the marketing departments at a lot of companies, and they are incorrectly convinced people care.
How much is "most"? What data do you have? Plus, even if ~20% of customers care and only half will boycott, that's still going to have an impact on the company's bottom line.
People producing products don’t actually care. I’d love to see stats on this made public (I’ve seen internal metrics). Facebook and Twitter don’t even show you what your ad is near. You fundamentally just have to “trust” them.
If you have a billboard with someone being raped beneath it and a photo goes viral, no one would blame the company advertising on the billboard. Frankly, no one will associate the two to change their purchasing habits.
The reasons corporations care are ESG scores and activist employees.
Also these brands still advertise in places where public executions will happen (Saudi Arabia). No one is complaining there.
It is a good recapitulation of why (particularly from a more legal-oriented layperson's standpoint) moderation is hard for an online platform.
The crazy thing is even though that is a long list, you could probably double the size of that list of issues with whole other classes of issues.
For example, that list is mostly focused on issues facing a platform with outside stakeholders making it difficult... then there are the inside stakeholders!
Stuff like... "actually, arguments/controversy increases engagement/views/ads (and your/my bonus/stock)", "more regulation we have to comply with actually increases barriers to entry for new competitors", "I work here and have politics X but I sense our company acting with bias Y", "Our moderators are traumatized from looking at too much <porn/hate/etc>", etc.
Besides a Reddit CEO posting on this, I would also pay money to see a CmdrTaco editorial on this topic...
I recently started my own Discord server and had my first experience in content moderation. The demographic is mostly teenagers. Some have mental health issues.
It was the hardest thing ever.
In the first incident I chose to ignore a certain user being targeted by others for posting repeated messages. That person posted a very angry message and left.
Then came the second incident, and I thought I had learnt my lesson. Once a user was targeted, I tried to stop others from targeting them. But this time the people who had targeted them wrote angry messages and left.
Someone asked a dumb question, and I replied in good faith. The conversation went on and on and got weirder and weirder, until the person said "You shouldn't have replied me.", and left.
Honestly I am just counting on luck at this time that I can keep it running.
I'm confused, do you think some individual leaving is a failure state? Realistically I don't think you can avoid banning or pissing some people off as a moderator, at least in most cases.
There's a lot of people whose behavior on internet message boards/chat groups can be succinctly summarized as "they're an asshole." Now maybe IRL they're a perfectly fine person, but for whatever reason they just engage like a disingenuous jerk on the internet, and the latter case is what's relevant to you as a moderator. In some cases a warning or talking-to will suffice for people to change how they engage, but often it won't; they're just dead set on some toxic behavior.
> I'm confused, do you think some individual leaving is a failure state?
When you are trying to grow something, them leaving is a failure.
I ran a Minecraft server for many years when I was in high school. It's very hard to strike a balance of:
1. Having players
2. Giving those players a positive experience (banning abusers)
3. Stepping in only when necessary
Every player that I banned meant I lost some of my player base. Some players in particular would cause an entire group to leave. Of course, plenty of players have alternate accounts and would just log onto one of those.
I think it can be a failure state, certainly, but sometimes it's unavoidable, and banning someone can also mean more people in the community, rather than less.
Would HN be bigger if it had always had looser moderation that involved less banning of people? I'm guessing not.
edit: I guess what I was thinking was that often in a community conflict where one party is 'targeted' by another party, banning one of those parties is inevitable. Not always, but often people just cannot be turned away from doing some toxic thing, they feel that they're justified in some way and would rather leave/get banned than stop.
It was hard from my perspective because I would rather not ban anyone. You're right that failure to ban could cause players to leave because they are being harassed, but it's hard to know when to step in.
If you're too strict you'll drive off a lot of players, which sounds acceptable, but nobody wants to play on a near-empty server. If you're too lenient you'll have too many hostile players, but at least your server is not empty.
This was by far my least favorite aspect of managing a server, but I still had an overall positive experience and it is a huge part of me becoming a software engineer.
The person leaving is the least bad part of what happened in the OP's example. Try reading it again:
>In the first incident I chose to ignore a certain user being targeted by others for posting repeated messages. That person posted a very angry message and left.
They have three examples, and all of them ended with the person leaving; it just sounded to me like they were implying that the person leaving represented a failure on their part as a moderator. That, had they moderated better, they could've prevented people leaving.
That was bad yes, but it sounds like they feel that the outcome each time of someone leaving (and possibly with an angry message) was also bad, and indicative that they handled the situation incorrectly.
IME, places (or forums, or social networks, etc.) with good moderation tend to fall into 2 camps of putting that into play:
1. The very hands-off approach style that relies on the subject matter of the discussion/topic of interest naturally weeding out "normies" and "trolls" with moderation happening "behind the curtain";
2. The very hands-on approach that relies on explicit clear rules and no qualms about acting on those rules, so moderation actions are referred directly back to the specific rule broken and in plain sight.
Camp 1 begins to degrade as more people use your venue; camp 2 degrades as the venue turns over to debate about the rules themselves rather than the topic of interest that was the whole point of the venue itself (for example, this is very common in a number of subreddits where break-off subreddits usually form in direct response to a certain rule or the enforcement of a particular rule).
Camp 2 works fine in perpetuity if the community is built as a cult of personality around a central authority figure; where the authority figure is also the moderator (or, if there are other moderators, their authority is delegated to them by the authority figure, and they can always refer arbitration back to the authority figure); where the clear rules are understood to be descriptive of the authority's decision-tree, rather than prescriptive of it — i.e. "this is how I make a decision; if I make a decision that doesn't cleanly fit this workflow, I won't be constrained by the workflow, but I will try to change the workflow such that it has a case for what I decided."
It would be cool if such forks were transparent on the original forum / subreddit, and if they also forked on specific rules. I.e. like a diff with rule 5 crossed out / changed / new rule added, etc.
I've seen an example of this. The fork is less active than the original, but I wouldn't call it a failure. Rather, it was a successful experiment with a negative result. The original forum was the most high-quality discussion forum I've ever experienced in my life, so this wasn't quite a generalizable experiment.
Discord is particularly tough, depending on the type of community. I very briefly moderated a smaller community for a video game, and goodness was that awful. There was some exceptionally egregious behavior, which ultimately made me quit, but even things like small cliques. Any action, perceived or otherwise, taken against a "popular" member of that clique would immediately cause chaos as people would begin taking sides and forming even stronger cliques.
One of the exceptionally egregious things that made me quit happened in a voice call where someone was screensharing something deplorable (sexually explicit content with someone that wasn't consenting to the screensharing). I wouldn't have even known it happened except that someone in the voice call wasn't using their microphone, so I was able to piece together what happened from them typing in the voice chat text channel. I can't imagine the horror of moderating a larger community where various voice calls are happening at all times of the day.
Imo, some people leaving is not necessarily a bad thing. Like, some people are looking for someone to bully. Either you allow them to bully or they leave. The choice determines the overall culture of your community.
And sometimes people are looking for a fight and will search until they find it ... and then leave.
> And sometimes people are looking for a fight and will search until they find it ... and then leave.
I've found the more likely result is that people looking for a fight will find it, and then stay because they've found a target and an audience. Even if the audience is against them (and especially so if moderators are against them), for some people that just feeds their needs even more.
Wow, and now we all learned that nothing should be censored, thanks to this definitely real situation where the same outcome occurred when you censored both the victim and the perpetrator.
> In the first incident I chose to ignore a certain user being targeted by others for posting repeated messages. That person posted a very angry message and left.
> Then came the second incident, and I thought I had learnt my lesson. Once a user was targeted, I tried to stop others from targeting them. But this time the people who had targeted them wrote angry messages and left.
Makes me think that moderators should have the arbitrational power to take two people or groups, and (explicitly, with notice to both people/groups) make each person/group's public posts invisible to the other person/group. Like a cross between the old Usenet ignore lists, and restraining orders, but externally-imposed without either party actively seeking it out.
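A sketch of that arbitration primitive, assuming a simple symmetric separation table; as far as I know nothing like this exists on the major platforms:

    # Moderator-imposed "mutual invisibility" between two users or groups.
    separations = set()   # frozenset pairs of usernames

    def impose_separation(moderator, a, b):
        # Both parties would be notified explicitly; the entry is symmetric.
        print(f"{moderator}: {a} and {b} can no longer see each other's public posts")
        separations.add(frozenset((a, b)))

    def can_see(viewer, author):
        return frozenset((viewer, author)) not in separations

    impose_separation("mod", "feud_user_1", "feud_user_2")
    print(can_see("feud_user_1", "feud_user_2"))   # False, in both directions
    print(can_see("bystander", "feud_user_2"))     # True: everyone else is unaffected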
What's the problem? Moderation means people are forcibly made to leave, but just as often they'll leave voluntarily. And lack of moderation will also cause people to leave. You'll never be able to moderate in a way that doesn't cause people to be angry and leave. If you try, you'll cause other people to be angry (or annoyed by spam, etc.) and leave. Users leaving isn't a failure.
I think all this just revolves around humans being generally insane and emotionally unstable. Technology just taps into this, exposes it, and connects it to others.
My interpretation was that he ran a Discord server for a topic whose demographics happened to include a large number of teenagers and folks with mental illness, thus unintentionally resulting in a Discord containing a lot of them, not that he was specifically running a Discord server targeting mentally ill teens.
I'm afraid I'm too young to understand that reference or context around chatrooms.
Anyway, the Discord server is purely for business and professional purposes. And I use the same username everywhere including Discord, so it's pretty easy to verify my identity.
I doubt it's explicitly for mentally ill teenagers. It could be, say, a video game discord, and so the demographics are mostly teens who play the game, and obviously some subset will be mentally ill.
It's probably something like this. I'm interested in a specific videogame and have bounced around a lot of discords trying to find one where most of the members are older. We still have some under-18s (including one guy's son), but they're in the minority, and that makes everything easier to moderate. We can just ban (or temp-ban) anyone who's bringing the vibe down and know that the rest will understand and keep the peace.
Teens don't have as much experience with communities going to shit, or with spaces like the workplace where you're collectively responsible for the smooth running of the group. They're hot-headed and can cause one bad experience to snowball where an adult might forgive and forget.
About the only thing that makes mentally healthy adults hard to moderate is when they get drunk or high and do stupid stuff because they've stopped worrying about consequences.
> Teens don't have as much experience with communities going to shit, or with spaces like the workplace where you're collectively responsible for the smooth running of the group. They're hot-headed and can cause one bad experience to snowball where an adult might forgive and forget.
Some people, not just teens of course, feel utterly compelled to go tit-for-tat, to retaliate in kind. Even if you can get them to cool down and back off for a while, and have a civil conversation with you about disengaging, they may tell you that they're going to retaliate against the other person anyway at a later date, in cold blooded revenge, because they have to. That necessity seems to be an inescapable reality for such people. They feel they have no choice but to retaliate.
When two such people encounter each other and an accident is misperceived as an offense, what follows is essentially a blood feud: an unbreakable cycle of retaliation after retaliation. Even if you can get to the bottom of the original conflict, they'll continue retaliating against each other for the later acts of retaliation. The only way to stop it is to ban one if not both of them. Moderation sucks; never let somebody talk you into it.
> An adult running a discord server for mentally ill teenagers seems like a cautionary tale from the 1990s about chatrooms
It sounds like a potential setup for exploitation, grooming, cult recruitment, etc. (Not saying the grandparent is doing this; for all I know their intentions are entirely above board, but other people out there likely are doing it for these kinds of reasons.)
Discord is already considered a groomer hotspot, at least in joking. You can join servers based on interests alone and find yourself in a server with very young people.
Mental illness or not, your interactions with users in a service with a block button are all voluntary. Unless someone is going out of their own way to drag drama out of Discord, or god forbid, into real life, it tends to be best to just let it happen, as they are entirely willingly participating in it and the escape is just a button away.
Communities defined by the most aggressive people who come in tend to be the ones where everyone else voluntarily leaves, because leaving is much better for them.
I see this a fair amount, and yeah, "just let people block others" is really terrible moderation advice.
Besides the very reasonable expectation almost everyone has that assholes will be banned, the inevitable result of not banning assholes is that you get more and more assholes, because their behavior will chase away regular users. Even some regular users may start acting more like assholes, because what do you do when someone is super combative, aside from possibly leaving? You become combative right back, to fight back.
> Because it is not TOPICS that are censored. It is BEHAVIOR.
> (This is why people on the left and people on the right both think they are being targeted)
An enticing idea, but simply not the case for any popular existing social network. And it's triply not true on yishan's reddit, which, through both administrative measures and moderation culture, targets any and all communities that do not share the favoured new-left politics.
Indeed, I've seen several subs put in new rules where certain topics aren't allowed to be discussed at all, because the administrators told them that the sub would get banned if the users went against the beliefs held by the admins (even if the admins had a minority opinion when it came to the country as a whole).
Then there is just arbitrary or malicious enforcement of the rules. /r/Star_Trek was told by admins they would be banned if they talked about /r/StarTrek at all, so now that's a topic that's no longer allowed in that sub. But there are tons of subs set up specifically to talk about other subs, where just about all posts are about other subs (such as /r/subredditdrama), and the admins never bother them.
I don't think we can have a conversation about moderation when people are pretending that the current situation doesn't exist, and that moderation is only ever done for altruistic reasons. It's like talking about police reform but pretending that no police officer has ever done anything wrong and not one of them could ever be part of a problem.
"Hey guys no brigading okay? ;-)" followed by a page which directly links to threads for people to brigade.
They don't even bother to use the np.reddit "no participation" domain. Most other subs don't even allow you to link outside the sub, because they've been warned by admins about brigading.
Their rules barely even mention brigading: https://www.reddit.com/r/subredditdrama/wiki/rules, and you have to go to the expanded version of the rules to find even this, which just says not to vote in linked threads.
Literally the entire purpose of this sub is to brigade and harass other subs. Their politics align with those of the admins, though, so it's allowed. It is blatant bullying at the tacit encouragement of the people running the site.
> Most other subs don't even allow you to link outside the sub, because they've been warned by admins about brigading.
I joined reddit in 2005 and have moderated several subreddits. The admins have never imposed anything resembling that on any subreddit I have moderated. I have a suspicion they impose it when they see a large amount of brigading behavior.
Perhaps it's not applied in an entirely fair or even manner, but I suspect it's only applied when there's an actual problem.
IIRC, np was the norm for many years and it just didn't actually change anything. Oodles of people do get banned from SRD for commenting in linked threads. The easiest way to see this is when month+ old threads get linked. Only the admins can see downvoting patterns.
Is simply linking to other threads on reddit sufficient for you to consider something promoting brigading?
> Is simply linking to other threads on reddit sufficient for you to consider something promoting brigading?
As I mentioned previously, linking to other subs, or even simply _talking_ about /r/StarTrek, was enough for admins to accuse /r/Star_Trek of brigading. They threatened to shut them down unless they stopped members from doing that, and so you're not allowed to do it in the sub anymore.
Whether you think that linking to other subs is brigading or not, it's clear that admins call it brigading when they want to shut down subs, yet then let continue on much larger subs dedicated to the act as long as the admins like the sub.
Edit: For example, here's a highly upvoted SRD post talking about the admins threatening /r/Star_Trek if they mention /r/StarTrek[1]. They call /r/Star_Trek linking to /r/StarTrek posts to complain about them "brigading," in the same post that they themselves are linking to a /r/Star_Trek post in order to complain about it.
What I got from similar subreddits (e.g. /r/bestoflegaladvice) is that you'll get (shadow)banned really fast if you click a link in the subreddit and comment on the linked post.
Just mentioning this because I agree with the point you make (in general).
Brigading absolutely happens in SRD. We can talk about whether this style of content should exist, but it does not "exist with the explicit purpose of brigading other subs."
Right, it exists with the tacit purpose of brigading other subs. But like Kiwifarms, blurbs in the site rules mean nothing given the context of the community.
Yeah, the "there's no principled reason to ban spam" is just silly. The recipients don't want to see it whereas people cry censorship when messages they want to see are blocked.
It's literally the difference between your feed being filtered by your choices & preferences and someone else imposing theirs upon you.
You must hang out in a very different place, then. I see much more outcry when 3rd parties come between willing speakers and recipients, with most of the rest being people misrepresenting censorship as moderation because it allows them to justify it.
I mean, that's the claim; the counter-claim would require a social network banning topics and not behavior. Note: as a user you can see topics, you can't see behavior. The fact that some users flood other users' DMs is not visible to all users. So how do you know?
"I don't trust lefty CEOs" is a fair enough answer, but really that's where the counter-claim seems to end. Now that we have a right-wing CEO, it looks like the shoe is on the other foot[1]
> As a user you can see topics, you can't see behavior.
True, but not really a good argument for the "trust us, this person needed banning, no we will not give any details" style of moderation that most companies have applied so far. And you can see topics, so you'll generally notice when topics are being banned rather than behavior, because the two usually don't align perfectly.
I'm not sure we could tell the difference. As Yishan states, the proof of the behavior isn't being made public because of the exposure to creating new issues. Without that, you would never know.
As for specific platforms, aka Reddit, how can one be sure that right wingers on that platform aren't in fact more likely to engage in bad behavior than left wingers? It might be because they are being targeted, but it could also be that that group of people on that platform tends to act more aggressively.
I am NOT saying that I know if Reddit is fair in its moderation, I just don't know.
All the communist subreddits are in constant hot water with the admins. They banned ChapoTrapHouse when it was one of the biggest subreddits. When a bunch of moderators tried to protest against reddit hosting antivax content, the admins took control over those subreddits.
r/politics is “communist”? That’s… just a really dumb take. If there is a far-left presence on Reddit, it is not prominent. r/politics is left-leaning and angry, but, objectively speaking, not really all that extreme.
And, for what it’s worth, it seems perfectly reasonable to label those who tried to overthrow our democratic government “traitors”.
This kind of sentiment always shows up in this kind of thread; I think a lot of people don't really grok that being far enough to one side causes an unbiased forum to appear biased against you. If you hate X enough, Reddit and Twitter are going to seem pro-X, regardless of what X is.
(And, separately, almost no one who argues about anything being "communist" is using the same definition of that word as the person they're arguing with, but that's a different problem entirely)
> r/politics is “communist”? That’s… just a really dumb take. If there is a far-left presence on Reddit, it is not prominent. r/politics is left-leaning and angry, but, objectively speaking, not really all that extreme.
Obviously, none of the affluent idiots from chapo or hexbear controlling r/politics, r/news, or r/worldnews are really communists; they are just rich asshats that pretend to be. But my point still stands: they are still spouting Marxist nonsense and violent speech, and their behavior isn't moderated, as long as they don't target "the wrong people".
Can you elaborate with an example? I'm unfamiliar with reddit and its content management. I'm also unsure whether the premise of "AI" moderation is true: how could it moderate beyond a pattern of behavior, since it can't reasonably be scanning every post and comment for political affiliation?
IIRC moderators can set their subs to not appear on the front page intentionally. I know that /r/askhistorians did so in order to stem the tide of low effort comments. Given the reddit community's political leanings as a whole, self-removal from the front page is probably the best for everyone involved
He is half-correct, but not in a good way. When people on the left say something that goes against new-left agenda, they get suppressed too. That is not a redeeming quality of the system or an indicator of fairness. It simply shows that the ideology driving moderation is even more narrow-minded and intolerant of dissent than most observers assume at first sight.
At the same time, it's trivial to demonstrate that YouTube and Twitter (easy examples) primarily target conservatives with their "moderation". Just look at who primarily uses major alternative platforms.
But that's beside the point, because it's much simpler than that. You don't need elaborate analysis to see that people tired of Twitter "moderation" filled 4 other platforms: Gab, Parler, Minds and Truth Social. Literally all of them are characterized as right-wing by the same left-wing media outlets that claim that Twitter is impartial in moderation.
I'm tired of gaslighting around this issue. Just within replies to my above comment I've gotten two contradictory statements. One, that there is no bias in Twitter moderation, because conservatives are actually targeted less. Two, that there is no bias because conservatives are more likely to break rules, so they should be banned more often. We have two diametrically opposite descriptions of reality that nevertheless converge on the same conclusion. This is ideology-driven reasoning at its worst.
I said that at best it is impartial, and if anything, is more permissive to conservatives.
Your "logic", that extremists like the KKK and other right-wing nutjobs not being allowed on Twitter and thus having to create their own sites is proof of a double standard, is absolutely insane.
You're so busy with your persecution fetish you cannot even see simple reason.
Or consider that perhaps the right in particular tends to harbor and support people who lean more towards disinformation, hate speech, and incitement to violence.
At least one missing element is that of reputation. I don't think it should work exactly like it does in the real world, but the absence of it seems to always lead to major problems.
The cost of being a jerk online is too low - it's almost entirely free of any consequences.
Put another way, not everyone deserves a megaphone. Not everyone deserves to chime in on any conversation they want. The promise of online discussion is that everyone should have the potential to rise to that, but just granting them that privilege from the outset and hardly ever revoking it doesn't work.
Rather than having an overt moderation system, I'd much rather see where the reach/visibility/weight of your messages is driven by things like your time in the given community, your track record of insightful, levelheaded conversation, etc.
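As a very rough sketch of what I mean (weights entirely made up): reach becomes a multiplier driven by tenure and track record rather than a binary keep/remove decision.

    from dataclasses import dataclass

    @dataclass
    class Member:
        days_in_community: int
        well_received_posts: int
        flagged_posts: int

    def reach_multiplier(m):
        # Illustrative formula only: tenure and track record raise reach,
        # flags lower it, clamped so nobody gets amplified without limit.
        tenure = min(m.days_in_community / 365.0, 1.0)
        track = m.well_received_posts - 2 * m.flagged_posts
        raw = 0.2 + 0.4 * tenure + 0.02 * track
        return max(0.05, min(raw, 2.0))

    newcomer = Member(days_in_community=3, well_received_posts=0, flagged_posts=0)
    regular = Member(days_in_community=400, well_received_posts=150, flagged_posts=2)
    print(reach_multiplier(newcomer), reach_multiplier(regular))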
I agree with the basic idea that we want reputation, but the classic concept of reputation as a single number in the range (-inf, inf) is useless for solving real-world problems the way we solve them in the real world.
Why? Because my reputation in meatspace is precisely 0 with 99.9% of the world's population. They haven't heard of me, and they haven't heard of anyone who has heard of me. Meanwhile, my reputation with my selected set of friends and relatives is fairly high, and undoubtedly my reputation with some small set of people who are my enemies is fairly low. And this is all good, because no human being can operate in a world where everyone has an opinion about them all the time.
Global reputation is bad, and giving anyone a megaphone so they can chime into any conversation they want is bad, full stop. Megaphone-usage should not be a democratic thing where a simple majority either affirms or denies your ability to suddenly make everyone else listen to you. People have always been able to speak to their tribes/affinity groups/whatever you want to call them without speaking to the entire state/country/world, and if we want to build systems that will be resilient then we need to mimic that instead of pretending that reputation is a zero-sum global game.
Social reputation IRL also has transitive properties - vouching from other high-rep people, or group affiliations. Primitive forms of social-graph connectedness have been exposed in social networks but it doesn't seem like they've seen much investment in the past decade.
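A toy version of that transitive idea, assuming a simple vouching graph and a fixed decay per hop; real trust-propagation systems are far more involved:

    # Who vouches for whom (directed edges); purely illustrative data.
    vouches = {
        "me": {"close_friend"},
        "close_friend": {"their_colleague"},
        "their_colleague": {"stranger"},
    }

    def trust(viewer, target, decay=0.5, max_hops=2):
        # Breadth-first walk outward from the viewer; trust halves per hop.
        frontier, seen, hop = {viewer}, {viewer}, 0
        while frontier and hop < max_hops:
            hop += 1
            nxt = set()
            for person in frontier:
                for vouched in vouches.get(person, set()):
                    if vouched == target:
                        return decay ** hop
                    if vouched not in seen:
                        seen.add(vouched)
                        nxt.add(vouched)
            frontier = nxt
        return 0.0  # too far away: for most of the world, reputation stays at zero

    print(trust("me", "close_friend"))     # 0.5
    print(trust("me", "their_colleague"))  # 0.25
    print(trust("me", "stranger"))         # 0.0, beyond the hop limit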
So there's 8 billion people in the world approximately. Therefore you're saying that you have reputation (either good or bad) with 8e9 x 0.001 of them. That is 8 million people? Wow, you maintain a very large reputation, actually. I hope it's not bad reputation!
But all jokes aside, you're exactly right. Reputation is only relevant in your closer circles. Having a global reputation score would just be another form of the Twitter blue check mark, but worse.
I want to hear from people that I don't know, the average Joe, not from Hollywood, Fox News persons, or politicians. And I want to hear from reputable people in my circles, but not so much that I'm just hearing from the echo chamber.
Social media is just a really big game that people are playing, where high scores are related to number of followers. So dumb.
> The cost of being a jerk online is too low - it's almost entirely free of any consequences.
Couldn't agree more here.
Going back to the "US Postal Service allows spam" comment made by Yishan: well, yes, the US Postal Service will deliver mail that someone has PAID to have delivered; they've also paid to have it printed. There's not zero cost here, and most businesses would not send physical spam if there weren't at least some return on investment.
One big problem not even touched by Yishan is vote manipulation, or to put it in your terms, artificially boosted reputation. I consider those to be problems with the platform. Unfortunately, I haven't yet seen a platform that can solve the problem of "you, as an individual, have ONE voice". It's too easy for users to make multiple accounts, get banned, create a new one, etc.
At the same time, nobody who's creating a platform for users will want to make it HARDER for users to sign up. Recently Blizzard tried to address this (in spirit) by forcing users to use a phone number and not allowing "burner numbers" (foolishly determined by "if your phone number is pre-paid"). It completely backfired for being too exclusionary. I personally hate the idea of Blizzard knowing and storing my phone number. However, the idea that it should be more and more difficult or costly for toxic users to participate in the platform after they've been banned is not, on its own, a silly idea.
There is no such thing as "Reputation", or rather - Reputation isn't one dimensional, and it's definitely not global. There will naturally emerge trusted bastions, but which bastions are trusted is very much an individual choice.
Platforms that operate on, and are valuable for, their view of reputation are valid and valuable. At the same time, that's clearly a form of editorial control. Their reputation system saying "Trust us" becomes an editorial statement. They can and should phrase their trust as an opinion.
On the other hand, an awful lot of platforms want to pretend they are the public square while maintaining tight editorial control. Viewing Twitter as anything other than a popular opinion section is foolish, yet there's a lot of fools out there.
Maybe? Reputation systems can devolve into rewarding groupthink. It's a classic "you get what you measure" conundrum, where once it becomes clear that an opinion / phrase / meme is popular, it's easy to farm reputation by repeating it.
I like your comment about "track record of insightful, levelheaded conversation", but that introduces another abstraction. Who measures insight or levelheadedness, and how to avoid that being gamed?
In general I agree that reputation is an interesting and potentially important signal; I'm just not sure I've ever seen an implementation that doesn't cause a lot of the problems it's trying to solve. Any good examples?
Yeah, definitely potential for problems and downsides. And I don't know of any implementations that have gotten it right. And to some degree, I imagine all such systems (online or not) can be gamed, so it's also important for the designers of such a system to not try to solve every problem either.
And maybe you do have some form of moderation, but not in the sense of moderation of your agreement/disagreement with ideas but moderation of behavior - like a debate moderator - based on the rules of the community. Your participation in a community would involve reading, posting as desired once you've been in a community for a certain amount of time, taking a turn at evaluating N comments that have been flagged, and taking a turn at evaluating disputes about evaluations, with the latter 2 being spread around so as to not take up a lot of time (though, having those duties could also reinforce your investment in a community). The reach/visibility of your posts would be driven off your reputation in that community, though people reading could also control how much they see too - maybe I only care about hearing from more established leaders while you are more open to hearing from newer / lower reputation voices too. An endorsement from someone with a higher reputation counts more than an endorsement from someone who just recently joined, though not so huge of a difference that it's impossible for new ideas to break through.
As far as who measures, it's your peers: the other members of the community, although there needs to be a ripple effect of some sort. If you endorse bad behavior, then that negatively affects your reputation. If someone does a good job of articulating a point, but you ding them simply because you disagree with that point, then someone else can ding you. If you consistently perform the community duties well, it helps your reputation.
The above is of course super hand-wavy and incomplete, but something along those lines has IMO a good shot of at least being a better alternative to some of what we have today and, who knows, could be quite good.
> Your participation in a community would involve reading, posting as desired once you've been in a community for a certain amount of time, taking a turn at evaluating N comments that have been flagged, and taking a turn at evaluating disputes about evaluations, with the latter 2 being spread around so as to not take up a lot of time (though, having those duties could also reinforce your investment in a community).
This is an interesting idea, and I'm not sure it even needs to be that rigorous. Active evaluations are almost a chore that will invite self-selection bias. Maybe we use sentiment analysis/etc to passively evaluate how people present and react to posts?
It'll be imperfect in any small sample, but across a larger body of content, it should be possible to derive metrics like "how often does this person compliment a comment that they also disagree with" or "relative to other people, how often do this person's posts generate angry replies", or even "how often does this person end up going back and forth with one other person in an increasingly angry/insulting style"?
It still feels game-able, but maybe that's not bad? Like, I am going to get such a great bogus reputation by writing respectful, substantive replies and disregarding bait like ad hominems! That kind of gaming is maybe a good thing.
One fun thing is this could be implemented over the top of existing communities like Reddit. Train the models, maintain a reputation score externally, offer an API to retrieve, let clients/extensions decide if/how to re-order or filter content.
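Hand-waving a lot, the passive version could look something like this: run each interaction through a sentiment pass (stubbed out below) and accumulate per-user behavior stats that an external client could query. Field names, thresholds, and the sentiment function are all invented.

    from collections import defaultdict

    def sentiment(text):
        # Stub: a real system would use an actual sentiment model here.
        angry_words = {"idiot", "stupid", "shut up"}
        return -1.0 if any(w in text.lower() for w in angry_words) else 0.5

    behavior = defaultdict(lambda: {"replies": 0, "civil": 0, "angry_received": 0})

    def record_reply(author, parent_author, text):
        score = sentiment(text)
        stats = behavior[author]
        stats["replies"] += 1
        if score >= 0:
            stats["civil"] += 1
        else:
            behavior[parent_author]["angry_received"] += 1

    def reputation(user):
        s = behavior[user]
        if not s["replies"]:
            return 0.5
        # Civility ratio, lightly penalized by how much anger the user provokes.
        return s["civil"] / s["replies"] - 0.05 * s["angry_received"]

    record_reply("alice", "bob", "Interesting point, though I disagree.")
    record_reply("carol", "alice", "You are an idiot.")
    print(round(reputation("alice"), 2), round(reputation("carol"), 2))  # 0.95 0.0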
This is pure hypothetical, but I bet Reddit could derive an internal reputational number that is a combination of both karma (free and potentially farmable) and awards (that people actually pay for or that are scarce and shows what they value) that would be a better signal to noise ratio than just karma alone.
It could work like Google search where link farms have less weight than higher quality pages. Yes, you can still game it yet it will be harder to do so.
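Purely hypothetically, the blend might look like this, with karma log-dampened so farming it has diminishing returns and awards weighted much more heavily; the weights are made up.

    import math

    def blended_reputation(karma, awards_received, gold_awards=0):
        # Log-dampen karma; awards (scarce, paid for) count linearly.
        karma_part = math.log10(max(karma, 1))
        award_part = 2.0 * awards_received + 5.0 * gold_awards
        return karma_part + award_part

    print(blended_reputation(karma=250_000, awards_received=0))              # high karma, no awards
    print(blended_reputation(karma=3_000, awards_received=4, gold_awards=1)) # modest karma, valued posts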
This is a good idea, except that it assumes _reputation_ has some directional value upon which everyone agrees.
For example, suppose a very famous TV star joins Twitter and amasses a huge following due to his real-world popularity independent of Twitter. (Whoever you have in mind at this point, you are likely wrong.) His differentiator is he's a total jerk all the time, in person, on TV, etc. He is popular because he treats everyone around him like garbage. People love to watch him do it, love the thrill of watching accomplished people debase themselves in attempts to stay in his good graces. He has a reputation for being a popular jerk, but people obviously like to hear what he has to say.
Everyone would expect his followers to see his posts, and in fact it is reasonable to expect those posts to be more prominent than those of lesser-famous people. Now imagine that famous TV star stays in character on the platform and so is also total jerk there: spewing hate, abuse, etc.
Do you censor this person or not? Remember that you make more money when you can keep famous people on the site creating more engagement.
The things that make for a good online community are not necessarily congruent with those that drive reputation in real life. Twitter is in the unfortunate position of bridging the two.
I posted some additional ideas in a reply to another comment that I think addresses some of your points, but actually I think you bring up a good point of another thing that is broken with both offline and online communities: reputation is transferrable across communities far more than it should be.
You see this anytime e.g. a high profile athlete "weighs in" on complicated geopolitical matters, when in reality their opinion on that matter should count next to nothing in most cases, unless in addition to being a great athlete they have also established a track record (reputation) of being expert or insightful in international affairs.
A free-for-all community like Twitter could continue to exist, where there are basically no waiting periods before posting and your reputation from other areas counts a lot. But then other communities could set their own standards that say you can't post for N days and that your incoming reputation factor is 0.001 or something like that.
So the person could stay in character but they couldn't post for awhile, and even when they did, their posts would initially have very low visibility because their reputation in this new community would be abysmally low. Only by really engaging in the community over time would their reputation rise to the point of their posts having much visibility, and even if they were playing the long game and faking being good for a long time and then decided to go rogue, their reputation would drop quickly so that the damage they could do would be pretty limited in that one community, while also potentially harming their overall reputation in other communities too.
As noted in the other post, there is lots of vagueness here because it's just thinking out loud, but I believe the concepts are worth exploring.
> You see this anytime e.g. a high profile athlete "weighs in" on complicated geopolitical matters, when in reality their opinion on that matter should count next to nothing in most cases, unless in addition to being a great athlete they have also established a track record (reputation) of being expert or insightful in international affairs.
I apologize for multiple replies; I'm not stalking you. It's just an area I'm interested in and you're hitting on many ideas I've kicked around over the years.
I once got paid to write a white paper on a domain-based reputational system (long story!), based on just this comment. I think it requires either a formal taxonomy, where your hypothetical athlete might have a high reputation for sports and a low one for international affairs, or a post-hoc cluster-based system that identifies the semantic distance from one's areas of high reputation.
And reputation itself can be multi-dimensional. Behavior, like we've talked about elsewhere, is an important one. But there's also knowledge. Can the system model the difference between a knowledgeable jerk (reputation 5/10) and a hapless but polite and constructive novice (reputation 5/10)?
So if your athlete posts about workouts, they may have a high knowledge reputation. And if they post about the design of stadiums, it's relatively closer to their area of high knowledge reputation than international affairs would be. And so on. And independently of their domain knowledge, they have a behavior reputation that follows them to new domains.
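A toy version of that, assuming a hand-assigned topic-similarity table standing in for the taxonomy or clustering: knowledge reputation carries over to nearby domains at a discount, while the behavior score follows the person everywhere.

    # Hand-assigned similarity between topics, purely illustrative.
    topic_similarity = {
        ("workouts", "workouts"): 1.0,
        ("workouts", "stadium design"): 0.4,
        ("workouts", "international affairs"): 0.05,
    }

    athlete = {
        "behavior": 0.9,                   # follows them to every domain
        "knowledge": {"workouts": 0.95},   # earned in one domain only
    }

    def knowledge_in(profile, topic):
        # Carry knowledge reputation over, discounted by topic similarity.
        best = 0.0
        for known_topic, score in profile["knowledge"].items():
            sim = topic_similarity.get((known_topic, topic), 0.0)
            best = max(best, score * sim)
        return best

    for t in ("workouts", "stadium design", "international affairs"):
        print(t, round(knowledge_in(athlete, t), 2), "behavior:", athlete["behavior"])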
These are such great questions, and worth exploring IMO. My thinking is biased towards online communities (like old Usenet groups or mailing lists) and not towards giant free-for-alls like Twitter, so I think I have a lot of blind spots, but this is something I've thought about a lot too, so it's great to hear peoples' ideas, thank you.
> I think it requires either a formal taxonomy [...] or a post-hoc cluster-based system that identifies the semantic distance
Yeah, I wonder if some existing subject-x-is-related-to-subject-y mapping could be used, at least as a hint to the system, e.g. all the cross-reference info from an encyclopedia. When communities become large enough, you might also be able to tease out a little bit of additional info from how many people in group X also participate in group Y maybe.
As an experiment, I'd also be curious to see how annoying it'd be to have your reputation not transfer across communities at all, but instead you build reputation via whatever that community defines as good behavior and having existing community members vouch for you (which, if you turn out to be a bad apple relatively soon after joining, then their endorsement ends up weakening their reputation to some degree a little too). There are some aspects to how it works in real life that are worth bringing over into this, I think.
> system model the difference between a knowledgeable jerk (reputation 5/10) and a hapless but polite and constructive novice (reputation 5/10)?
I touched on this in a sibling comment somewhere, though I've long lost track of the threads, but I think the platform would want to rely on human input to some degree. Part of being a good community member is doing things like periodically reviewing a batch of posts that got flagged by other users. In one community there might be an 'anything goes' mentality, whereas another may set stricter standards around what's considered normal discussion, and so I think it'd be hard for a machine to differentiate but relatively easy for an established community member (again though, it always has at least a micro impact on your reputation, so how you carry out your reviewing duties can also increase or decrease your reputation in that community).
Odds are too that, if you occasionally have to put on the hat of evaluating others' behavior, it might help you pause next time you're tempted to fly off the handle and post a rant.
Anyway, the focus of the technology would be less about automatically policing behavior and more about making it easier for communities to call out good and bad behavior without much effort, and then having that adjust a person's reputation: often very tiny adjustments that over time accumulate to establish a good reputation.
> independently of their domain knowledge, they have a behavior reputation that follows them to new domains
These are good ideas that might help manage an online community! On the other hand, they would be bad for business! When a high-profile athlete weighs in on a complicated geopolitical matter and then (say) gets the continent wrong, that will generate tons of engagement (money) for the platform. Plus there's no harm done. A platform probably wants that kind of content.
And the whole reason the platform wants the athlete to post in the first place is because the platform wants that person's real-world reputation to transfer over. I believe it is a property of people that they are prone to more heavily weigh an opinion from a well-known/well-liked/rich person, even if there is no real reason for that person to have a smart opinion on a given topic. This likely is not something that can be "fixed" by online community governance.
I agree this solution wouldn't scale to all platforms; those driven by maximizing views and engagement would find it counter-productive, as you say.
But that's fine. There are enough of those platforms already, and they have all the flaws we're talking about. We don't need to fix the existing ones so much as get to a world where better (by these standards) platforms exist and can compete on quality.
Twitter/Reddit/etc kind of try to do that, but it's hard to get right and it's always an afterthought, like Yishan mentioned.
But maybe there's room in the market for something with higher signal to noise, that's reputation based. And reputation doesn't have to be super cerebral and dry and stodgy; reputation can be about sense of humor or whatever is appropriate for the sub-communities that evolve on the platform.
> This is a good idea, except that it assumes _reputation_ has some directional value upon which everyone agrees.
Reputation is inherently subjective. I think any technological solution that treats reputation as an objective value that's the same for everyone won't work if applied to any kind of diverse community, but I don't see any problem with a technical solution that computes a high reputation score for that TV star among his fans, and a low score among people who aren't fans.
(It's also sometimes worthwhile to consider negative reputation, which behaves differently than positive reputation in some ways. Not all communities should have reputation systems that have a concept of negative reputation, but in some kinds of communities it might be necessary.)
The problem is that "jerk" is relative and very sensitive people will tilt the scale. Also, occasionally jerks will have interesting insights that you would miss if you blocked the jerks. It's a problem of whether your platform cares more about having diverse viewpoints or about people being polite to each other.
> Twitter Verification is only verifying that the account is "authentic, notable, and active".
Musk has been very clear that it will be open to anyone who pays the (increased) cost for Twitter Blue (which will also get other new features), and thus no longer be tied to "notable" or "active".
> At least I have not heard about any changes other than the price change from free to $8.
It's not a price change from free to $8 for Twitter Verification. It is a discontinuation of Twitter Verification as a separate thing, but a move of the (revised) process and resulting checkmark to be an open-to-anyone-who-pays component of Blue, which increases in cost to $8/mo (currently $4.99/mo).
Public figures still have separate labels, and I'm sure you'll have to have a credit card that matches your name on file (even if you choose to publicly remain anonymous). This is a much faster way to verify someone's identity than having people submit a photo ID and articles about them or written by them, which was the previous requirement.
I do wish that public figures would have a different badge color or a fully filled in badge with other verified users having only a bordered badge etc. Frankly we don't really know what it will look like.
But then how would you address "reputable" people spreading idiotic things or fake news? How would you prevent Joe Rogan from spreading COVID conspiracy theories? Or Kanye's antisemitic comments? Or a celebrity hyping up some NFT for a quick cash grab? Or Elon Musk falling for some fake news and spreading it?
Why does anyone think this is a solvable problem? Once there is sufficient notability, only authoritarian censorship or jailing is going to work, with increasing degrees of force up to execution (and even then that won't silence supporters if there is any sort of group association). For people of lesser reputation, "canceling" might work, in the sense of public shaming or loss of income, but there is a ladder of force that is required.
If we want any sort of free society I don't think we _can_ stop fake news. We can only attempt to counter it with reason, debunking, and competitive rhetoric. We maybe can build tools that attempt to amplify debunking, build institutions trusted as neutral arbiters or that have other forms of credibility.
This is lumping together multiple problems, but IMO a platform that tries to police wrongthink from the top down is guaranteed to fail.
For my part, I don't think anyone anywhere should be prevented from saying the dumbest things that pop into their heads; what I disagree with is giving everyone a global megaphone and then artificially removing the consequences for saying the dumbest things that pop into their heads. :)
If you have a reputation that is tied to the community in which you are participating, and your reputation affects the reach/visibility of your messages in that community, then as you behave at odds with the standards of that community, your reputation goes down, thus limiting your reach. Exactly how to implement that well is the billion dollar question, but at the heart of it all is a simple feedback loop.
> your reputation goes down, thus limiting your reach
That's a nice concept, but it is unclear how to implement it. If you have a low reputation, you can only reach X users, but if your reputation improves, you can reach 2X users? How do you set these thresholds? How do you pick the X unlucky users who receive the low-reputation tweets?
Almost all users likely just need automated moderation through reputation (like HN karma or StackOverflow reputation, where a certain score is required for certain actions) for most of their tweets.
To combat viral fake news, the top 0.001% of users could get manual reviews, as could viral tweets that are about to explode into reaching hundreds of millions of users. Having a manual moderation queue for any tweet that has grown from 0 to 100M retweets in a certain time frame, as well as for everything tweeted by users with X million followers, would be quite a low cost.
We have already seen some of this where Twitter has manually labeled information as false, e.g. from Donald Trump. I doubt they'd give that treatment to any of my claims to 20 followers, nor would they have to, because what I write simply doesn't have the same importance when I have 20 followers.
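As a rough illustration of that tiered approach, here is a sketch in Python; the thresholds, caps, and function names are invented for the example, not anything Twitter actually does.

    # Tiered review sketch: automated, karma-style gating for almost
    # everyone, human review only for the highest-reach content.
    FOLLOWER_REVIEW_THRESHOLD = 5_000_000   # "top 0.001% of users" tier (made up)
    VIRAL_RETWEETS_PER_HOUR = 1_000_000     # explosive-growth trigger (made up)

    def needs_manual_review(author_followers, retweets_last_hour):
        """Route only the highest-reach content to human moderators."""
        return (author_followers >= FOLLOWER_REVIEW_THRESHOLD
                or retweets_last_hour >= VIRAL_RETWEETS_PER_HOUR)

    def allowed_reach(reputation):
        """Automated gating: higher reputation unlocks wider distribution."""
        if reputation < 0:
            return 0          # no amplification beyond existing followers
        if reputation < 100:
            return 10_000     # modest cap for new or low-reputation accounts
        return None           # uncapped; the manual queue covers the top tier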
In the end, you do need the ivory tower with the ministry of truth saying exactly what is and isn't acceptable. Should Rogan's COVID conspiracy tweet be blocked? Given a warning label? Or allowed to spread? That would have to depend on the level of harm it can cause. There can't be a "marketplace of ideas" where everyone is forced to make their own judgement and where all content is acceptable. Not just because US conservatives will find it hard to tweet, or because advertisers won't advertise, but because it'll have a terrible signal-to-noise ratio for anyone, causing it to be pretty deserted.
The Internet needs a verified public identity/reputation system, especially with deep fakes becoming more pervasive and easier to create. Trolls can troll all they want, but if they want to be taken seriously with their words then they should back them up with their verified public Internet/reputation ID.
If this is one of Musk's goals with Twitter, he didn't overpay. The Internet definitely needs such a system; it has for a while now!
He might connect Twitter to the crypto ecosystem, and that, along with a verified public Internet/reputation ID system, could I think be powerful.
It's worth noting that Twitter gets a lot of flak for permanently banning people, but those people were all there under their real names. Regardless of your opinion on the bans, verifying that they were indeed banning e.g. Steve Bannon would not have made the decision-making process around his ban any easier.
They shouldn't ban anyone, as both sides play politics, which is filled with tons of untruths and lies. You have a point that even a Steve Bannon or an AOC will make things up and lie. Maybe the reputation system avoids political speech, or sets a huge bar, because politics is always a shit-show. Overall, what you bring up here is a huge problem that maybe can be solved somehow. For deepfakes I still feel strongly that a system needs to be in place, the one I mentioned or something close to it.
How does this system work worldwide across multiple governments, is resistant to identity theft, and prevents things like dictatorships from knowing exactly who you are?
The bad actors were much less prevalent back in the heyday of small phpBB style forums. I have run a forum of this type for 20 years now, since 2002. Around 2011 was when link spam got bad enough that I had to start writing my own bolt-on spam classifier and moderation tools instead of manually deleting spammer accounts. Captchas didn't help because most of the spam was posted by actual humans, not autonomous bots.
In the past 2 years fighting spam became too exhausting and I gave up on allowing new signups through software entirely. Now you have to email me explaining why you want an account and I'll manually create one for the approved requests. The world's internet users are now more numerous and less homogeneous than they were back when small forums dominated, and the worst 0.01% will ruin your site for the other 99.99% unless you invest a lot of effort into prevention.
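For what it's worth, a bolt-on filter for a small forum doesn't need to be fancy. This is not the commenter's actual tool, just a hypothetical sketch of the kind of heuristic scoring that can hold suspicious posts for human review; the phrases, weights, and threshold are all made up.

    # Hypothetical bolt-on spam scoring for new forum posts.
    import re

    LINK_RE = re.compile(r"https?://", re.IGNORECASE)
    SPAM_PHRASES = ("buy now", "free shipping", "casino", "replica watches")

    def spam_score(post_text, account_age_days, prior_posts):
        text = post_text.lower()
        score = 2 * len(LINK_RE.findall(text))                 # links carry the payload
        score += 3 * sum(phrase in text for phrase in SPAM_PHRASES)
        if account_age_days < 1 and prior_posts == 0:
            score += 2                                          # brand-new account, first post
        return score

    def hold_for_moderation(post_text, account_age_days, prior_posts, threshold=5):
        """Queue suspicious posts for human review instead of publishing them."""
        return spam_score(post_text, account_age_days, prior_posts) >= threshold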
Yep, if you've been on the internet long enough you'll remember the days before you were portscanned constantly. You'll remember the days before legions of bots hammered at your HTTP server. You'll remember when it was rare to have some kiddie DDoS your server off the internet, forcing you to hide behind a third-party provider like Cloudflare.
That internet is long dead, hence discussions like Dead Internet Theory.
My mom still has a land-line. She gets multiple calls a day, robots trying to steal an old lady's money. For this we invented the telephone? the transistor?
phpBB forums have always been notorious for capricious bans based on the whims of mods and admins, it's just that getting banned from a website wasn't newsworthy 10 years ago.
As a moderator of one such forum, I can say all of the behavioural issues have been a staple there too. We always had to moderate not just spam, racism and insults, but also "that dude that rants about his pet peeve in every topic", "that dude that rants about Apple even in topics where someone is asking how to change the keyboard layout in macOS", "that dude that's obviously mentally ill" and many others.
It was always a job of ensuring a comfortable environment for the community gathering on the forum and it did require "censorship" beyond obvious to prevent a minority of users from souring the environment for everyone.
The problems didn't exist as much because people didn't usually come across communities they hated/disagreed with unless they were searching them out, and every community could set the standards it wanted to set. And I think that's where large social media sites can't work; they put a bunch of groups, many of which deeply disagree and dislike each other, on the same site/platform and have to try and keep the peace without said groups getting into flame wars and personal attacks 24/7.
Small, independently run communities could set their own standards, and those who disagreed with any one set of rules could go and find somewhere more to their liking. Reddit and Discord have this to an extent, but even then, it's too centralised and too heavily controlled by one organisation.
Hopefully if Mastodon takes off, federated services will bring this style of community back, except with the ability to take part in other communities if people agree with that.
That's because they were small and often had strict rules (written or not), aka moderation, about how to behave. You don't remember massive problems because the bad actors were kicked off. It falls apart at scale, and when everyone can't or won't agree on what "good behavior" or "the rules" are.
This topic was adjacent to the sugar and L-isomer comments, which probably influenced my viewpoint:
Yishan is saying that Twitter (and other social networks) moderate bad behavior, not bad content. They just strive for a higher SNR. It is just that specific types of content seem to be disproportionately responsible for starting bad behavior in discussions, and thus get banned. That sounds rational and, while potentially slightly unfair, looks totally reasonable for a private company.
But what I think is happening is that this specific moderation on social networks in general and Twitter in particular has pushed them along the R- (or L-) isomer path to an extent that a lot of content, however well presented and rationally argued, just cannot be digested. Not because it is objectively worse or leads into a nastier state, but simply because deep inside some structure is pointing in the wrong direction.
Which, to me, is very bad. Once you reach this state of mental R- and L- incompatibility, no middle ground is possible and the outcome is decided by outright war, which is not fun and brings a lot of casualties. My 2c.
This was my takeaway as well. Yishan is arguing that social media companies aren't picking sides; they're just aiming for a happy community. But the end result is that the loudest and angriest group(s) end up emotionally bullying the moderation algorithm into conforming. This is precisely the problem Elon seems to be tackling.
I thought this point was overstated. Twitter certainly has some controversial content-related rules, and while as the CEO of Reddit he may have been mostly fighting macro battles, there are certainly content-related things that both networks censor.
Reddit's content policy has also changed a LOT since he was CEO. While the policy back then may not have been as loose as "is it illegal?" it was still far looser than what Reddit has had to implement to gain advertisers.
It's kinda funny that many of the problems he's mentioning are exactly how moderation on Reddit currently works.
Hell, the newly revamped "block user" mode got extra gaslighting as a feature: a blocked person can't reply to anyone under a comment made by the person who blocked them, not just to the blocker themselves. So anyone who doesn't like people discussing how they are wrong can simply block those who disagree, and they will no longer be able to answer any of their comments.
Seems reasonable to me. IRL I can walk away from a voluntary discussion when I want. If people want to continue talking after I’ve left they can form their own discussion group and continue with the topic.
Think this is good because it usually stops a discussion from dissolving into a meaningless flame war.
It allows the power of moderation to stay within the power of those in the discussion.
Now imagine if some random other people in the group who happen to have posts higher in the tree were able to silently remove you without anyone knowing.
Is there a better name than "rational jail" for the following phenomenon:
We are having a rational, non-controversial, shared-fact-based discussion. Suddenly the first party in the conversation goes off on a tangent and starts making values- or emotion-based statements instead of factual ones. The other party then gets angry and/or confused. The first party then gets angry and/or confused.
The first party did not realize they had broken out of the rational jail that the conversation was taking place in; they thought they were still being rational. The second party detected some idea that did not fit with their rational dataset, and detected a jailbreak, and this upset them.
1) Everyone agrees that spam should be "censored" because (nearly) everyone agrees on what spam is. I'm sure (nearly) everyone would also like to censor "fake news", but not everyone agrees on the definition of fake news, which is why the topic is more contentious than spam.
2) Having a "1A mode", where you view an unmoderated feed, would be interesting, if only to shut up people who claim that social media companies are supposed to be an idealistic bastion of "free speech." I'm sure most would realize the utility is diminished without some form of moderation.
There were indeed some intelligent, thoughtful, novel insights about moderation in that thread. There were also... two commercial breaks to discuss his new venture? Eww. While discussing how spam is the least controversial type of noise you want to filter out? I appreciate the good content, I'm just not used to seeing product placement wedged in like that.
> No, whatʻs really going to happen is that everyone on council of wise elders will get tons of death threats, eventually quit...
Yep, if you can't stand being called an n* (or other racial slurs), don't be a Reddit moderator. I've also been called a Hillary bootlicker and a Trump one.
Being a Reddit moderator isn't for the thin-skinned. I hosted social meetups, so this could have spilled out into the real world. Luckily I had strong social support in the group, where that would have been taken care of real quick. I've only had one guy who threatened to come and be disruptive at one of the meetups. He did come out. He did meet me.
----
> even outright flamewars are typically beneficial for a small social network:
He's absolutely correct. It also helps to define community boundaries and avoid extremism. A lot of this "don't be mean" culture only endorses moderators stepping in and dictating how a community talks and how people who disagree are officially bullied.
I am having trouble reconciling his one tweet that says there is no truth (adjudicating truth is impossible) and the other tweet saying social media companies have the truth of user behavior, and I'm not being flippant.
What's wrong with this thread? It seems really level-headed and it exactly matches the people I know IRL who are insane-but-left and insane-but-right, who won't shut up about censorship even though, if you look at their posts, it's just "unhinged person picks fights with and yells at strangers."
HN in general is convinced that social media is censoring right-wing ideas because it skews counterculture and "grey tribe", yet there have been a lot of high-profile groups who claim right-wing views while doing the most vile, depraved shit, like actively trying to harass people into suicide and celebrating it, or directing massive internet mobs at largely defenseless non-public figures for clout.
As I said in my post, he never justifies this point. To then turn it upon me to prove a negative?
Devil's-advocating against myself: I do believe the Parler deplatforming is the proof of what he says. The world has indeed changed, but anyone who knows the details sure isn't saying why. Why? Because revealing how the world has changed, in the USA, would have some pretty serious consequences.
I don't know. I wish I could have a closed-door, off-the-record, tell-me-everything conversation with yishan to have him tell me why he believes the world changed, in the context of social media censorship.
In terms of public verified knowledge, nothing at all has changed in the context of censorship. I stand by the point. Elon obviously stands by this as well. Though given Elon's sudden multi-week delays on unbanning... I'm expecting he suddenly knows as well.
>You're posting too fast. Please slow down. Thanks.
Guess I'm not allowed to reply again today. No discussion allowed on HN.
I do find it funny that they say 'you're posting too fast' when I haven't been able to post on HN or reply to you for an hour. How "fast" am I really going? I expect it will be a couple more hours before I am allowed to post again. How dare I discuss a forbidden subject.
Maybe I'm too techno-utopian, but can't we train an AI model to detect how these vectors combine to configure moderation?
Ex: Ten years ago masks were niche, so unsubstantiated news about the drawbacks of wearing masks was still considered safe: very few people were paying attention and/or could harm themselves, so it was not controversial and did not require moderation. Post-COVID, the vector values changed, and questionable content about masks could be flagged for moderation with some intensity indexes, user-discretion-advised messages and/or links to rebuttals if applicable.
Let the model and results be transparent and reviewable, and, most important, editorial. I think the greatest mistake of moderated social networks is that many people (and the networks themselves) think that these internet businesses are not "editorial", but they are not very different from regular news sources when it comes to editorial lines.
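To illustrate what a transparent, reviewable, explicitly editorial model could look like, here is a hypothetical sketch; the topics, weights, thresholds, and actions are invented, and in this scheme they would themselves be the published editorial line.

    # Deliberately transparent scoring sketch; all values are illustrative.
    TOPIC_SENSITIVITY = {          # editorial, versioned, publicly reviewable
        "mask_efficacy": 0.9,      # post-2020 value; would have been ~0.1 in 2010
        "keyboard_layouts": 0.0,
    }

    def moderation_action(topic, confidence_misleading, projected_reach):
        # Risk grows with topic sensitivity, classifier confidence, and reach.
        risk = TOPIC_SENSITIVITY.get(topic, 0.1) * confidence_misleading * projected_reach
        if risk > 10_000:
            return "flag_for_review"
        if risk > 1_000:
            return "attach_context_label"
        return "allow"

    print(moderation_action("mask_efficacy", confidence_misleading=0.8,
                            projected_reach=50_000))   # -> flag_for_review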
I personally believe this won't be a solvable problem, or at least the problem will grow a long tail. One example would be hate groups co-opting the language of the victim group in an intentional manner. Then as the hate group is moderated for their behaviors, the victim group is caught up in the action by intentional user reporting for similar language.
It's a difficult problem to deal with as at least some portion of your userbase will be adversarial and use external signaling and crowd sourced information to cause issues with your moderation system.
Not a good idea. Your example already has flaws. An AI could perform on a larger scale, but the result would be worse. Probably far worse.
I specifically don't want any editor for online content. Just don't make it boring or, worse, turn everything into astroturfing. Masks are a good example already.
He says there is no principled reason to ban spam, but there's an obvious one: it isn't really speech. The same goes for someone who posts the same opinion everywhere with no sense of contextual relevance. That's not real speech, it's just posting.
It's basically just a public nuisance, like driving up and down a street blaring your favorite club banger at 3AM. More uncharitably, it's a lot like littering, public urination, or graffiti.
That's sort of what I mean. It's like putting up a billboard on someone else's property. Taking down the billboard isn't about the content of the billboard but rather the non-permitted use of the space.
Reddit had the benefit of subreddit moderators policing their own. Twitter has no such thing. Maybe if you squint enough, big accounts block/muting bad actors in their replies can sort of count as self-policing but that does not prevent the bad actor from being a troll in someone else's replies.
The $8 doesn't moderate anything. Most spammers, scammers and paid trolls already spend money on software to help them spam, scam and troll. $8 a month is hardly a deterrent.
In my eyes, $8 creates more problems than it solves. It's the normal, human Twitter users who are more likely NOT to want to pay. The people who use Twitter to make money are obviously going to be the most likely to pay the $8.
I'm one of those who likes to bring out "fire in a theater" or doxxing as the counterexample to show that literally nobody is a free speech absolutist. This is on top of it not being a 1A issue anyway, because the first five words are "Congress shall make no law".
But spam is a better way to approach this and show it really isn't a content problem but a user behaviour problem. Because that's really it.
Another way to put this is that the total experience matters, meaning the experience of all users: content creators, lurkers and advertisers. If you go into an AA meeting and won't shut up about Scientology or coal power, you'll get kicked out. Not because your free speech is being violated but because you're annoying and you're worsening the experience of everyone else you come into contact with.
Let me put it another way: just because you have a "right" to say something doesn't mean other people should be forced to hear it. The platform has a greater responsibility than your personal interests, and that's about behaviour (as the article notes), not content.
We've seen some laws passed recently which attempt to prevent social media companies from effective moderation. Yishan repeatedly makes the point here that most forms of spam are not illegal. Rather, recent case law[1, 2] has confirmed that even panhandling is protected speech. Prior to that, we saw Lloyd vs Tanner[3], which ruled that private property could function as a "town square" and censorship runs afoul of the first amendment. Section 230 of the Communications Decency Act carves out a special exemption for websites that host user-generated content, and politicians on both sides of the aisle have set their sights on remodeling that law.
I'm really curious to see how this plays out. As far as I can see, a well-lawyered bot operator could completely undermine the ability of websites to moderate their content, and as Yishan aptly points out, they wouldn't stop at inflammatory content. Their goal would be to open the floodgates for commercial communications. It could completely ruin the open internet as we know it. Or, perhaps, it would merely limit the size of social media companies: once their user base crosses whatever "town square" threshold is decided on, spammers have free rein. Interesting times we live in.
While the points made were interesting, I had to stop reading almost halfway through because I found this post insincere and way too manipulative. And unlike most people on HN, I am very tolerant of marketing and enjoy receiving unsolicited commercial offers via email. This is the first time in many years that someone has put me off like this author did, despite the fact that the points made are quite interesting.
However his content marketing scheme just felt way too inauthentic for me and made me feel that this guy isn't here to educate me, doesn't have my best interest in mind and does not give a crap about me.
Just posted it because many people on HN are "cargo-culting" (as people say here) tech figures, so wanted to advise people not to imitate this kind of marketing.
I guess my real problem here is that his product plugs are way too intellectually dishonest.
What yishan is missing is that the point of a council of experts isn't to effectively moderate a product. The purpose is to deflect blame from the company.
It's also hilarious that he says "you canʻt solve it by making them anonymous" because a horde of anonymous mods is precisely how subreddits are moderated.
Isn't it inconsistent to say both "moderation decisions are about behavior, not content" and "platforms can't justify moderation decisions because of privacy reasons"?
It seems like you wouldn't need to reveal any details about the content of the behavior, but just say "look, this person posted X times, or was reported Y times", etc... I find the author to be really hand-wavy around why this part is difficult.
I work with confidential data, and we track personal information through our system and scrub it at the boundaries (say, when porting it from our primary DB to our systems for monitoring or analysis). I know many other industries (healthcare, education, government, payments) face very similar issues...
So why don't any social network companies already do this?
For one, giving specific examples gives censured users an excuse to do point-by-point rebuttals. In my experience, point-by-point rebuttals are one of the behaviors that should be considered bad behavior and moderated against, because they encourage the participant to think only of each point individually and ignore the superlinear effect of all the points taken together. For another, the user can latch onto specific examples that seem innocuous out of context, which lets them complain that their censorship was obviously heavy-handed; and if the user is remotely well known, then it's the famous person's word versus random other commenters trying to add context. The ultimate result is that people see supposed problems with moderation far more often than anyone ever says "man, I sure am glad that user's gone," so there's a general air of resentment against the moderation and belief in its ineffectiveness.
Point-by-point rebuttals are probably very annoying for the moderators, but knowing what actually gets moderated makes working with the rules easier. Imagine acting in a society where an often-invisible police enforces secret laws, you can blindly appeal, but you don't get to know what they're alleging you did, you don't get to defend yourself, the court is held in secrecy, and it's being made up entirely from members of the police. Predicting what is desired and then being maximally conformist is the safe way, or you just roll the dice, act the way you want, and hope that the secret police is similar enough to you that they'll tolerate it.
"The community (Beatingwomen), which featured graphic depictions of violence against women, was banned after its moderators were found to be sharing users' personal information online"
"According to Reddit administrators, photos of gymnast McKayla Maroney and MTV actress Liz Lee, shared to 130,000 people on popular forum r/TheFappening, constitute child pornography"
You mean like the people who are telling us that happened also said:
> CNN is not publishing “HanA*holeSolo’s” name because he is a private citizen who has issued an extensive statement of apology, showed his remorse by saying he has taken down all his offending posts, and because he said he is not going to repeat this ugly behavior on social media again. In addition, he said his statement could serve as an example to others not to do the same.
>CNN reserves the right to publish his identity should any of that change.
> It resulted in rampant child pornography, doxxing, death threats, gory violence etc. It epitomised the worst of humanity.
It resulted in reddit. That style of moderation is how reddit became reddit; so it should also get credit for whatever you think is good about reddit. The new (half-decade old) reddit moderation regime was a new venture that was hoping to retain users who were initially attracted by the old moderation regime.
My Reddit account is 16 years old. I was there in the very early days of the site well before the Digg invasion and well before it gained widespread popularity.
It was never because it allowed anything. It was because it was a much more accessible version of Slashdot. And it was because Digg did their redesign and it ended up with a critical mass of users. Then they started opening up the subreddits and it exploded from there.
The fact that Reddit is growing without that content shows that it wasn't that important to begin with.
He frames this as a behavior problem, not a content problem. The claim is that your objective as a moderator should be to get rid of users or behaviors that are bad for your platform, in the sense that they may drive users away or make them less happy. And that if you do that, you supposedly end up with a fundamentally robust and apolitical approach to moderation. He then proceeds to blame others for misunderstanding this model when the outcomes appear politicized.
I think there is a gaping flaw in this reasoning. Sometimes, what drives your users away or makes them less happy is challenging the cultural dogma of a particular community, and at that point, the utilitarian argument breaks down. If you're on Reddit, go to /r/communism and post a good-faith critique of communism... or go to /r/gunsarecool and ask a pro-gun-tinged question about self-defense. You will get banned without any warning. But that ban passes the test outlined by the OP: the community does not want to talk about it precisely because it would anger and frustrate people, and they have no way of telling you apart from dozens of concern trolls who show up every week. So they proactively suppress dissent because they can predict the ultimate outcome. They're not wrong.
And that happens everywhere; Twitter has scientifically-sounding and seemingly objective moderation criteria, but they don't lead to uniform political outcomes.
Once you move past the basics - getting rid of patently malicious / inauthentic engagement - moderation becomes politics. There's no point in pretending otherwise. And if you run a platform like Twitter, you will be asked to do that kind of moderation - by your advertisers, by your users, by your employees.
That is a byproduct of Reddit specifically. With 90s style forums, this kind of discussion happens just fine because it ends up being limited to a few threads. On Reddit, all community members must interact in the threads posted in the last day or two. After two days they are gone and all previous discussion is effectively lost. So maybe this can be fixed by having sub-reddits sort topics by continuing engagement rather than just by age and upvotes.
A good feature would be for Reddit moderators to be able to set the desired newness for their subreddit. /r/aww should strive for one or two days of newness (today's status quo). But /r/communism can have one year of newness. That way the concerned people and concern trolls can be relegated to the yearly threads full of good-faith critiques of communism and the good-faith responses and everyone else can read the highly upvoted discussion. Everything else could fall in-between. /r/woodworking, which is now just people posting pictures of their creations, could split: set the newness to four months and be full of useful advice; set the newness for /woodworking_pics to two days to experience the subreddit like it is now. I feel like that would solve a lot of issues.
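As a sketch of that per-community "newness" knob (purely hypothetical, not Reddit's actual ranking formula), votes could simply decay with a half-life chosen by each subreddit's moderators:

    # Hypothetical per-community decay; numbers are illustrative only.
    NEWNESS_DAYS = {            # set by each community's moderators
        "aww": 2,               # fast-moving picture sub
        "communism": 365,       # long-lived discussion threads
        "woodworking": 120,
    }

    def hot_score(upvotes, age_days, subreddit):
        half_life = NEWNESS_DAYS.get(subreddit, 2)
        # Votes decay with a community-chosen half-life, so a slow sub keeps
        # months-old megathreads visible while a fast sub churns daily.
        return upvotes * 0.5 ** (age_days / half_life)

    print(hot_score(500, age_days=90, subreddit="communism"))   # ~421, still ranked
    print(hot_score(500, age_days=90, subreddit="aww"))         # effectively zero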
The whole idea of "containment threads" is a powerful one that works very well in older-style forums, but not nearly as well on Reddit. "containment subs" isn't the same thing at all, and the subs that try to run subsubs dedicated to the containment issues usually find they die out.
> Moderating spam is very interesting: it is almost universally regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way illegal.
Interesting; in my country spam is very much illegal, and I would hazard a guess that it is also illegal in the US, similar to how littering, putting up posters on people's buildings/cars/walls, graffiti (a form of spam), and so on are also illegal. If I received the amount of spam I get in email as phone calls, I would go as far as calling it harassment, and of course robocalls are also illegal. Unsolicited email spam is also against the law.
And if spam is against the service agreement on twitter then that could be a computer crime. If the advertisement is fraudulent (as is most spam), it is fraud. Countries also have laws about advertisement, which most spam are unlikely to honor.
So I would make the claim that there is plenty of principled reasons for banning spam, all backed up by laws of the countries that the users and the operators live in.
Unsolicited phone calls are somewhat illegal, but it's dependent on circumstances. It's the same with email spam and mail spam. One person's spam is another person's cold call. Where do you draw the line? Is mailing a flier with coupons spam? Technically yes, but some people find value in it.
In the US, spam is protected speech, but as always, no company is required to give anybody a platform.
> In the US, spam is protected speech, but as always, no company is required to give anybody a platform.
Commercial speech in the US is not protected speech and may be subject to a host of government regulation [0]. The government has broad powers to regulate the time, place, and content of commercial speech in ways that it does not for ordinary speech.
It is dependent on circumstances, and the people who would draw the line in the end would be the government followed by the court.
Not all speech is protected speech. Graffiti is speech, and the words being spoken could be argued to be protected, but the act of spraying other people's property with them is not protected. Free speech rights do not override other rights. As a defense in court, I would not bet my money on free speech to get away with crimes that happen to involve speech.
Historically the US courts have sorted speech into multiple categories. One of those is called fraudulent speech, which is not protected by free speech rights. Another category is illustrated by the anti-spam law in Washington State, which was found not to be in violation of First Amendment rights because it prevents misleading emails. Washington's statute regulates deceptive commercial speech and thus passed the constitutional test. Another court ruling, this one in Maryland, confirmed that commercial speech is less protected than other forms of speech and that commercial speech has no protection when it is demonstrably false.
In theory a spammer could make non-commercial, non-misleading, non-fraudulent speech, and a site like twitter would then actually have to think about questions like first-amendment. I can't say I have ever received or seen spam like that.
> In theory a spammer could make non-commercial, non-misleading, non-fraudulent speech, and a site like twitter would then actually have to think about questions like first-amendment. I can't say I have ever received or seen spam like that.
While I don't think I have seen it on Twitter (then again I only read it when it's linked) I have seen plenty of it in some older forums & IRC. Generally it's just nonsense like "jqrfefafasok" or ":DDDDDD" being posted lots of times in quick succession, often to either flood out other things, to draw attention to poster or to show annoyance about something (like being banned previously).
You got a point. Demonstration as a form of free speech is an interesting dilemma. Review spam/bombing for example can be non-commercial, non-misleading, non-fraudulent, while still being a bit of a grey-zone. Removing them is also fairly controversial. Outside the web we have a similar problem when demonstrations and strikes are causing disruption in society. Obviously demonstration and strikes should be legal and are protected by free speech, but at the same time there are exceptions when they are not.
I am unsure whether one could construct an objective, fair model for how to moderate such activity.
>a site like twitter would then actually have to think about questions like first-amendment.
I wish people understood that the first amendment does not have anything to do with social media sites allowing people to say anything. Twitter is not a public square, no matter how much you want it to be.
It is both yes and no. CAN-SPAM applies only to electronic mail messages, usually shortened to email. However...
In late March, a federal court in California held that Facebook postings fit within the definition of "commercial electronic mail message" under the Controlling the Assault of Non-Solicited Pornography and Marketing Act ("CAN-SPAM Act;" 15 U.S.C. § 7701, et seq.). Facebook, Inc. v. MAXBOUNTY, Inc., Case No. CV-10-4712-JF (N.D. Cal. March 28, 2011).
There are also two other court cases: MySpace v. The Globe.com and MySpace v. Wallace.
In the latter, the court concluded that "[t]o interpret the Act in the limited manner as advocated by [d]efendant would conflict with the express language of the Act and would undercut the purpose for which it was passed." Id. This Court agrees that the Act should be interpreted expansively and in accordance with its broad legislative purpose.
The court defined "electronic mail address" as meaning nothing more specific than "a destination . . . to which an electronic mail message can be sent, and the references to local part and domain part and all other descriptors set off in the statute by commas represent only one possible way in which a destination can be expressed."
Basically, in order to follow the spirit of the law the definition of "email" expanded, with traditional email like user@example.invalid being just one example of many forms of "email".
Nudity and porn are other examples of legal speech that have broad acceptance among the public (at least the U.S. public) to moderate or ban on social media platforms.
Yishan's point is, most people's opinions on how well a platform delivers free speech vs censorship will index more to the content of the speech, rather than the pattern of behavior around it.
The biggest problems with Twitter's moderation are what OP explicitly didn't talk about.
1. There isn't enough communication from moderators about why tweets are removed and users are banned. There is a missed learning opportunity when users don't get to hear why they are being moderated.
2. Bans are probably too harsh. If you can't come back having learned from your mistakes, why learn at all?
Most of it is a scaling issue, which is the same reason that popular subreddits are a predictably negative experience while niche subreddits tend to be well regarded.
Not only that, but another issue is that no system of moderation at that scale will ever be fully consistent, and the more detail you give, the more people will pick at the subtle inconsistencies of the moderation. The classic "X did something similar and didn't get banned for it".
Most people aren't the nihilists you might expect them to be. Most people want discussion to lead somewhere new. That's why there is value in moderation: we use it to trim the patterns of discussion that lead nowhere.
So when a moderator deletes your comment because it was leading nowhere, they can also tell you so. If they can teach you how you might lead your next discussion somewhere, then there is a good chance that you will do that next time. If they just quietly delete your comment, then that guarantees you will follow the same tired pattern next time, because you won't know any better. Rinse and repeat.
Of course, the nihilists (trolls) will still be there, but it's worth at least trying to teach people how to not be one.
Honestly this comes across as a fairly disingenuous take from yishan given how moderation has actually played out on Reddit.
Reddit was able to scale by handing off moderation to the communities themselves and to the unpaid volunteers who wanted to moderate them. In general, I think it is obvious to any casual observer that those volunteers don't see moderation in the same way (or with the same goals) as the platform. For example, many (most?) moderators on Reddit absolutely do ban people not because they are starting flame wars or spamming but because said users aren't toeing the party line. A huge number of subreddits are created specifically for that purpose – "this community has X opinion about Y and if you don't like that you can GTFO".
However, even if you ignore the unpaid volunteers moderating subreddits and focus only on the "Admins" who were specifically chosen by Reddit, you can see that the only priority was not increasing the signal-to-noise ratio, including during yishan's tenure. In most cases when a community is banned, it is not because its signal-to-noise ratio is too low but because that community has attracted too much of the negative PR in the press that yishan referred to. Sure, the claim is still "we're trying to maintain the integrity of the platform as a whole and are banning communities for brigading, etc.", but you can see based on which communities are banned that this is clearly not the whole story.
His argument makes no sense. If this is indeed why they are banning people, why keep the reasoning a secret? Honestly, every ban should come with a public explanation from the network, in order to deter similar behavior. The way things are right now, it's unclear if, when, and for what reason someone will be banned. People get banned all the time with little explanation or explanations that make no sense or are inconsistent. There is no guidance from Twitter on what behavior or content or whatever will get you banned. Why is some rando who never worked at Twitter explaining why Twitter bans users?
And how does Yishan know why Twitter bans people? And why should we trust that he knows? As far as I can tell, bans are almost completely random because they are enacted by random low-wage contract workers in a foreign country with a weak grasp of English and a poor understanding of Twitter's content policy (if there even is one).
Unlike what Yishan claims, it doesn't seem to me like Twitter cares at all about how pleasant an experience using Twitter is, only that its users remain addicted to outrage and calling-out others, which is why most Twitter power-users refer to it as a "hellsite".
He’s offering advice that differs from what Reddit does in practice. They absolutely ban content rather than behavior. Try questioning “the science” and it doesn’t matter how considerate you are, you will be banned.
He covers that further down in the tweets, near the end of the thread. He doesn't necessarily agree with the Reddit way of doing things, but it has interesting compromises wrt privacy.
Because no one has developed a moderation framework based on behavior. Content is (somewhat) easy, a simple regex can capture that. Behavior is far more complicated and even more subject to our biases.
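To illustrate the contrast the parent is drawing: a content rule can literally be a regex, while even a simple behavior rule needs per-user state over time. The banned terms, limits, and window below are placeholders, not any platform's real policy.

    # Content rule vs. behavior rule, in miniature.
    import re
    import time
    from collections import defaultdict, deque

    BANNED_CONTENT = re.compile(r"\b(bannedword1|bannedword2)\b", re.IGNORECASE)

    def violates_content_rule(text):
        # Content moderation: a single stateless check on the text itself.
        return bool(BANNED_CONTENT.search(text))

    # Behavior rule: "posted into more than 20 distinct threads in an hour".
    recent_posts = defaultdict(deque)   # user -> deque of (timestamp, thread_id)

    def violates_behavior_rule(user, thread_id, window_s=3600, max_threads=20):
        q = recent_posts[user]
        now = time.time()
        q.append((now, thread_id))
        while q and now - q[0][0] > window_s:
            q.popleft()                 # drop events outside the time window
        return len({t for _, t in q}) > max_threads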
Honestly it seems like you didn't read the thread. He's not talking about how Twitter itself works but about problems in moderation more generally based on his experience at Reddit. Also, he specifically advocates public disclosure on ban justifications (though acknowledges it is a lot of work).
He also makes an important and little-understood point about asymmetry: the person who posts complaints about being treated unfairly can say whatever they want about how they feel they were treated, whereas the moderation side usually can't disclose everything that happened, even when it would disprove what that user is saying, because it's operating under different constraints (e.g. privacy concerns). Ironically, sometimes those constraints are there to protect the very person who is making false and dramatic claims. It sucks to be on that side of the equation but it's how the game is played and the only thing you can really do is learn how to take a punch.
> Honestly, every ban should come with a public explanation from the network, in order to deter similar behavior
This only works in non-adversarial systems. Anywhere else, it amounts to handing bad actors (i.e. people whose interests will never align with the operator's) a list of blind spots.
"You have been found guilty of crimes in $State. Please submit yourself to $state_prison on the beginning of the next month. We're sorry, but we cannot tell you what you are guilty of."
"Look, I'd like you to stop being a guest in my house, you're being an asshole."
"PLEASE ENUMERATE WHICH EXACT RULES I HAVE BROKEN AND PROVIDE ME WITH AN IMPARTIAL AVENUE FOR APPEAL."
---
When you're on a platform, you are a guest. When you live in society, you don't have a choice about following the rules. That's why most legal systems provide you with clear avenues for redress and appeal in the latter, but most private property does not.
Imagine for a moment what would happen if this rationale were extended to the criminal justice system. Due process is sanctified in law for a good reason. Incontestable assumptions of adversarial intent are the slow but sure path to the degradation of any community.
There will always be blind spots and malicious actors, no matter how you structure your policies on content moderation. Maintaining a thriving and productive community requires active, human effort. Automated systems can be used to counteract automated abuses, but at the end of the day, you need human discretion/judgement to fill those blind spots, adjust moderation policies, proactively identify troublemakers, and keep an eye on people toeing the line.
> Imagine for a moment what would happen if this rationale were extended to the criminal justice system.
It already is!
The criminal justice system is a perfect example of why total information transparency is a terrible idea: never talk to the cops even if they just want to "get one thing cleared up" - your intentions don't matter, you're being given more rope to hang yourself with.
It's an adversarial system where transparency gets you little, but gains your adversary a whole lot. You should not ever explain your every action and reasoning to the cops without your lawyer telling you when to STFU.
Due process is sanctified, but the criminal justice system is self-aware enough to recognize that self-incrimination is a hazard, and rightly does not place the burden on the investigated/accused, why should other adversarial system do less?
1. To keep people from cozying up to the electric fence. If you don't know where the fence is you'll probably not risk a shock trying to find it. There are other ways one can accomplish this like bringing the banhammer down on everyone near the fence every so often very publicly but point 2 kinda makes that suck.
2. To not make every single ban a dog and pony show when it's circulated around the blogspam sites.
I'm not gonna pass judgement as to whether it's a good thing or not but it's not at all surprising that companies plead the 5th in the court of public opinion.
Sorta related to (1) but not really: there are also more "advanced" detection techniques that most sites use to identify things like ban evasion and harassment using multiple accounts. If they say "we identified that you are the same person using this other account and have reason to believe you've created this new account solely to evade that ban" then people will start to learn what techniques are being used to identify multiple accounts and get better at evading detection.
He's specifically referring to Reddit's content moderation, which actually has two levels of bans. Bans from a specific subreddit are issued by that subreddit's mods; an explanation isn't required but is sometimes given. These bans apply just to that subreddit and are more akin to a block by the community. Bans by admins happen to people who have been breaking a site rule, not a subreddit rule.
Both types of bans have privacy issues that result in lack of transparency with bans.
I think TikTok is doing incredibly well in this regard, and in almost every social network aspect. Call me crazy, but I now prefer the discussions there to HN's most of the time. I find high-quality comments (and there are still good jokes in the middle). The other day I stumbled upon a video about physics which had the most incredibly deep and knowledgeable comments I've ever seen (edit: found the video, it is not as good as I remembered, but still close to HN level imo). It's jaw-dropping how well it works.
There is classical content moderation (the platform follows local laws), but mostly it kind of understands you so well that it puts you right in the middle of like-minded people. At least it feels that way.
I don't have insider insight into how it truly works, I can only guess, but the algorithm feels like a league or two above everything I have seen so far. It feels like it understands people so well that it prompted deep thought experiments on my end. Like, say I want to get to know someone: I could simply ask "show me your TikTok". It's just a thought experiment, but it feels like TikTok could tell how good of a person you are, or more precisely what your level of personal development is. Namely, it could tell if you're racist, it could tell if you're a bully, a manipulator or easily manipulated, it could tell if you're smart (in the sense of high IQ), if you have fine taste, if you are a leader or a loner... and on and on.
Anyway, this is the ultimate moderation: follow the law and direct the user to like-minded people.
>mostly it kind of understand you so well that it put you right in the middle of like minded people
Doesn't this build echo chambers where beliefs get more and more extreme? Good from a business perspective (nobody gets pissed off and leaves because they don't see much that they object to). But perhaps bad from a maintaining-democracy perspective?
Interesting point about SNR, social networks and moderation. Ten years ago, when I interviewed at the Chinese clone of Facebook (Renren, 人人网), I said that one of the major problems of social media is the SNR, and my interviewer insisted it was a feature problem. I didn't get the job, but within 3-4 years Renren went downhill, and today nobody uses it anymore. Increasing SNR is actually a difficult problem, because user behavior doesn't necessarily equate to signal: something eye-catching and click-baity registers as a strong signal from the user even when it isn't.
Something interesting about TikTok is that it was designed to optimize SNR, especially by having only one item (a video) per screen instead of a list of content. This, along with autoplaying videos, broke all the rules of app design guidelines. Now it is copied everywhere.
So how does TikTok improve SNR, if user behavior does not completely correlate with what the real signal is? The secret recipe for TikTok is human moderation: not volunteers, but tens of thousands of people curating and moderating to complement its realtime recommendation system.
For some reason, this makes me wonder how Slashdot's moderation would work in the current age. Too nerdy? Would it get overwhelmed by today's shit posters?
People don't care enough about the "community" anymore. It might work on a smallish-scale but the reality is everything is shitposting, even here.
Even in Slashdot's heyday the number of metamoderators was vanishingly small. The best things it had were the ability to filter out anonymous cowards and the ability to browse from -1 to +5 if you wanted to.
The idea of a single point of moderation will not work imo. We need to empower users (both individuals & groups) to moderate and curate their own information feeds. Create a market for moderation and curation!
Verified accounts will be instrumental as well. It’s important to understand who or what you are having a conversation with.
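One hypothetical way to picture that "market for moderation": users subscribe to third-party filter lists and compose them client-side. The filters and post format here are invented purely for illustration.

    # Hedged sketch of user-chosen, composable moderation filters.
    def strict_civility_filter(post):
        return "idiot" not in post["text"].lower()

    def no_crypto_promos_filter(post):
        return "nft" not in post["text"].lower()

    def build_feed(posts, subscribed_filters):
        """Show only posts that pass every filter the user chose to subscribe to."""
        return [p for p in posts if all(f(p) for f in subscribed_filters)]

    posts = [{"text": "Great write-up on moderation"},
             {"text": "Buy my NFT, you idiot"}]
    print(build_feed(posts, [strict_civility_filter, no_crypto_promos_filter]))
    # -> [{'text': 'Great write-up on moderation'}]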
> No, you canʻt solve it by making them anonymous, because then you will be accused of having an unaccountable Star Chamber of secret elites (especially if, I dunno, you just took the company private too). No, no, they have to be public and “accountable!”
This is bulls... Sorry.
Who cares what you are accused of doing?
Why does it matter if people perceive that there is a star chamber? Even that reference. Sure, the press cares and will make it an issue, and tech types will care because, well, they have to make a fuss about everything and anything to remain relevant.
After all, what are grand juries? (They are secret.) Does the fact that people might think they are star chambers matter at all?
You see this is exactly the problem. Nobody wants to take any 'heat'. Sometimes you just have to do what you need to do and let the chips fall where they fall.
The number of people who might use twitter or might want to use twitter that would think anything at all about this issue is infinitesimal.
I like yishan's content and his climate focus, but this "we interrupt your tweet thread for sponsored content" style of tangent is a bit annoying. Not because of the act itself or its content, but because I can see other thread writers picking this up, and we end up the same as YouTube, with sponsored sections of content that you can't ad-block.
FWIW, with YT you can block them with SponsorBlock, which works with user-submitted timestamps of sponsored sections in videos. If this tweet technique takes off I'd imagine a similar idea for tweets.
While many YouTube videos provide very interesting content, most Twitter „threads" are just inane ramblings by some blue checkmark. So for YT videos I go the extra step of installing an extension. For Twitter, though? I just close the tab and never return.
How can people who are not totally dopamine-deprived zombies find Twitter and this terrible „thread" format acceptable? Just write a coherent blog post, please.
> but this "we interrupt your tweet thread for sponsored content" style tangent is a bit annoying
I found this hilarious. I don't use Twitter and so was unaware that these annoying tangents are common on the platform. As a result, I thought Yishan was using them to illustrate how it's not necessarily the content (his climate initiative) but a specific pattern of behavior (saying the 'right' thing at the wrong time, in this case) that should be the target of moderation.
In real life we say: "it's not what you said, it's just the way you said it!" Perhaps the digital equivalent of that could be: "it's not what you said, it's just when you said it."
>, but this "we interrupt your tweet thread for sponsored content" style tangent is a bit annoying
It is annoying but it can be seen as part of his argument. How can spam be moderated if even trustworthy creators create spam?
According to him, it's not spam because it doesn't fulfill the typical patterns of spam, which shows that identifying noise does require knowledge of the language.
It could be interesting to turn his argument around. Instead of trying to remove all spam, a platform could offer the tools to handle all forms of spam and let its users come up with clever ways to use those tools.
At a deeper level, content moderation isn't about stopping hate speech and harmful speech. That's just chasing the symptom, not the cause. The cause is a certain type of mentality that becomes obsessed with the idea of beating their thoughts into the fabric of the universe, no matter what it takes. These are the ones who stoop to spamming, flaming, mocking meme GIFs, hate-speech, death threats, etc. (and generally spend all day online posting such things).
This is why Reddit has been so successful. Community moderation is much more effective than top-down moderation at combatting people with this mentality, because it discourages them as soon as they show their hostility, not once they have passed some threshold of badness.
Reddit has terrible moderation. So bad that it's a literal joke/meme at this point, down to a personal level in some cases even. Why would anyone ask for moderation advice in that general direction? To get a script on what not to do?
> working on climate: removing CO2 from the atmosphere is critical to overcoming the climate crisis, and the restoration of forests is one of the BEST ways to do that.
As a tangent, Akira Miyawaki has developed a method for 'reconstitution of "indigenous forests by indigenous trees"' which "produces rich, dense and efficient protective pioneer forests in 20 to 30 years"
> Rigorous initial site survey and research of potential natural vegetation
> Identification and collection of a large number of various native seeds, locally or nearby and in a comparable geo-climatic context
> Germination in a nursery (which requires additional maintenance for some species; for example, those that germinate only after passing through the digestive tract of a certain animal, need a particular symbiotic fungus, or a cold induced dorming phase)
> Preparation of the substrate if it is very degraded, such as the addition of organic matter or mulch, and, in areas with heavy or torrential rainfall, planting mounds for taproot species that require a well-drained soil surface. Hill slopes can be planted with more ubiquitous surface roots species, such as cedar, Japanese cypress, and pine.
> Plantations respecting biodiversity inspired by the model of the natural forest. A dense plantation of very young seedlings (but with an already mature root system: with symbiotic bacteria and fungi present) is recommended. Density aims at stirring competition between species and the onset of phytosociological relations close to what would happen in nature (three to five plants per square metre in the temperate zone, up to five or ten seedlings per square metre in Borneo).
> Plantations randomly distributed in space in the way plants are distributed in a clearing or at the edge of the natural forest, not in rows or staggered.
@yishan doesn't mention the most obvious solution — let the public vote on content. Wait, Twitter and Reddit already have voting mechanisms in place? What's wrong with them? Oh, they semi-secretly sell access to their voting mechanisms and allow unscrupulous entities to manipulate vote counts to astroturf? Oh...
The real problem is the lack of transparency as platforms fight tooth and nail to retain total control over content while appearing to foster freedom of speech.
The guy is literally describing how to shut down discussion on topics by escalating behaviours around it.
The great problem with this approach is that there are very many groups happy to see discussion of diverse topics quashed, and they're already familiar with how to get it done on platforms like Twitter.
The constitutional "freedom of the press" allows all citizens to use machinery to mass produce their written messages for distribution to others; the initial realization of such a machine being the printing press.
So the freedom of speech doesn't belong to the bots but to the citizens who control them.
It seems like he's arguing that people claiming moderation is censoring them are wrong, because moderation of large platforms is dispassionate and focused on limiting behavior no one likes, rather than specific topics.
I have no problem believing this is true for the vast majority of moderation decisions. But I think the argument fails because it only takes a few exceptions or a little bit of bias in this process to have a large effect.
On a huge platform it can simultaneously be true that platform moderation is almost always focused on behavior instead of content, and a subset of people and topics are being censored.
> On a huge platform it can simultaneously be true that platform moderation is almost always focused on behavior instead of content, and a subset of people and topics are being censored.
He made this exact point in a previous post. Some topics look like they're being censored only because they tend to attract such a high concentration of bad actors who simultaneously engage in bullying type behavior. They get kicked off for that behavior and it looks like topic $X is being censored when it mostly isn't.
That's not the same point. Again, I have no problem believing that what you say happens, even often. Even still, some topics may really be censored. They may even be the same topics; just because there's an angry mob on one side of a topic doesn't mean that everyone on that side of the topic is wrong, and that's the hardest situation to moderate dispassionately. Maybe even impossible. Which is when I can imagine platforms getting frustrated and resorting to topic censorship.
Could also be that some objectionable behavior patterns are much more common in some ideological groups than others, which makes it appear as if the moderation is biased against them. It is, just not in the way they think.
Having read Yishan's older threads, the point he makes about spam is important: it's about increasing people's comfort level with platforms. I think chat is maybe the easiest to feel comfortable on, then forums and social media platforms.
Every one is different, and moderation isn't going to be to everyone's liking, but as long as it encourages respectful engagement and rejects the trolls who show no such interest, there is enough social media for everyone who keeps an open mind on all sorts of subjects.
Reddit and Twitter are two of the top reasons why this country, and the world for that matter, is so divided politically. Keep in mind that if there weren't division, politicians would have less influence.
> Our current climate of political polarization makes it easy to think itʻs about the content of the speech, or hate speech, or misinformation, or censorship, or etc etc.
Are we sure that it is not the other way around? Didn't social platforms create or increase polarization?
I keep seeing these comments from social platforms that take it as fact that society is polarized and that they work hard to fix it, when I believe it is the other way around: social media has created the opportunity to increase polarization, and the platforms are not able to stop it for technical, social or economic reasons.
In fact it seems that people were always polarized; it's just that the political parties (R & D in the US) didn't really bother sorting themselves on topics until the 1960s: even in the 1970s and early 1980s it was somewhat common to vote for (e.g.) an R president but a D representative (or vice versa). Straight-ticket one-party voting didn't really become the majority until the late 1980s and 1990s.
There's a chapter or two in the above book describing psychology studies showing that humans form tribes 'spontaneously' for the most arbitrary of reasons. "Us versus them" seems to be baked into the structure of humans.
I think political parties only later began astroturfing on social media and split users in camps. Formerly content on reddit in default subreddits often had low quality, but you still got some nice topics here and there. Now it is a propaganda hellhole that is completely in the hands of pretty polarized users.
> "Us versus them" seems to be baked into the structure of humans.
Not quite, but one of the most effective temptations you can offer people is a moral excuse to hate others, best of all when those others can be painted as responsible for all the evil in the world. It feels good to judge; it distracts from your own faults, flaws, insecurities, fears and problems. This is pretty blatant and has become far, far worse than the merely populist content reddit used to have. We see it especially on political topics, but also around the pandemic, for example.
It's quite interesting that the USSR collapsed in 1991, which removed the biggest external "us vs them" actor.
But on the other hand there are also countless other factors that affect society at scale: the rise of the internet, the rise of pharmaceutical psychotropics, the surge in obesity, the surge in autism, declines in testosterone, the apparent reversal of the Flynn effect, and more.
With so many things happening it all feels like a Rorschach test when trying to piece together anything like a meaningful hypothesis.
I think that you should look into the history of talk radio, or maybe just radio in general. Then maybe a history of American journalism, from Robert McCormick's Chicago Tribune back to the party newspapers set up in the first years of the republic.
Yepp, same message different medium. Having someone in your family who “listens to talk radio” was the “they went down the far right YouTube rabbit hole” of old.
I mean the big names in talk radio are still clucking if you want to listen to them today.
>I visited Mr. Cain in West Virginia after seeing his YouTube video denouncing the far right. We spent hours discussing his radicalization. To back up his recollections, he downloaded and sent me his entire YouTube history, a log of more than 12,000 videos and more than 2,500 search queries dating to 2015.
Society is a closed system, twitter is not outside of society.
The people on twitter are real people (well, mostly, probably), and have real political opinions.
If you talk to people, by and large they'll profess moderate opinions, because in person discussions still trigger politeness and non-confrontational emotions in most people, so the default 'safe' thing to say is the moderate choice, no matter what their true opinion happens to be.
The internet allows people to take the proverbial mask off.
I would disagree about proverbial masks. The majority of people in the world, including the US, are simply too preoccupied with their everyday routine, problems and work to end up with extreme political views.
What the internet does offer is the ease of changing masks and joining diverse groups. Trying something unusual without repercussions appeals to a lot of people who usually simply don't have time to join such groups offline.
The real problem is that propaganda has unfortunately evolved too, along with all the new research about human psychology, behaviors and fallacies. Abusing the weaknesses of the monkey brain at scale is relatively easy and profitable.
So most of the republicans I run into are extremely frank about the way the country ought to be run. When I was younger it was the same way with democrats.
I used to think this until several instances of various neighbors getting drunk enough to shed the veil of southern hospitality and reveal how racist they are.
Plenty of people have radical thoughts and opinions, but are smart enough to keep it to themselves IRL
"The first thing most people get wrong is not realizing that moderation is a SIGNAL-TO-NOISE management problem"
Which your entire staff ignored when a single user destroyed several LED businesses by vote-manipulating everything, despite every one of those businesses coming to you with verifiable proof of the vote manipulation.
This ex-CEO has zero room to be speaking about anything like this while they've not fixed the problems their ignorance directly caused.
He was CEO of a company that has volunteer moderators, what he knows about handling moderation is tainted by the way reddit is structured. Also, reddit's moderation is either heavy handed or totally ineffective depending on the case so not sure he's the right person to talk to.
Also, I stopped reading when he did an ad break on a twitter thread. Who needs ads in twitter threads? It makes him seem desperate and out of touch. Nobody needs his opinion, and they need his opinion with ad breaks even less.
This thread was a great read, even the tree parts, but it fails to address the line which I feel social media crossed in 2020. I was inattentively OK with the behavioral spam filtering: I didn't notice it, but very likely would have appreciated it if I had.
The line was crossed by fact checks and user bans that were clearly all about content and not about machine-detectable behavior patterns. This thread seems to avoid or ignore that category of moderation.
>Why is this? Because it has no value? Because itʻs sometimes false? Certainly itʻs not causing offline harm.
>No, no, and no.
Fundamentally disagree with his take on spam. Not only does spam have no value, it has negative value. The content of the spam itself is irrelevant when the same message is being pushed out a million times and obscuring all other messages. Reducing spam through rate-limiting is certainly the easiest and most impactful form of moderation.
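For concreteness, here's a minimal sketch of behavior-based rate limiting as a token bucket per account; the capacity and refill rate are made-up numbers, not anything a real platform uses. The point is that the spam's content never enters into it, only the posting rate does:

    import time

    class TokenBucket:
        """Per-account posting budget: tokens refill slowly, each post costs one."""
        def __init__(self, capacity=5, refill_per_sec=1 / 60.0):
            self.capacity = capacity              # max burst size (assumed)
            self.tokens = float(capacity)
            self.refill_per_sec = refill_per_sec  # ~1 post per minute sustained (assumed)
            self.last = time.monotonic()

        def allow_post(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_sec)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # over budget: hold the post for review, or drop it

    bucket = TokenBucket()
    print([bucket.allow_post() for _ in range(8)])  # first 5 pass, the rest are throttled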
Yishan's points are great, but there is a more general and fundamental question to discuss...
Moderation is the act of removing content, i.e. of assigning a score of 1 or 0 to content.
If we generalize, we can assign a score from 1 to 0 to all content. Perhaps this score is personalized. Now we have a user's priority feed.
How should Twitter score content using personalization? Filter bubble? Expose people to a diversity of opinions? etc. Moderation is just a special case of this.
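To make the generalization concrete, here's a toy sketch where every item gets a personalized score in [0, 1] and removal is just the special case of forcing the score to 0. The features and weights are invented purely to illustrate the framing:

    def score(item, user):
        """Return a 0..1 relevance score; removal is just score == 0."""
        if item["is_spam"]:                      # classic moderation: hard zero
            return 0.0
        s = 0.5
        s += 0.3 if item["topic"] in user["interests"] else -0.1   # personalization
        s += 0.2 if item["author"] in user["follows"] else 0.0
        return max(0.0, min(1.0, s))

    def build_feed(items, user):
        ranked = sorted(items, key=lambda it: score(it, user), reverse=True)
        return [it for it in ranked if score(it, user) > 0.0]

    user = {"interests": {"space"}, "follows": {"alice"}}
    items = [
        {"id": 1, "topic": "space",  "author": "alice", "is_spam": False},
        {"id": 2, "topic": "crypto", "author": "bob",   "is_spam": False},
        {"id": 3, "topic": "space",  "author": "carol", "is_spam": True},
    ]
    print([it["id"] for it in build_feed(items, user)])  # -> [1, 2]; the spam scores 0 and vanishes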
Some people want to escape the filter bubble, to expose their ideas to criticism, to strengthen their thinking and arguments through conflict.
Other people want a community of like-minded people to share and improve ideas and actions collectively, without trolls burning everything down all the time.
Some people want each of those types of community depending on the topic and depending on their mood at the time.
A better platform would let each community decide, and make it easy to fork off new communities with different rules when a subgroup or individual decides the existing rules aren't working for them.
>Machine learning algorithms are able to accurate identify spam, and itʻs not because they are able to tell itʻs about Viagra or mortgage refinancing, itʻs because spam has unique posting behavior and patterns in the content
I'm amazed that this is still true (assuming Yishan is right). I would have thought GPT-3 spam would be the norm already, and that it becomes a cat-and-mouse game from there.
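To illustrate what "unique posting behavior and patterns" could mean in practice, here's a crude, content-blind sketch; the features and thresholds are my own guesses, not any platform's real model. Cheap language models defeat the duplicate-text feature but not the timing ones, which may be part of why this still works:

    from collections import Counter

    def behavior_features(posts):
        """posts: list of (timestamp_seconds, text) for one account."""
        times = sorted(t for t, _ in posts)
        gaps = [b - a for a, b in zip(times, times[1:])] or [0]
        texts = [txt for _, txt in posts]
        most_common_count = Counter(texts).most_common(1)[0][1]
        return {
            "posts_per_hour": len(posts) / max(1.0, (times[-1] - times[0]) / 3600.0),
            "min_gap_seconds": min(gaps),
            "duplicate_ratio": most_common_count / len(texts),  # same message over and over
        }

    def looks_like_spam(posts):
        f = behavior_features(posts)
        # Content-blind rule: high frequency, or near-identical messages fired rapidly.
        return f["posts_per_hour"] > 30 or (f["duplicate_ratio"] > 0.8 and f["min_gap_seconds"] < 10)

    burst = [(i * 2, "check out my site") for i in range(50)]  # 50 identical posts, 2s apart
    print(looks_like_spam(burst))  # True, without ever 'understanding' the text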
Can there be a moderation bot that detects flamewars and steps in? It could enforce civility by limiting discussion to only go through the bot and by employing protocols like "each side summarize issues", "is this really important here", or "do you enjoy this".
Engaging with the bot is supposed to be a rational barrier, a tool to put unproductive discussions back on track.
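Something like this could be a starting point; the detection heuristic and the prompts are invented here just to show the shape of such a bot:

    def is_flamewar(thread, window_minutes=30, min_replies=20, max_participants=4):
        """Heuristic: lots of rapid back-and-forth between very few people."""
        recent = [r for r in thread if r["age_minutes"] <= window_minutes]
        authors = {r["author"] for r in recent}
        return len(recent) >= min_replies and len(authors) <= max_participants

    PROTOCOL = [
        "Each side: summarize the other side's position in one sentence.",
        "Is resolving this here actually important to you?",
        "Are you enjoying this exchange?",
    ]

    def intervene(thread):
        if is_flamewar(thread):
            # e.g. lock direct replies and route further messages through the bot
            return {"lock_replies": True, "prompts": PROTOCOL}
        return {"lock_replies": False, "prompts": []}

    thread = [{"author": "a" if i % 2 else "b", "age_minutes": i} for i in range(25)]
    print(intervene(thread)["lock_replies"])  # True: two people, 25 replies in 25 minutes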
The commentary is interesting, but it does unfortunately gloss over the very real issue of actually controversial topics. Most platforms don't typically set out to ban controversial stuff from what I can tell, but the forces that be (advertisers, government regulators, payment processors, service providers, etc.) tend to be quite a bit more invested in such topics. Naughty language on YouTube and porn on Twitter are some decent examples; these are not and never have been signal to noise ratio problems. While the media may be primarily interested in the problem of content moderation as it impacts political speech, I'd literally filter all vaguely politically charged speech (even at the cost of missing plenty of stuff I'd rather see) if given the option.
I think that the viewpoints re: moderation are very accurate and insightful, but I honestly have always felt that it's been more of a red herring for the actual scary censorship creep happening in the background. Go find the forum threads and IRC logs you have from the 2000s and think about them for a little while. I think that there are many ways in which I'd happily admit the internet has improved, but looking back, I think that a lot of what was discussed and how it was discussed would not be tolerated on many of the most popular avenues for discourse today—even though there's really nothing particularly egregious about them.
I think this is the PoV that one has as a platform owner, but unfortunately it's not the part that I think is interesting. The really interesting parts are always off on the fringes.
It’s hard for me to imagine what “scary actual censorship” is happening — that is, to identify topics or perspectives that cannot be represented in net forums. If such topics/perspectives exist, then the effectiveness must be near total to the point where I’m entirely unaware of them, which I guess would be scary if people could provide examples. But usually when I ask, I’m supplied with topics which I have indeed seen discussed on Twitter, Reddit, and often even HN, so…
Nobody wants to answer this, because to mention a controversial topic is to risk being accused of supporting it.
You could look at what famous people have gotten into trouble over. Alex Jones or Kanye West. I assume there have been others, but those two were in the news recently.
The problem is that it's not really about censorship the way that people think about it; it's not about blanket banning the discussion of a topic. You can clearly have a discussion about extremely heated debate topics like abortion, pedophilia, genocide, whatever. However, in some of these topics there are pretty harsh chilling effects that prevent people from having very open and honest discussion about them. The reason why I'm being non-specific is twofold: one is because I am also impacted by these chilling effects, and another is because making it specific makes it seem like it's about a singular topic when it is about a recurring pattern of behaviors that shift topics over time.
If you really don't think there have been chilling effects, I put forth two potential theories: one is that you possibly see this as normal "consequences for actions" (I do not believe this: I am strictly discussing ideas and opinions that are controversial even in a vacuum.) OR: perhaps you genuinely haven't really seen the fringes very much, and doubt their existence. I don't really want to get into it, because it would force me to pick specific examples that would inextricably paint me into those arguments, but I guess maybe it's worth it if it makes the point.
> The problem is that it's not really about censorship the way that people think about it; it's not about blanket banning the discussion of a topic.
Then we're far away enough from the topic of censorship that we should be using different language for what we're discussing. It's bad enough that people use the term "censorship" colloquially when discussing private refusal to carry content vs state criminalization. It's definitely not applicable by the time we get to soft stakes.
As someone whose life & social circle is deeply embedded in a religious institution which makes some claims and teachings I find tenuous to objectionable, I'm pretty familiar with chilling effects and other ways in which social stakes are held hostage over what topics can be addressed and how. And yet I've found these things:
(1) It's taught me a lot about civil disagreement and debate, including the fact that more often than not, there are ways to address even literally sacred topics without losing the stakes. It takes work and wit, but it's possible. Those lessons have been borne out later when I've chosen to do things like try to illuminate merits in pro-life positions while in overwhelmingly pro-choice forums.
(2) It's made me appreciate the value of what the courts have called time/place/manner restrictions. Not every venue is or should be treated the same. Church services are the last time/place to object to church positions, and when one does choose that it's best to take on the most obligation in terms of manner, making your case in the terms of the language, metaphors, and values of the church.
(3) Sometimes you have to risk the stakes, and the only world in which it would actually be possible for there NOT to be such stakes (and risk and conflict over them) would be one in which people have no values at all
To be clear I am 100% suggesting that advertisers, staff and others participate in silencing this speech. Reddit is actually a funny example because Reddit admins have been accused by multiple subreddits of stepping in and forcing new rules to be instituted in subreddits.
"I would like to take this time to repeat my claim that important topics are being silenced without recognition of any obligation to actually even make the pretense that I'm replying or otherwise engaging in discourse, which may be a giveaway my motives/values here are not actually about discourse."
At least, that's what my translator makes of your comment, but freedom of speech being what it is, I certainly can't compel you to acknowledge it.
A bunch of things that make sense about banning spam and spammy behaviour and then the payload: How banning discussion of the lab leak hypothesis back then made sense and wasn't politically motivated.
Of course it wasn't about the content, of course. Neither was Hunter's laptop story ban about the content, no, of course not.
I wouldn't broadcast I have this limited reading comprehension if the CIA was waterboarding me.
Conspiracy-theory thinking driven by fixating on the content is literally discussed in the text, and you decide to skip all that and just do the thing he is saying is a problem. With no hint of irony whatsoever...
Here are the specific tweets you might need to re-read:
> Now back to where we were… when we left off, I was talking about how people are subconsciously influenced by the specific content thatʻs being moderated (and not the behavior of the user) when they judge the moderation decision.
When people look at moderation decisions by a platform, they are not just subconsciously influenced by the nature of the content that was moderated, they are heavily - overwhelmingly - influenced by the nature of the content!
Would you think the moderation decision you have a problem with would be fair if the parties involved were politically reversed?
> there will be NO relation between the topic of the content and whether you moderate it, because itʻs the specific posting behavior thatʻs a problem
I get why Yishan wants to believe this, but I also feel like the entire premise of this argument is then in some way against a straw man version of the problem people are trying to point to when they claim moderation is content-aware.
The issue, truly, isn't about what the platform moderates so much as the bias between when it bothers to moderate and when it doesn't.
If you have a platform that bothers to filter messages that "hate on" famous people but doesn't even notice messages that "hate on" normal people--even if the reason is just that almost no one sees the latter messages and so they don't have much impact and your filters don't catch it--you have a (brutal) class bias.
If you have a platform that bothers to filter people who are "repetitively" anti large classic tech companies for the evil things they do trying to amass money and yet doesn't filter people who are "repetitively" anti crypto companies for the evil things they do trying to amass money--even if it feels to you as the moderator that the person seems to have a point ;P--that is another bias.
The problem you see in moderation--and I've spent a LONG time both myself being a moderator and working with people who have spent their lives being moderators, both for forums and for live chat--is that moderation and verification of everything not only feels awkward but simply doesn't scale, and so you try to build mechanisms to moderate enough that the forum seems to have a high enough signal-to-noise ratio that people are happy and generally stay.
But the way you get that scale is by automating and triaging: you build mechanisms involving keyword filters and AI that attempt to find and flag low signal comments, and you rely on reports from users to direct later attention. The problem, though, is that these mechanisms inherently have biases, and those biases absolutely end up being inclusive of biases that are related to the content.
Yishan seems to be arguing that perfectly-unbiased moderation might seem biased to some people, but he isn't bothering to look at where or why moderation often isn't perfect to ensure that moderation actually works the way he claims it should, and I'm telling you: it never does, because moderation isn't omnipresent and cannot be equally applied to all relevant circumstances. He pays lip service to it in one place (throwing Facebook under the bus near the end of the thread), and yet fails to then realize this is the argument.
At the end of the day, real world moderation is certainly biased. And maybe that's OK! But we shouldn't pretend it isn't biased (as Yishan does here) or even that that bias is always in the public interest (as many others do). That bias may, in fact, be an important part of moderating... and yet, it can also be extremely evil and difficult to discern from "I was busy" or "we all make mistakes" as it is often subconscious or with the best of intentions.
When problem #1 is spam then problem #0 is bots and paid trolls.
Whenever there is a profile with a handle in a format similar to @jondoe123456, emojis in the name, and emojis and hashtags in the bio, especially related to political/religious topics, there's a 99% chance that this is a bot or a troll with multiple accounts.
There seems to be no mention of (de)centralization or the use of reputation in the comments here or in the twitter thread.
Everyone is discussing a failure mode of a centralized and centrally moderated system without questioning those properties, but that's really counter to traditional internet-based communication platforms like email, usenet, irc etc.
Nobody has the answers. Social media is an experiment gone wrong. Just like dating apps and other pieces of software that exist that are trying to replace normal human interaction. These first generation prototypes have a basic level of complexity and I expect by 2030 technology should evolve to the point where better solutions exist.
"The fallacy is that it is very easy to think itʻs about WHAT is said, but Iʻll show you why itʻs not…"
Let's test this theory. Create a post across 10 social media platforms disparaging the white race. Now do the same about Jews. See which set of posts gets taken down at a higher rate.
I think this is a limitation of faceless communication, and boils down to the respect that users of a platform have for the other users and the platform itself. Ie - there isn't enough. And that's ok, because we should spend more time talking in real life.
Reposting this paper yet again, to rub in the point that social media platforms play host to communities and communities are often very good at detecting interlopers and saboteurs and pushing them back out. And it turns out the most effective approach is to let people give bad actors a hard time. Moderation policies that require everyone to adhere to high standards of politeness in all circumstances are trying to reproduce the dynamics of kindergartens, and are not effective because the moderators are easily gamed.
Also, if you're running or working for a platform and dealing with insurgencies, you will lose if you try to build any kind of policy around content analysis. Automated context analysis is generally crap because of semantic overloading (irony, satire, contextual humor), and manual context analysis is labor-intensive and immiserating, to the point that larger platforms like Facebook are legitimately accused of abusing their moderation staff by paying them peanuts to wade through toxic sludge and then dumping them as soon as they complain or ask for any kind of support from HR.
To get anywhere you need to look at patterns of behavior and to scale you need to do feature/motif detection on dynamic systems rather than static relationships like friend/follower selections. However, this kind of approach is fundamentally at odds with many platforms' goal of maximizing engagement as means to the end of selling ad space.
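As a rough illustration of that distinction (all field names invented): static features are a snapshot of the follower graph, while dynamic features describe how an account behaves over time, which is much harder for an insurgent campaign to fake:

    def static_features(account, graph):
        """Snapshot of relationships: cheap, but easy for insurgents to mimic."""
        return {
            "followers": len(graph.get(account, [])),
            "mutuals": sum(1 for f in graph.get(account, []) if account in graph.get(f, [])),
        }

    def dynamic_features(events):
        """events: list of (t_seconds, kind, target) actions by one account over time."""
        times = [t for t, _, _ in events]
        kinds = [k for _, k, _ in events]
        targets = {tg for _, k, tg in events if k == "reply"}
        span_hours = max(1.0, (max(times) - min(times)) / 3600.0)
        return {
            "actions_per_hour": len(events) / span_hours,
            "reply_ratio": kinds.count("reply") / len(kinds),  # pile-ons skew this upward
            "distinct_reply_targets": len(targets),            # one target hit repeatedly is a motif
        }

    graph = {"troll": ["a", "b"], "a": ["troll"], "b": []}
    print(static_features("troll", graph))
    events = [(i * 60, "reply", "victim") for i in range(40)]
    print(dynamic_features(events))  # 40 replies in an hour, all at one target: a pile-on signature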
> Spam is typically easily identified due to the repetitious nature of the posting frequency, and simplistic nature of the content (low symbol pattern complexity).
Now that we have cheap language models, you could create endless variations of the same idea. It's an arms race.
There are so few places anymore where you can't guess exactly what will be in the comments threads, and you go between feeling either frustrated or occasionally relieved to see someone who is open-minded. The few that popped up either seem to become echo chambers with their own biases, mysteriously disappear, or get flooded by organized campaigns to ruin the landscape.
I've always thought that slashdot handled comment moderation the best. (And even that still had problems.)
In addition to that these tools would help:
(1) Client-side: being able to block all content from specific users and the replies to specific users.
(2) Server-side: if userA always 'up votes' comments from userB, apply a negative weighting to that upvote (so it only counts as 0.01 of a vote).
Likewise with 'group voting': if userA, userB, and userC always vote identically, down-weight those votes. (This will slow the 'echo chamber' effect.)
(3) Account age/contribution scale: if userZ has been a member of the site since its inception, AND has a majority of their posts up-voted, AND contributes regularly, then give their votes a higher weighted value.
Of course these wouldn't solve everything, as nothing will ever address every scenario; but I've often thought that these things, combined with how slashdot allowed you to score between -1 to 5 AND let you set the 'post value' to 2+, 3+, or 4+, would help eliminate most of the bad actors (a rough sketch of the vote-weighting idea follows below).
Side note: Bad Actors, and "folks you don't agree with" should not be confused with each other.
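Here's a naive sketch of the weighting ideas in (2) and (3); the thresholds are arbitrary, and as the replies below note, tuning them without breaking legitimate expert communities is the hard part:

    def vote_weight(voter, author, history):
        """history[(voter, author)] = (upvotes_cast, total_votes_on_author_seen)."""
        ups, seen = history.get((voter, author), (0, 0))
        if seen >= 20 and ups / seen > 0.95:
            return 0.01  # near-perfect agreement looks like a clique or a bot; barely counts
        return 1.0

    def reputation_multiplier(account_age_days, upvoted_ratio, posts_per_month):
        """Old, consistently-upvoted, active accounts get a modest boost."""
        if account_age_days > 365 * 3 and upvoted_ratio > 0.6 and posts_per_month >= 4:
            return 1.5
        return 1.0

    def score_vote(voter, author, history, voter_profile):
        return vote_weight(voter, author, history) * reputation_multiplier(**voter_profile)

    history = {("userA", "userB"): (48, 50)}
    profile = {"account_age_days": 2000, "upvoted_ratio": 0.8, "posts_per_month": 10}
    print(score_vote("userA", "userB", history, profile))  # 0.015: the loyal-upvoter penalty dominates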
One thing that's easy to forget is that super-complex weighted moderation/voting systems can get computationally expensive at the scale of something like Twitter or Facebook etc..
Slashdot had a tiny population, relatively speaking, so could afford to do all that work.
But when you're processing literally millions of posts a minute, it's a different order of magnitude I think.
Tiny, specific, non-generalist population. As soon as that changed, /. went down the drain like everything else. I still like /.'s moderation system better than most, but the specifics of how the system works are a second order concern at best.
I wonder about your (2) idea... If the goal is to reduce the effect of bots that vote exactly the same, then ok, sure. (Though if it became common, I'm sure vote bots wouldn't have a hard time being altered to add a little randomness to their voting.) But I'm not sure how much it would help beyond that, since if it's not just identifying _exact_ same voting, then you're going to need to fine tune some percentage-the-same or something like that. And I'm not sure the same fine-tuned percentage is going to work well everywhere, or even across different threads or subforums on the same site. I also feel like (ignoring the site-to-site or subforum-to-subforum differences) that it would be tricky to fine tune correctly to a point where upvotes still matter. (Admittedly I have nothing solid to base this on other than just a gut feeling about it.)
It's an interesting idea, and I wonder what results people get when trying it.
> Server-side: If userA always 'up votes' comments from userB apply a negative weighting to that upvote
This falls down, hard, in expert communities. There are a few users who are extremely knowledgeable that are always going to get upvoted by long term community members who acknowledge that expert's significant contributions.
> Being able to block all content from specific users and the replies to specific users.
This is doable now client side, but when /. was big, it had to be done server side, which is where I imagine all the limits around friend/foes came from!
The problem here is, trolls can create gobs of accounts easily, and malevolent users group together to create accounts and upvote them, so they have plenty of spare accounts to go through.
Imagine there were many shades of up- and down-voting on HN, according to your earned karma points and to your interactions outside of your regular opinion echo chambers.
Reddit has a different structure than Twitter. In fact, go back to before Slashdot and Digg and the common (HN, Reddit) format of drive-by commenting was simply not a thing. Usenet conversations would take place over the course of days, weeks, or even months.
Business rules. Twitter is driven by engagement. Twitter is practically the birthplace of the "hot take". It's what drives a lot of users to the site and keeps them there. How do you control the temper of a site when your goal is inflammatory to begin with?
Trust and Good Faith. When you enter into a legal contract, both you and the party you are forming a contract with are expected to operate in good faith. You are signaling your intent is to be fair and honest and to uphold the terms of the contract. Now, the elephant in the room here is what happens when the CEO, Elon Musk, could arguably (Matt Levine has done so, wonderfully) not even demonstrate good faith during the purchase of Twitter, itself. Or has been a known bully to Bill Gates regarding his weight or sex appeal, or simply enjoys trolling with conspiracy theories. What does a moderation system even mean in the context of a private corporation owned by such a person? Will moderation apply to Elon? If not, then how is trust established?
There is a lot to talk about on that last point. In the late '90s a site called Advogato[1] was created to explore trust metrics. It was not terribly successful, but it was an interesting time in moderation. Slashdot was also doing what they could. But then it all stopped with the rise of corporate forums. Corporate forums, such as Reddit, Twitter, or Facebook, seem to have no interest in these sorts of things. Their interest is in conflict: they need to onboard as many eyeballs as possible, as quickly as possible, and with as little user friction as possible. They also serve advertisers, who, you could argue, are the real arbiters of what can be said on a site.
Free speech might be self-regulating. A place that gets excessive spam attracts no one, and then there wouldn't be much motivation to spam it anymore.
I don't recall spam restrictions on old IRC; a moderator could boot you off. My own theory is that an exponential cool-off timer on posts could be the only thing needed that is still technically 100% free speech.
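That exponential cool-off could be as simple as this sketch, where the required wait doubles with each post in a streak and resets after a quiet period (the base delay and reset window are placeholder numbers):

    import time

    class ExponentialCooloff:
        def __init__(self, base_seconds=5, reset_after=600):
            self.base = base_seconds
            self.reset_after = reset_after  # go quiet this long and the penalty resets
            self.streak = 0
            self.last_post = None

        def wait_required(self, now=None):
            now = time.monotonic() if now is None else now
            if self.last_post is not None and now - self.last_post > self.reset_after:
                self.streak = 0
            return 0 if self.streak == 0 else self.base * (2 ** (self.streak - 1))

        def try_post(self, now=None):
            now = time.monotonic() if now is None else now
            needed = self.wait_required(now)
            if self.last_post is None or now - self.last_post >= needed:
                self.last_post, self.streak = now, self.streak + 1
                return True
            return False  # content-blind: rapid posting just gets slower, nothing is deleted

    c = ExponentialCooloff()
    print([c.try_post(now=t) for t in [0, 1, 20, 21, 200]])  # -> [True, False, True, False, True]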
This is a great article. Thoughtful and substantive.
Nothing in this article has anything to do with why all the platforms banned Alex Jones.
Which is the part no one seems to be addressing.
Once we accepted banning Alex Jones, which was relatively easy to accept because he is so hated, we opened the door to deplatforming as something distinct from moderation.
But the distinction isn’t made, and it all gets conflated instead.
That is how the platforms lost all of our trust. This must be directly addressed.
We're working on some solutions around this problem - a browser level filter on toxic comments / blatant misinformation found on ad-supported platforms, helpful context as a layer on top of content you're reading, and moderated community debate around current events, with enforced norms. Still early if anyone wants to join what's likely to be a non-profit:
I think email might be a good system to model this on. In addition to an inbox, almost all providers provide a Spam folder, and others like Gmail separate items into 'Promotions' and 'Social' folders/labels. I imagine almost nobody objects to this.
Why can't social media follow a similar methodology? There is no requirement that FB/Twitter/Insta/etc feeds be a single "unit". The primary experience would be a main feed (uncontroversial), but additional feeds/labels would be available to view platform-labeled content. A "Spam Feed" and a "Controversial Feed" and a "This Might Be Misinformation Feed".
Rather than censoring content, it segregates it. Users are free to seek/view that content, but must implicitly acknowledge the platform's opinion by clicking into that content. Just like you know you are looking at "something else" when you go to your email Spam folder, you would be aware that you are venturing off the beaten path when going to the "Potential State-Sponsored Propaganda Feed". There must be some implicit trust in a singular feed which is why current removal/censorship schemas cause such "passionate" responses.
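A sketch of that email-style routing: every item lands in some labeled feed instead of being removed, and users opt in to the off-the-beaten-path feeds. The labels and the toy scoring inputs below are assumptions for illustration:

    FEEDS = ("main", "spam", "controversial", "possible_misinformation")

    def route(item):
        """Return a feed label instead of a binary keep/delete decision."""
        if item.get("spam_score", 0) > 0.9:
            return "spam"
        if item.get("flamewar_score", 0) > 0.7:
            return "controversial"
        if item.get("disputed_by_fact_checkers"):
            return "possible_misinformation"
        return "main"

    def build_feeds(items):
        feeds = {name: [] for name in FEEDS}
        for it in items:
            feeds[route(it)].append(it)  # segregated, not censored: users can still open any feed
        return feeds

    items = [
        {"id": 1, "spam_score": 0.95},
        {"id": 2, "flamewar_score": 0.8},
        {"id": 3, "disputed_by_fact_checkers": True},
        {"id": 4},
    ]
    print({k: [i["id"] for i in v] for k, v in build_feeds(items).items()})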
I think the most interesting thing to come out of that post is the realization that, given an intelligence tasked with reducing the most harm for humanity as a whole, it will identify behaviors that lead to physical confrontation and censor/out-gas/diminish/remove interactions/prevent paths to that behavior interacting with the network, for as long as it sees the predicted behaviors leading to harm.
tl;dr AI bans any content that is likely to lead to physical confrontation. It's not the content that sucks, it's that people suck. It's not that people suck, it's that people are easily influenced into a state of mind and behavior that leads to harming other humans.
The bigger question, which is also the oldest question: is human nature so lukewarm? Can we aspire to be 'wise elders' throughout the entire species while still retaining child-like curiosity and wonder?
I got my first experience in running a small-medium sized (~1000 user) game community over the past couple years. This is mostly commentary on running such a community in general.
Top-level moderation of any sufficiently cliquey group (i.e. all large groups) devolves into something resembling feudalism. As the king of the land, you're in charge of being just and meting out appropriate punishment/censorship/other enforcement of rules, as well as updating those rules themselves. Your goal at the end of the day is continuing to provide support for your product, administration/upkeep for your gaming community, or whatever else it was that you wanted to do when you created the platform in question. However, the cliques (whether they be friend groups, opinionated but honest users, actual political camps, or any other tribal construct) will always view your actions through a cliquey lens. This will happen no matter how clear or consistent your reasoning is, unless you fully automate moderation (which never works and would probably be accused of bias by design anyways).
The reason why this looks feudal is because you still must curry favor with those cliques, lest the greater userbase eventually buys into their reasoning about favoritism, ideological bias, or whatever else we choose to call it. At the end of the day, the dedicated users have much more time and energy to argue, or propagandize, or skirt rules than any moderation team has to counteract it. If you're moderating users of a commercial product, it hurts your public image (with some nebulous impact on sales/marketing). If you're moderating a community for a game or software project, it hurts the reputation of the community and makes your moderators/developers/donators uneasy.
The only approach I've decided unambiguously works is one that doesn't scale well at all, and that's the veil of secrecy or "council of elders" approach which Yishan discusses. The king stays behind the veil, and makes as few public statements as possible. Reasoning is only given insofar as is needed to explain decisions, only responding directly to criticism as needed to justify actions taken anyways. Trusted elites from the userbase are taken into confidence, and the assumption is that they give a marginally more transparent look into how decisions are made, and that they pacify their cliques.
Above all, the most important fact I've had to keep in mind is that the outspoken users, both those legitimately passionate as well as those simply trying to start shit, are a tiny minority of users. Most people are rational and recognize that platforms/communities exist for a reason, and they're fine with respecting that since it's what they're there for. When moderating, the outspoken group is nearly all you'll ever see. Catering to passionate, involved users is justifiable, but must still be balanced with what the majority wants, or is at least able to tolerate (the "silent majority" which every demagogue claims to represent). That catering must also be done carefully, because "bad actors" who seek action/change/debate for the sake of stoking conflict or their own benefit will do their best to appear legitimate.
For some of this (e.g. spam), you can filter it comfortably as Yishan discusses without interacting with the content. However, more developed bad actor behavior is really quite good at blending in with legitimate discussion. If you as king recognize that there's an inorganic flamewar, or abuse directed at a user, or spam, or complaint about a previous decision, you have no choice but to choose a cudgel (bans, filters, changes to rules, etc) and use it decisively. It is only when the king appears weak or indecisive (or worse, absent) that a platform goes off the rails, and at that point it takes immense effort to recover it (e.g. your C-level being cleared as part of a takeover, or a seemingly universally unpopular crackdown by moderation). As a lazy comparison, Hacker News is about as old as Twitter, and any daily user can see the intensive moderation which keeps it going despite the obvious interest groups at play. This is in spite of the fact that HN has less overhead to make an account and begin posting, and seemingly more ROI on influencing discussion (lots of rich/smart/fancy people post here regularly, let alone read).
Due to the need for privacy, moderation fundamentally cannot be democratic or open. Pretty much anyone contending otherwise is just upset at a recent decision or is trying to cause trouble for administration. Aspirationally, we would like the general direction of the platform to be determined democratically, but the line between these two is frequently blurry at best. To avoid extra drama, I usually aim to do as much discussion with users as possible, but ultimately perform all decisionmaking behind closed doors -- this is more or less the "giant faceless corporation" approach. Nobody knows how much I (or Elon, or Zuck, or the guys running the infinitely many medium-large discord servers) actually take into account user feedback.
I started writing this as a reply to paradite, but decided against that after going far out of scope.
In the real world, when you're unhinged, annoying, intrusive...you face almost immediate negative consequences. On social media, you're rewarded with engagement. Social media owners "moderate" behavior that maximizes the engagement they depend on, which makes it somewhat of a paradox.
It would be similar to a newspaper "moderating" their journalists to bring news that is balanced, accurate, fact-checked, as neutral as possible, with no bias to the positive or negative. This wouldn't sell any actual newspapers.
Similarly, nobody would watch a movie where the characters are perfectly happy. Even cartoons need villains.
All these types of media have exploited our psychological draw to the unusual, which is typically the negative. This attention hack is a skill evolved to survive, but now triggered all day long for clicks.
Can't be solved? More like unwilling to solve. Allow me to clean up Twitter:
- Close the API for posting replies. You can have your weather bot post updates to your weather account, but you shouldn't be able to instant-post a reply to another account's tweet.
- Remove the retweet and quote tweet buttons. This is how things escalate. If you think that's too radical, there's plenty of variations: a cap on retweets per day. A dampening of how often a tweet can be retweeted in a period of time to slow the network effect.
- Put a cap on max tweets per day.
- When you go into a polarized thread and rapidly like a hundred replies that are on your "side", you are part of the problem and don't know how to use the like button. Hence, a cap on max likes per day or max likes per thread. So that they become quality likes that require thought. Alternatively, make shadow-likes. Likes that don't do anything.
- When you're a small account spamming low effort replies and the same damn memes on big accounts, you're hitchhiking. You should be shadow-banned for that specific big account only. People would stop seeing your replies only in that context.
- Mob culling. When an account or tweet is mass-reported in a short time frame and it turns out that it was well within guidelines, punish every single user making those reports: a strong warning first, and after repeated abuse a full ban or loss of the ability to report (a rough sketch of detecting such report mobs follows this list).
- DM culling. It's not normal for an account to suddenly receive hundreds or thousands of DMs. Where a pile-on in replies can be cruel, a pile-on in DMs is almost always harassment. Quite a few people are OK with it if only the target is your (political) enemy, but we should reject it by principle. People joining such campaigns aren't good people, they are sadists. Hence they should be flagged as potentially harmful. The moderation action here is not straightforward, but surely something can be done.
- Influencer moderation. Every time period, comb through new influencers manually, for example those breaking 100K followers. For each, inspect how they came to power. Valuable, widely loved content? Or toxic engagement games? If it's the latter, dampen the effect, tune the algorithm, etc.
- Topic spam. Twitter has "topics", great way to engage in a niche. But they're all engagement farmed. Go through these topics manually every once in a while and use human judgement to tackle the worst offenders (and behaviors)
- Allow for negative feedback (dislike) but with a cap. In case of a dislike mob, take away their ability to dislike or cap it.
Note how none of these potential measures address what it is that you said; they address behavior: the very obvious misuse/abuse of the system. In that sense I agree with the author. Also, it doesn't require AI. The patterns are incredibly obvious.
All of this said, the above would probably make Twitter quite an empty place. Because escalated outrage is the product.
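Of the measures above, the mob-culling one is probably the most mechanical to sketch: detect a burst of reports on a single tweet, and if reviewers later rule the tweet within guidelines, strike everyone who piled on. The window sizes and thresholds here are invented:

    from collections import defaultdict

    def detect_report_mob(reports, window_seconds=3600, threshold=100):
        """reports: list of (timestamp, reporter, tweet_id). Returns {tweet_id: [reporters]}."""
        by_tweet = defaultdict(list)
        for t, reporter, tweet_id in sorted(reports):
            by_tweet[tweet_id].append((t, reporter))
        mobs = {}
        for tweet_id, rs in by_tweet.items():
            times = [t for t, _ in rs]
            if len(rs) >= threshold and times[-1] - times[0] <= window_seconds:
                mobs[tweet_id] = [r for _, r in rs]
        return mobs

    def cull(mobs, review_verdicts, strikes):
        """If reviewers find the tweet was within guidelines, strike every reporter in the mob."""
        for tweet_id, reporters in mobs.items():
            if review_verdicts.get(tweet_id) == "within_guidelines":
                for r in reporters:
                    strikes[r] = strikes.get(r, 0) + 1  # repeat offenders lose the report button
        return strikes

    reports = [(i, f"user{i}", "tweet_42") for i in range(150)]  # 150 reports in 150 seconds
    mobs = detect_report_mob(reports)
    print(cull(mobs, {"tweet_42": "within_guidelines"}, {})["user0"])  # -> 1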
Let's take the core points at the end in reverse order:
> 3: Could you still moderate if you canʻt read the language?
Except, moderators do read the language. I think it is pretty self-serving to say that users' views of moderation decisions are biased by content but moderators' views are not.
> 2: Freedom of speech was NEVER the issue (c.f. spam)
Spam isn't considered a free speech issue because we generally accept that spam moderation is done based on behavior in a content-blind way.
This doesn't magically mean that any given moderation team isn't impinging free speech. Especially when there are misinformation policies in place which are explicitly content-based.
> 1: It is a signal-to-noise management issue
Signal-to-noise management is part of why moderation can be good, but it doesn't even justify the examples from the twitter thread. Moderation is about creating positive experiences on the platform and signal-to-noise is just part of that.
My second statement answers that question. I don't want moderation advice from someone who was involved in a platform that purposely sets moderation policies to create political polarization. A comment by someone below sums it up nicely.
> ...and it's triply not true on yishan's reddit which both through administrative measures and moderation culture targets any and all communities that do not share the favoured new-left politics.
In yishan's defense however, I am not sure if those problems with reddit started before or after he left.
Citation needed. r/ChapoTrapHouse was banned, and there's many large alt-right subreddits in existence right now that haven't been banned (like r/tucker_carlson).
I don't have a reddit account, I just lurk without logging in. All the subreddits on the front page, which I assume are the default subreddits, sway the same way politically. Try commenting on r/news, r/worldnews, r/science or any of the front page subreddits with anything that doesn't match the party narrative and see how fast you get banned.
There are people on the far-left that say the same thing. Everyone with unhinged extremist views feels this way. That feeling by itself isn't data in support of the claim.
I am talking about any disagreement with the party line getting you banned, not "unhinged extremist views". The fact that you call it that highlights my initial point about political polarization very well. There are definitely subreddits on the other side of the political spectrum which do the same thing. The point here is that the front page/default subreddits are curated with a clear political slant.
One category that yishan doesn't bring up in his content ladder of spam|non-controversial|controversial is copyright infringing content like the latest Disney movie. It fits along with spam as obviously okay to moderate. But take a moment and self-reflect on why that's the case, and how much you've bought into capitalism as a solution for distributing scarce goods when some goods aren't scarce.
Meh. This theory doesn't fit the reality of clearly politically motivated moderation on reddit and twitter and elsewhere. Banning Jordan Peterson for calling a trans person by their old name is not a "pattern of misbehavior", and Jordan Peterson is not known for causing any offline violence. Heck, reddit banned one of the largest subreddits because they supported Trump.
I wonder if the problems the author describes could be solved by artificially downvoting and not showing spam and flamewar content, rather than banning people.
- Spam: don't show it to anyone, since nobody wants to see it. Repeatedly saying the same thing will get your posts heavily downvoted or just coalesced into a single post.
- Flamewars: again, artificially downvote them so that your average viewer doesn't even see them (if they aren't naturally downvoted). Also discourage people from participating, maybe by explicitly adding the text "this seems like a stupid thing to argue about" onto the thread and next to the reply button. As for the users who persist in flaming each other and then get upset, at that point you don't really want them on your platform anyway.
- Insults, threats, etc: again, hide and reword them. If it detects someone is sending an insult or threat, collapse it into "<insult>" or "<threat>" so that people know the content of what's being sent but not the emotion (though honestly, you probably should ban threats altogether). You can actually do this for all kinds of vitriolic, provocative language. If someone wants to hear it, they can expand the "<insult>" bubble, the point is that most people probably don't.
It's an interesting idea for a social network. Essentially, instead of banning people and posts outright, down-regulate them and collapse what they are saying while retaining the content (a rough sketch of this down-rank-and-collapse idea follows below). Their "free speech" is preserved, but they are not bothering anyone. If they complain about "censorship", you can point out that the First Amendment doesn't require anyone to hear you, and people can hear you if they want to, but they have specified, and the algorithm has detected, that they don't.
EDIT: Should also add that Reddit actually used to be like this, where subreddits had moderators but admins were very hands-off (actually just read about this yesterday). And it resulted in jailbait and hate subs (and though this didn't happen, could have resulted in dangerous subs like KiwiFarms). I want to make clear that I still think that content should be banned. But that content isn't what the author is discussing here: he is discussing situations where "behavior" gets people banned and then they complain that their (tame) content is being censored. Those are the people who should be down-regulated and text collapsed instead of banned.
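Roughly, the down-rank-and-collapse idea could look like this sketch; the keyword "detector" is a trivial placeholder, and in reality that classification is the genuinely hard part:

    RULES = {"insult": ("idiot", "moron"), "threat": ("i will hurt", "you'll regret")}

    def classify(text):
        lowered = text.lower()
        for label, phrases in RULES.items():
            if any(p in lowered for p in phrases):
                return label
        return None

    def render(post, expanded=False):
        """Keep the post, but collapse the heat: show '<insult>' unless the reader opts in."""
        label = classify(post["text"])
        if label is None or expanded:
            return {"text": post["text"], "rank_penalty": 0.0 if label is None else 0.8}
        return {"text": f"<{label}>", "rank_penalty": 0.8}  # heavily down-ranked, never deleted

    post = {"text": "You absolute moron, that take is wrong"}
    print(render(post))                  # {'text': '<insult>', 'rank_penalty': 0.8}
    print(render(post, expanded=True))   # full text, for readers who choose to expand it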
TL;DR: Run your platform to conform to the desires of the loudest users. Declare anything your loudest users don't want to see to be "flamewar" content and remove it.
My take: "Flamebait" is a completely accurate label for the content your loudest users don't want to see, but it's by definition your loudest users who are actually doing the flaming, and by definition they disagree with the things they're flaming. So all this does is reward people for flamewars, while the moderators effectively crusade on behalf of the flamers. But it's "okay" because, by definition, the moderators are going to be people who agree with the political views of the loudest viewers (if they weren't they'd get heckled off), so the mods you actually get will be perfectly happy with this situation. Neither the mods nor the loudest users have any reason to dislike or see any problem with this arrangement. So why is it a problem? Because it leads to what I'll call a flameocracy: whoever flames loudest gets their way as the platform will align with their desires (in order to reduce how often they flame). The mods and the platform are held hostage by these users but are suffering literal Stockholm Syndrome as they fear setting off their abusers (the flamers).
I like Yishan's reframing of content moderation as a 'signal-to-noise ratio problem' instead of a 'content problem', but there is another reframing which follows from that: moderation is also an outsourcing problem, in that moderation is about users outsourcing the filtering of content to moderators (be they all other users through voting mechanisms, a subset of privileged users through mod powers, or an algorithm).
Yishan doesn't define what the 'signal' is, or what 'spam' is, and there will probably be an element of subjectivity to these which varies between each platform and each user on each platform. Thus successful moderation happens when moderators know what users want, i.e. what the users consider to be 'good content' or 'signal'. This reveals a couple of things about why moderation is so hard.
First, this means that moderation actually is a content problem. For example, posts about political news are regularly removed from Hacker News because they are off-topic for the community, i.e. we don't consider that content to be the 'signal' that we go to HN for.
Second, moderation can only be successful when there is a shared understanding between users and moderators about what 'signal' is. It's when this agreement breaks down that moderation becomes difficult or fails.
Others have posted about the need to provide users with the tools to do their own moderation in a decentralised way. Since the 'traditional'/centralised approach creates a fragile power dynamic which requires this shared understanding of signal, I completely understand and agree with this: as users we should have the power to filter out content we don't like to see.
However, we have to distinguish between general and topical spaces, and to determine which communities live in a given space and what binds different individuals into collectives. Is there a need for a collective understanding of what's on-topic? HN is not Twitter, it's designed as a space for particular types of people to share particular types of content. Replacing 'traditional' or centralised moderation with fully decentralised moderation risks disrupting the topicality of the space and the communities which inhabit it.
I think what we want instead is a 'democratised' moderation, some way of moderating that removes a reliance on a 'chosen few', is more deliberate about what kinds of moderation need to be 'outsourced', and which allows users to participate in a shared construction of what they mean by 'signal' or 'on-topic' for their community. Perhaps the humble upvote is a good example and starting point for this?
Finally in the interest of technocratic solutions, particularly around spam (which I would define as repetitive content), has anyone thought about rate limits? Like, yeah if each person can only post 5 comments/tweets/whatever a day then you put a cap on how much total content can be created, and incentivise users to produce more meaningful content. But I guess that wouldn't allow for all the sick massive engagement that these attention economy walled garden platforms need for selling ads...
I like the idea that you don't want to moderate content, but behavior. And it let me to these thoughts. I'm curious about your additions to these thoughts.
Supply-side moderation of psychoactive agents has never worked. People have a demand to alter the state of their consciousness, and we should try to moderate demand in effective ways. The problem is not the use of psychoactive agents, it is the abuse. And the same applies to social media interaction, which is a very strong psychoactive agent [1]. Nevertheless it can be useful. Therefore we want to fight abuse, not use.
I would like to put up to discussion the usage and extension of techniques for demand moderation in the context of social media interactions which we know to somewhat work already in other psychoactive agents. Think something like drugs education in schools, fasting rituals, warning labels on cigarettes, limited selling hours for alcohol, trading food stamps for drug addicts etc.
For example, assuming the platform could somehow identify abusive patterns in the user, it could
- show up warning labels that their behavior might be abusive in the sense of social media interaction abuse
- give them mandatory cool-down periods
- trick the allostasis principle of their dopamine reward system by doing things intermittently, e.g. by only randomly letting their posts go through to other users, only randomly allowing them to continue reading the conversation (maybe only for some time), or only randomly shadow-banning some posts (a rough sketch of this kind of intervention follows this list)
- make them read documents about harmful social media interaction abuse
- hint to them what abusive patterns in other people look like
- give limited reading or posting credits (e.g. "Should I continue posting in this flamewar thread and then not post somewhere else where I find it more meaningful at another time?")
- etc.
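To make a couple of these concrete, here's a sketch that assumes (a big assumption) the platform already computes some 'abusive pattern' score for a session; the probabilities and durations are placeholders:

    import random
    import time

    def intervene(user, abuse_score):
        """Demand-side friction for flagged sessions; none of it is content-based."""
        if abuse_score < 0.5:
            return {"action": "none"}
        if abuse_score < 0.7:
            return {"action": "warning",
                    "message": "Your recent posting pattern looks compulsive. Take a break?"}
        if abuse_score < 0.9:
            # Intermittent delivery: some posts silently wait, dulling the instant-reward loop.
            delivered = random.random() < 0.5
            return {"action": "intermittent", "delivered_now": delivered}
        return {"action": "cooldown", "until": time.time() + 3600}  # mandatory hour off

    random.seed(1)
    for score in (0.2, 0.6, 0.8, 0.95):
        print(score, intervene("userX", score))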
I would like to hear your opinions about this in a sensible discussion.
_________
[1] Yes, social media interaction is a psychoactive and addictive agent, just like any other drug or common addiction such as overworking yourself, but I digress. People use social media interactions to, among other things, raise their anger, feed their addiction to complaining, feel the high of "being right"/owning the libs/nazis/bigots/idiots etc., feel like they learned something useful, entertain themselves, escape from reality, and so on. Many people suffer from compulsive or at least habitual abuse of social media interactions, which has been shown by numerous studies (sorry, too lazy to find a paper to cite right now). Moreover, the societal effects of social media interaction abuse, and its dynamics and influence on democratic politics, are obviously detrimental.
Maybe this works on a long enough timeline, but by your analogy entire generations of our population are now hopelessly addicted to this particular psychoactive agent. We might be able to raise a new generation that is immune to it, but in the meantime these people are strongly influencing policy, culture, and society in ways that are directly based on that addiction. This is a 'planting trees whose shade I know I will never feel' situation.
Re: "Hereʻs the answer everyone knows: there IS no principled reason for banning spam."
The author makes the mistake of assuming that "free speech" has ever been about saying whatever you want, whenever you want. This was never the case, including at the time of the founding of the U.S. Constitution. There has always been a tolerance window which defines what you can and cannot say without repercussions, usually enforced by society and societal norms.
The 1st Amendment was always about limiting what the government can do to curtail speech, but, as we know, there are plenty of other actors in the system that have moderated and continue to moderate communications. The problem with society today is that those in power have gotten really bad at defining a reasonable tolerance window, and in fact political actors have worked hard to shift the tolerance window to benefit them and harm their opponents.
So, he makes this mistake and then builds on it by claiming that censoring spam violates free speech principles, but that's not really true. And then he tries to equate controversy with spam, saying it's not so much about the content itself but how it affects users. And that, I think, leads into another major problem in society.
There has always been a tension between someone getting reasonably versus unreasonably offended by something. However, in today's society, thanks in part to certain identitarian ideologies, along with a culture shift towards the worship or idolization of victimhood, we've given tremendous power to a few people to shut down speech by being offended, and vastly broadened what we consider reasonable offense versus unreasonable offense.
Both of these issues are ultimately cultural, but, at the same time, social media platforms have enough power to influence culture. If the new Twitter can define a less insane tolerance window and give more leeway for people to speak even if a small but loud or politically motivated minority of people get offended, then they will have succeeded in improving the culture and in improving content moderation.
And, of course, there is a third, and major elephant in the room. The government has been caught collaborating with tech companies to censor speech indirectly. This is a concrete violation of the first amendment, and, assuming Republicans gain power this election cycle, I hope we see government officials prosecuted in court over it.
I think that's a mischaracterization of what was written about spam.
The author wrote that most people don't consider banning spam to be free speech infringement because the act of moderating spam has nothing to do with the content and everything to do with the posting behavior in the communication medium.
The author then uses that point to draw logical conclusions about other moderation activity.
Leading with a strawman weakens your argument, I think.
Fortunately it's not a strawman. From the article:
=====
Moderating spam is very interesting: it is almost universally regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way illegal.
Spam actually passes the test of “allow any legal speech” with flying colors. Hell, the US Postal Service delivers spam to your mailbox.
When 1A discussions talk about free speech on private platforms mirroring free speech laws, the exceptions cited are typically “fire in a crowded theater” or maybe “threatening imminent bodily harm.”
Spam is nothing close to either of those, yet everyone agrees: yes, itʻs okay to moderate (censor) spam.
=====
He's saying directly that censoring spam is not supported by any free speech principle, at least as he sees it, and in fact our free speech laws allow spam. He also refers to the idea of "allow any legal speech" as the "free-speech"-based litmus test for content moderation, and chooses spam to show how this litmus test is insufficient.
What about my framing of his argument is a strawman? It looks like a flesh-and-blood man! I am saying that his litmus test is an invalid or inaccurate framing of what a platform that supports free speech should be about. Even if the government is supposed to allow you to say pretty close to whatever you want whenever you want, it's never been an expectation that private citizens have to provide the same support. Individuals, institutions, and organizations have always limited speech beyond what the government could enforce. Therefore, "free speech" has never meant that you could say whatever is legal and everyone else will just go along with it.
On the other hand, Elon Musk's simple remark of saying that he knows he's doing a good job if both political extremes are equally offended shows to me that he seems to understand free speech in practice better than this ex-Reddit CEO does! (https://www.quora.com/Elon-Musk-A-social-media-platform-s-po...)
For the record, I agree with your points in your original post regarding the nature of free speech and with regard to the Overton window for tolerable speech (if there is such a thing).
I disagree with the notion that Yishan made a mistake in how he wrote about spam. You used that as a basis for disclaiming his conclusions.
Yishan was not making a point about free speech, he was making the point that effective moderation is not about free speech at all.
A) saying moderation is not about free speech is, I think, making a point about free speech. Saying one thing is unrelated to another is making a point about both things.
B) Even framed this way, I think Yishan is either wrong or is missing the point. If you want to do content moderation that better supports free speech, what does that look like? I think Yishan either doesn't answer that question at all, or else implies that it's not solvable by saying the two are unrelated. I don't think that's the case, and I also think his approach of focusing less on the content and more on the supposed user impact just gives more power to activists who know how to use outrage as a weapon. If you want your platform to better support free speech, then I think the content itself should matter as much as or more than people's reactions to it, even if moderating by content is more difficult. Otherwise, content moderation can just be gamed by generating the appropriate reaction to content you want censored.
If you take a look and analyze the people that were fired, you will find developers who cannot code, people who run Bitcoin nodes on company electricity, people with no skills or qualification for what they do. Elon Musk is trying to implement a meritocracy, it remains to be seen if he will do it right or botch it.
I think you have to be quite credulous to engage in this topic of "twitter moderation" as if it's in good faith. It's not about creating a good experience for users, constructive debate, or even money. It's ALL about political influence.
> Iʻm heartened to know that @DavidSacks is involved.
I'm not. I doubt he is there because Twitter is like Zenefits; it's because his preoccupation over the last few years has been politics as part of the "New Right" (Thiel, Masters, Vance, etc.), running fundraisers for DeSantis and endorsing Musk's pro-Russian nonsense on Ukraine.
“ The entitled elite is not mad that they have to pay $8/month. They’re mad that anyone can pay $8/month.”
There must be quite a few people in here who are well versed in customer relations, at least in the context of a startup. Can anyone explain to me why Musk and Sacks seem to have developed the strategy of insulting their customers and potential customers?
I can think of two reasons:
1. They think Twitter has a big enough moat of obsessed people that they can get away with whatever they want.
2. They think that there really is a massive group of angry "normies" they can rile up to pay $8 a month for Twitter Blue. But isn't the goal of Twitter Blue, ironically, to get priority access to the "anointed elite"? For sure I'm not paying $8 a month to get access to the feeds of my friends and business associates.
David Sacks’ tweet does feel very Trumpian in a way though, which supports the notion of bringing Trump back and starting the free speech social network.
Aside from the weird elite baiting rhetoric, does this mean that blue checkmark no longer means "yes, this is that famous person/thing you've heard of, not an impersonator" but now just means "this person gave us 8 dollars?"
I think their general plan will be to discourage/silence influential left-wing voices with enough cover to keep the majority of the audience for an emboldened right-wing.
Thinking imaginatively, the proposal framed as "$8/mo or have your tweets deranked" is a deal they actively don't want left-wingers to take. They want to be able to derank their tweets with a cover of legitimacy.
The more they can turn this fee into a controversial "I support Musk" loyalty test, the more they can discourage left-wing / anti-Musk subscribers while encouraging right-wing / pro-Musk subscribers who will all have their tweets boosted.
It feels conspiratorial, but it's a fee that mostly upsets existing blue-tick celebrities, who should be the last group Twitter-the-business would want to annoy, yet they are exactly the influential left-wingers. If you look at who Musk picked fights with about it, e.g. AOC and Stephen King, that is even more suggestive of deliberate provocation.
Whether planned or not, I suspect that this is how it plays out.
I think because Elon and Co. are acting so dismissive and entitled. They're acting frickin weird. Admittedly, who you think sounds more entitled depends on your worldview. I do think the journalist reactions are strange, but probably just because they're reacting to something so strange.
Elon is hardly describing a vision for this new version of Twitter that people might be inspired to spend $8 for; yes, something vague about plebs vs nobility and half as many ads, but his biggest call to action has been "Hey, we need the money". They're acting so shitty to everyone that it's hardly a surprise people aren't responding with confidence. Plus I can't help but feel that these people are really just echoing what everyone else is thinking: why am I paying $8 a month for Twitter?
> Elon is hardly describing a vision for this new version of Twitter that people might be inspired to spend $8 for; yes, something vague about plebs vs nobility,
Yeah, Elon calls the status quo a “lords & peasants system” and says that to get out of that model Twitter should have a two-tier model where the paid users get special visual flair, algorithmic boosts in their tweets' prominence and reach, and a reduced-ads feed experience compared to free users.
When the context of the discussion is twitter moderation in the wake of Musk's takeover and who his team is, it's already political. For Yishan to pump up Sacks and his confidence in him to fix moderation, without acknowledging that today he is a political operator, is close to dishonest. Contributing this information is hopefully helpful.
The public thinks about moderation in terms of content. Large social networks think in terms of behavior. Like let's say I get a chip on my shoulder about... the Ukraine war, one direction or another. And I start finding a way to insert my opinion on every thread. My opinion on the Ukraine war is fine. Any individual post I might make is fine and contextual to the convo. But I'm bringing down the over-all quality of discussion by basically spamming every convo with my personal grievance.
Some kinds of content also get banned, like child abuse material and other obvious things. But the hard part is the "behavior"-type bans.
> Any individual post I might make is fine and contextual to the convo. But I'm bringing down the over-all quality of discussion by basically spamming every convo with my personal grievance.
Isn't this how a healthy society functions?
Political protests are "spam" under your definition. When people are having a protest in the street, it's inconvenient, people don't consent to it, it brings down the quality of the discussion (Protestors are rarely out to have a nuanced conversation). Social Media is the public square in the 21st century, and people in the public square should have a right to protest.
Yeah well, Yishan failed miserably at topic moderation on Reddit, and generally speaking Reddit has notoriously awful moderation policies that end up allowing users to run their own little fiefdoms just because they name-squatted earliest on a given topic. Additionally, Reddit (also notoriously) allowed some horrendously toxic behavior to continue on its site (jailbait, fatpeoplehate, the_donald, conservative currently) for literal years before taking action, so even when it comes to basic admin activity I doubt he's the guy we should all be listening to.
I think the fact that this is absurdly long and wanders at least twice into environmental stuff (which is super interesting btw, definitely read those tangents) kind of illustrates just how not-the-best Yishan is as a source of wisdom on this topic.
Very steeped in typical SV "this problem is super hard so you're not allowed to judge failure or try anything simple" talk. Also it's basically an ad for Block Party by the end (if you make it that far), so... yeah.
Yeah, it's interesting how much reddit's content moderation at a site-wide level is basically the opposite of what he said in this thread. Yeah, good content moderation should be about policing behavior... so why weren't notorious brigading subs banned?
Reading these threads on Twitter is like listening to a friend on a bad MDMA trip replaying his whole emotional life to you in a semi-incoherent, diarrhea-like stream of thoughts.
Please write a book, or at the very least an article... posting on Twitter is like writing something on a piece of paper, showing it to your best friend and worst enemy, and then throwing it in the trash.
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
It was a massive stream of tweets, with two long digressions, and several embeds. The only thing that would have made it worse is if every tweet faded in on scroll.
If we're going to pedantically point out rules, why don't we add one that says "No unrolled Twitter threads."?
It is not pedantic, it is people derailing possibly interesting discussion of the content with completely off-topic noise discussion of the presentation. If you do not like the presentation there are ways to change it.
If people are constantly complaining about the same thing, the thing should be fixed. Then you'll have no more complaints about the thing or people complaining about complaints about the thing. I'm tackling world peace next.
If we're going to pedantically point out rules, why don't we cook hamburgers on the roof of parliament? Or do something else that isn't pedantically pointing out rules?
What got me was him weaving in (2-3 times) self promotion tweets of some tree planting company he funds/founded(?). He basically personally embedded ads into his thread, which is actually kind of smart I suppose, but very confusing as a reader.
Kind of genius to put it in the middle. Most normal people write a tweet that blows up and then have to append "Check out my soundcloud!" on the end. Or an advert for the nightsky lamp.
At the same time (as much as I strongly support climate efforts and am impressed by his approach, so I give him a pass in this instance), that 'genius move' sort of needs to be flagged as his [Category #1 - Spam], which should be moderated. It really is inserting off-topic info into another thread.
The saving grace may be that it is both small enough in volume and sufficiently interesting to his audience to stay just below that threshold.
Imagine it is a text and you can mark any paragraph. You can save that paragraph, like it, or even reply to it. So the interaction can grow like tentacles (edit: or rather like a tree).
Right now, I could make a comment on either your first or second paragraph, or on your entire comment. However, there is no way to determine which category my reply falls into until you have read it entirely. On a platform like Twitter, where there can be up to 100,000 comments on a given piece of content, this is very useful.
Better yet, it allows the author himself to dig down into a tangent. In theory, someone could create an account and then have all of their interactions stay on the same tree without ever cutting it off, essentially turning their account into an interconnected "wiki" where everyone can add information.
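A minimal sketch of that tree idea, purely illustrative and nothing like Twitter's real data model: every tweet (or paragraph) is a node, any node can be liked or replied to directly, and the conversation can branch from any point.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        author: str
        text: str
        likes: int = 0
        replies: list = field(default_factory=list)

        def reply(self, author: str, text: str) -> "Node":
            child = Node(author, text)
            self.replies.append(child)   # the conversation branches here
            return child

    # e.g. reply only to the second point of a thread, leaving the rest untouched
    root = Node("author", "First point of the thread")
    second = root.reply("author", "Second point of the thread")
    second.reply("reader", "A reply scoped to the second point only")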
With enough time your brain no longer registers the metadata around the tweet. If you ignore it and read it as an entire text it is not very different from a regular article or long form comment: https://threadreaderapp.com/thread/1586955288061452289.html
What you're saying is that we should optimise the way we debate things to please the algorithm and maximise user engagement instead of maximising quality content and encouraging deep reflection.
The truth is people can't focus for more than 15 seconds, so instead of reading a well-researched and deep article or book that might offer sources, nuance, etc., they'll click "like" and "retweet" on whoever vomited something that remotely goes their way while ignoring the rest.
> If you ignore it and read it as an entire text it is not very different from a regular article or long form comment
It is extremely different, as each piece is written as an independent 10-second thought ready to be consumed and retweeted. Reading it on threadreaderapp makes it even more obvious: your brain needs to work 300% harder to process the semi-incoherent flow. Some blogs written by 15-year-olds are more coherent and pleasant to read than this.
> What you're saying is that we should optimise the way we debate things to please the algorithm and maximise user engagement instead of maximising quality content and encouraging deep reflection
Not at all; in my opinion, being able to interact with every piece of an exchange allows you to dig down into specific points of a debate.
There is a soft stop at the end of every tweet because it's a conversation and not a presentation. It's an interactive piece of information and not a printed newspaper. You can interact during the thread and it might change its outcome.
When you are the person interacting, it's similar to a real-life conversation. You can cut someone off and talk about something else at any time. The focus of the conversation will shift for a short moment and then come back to the main topic.
For someone arriving after the fact, you have a time machine of the entire conversation.
---
About the link: it is only the first result on Google; I don't use those services, so it's not me vouching for this specific one. I also use ad blockers at all levels (from Pi-hole to browser extensions to VPN-level blocking), so I don't see ads online.
If I go meta for a second, this is the perfect example of how breaking ideas into different tweets can be useful.
Were I to share your comment on its own, it would contain that information about a link that is not useful to anyone but you and me.
Someone reading our comments has to go through this exchange about the ads and this product. If instead this were two tweets, it would have allowed us to comment on each in parallel. If it was HN, imagine if you had made two replies under my comment and we could have commented under each. However, that's the wrong way to do it on this platform.
> Right now, I could make a comment on either your first or second paragraph, or on your entire comment. However, there is no way to determine which category my reply falls into until you have read it entirely
That is exactly what quoting does, and is older than the web itself.
> Right now, I could make a comment on either your first or second paragraph, or on your entire comment. However, there is no way to determine which category my reply falls into until you have read it entirely. On a platform like Twitter, where there can be up to 100,000 comments on a given piece of content, this is very useful.
Oh, look, I have managed to reply to your second paragraph without having to use twatter, how quaint!
There would be a lot of noise if everyone left 5 comments under every comment. This is not the way HN is built. Commenting too quickly even blocks you from interacting.
This is brilliant, I had never thought about it like this before. I'd maybe say grow like a tree rather than tentacles, although you might have a point in that if you're speaking with the wrong person it could be pretty cthulonic.
These things seem to be fine when it's 5-6 tweets in a coherent thread. There's even that guy who regularly writes multi-thousand-word threads that are almost always a good read.
Just NOT on Twitter. I gave up on Twitter and signed out of it years ago and refuse to sign back in.
I spent a good hour of my life looking for ways to read this thread. I personally know Yishan and value the opinions he cares to share, so I knew this would be interesting if I could just manage to read it.
Replacing the URL with nitter.net helped, but honestly it was most cohesive in threadreaderapp, although that missed some of the referenced sidebar discussions (like the appeal to Elon not to waste his mental energy on things that aren't real atom problems).
"Hereʻs the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:
It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful."
And also this Chinese Room argument: "once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didnʻt understand the language they were being spoken in?"
In other words, there are certain kinds of post which trigger escalating pathological behavior - more posts - which destroys the usability of the platform for bystanders by flooding it. He argues that it doesn't matter what these posts mean or whose responsibility the escalation is; it's just the simple physics of "if you don't remove these posts and stop more from arriving, your forum will die".
I would argue that the signal-to-noise ratio outcome-based reason is the principle: it's off-topic. You could also argue another principle: you're censoring a bot, not a human.
My guess is that people can like individual posts.
The positives of that are:
a) the possibility to like just one post, or 2, 3… depending on how good the thread is
b) the fine-grained way of liking gives the algorithm much better signals for whom to show a thread to and, even better, for first showing just one interesting post out of that thread (also, people can more easily quote or retweet individual parts of a thread)
I've never understood this, it's just reading: you start at the beginning of tweet, you read it, then go to the next tweet and read it. How is that different from reading paragraphs?
The amount of UI noise around each tweet and how much you have to scroll, coupled with the need to trigger new loads once Twitter has truncated the number of replies and also HOW MUCH YOU HAVE TO SCROLL makes this a terrible experience
I understand why people tweet rather than write blogs. Twitter gives more visibility and is a far lower barrier of entry than sitting down and writing an article or a blog. That Twitter hasn't solved this problem after years of people making long threads and this being a big way that people consume content on the platform is a failure on their part. Things like ThreadReader should be in-built and much easier to use. I think they acquired one of these thread reader apps too
I think this is important enough to highlight. Tweets are very different from other forms of communication on the internet. You can see it even here on HN in the comments section.
Twitter railroads the discussion into a particular type by the form of discourse. Each tweet, whether it's meant to or not, is more akin to a self-contained atomic statement than a paragraph relating to a whole. This steers tweets into short statements of opinion masquerading as humble, genuine statements of fact. Oftentimes each tweet is a simple idea that's given more weight because it's presented in tweet form. An extreme example is the joke thread listing out each letter of the alphabet [0] [1].
When tweets are responding to another tweet, it comes off as one of the two extreme choices of being a shallow affirmation or a combative "hot take".
Compare this with the comments section here. Responses are, for the most part, respectful. Comments tend to address multiple points at once, often interweaving them together. When text is quoted, it's not meant as a hot take but a refresher on the specific point that they're addressing.
The HN comments section has its problems but, to me, it's night and day from Twitter.
I basically completely avoid responding to most everything on Twitter for this reason. Anything other than a superficial "good job" or "wow" is taken as a challenge and usually gets a nasty response. I also have to actively ignore many tweets, even from people I like and respect, because the format overemphasizes trivial observations or opinions.
The post says that moderation is first and foremost a signal-to-noise curation. Writing long form content in a Twitter thread greatly reduces the signal-to-noise ratio.
But it is also an example of how moderation, or the lack thereof, helps serve a particular end goal. E.g. HackerNews is a pretty well-moderated forum; however, sometimes the content (PC being related to technology) is within the rules, but the behavior it elicits in other users is detrimental to the overall experience.
What’s funny is he’s arguing that moderation should be based on behavior, not content, and that you could identify spam even if it was written in Lorem Ipsum.
If this thread and its self-referential tweeting were written in Lorem Ipsum, it would definitely look like spam to me.
So I guess I disagree with one of the main points. For me, the content matters much more than the behavior. Pretty sure that’s how the Supreme Court interprets 1A rights as well. The frequency and intensity of the speech hasn’t played a part in any 1A cases that I can remember, it’s exclusively if the content of the speech violates someone’s rights and then deciding which outcome leads to bigger problems, allowing the speech or not.
There’s a study to be done on the polarization around twitter threads. I have zero problem with them and find overall that lots of great ideas are posted in threads, and the best folks doing it end up with super cogent and well written pieces. I find it baffling how many folks are triggered by them and really hate them!
This is likely because threads are a "high engagement" signal for Twitter and therefore prone to being gamed.
There are courses teaching people how to game the Twitter algo. One of those took off significantly in the past 18 months. You can tell by the number of amateurs creating threads on topics far beyond their reach. The purpose of these threads is for it to show up on people's feeds under the "Topic" section.
For example, I often see random posts from "topics" Twitter thinks I like (webdev, UI/UX, cats, old newspaper headlines). I had to unsubscribe from "webdev" and "UI/UX" because the recommended posts were all growth hackers. It wasn't always that way.
I'm not the only one; others have commented on it as well, including a well-known JS developer.
> This is likely because threads are a "high engagement" signal for Twitter and therefore prone to being gamed.
You mean this is the reason folks respond differently to the form of a Twitter thread? This one is definitely not from a growth hacker, but folks here still seem to hate it.
And hilariously, he starts with "How do you solve the content moderation problems on Twitter?" and never actually answers it. He just rambles on with a dissection of the problem. Guess we now know why content moderation was never "solved" at Reddit, nor will it ever be.
He kinda did, in a roundabout way: the "perfect" moderation, even if possible, will turn it into a nice and cultured place to have a discussion, and that doesn't bring controversy or sell ads.
You would have far fewer media "journalists" making a fuss about what someone said on that social network, and you would have problems just getting it to be popular, let alone displacing any of the big ones. It would maybe be possible with an existing one, but that's a ton of work, and someone needs to pay for that work.
And it's entirely possible for a smaller community to have that, but the advantage is that a small community about X will also have moderators who care about X, so
* any on-topic bollocks can be spotted by mods and is no longer an "unknown language"
* any off-topic bollocks can just be dismissed with "this is a forum about X, if you don't like it go somewhere else"
> the "perfect" moderation, even if possible, will turn it into nice and cultured place to have discussion and that doesn't bring controversy and sell ads.
That's not a solution though, since every for-profit business is generally seeking to maximize profit, and furthermore we already knew this to be the case - nothing he is saying is novel. I guess that's where I'm confused.
I have actually worked in this area. I like a lot of Yishan's other writing but I find this thread mostly a jumbled mess without much insight. Here are a couple assorted points:
>In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didnʻt understand the language they were being spoken in?
I'm not sure what the big point is here but there are a couple parts to how this works in the real world:
1) Some types of content removal do not need you to understand the language: visual content (images/videos), legal takedowns (DMCA).
2) Big social platforms contract with people around the world in order to get coverage of various popular languages.
3) You can use Google Translate (or other machine translation) to review content in some languages that nobody working in content moderation understands.
But some content that violates the site's policies can easily slip through the cracks if it's in the right less-spoken language. That's just a cost of doing business. The fact that the language is less popular will limit the potential harm but it's certainly not perfect.
>Hereʻs the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:
>
>It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful.
Well, that's the same principle that underlies all content moderation: "allowing this content is more harmful to the platform than banning it". You can go into all the different reasons why it might be harmful but that's the basic idea and it's not unprincipled at all. And not all spam is banned from all platforms--it could just have its distribution killed or even be left totally alone, depending on the specific cost/benefit analysis at play.
You can apply the same reasoning to every other moderation decision or policy.
The main thrust of the thread seems to be that content moderation is broadly intended to ban negative behavior (abusive language and so on) rather than to censor particular political topics. To that I say, yeah, of course.
FWIW I do think that the big platforms have taken a totally wrong turn in the last few years by expanding into trying to fight "disinformation", and that's led to some specific policies that are easily seen as political (e.g. policies about election fraud claims or covid denialism). If we're just talking about staying out of this business, then sure, give it a go. High-level blabbering about "muh censorship!!!" without discussion of specific policies is what you get from people like Musk or Sacks, though, and that's best met with an eye roll.
If I wanted quality content, I would just do the Something Awful approach and charge $x per account.
If I wanted lots of eyeballs (whether real or fake) to sell ads, I would just pay lip service to moderation issues, while focusing on only moderating anything that affects my ability to attract advertisers.
But what I want, above all, because I think it would be hilarious to watch, is for Elon to activate Robot9000 on all of Twitter.
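For anyone unfamiliar: ROBOT9000 (originally written by Randall Munroe to moderate an IRC channel) mutes anyone who posts something that has already been said before. A toy version of the core check, with every name invented here, might look like:

    import re

    _seen = set()

    def normalize(text: str) -> str:
        # lowercase and collapse punctuation so trivial edits don't count as "original"
        return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

    def allow_message(text: str) -> bool:
        """Allow only messages nobody has posted before (in normalized form)."""
        key = normalize(text)
        if key in _seen:
            return False          # duplicate: mute or drop it
        _seen.add(key)
        return True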
Robot9000 really didn't improve quality in the places it was deployed, though; people just gamed it.
edit: that said, I think Something Awful arguably has the best approach to this, does it not? The site is over 20 years old at this point. That is absolutely ancient compared to all the public message forums that exist.
I agree. I think the SA approach is the best I've ever seen. But as I'm flippantly pointing out: it only works if you really only care about fostering quality social interaction.
The mistake SA is making is not fixating on revenue as the chief KPI. ;)
SA is also not the font of internet culture that it once was, either, so clearly the price of admission is not sufficient to make it successful. It seems to me it was, at most, a partial contributor.
I think it's an interesting argument about what SA is now. I hear membership is growing again. It has a dedicated group there. I think that's what's most interesting. That's really not much different than a Reddit board in theory. But Reddit boards seem to come and go constantly and suffer from all sorts of problems. I am not a redditor, but SA seems like a better board than a specific subreddit.
My point is that maybe what SA is now is the best you can hope for on the internet, and it's going strong(?).
Also, SA has "underperformed" as an internet platform: Lowtax notoriously failed to capitalize on the community and grow it into something bigger (and more lucrative). So it remains a large old-school vBulletin-style internet forum instead of a potential Reddit or something even greater, albeit with its culture and soul intact.
Not suggesting you meant it this way, but there's an amusing "money person with blinders on" angle to the statement. It's the "what's the point of anything if you're not making money?!"
Perhaps, but it's not even an HN/startup point of view. Goons regularly make fun of Lowtax for squandering the large audience (some of whom included popular Internet personalities who would go on to reach greater success and celebrity) and the lead SA had. It all plays into the "loser admin" mythos.
This really was an outstanding read and take on Elon, Twitter, and what's coming up.
But it literally could not have been posted in a worse medium for communicating this message. I felt like I had to pat my head and rub my tummy at the same time reading through all this, and to share it succinctly with colleagues resulted in me spending a good 15 minutes cutting and pasting the content.
I've never understood people posting entire blog type posts to.... Twitter.
It was incoherent rambling, and none of it really works for Twitter.
Twitter is ultimately at the behest of its advertisers who are constantly on a knife edge about whether to bother using it or not. We have already seen GM and L'Oreal pull ad spend and many more will follow if their moderation policies are not in-line with community standards.
If Musk wants to make Twitter unprofitable then sure, relax the moderation; otherwise he might want to keep it the same.
Did he begin answering the question, drop some big philosophical terms, and then just drift off into "here is what I think we should do about climate change in 4 steps"...?
Yes, he is leveraging his audience. This is like going to a conference with a big-name keynote, but the lunch sponsor gets up and speaks for 5 minutes first.
We're on the thread to read about content moderation. But since we're there, he's going to inject a few promos about what he is working on now. Just like other ads, I skimmed past them until he got back on track with the main topic.
he does come back to the point after his little side-piece about trees, but after a while I didn't feel he was actually providing any valuable information, so I stopped reading
The best part of his engrossing Twitter thread is that he inserts a multitweet interstitial "ad" for his passion project promoting reforestation right in the middle of his spiel.
I'm sure it works across the population at large but I avoid doing business with people who engage in that kind of manipulation. They're fundamentally untrustworthy in my experience.
"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."
I find it interesting that dang's post about not just HN rules but also his personal feelings about yishan's thread:
* puts both in the same post: there clearly is no personal boundary with dang
* has direct replies disabled, now that it sits at the top of the comments section
Whenever dang tries to "correct the record" or otherwise engage in frivolous squabbles with other HN commenters, I really hope this article pops up in the linked material. Some may argue that yishan here is doing inappropriate self-promotion and that might undermine trust in his message. I hope HN readers notice how partial dang is, how he's used HN technical features to give his own personal feelings privilege, and the financial conflicts of interest here.
I'm amazed at the number of people in this thread who are annoyed that someone would insert mention of a carbon capture initiative into an unrelated discussion. The author is clearly tired of answering the same question, as stated in the first tweet, and is desperately trying to get people to think more critically about the climate crisis that is currently causing the sixth mass extinction event in the history of the planet.
Being annoyed that someone "duped" you into reading about the climate crisis is incredibly frustrating to activists, because it's SO important to be thinking about and working on, and yet getting folks to put energy into even considering the climate crisis is like pulling teeth.
I wonder if any of the folks complaining about the structure of the tweets have stopped to think about why the author feels compelled to "trick" us into reading about carbon capture.
The simple fact of the matter is that too many people are either resigned to a Hobbesian future of resource wars, or profiting too much from the status quo to go beyond a perfunctory level of concern.
$44bn of real-world cash was just spent on Twitter, and HN users alone have generated tens of thousands of comments on the matter.
How many climate tech related stories will have the same level of interest?
To add another perspective (albeit with politics rather than climate change):
I worked in political communications for a while. Part of the reason it was so toxic to my mental health and I burnt out was that it was nearly impossible to avoid politics online, even in completely unrelated spaces. So I'd work 40 hours trying to improve the situation, log in to discuss stupid media/fan shit, and have to wade through a bunch of stuff that reminded me how little difference I was making, stuff assuming I wasn't involved/listening, etc. It was INFURIATING. Yes, I had the option to not go online, but I'm a nerd living in a small city. There aren't enough people here who share my interests to go completely offline.
Staying on topic helps people who are already involved in important causes to step away and preserve their mental health, which in turn makes them more effective.
Twitter is text-based. Video games have had text-based profanity filters for online play for years.
Make it easy for users to define a regex list saved locally. On the backend, train a model that filters images of gore and genitals. Aim users who opt in to that experience at that filtered stream.
This problem does not require a long winded thesis.
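A sketch of the client-side half of that idea; the pattern list, the tweet dict shape, and the function names are placeholders, not Twitter's actual data model:

    import re

    # user-defined mute patterns, stored locally
    user_mute_patterns = [
        r"(?i)\bcrypto\s*giveaway\b",
        r"(?i)\bhot\s+singles\b",
    ]
    _compiled = [re.compile(p) for p in user_mute_patterns]

    def filter_timeline(tweets):
        """Yield only tweets that match none of the user's mute patterns."""
        for tweet in tweets:
            if not any(rx.search(tweet["text"]) for rx in _compiled):
                yield tweet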
Because we focus on abstract problem statements and coded appeals to authority (as if an ex-Reddit CEO is that special; there are a few), rather than concrete engineering?
User demand to control what they see is there. It’s why TV was successful; don’t like what’s on History? Check out Animal Planet.
Tech CEOs need their genius validated and refuse to concede anyone else knows what’s best for themselves. What everyone else sees is a problem for a smurt CEO to micromanage to death, of course.
If it was for some random app or gadget I’d be mad but it’s literally trying to save humanity so I give it a pass. We need to be talking about mitigating and surviving catastrophic climate change more, not less.
more like "oh never click on yishan threads ever again, this guy wants to put ads in twitter threads, who has time and patience for that? not me."
Brilliant? For immediately getting large amounts of readers to click away and discrediting himself into the future, sure that might be brilliant I guess.
It makes him seem desperate for attention and clueless.
tldr tangential babbling that HN protects and wants us to admire...because reddit YC darlings. it almost makes me feel nostalgic.
Why are we to take yishan as an authority on content moderation? Have you BEEN to reddit?! The kind of moderation of repetitive content he's referring to is clearly not done AT ALL.
He does not put forth any constructive advice. Be "operationally excellent". Ok, thanks. You're wrong about spam. You're wrong about content moderation. Ok, thanks. Who is his audience? He's condescending to the people who are dialed into online discourse in between finding new fun ways to plant trees and designing an indulgent Hawaiian palace. I expected more insight, to be honest. But time and time again we find the people at the top of internet companies are disappointingly common in their perspective on the world. They just happened to build something great once and it earned them a lifetime soapbox ticket.
> Machine learning algorithms are able to accurately identify spam
Nope. Not even close.
> and itʻs not because they are able to tell itʻs about Viagra or mortgage refinancing
Funny, because they can't even tell that.
Which is why mail is being ruined by Google and Microsoft. Yes, you could argue that they have incentives to do just that. But that doesn't change the fact that they can't identify spam.
My experience has been that Google successfully filters spam from my Inbox, consistently.
I get (just looked) 30-40 spam messages a day. I've been on Gmail since the invite-only days, so I'm on a lot of lists, I guess...
Very, very rarely do they get through the filter.
I also check it every couple of days to look for false-positives, and maybe once a month or less I find a newsletter or automated promo email in there for something I was actually signed up for, but never anything critical.
You're talking way too hyperbolically to be taken seriously.
Yes, GMail does, in fact, “have a clue.” They do pretty well. They’re not perfect, and I have specific complaints, but to pretend they’re totally clueless and inept discredits anything else you’re saying.
Just as saying that machine learning can identify spam discredits anything else ex-reddit CEO says.
I'm sure Gmail has a clue from their point of view, but that doesn't align with mine (nor, I'd argue, with most of their users'). Their view also, coincidentally, happens to strengthen their hold on the market, but who cares?
Key word here: ex (joking)... but seriously I'm absolutely baffled why someone would look to a former reddit exec for advice on moderation.
I guess you could say that they have experience, having made all the mistakes, and figured it out through trial and error! This seems to be his angle.
What I got from the whole reddit saga is how horrible the decision making was, and won't be looking to them for sage advice. These people are an absolute joke.
Who is doing a good job at scale? Is there really anyone we can look to other than people who “have experience, having made all the mistakes, and figured it out through trial and error”?
Sorry if this wasn't clear, but that's just his perspective. Mine is that they're a bunch of clowns with little to offer anyone. Who cares what this person thinks more than you, me, or a player for the Miami Dolphins?
Twitter is going to have to moderate at least exploitative and a ton of abusive content, eventually. I don't understand how this rant is helpful in the slightest. Seemed like a bunch of mumbo jumbo.
You do have a good point about there not being very many good actors, if any.
Who cares what this person thinks? They actually have experience tackling the problem. You or I have never been in a position of tackling the problem. Of course I am interested in the experience of someone who has seen this problem inside and out.
These random detours into climate-related topics are insanely disruptive of an otherwise interesting essay. I absolutely hate this pattern. I see what he's trying to do - you don't want to read about climate change but you want to read this other thing so I'm going to mix them together so you can't avoid the one if you want the other - but it's an awful dark pattern and makes for a frustrating and confusing reading experience. I kept thinking he was making an analogy before realizing he was just changing topics at random again. It certainly isn't making me more interested in his trees project. If anything I'm less interested now.
Since the argument was so well-structured, the interim detour to climate related topics was odd. The very argument was that spam can be detected by posting behaviors, yet the author engaged in those for his favored topic.
>No one argues that speech must have value to be allowed (c.f. shitposting).
>Hereʻs the answer everyone knows: there IS no principled reason for banning spam.
The whole thread seems like it revolves around this line of reasoning, which strawmans what free speech advocates are actually arguing for. I've never heard of any of them, no matter how principled, fighting for the "right" of spammers to spam.
There is an obvious difference between spam moderation and content suppression. No recipient of spam wants to receive spam. On the other hand, labels like "harmful content" are most often used to stop communication between willing participants by a third party who doesn't like the conversation. They are fundamentally different scenarios, regardless of how much you agree or disagree with specific moderation decisions.
By ignoring the fact that communication always has two parties, you construct a broken mental model of the whole problem space. The model will then lead you astray in analyzing a variety of scenarios.
In fact, this is a very old trick of pro-censorship activists. Focus on the speaker, ignore the listeners. This way when you ban, say, someone with millions of subscribers on YouTube you can disingenuously pretend that it's an action affecting only one person. You can then draw false equivalency between someone who actually has a million subscribers and a spammer who sent a message to million email addresses.
A fun idea that I'm certain no one has considered with any level of seriousness: don't moderate anything.
Build the features to allow readers to self-moderate and make it "expensive" to create or run bots (e.g., make it so API access is limited without an excessive fee, limit screen scrapers, etc). The "pay to play" idea will eliminate an insane amount of the junk, too. Any free network is inherently going to have problems of chaos. Make it so you can only follow X people with a free account, but upgrade to follow more. Limit tweets/replies/etc based on this. Not only will it work, but it will remove the need for all of the moderation and arguments around bias.
As for advertisers (why any moderation is necessary in the first place beyond totalitarian thought control): have different tiers of quality. If you want a higher quality audience, pay more. If you're more concerned about broad reach (even if that means getting junk users), pay less. Beyond that, advertisers/brands should set their expectations closer to reality: randomly appearing alongside some tasteless stuff on Twitter does not mean you're vouching for those ideas.
> Build the features to allow readers to self-moderate
This is effectively impossible because of the bullshit asymmetry principle[1]. It's easier to create content that needs moderation than it is to moderate it. In general, there is a fundamental asymmetry to life that it takes less effort to destroy than it does to create, less work to harm than heal. With a slightly sharpened piece of metal and about a newton of force, you can end a life. No amount of effort can resurrect it.
It simply doesn't scale to let bad actors cause all the harm they want and rely on good actors to clean up their messes after the fact. The harm must be prevented before it does damage.
> make it "expensive" to create or run bots (e.g., make it so API access is limited without an excessive fee, limit screen scrapers, etc).
The simplest approach would be no API at all, but that won't stop scammers and bad actors. It's effectively impossible to prohibit screen scrapers.
> Make it so you can only follow X people with a free account, but upgrade to follow more. Limit tweets/replies/etc based on this. Not only will it work, but it will remove the need for all of the moderation and arguments around bias.
This is, I think, the best idea. If having an identity and sharing content costs actual money, you can at least make spamming not be cost effective. But that still doesn't eliminate human bad actors griefing others. Some are happy to pay to cause mayhem.
There is no simple technical solution here. Fundamentally, the value proposition of a community is the other good people you get to connect to. But some people are harmful. They may not always be harmful, or may be harmful only to some people. For a community to thrive, you've got to encourage the good behaviors and police the bad ones. That takes work and human judgement.
> But some people are harmful. They may not always be harmful, or may be harmful only to some people.
This is a fundamental reality of life that cannot be avoided. There is no magical solution (technical or otherwise) to prevent this. At best, you can put in some basic safeguards (like what you/I have stated above) but ultimately people need to learn to accept that you can't make everything 100% safe.
Also, things like muting/blocking work but the ugly truth is that people love the negative excitement of fighting online (it's an outlet for life's pressure/disappointments). Accepting that reality would do a lot of people a lot of good. A staggering amount of the negativity one encounters on social media is self-inflicted by either provoking or engaging with being provoked.
1. There are plenty of places online that "don't moderate anything". In fact, nearly all of the social networks started off that way.
The end result is … well, 4Chan.
2. "Self-moderation" doesn't work, because it's work. User's don't want to have constantly police their feeds and block people, topics, sites, etc. It's also work that never ends. Bad actors jump from one identity to the next. There are no "static" identifiers that are reliable enough for a user to trust.
3. Advertisers aren't going to just "accept" that their money is supporting content they don't want to be associated with. And they're also not interested in spending time white-listing specific accounts they "know" are good.
And? Your opinion of whether that's bad is subjective, yet the people there are happy with the result (presumably, as they keep using/visiting it).
> Self-moderation" doesn't work, because it's work.
So in other words: "I'm too lazy to curate a non-threatening experience for myself which is my responsibility because the offense being taken is my own." Whether or not you're willing to filter things out that upset you is a personal problem, not a platform problem.
> Advertisers aren't going to just "accept" that their money is supporting content they don't want to be associated with.
It's not. Twitter isn't creating the content, nor are they financing the content (e.g. like a Netflix-type model). It's user-generated, which is completely random and subject to chaos. If they can't handle that, they shouldn't advertise there (hence why a pay-to-play option is best, as it prevents a revenue collapse for Twitter). E.g., if I'm selling crucifixes, I'm not going to advertise on slutmania.com
---
Ultimately, people need to quit acting like everything they come into contact with needs to be respectful of every possible issue or disagreement they have with it. It's irrational, entitled, and childish.
1. I didn't imply whether it was good or bad, just that the product you're describing already exists.
2. It's a platform problem. If you make users do work they don't want to do in order to make the platform pleasant to use, they won't do the work, the platform will not be pleasant to use, and they'll use a different platform that doesn't make them do that work.
3. "If they can't handle it, they shouldn't advertise there." Correct! They won't advertise there. That's the point.
There are already unmoderated, "you do the work, not us", "advertisers have to know what they're getting into" platforms, and those platforms are niche, with small audiences, filled with low-tier/scam ads and are generally not profitable.
> I didn't imply whether it was good or bad, just that the product you're describing already exists.
A product exists with those properties, but not something like Twitter.
> It's a platform problem.
It's not. I don't use stuff like 4chan because it's not of interest to me but it is of interest to others. There's zero requirement for Twitter to be a universally acceptable platform (that delusion is why there continues to be issues around moderation), just like 4chan needn't cater to everyone.
> There are already unmoderated, "you do the work, not us", "advertisers have to know what they're getting into" platforms
Right. And there's no requirement for Twitter to offer advertising. That's why I think it'd be wise for them to adopt a pay-to-play model instead of chasing that rabbit.
You seem to be demanding that Twitter adopt policies and features despite the fact the market has shown those policies and features will shrink their user base and their profit.
Demanding? No. I think it's a smart business decision and removes a ton of overhead and stress.
And no, App.net was Dalton Caldwell's halfhearted attempt (the UI was okay at best) after Twitter already had the network effect in motion (six years after Twitter launched).
There are no successful "pay for access" social media networks, and the argument that Twitter's dominance some how held App.net back ignores all the other social networks that succeeded despite being founded after Twitter.
Yes. The reason a social network works is because of its brand, adoption, and network effects—not features (App.net sounds like a Boomer social network).
Twitter is so well-established that introducing a premium paid version would be de-facto successful—if done properly, not like Twitter Blue—for producing revenue. Especially because so many people rely on it to communicate.
A rough example:
Twitter Basic = Free. Follow up to 50 people. Bookmark up to 100 tweets. 140 character limit. Ad supported.
Twitter Pro = $5/month. Follow up to 500 people. Unlimited bookmarking. 280 character limit. Up to 10 minute videos. Customized ads.
Twitter Business = $10/month. Follow up to 2500 people + all of the above.
Twitter Elite = $15/month. Elite Checkmark, unlimited follows, all of the above, + 420 character limit and up to 30 minute videos.
---
It can certainly be done, but it requires creativity and speed. Both of which waved bye bye to Silicon Valley about 10 years ago so it will likely be a mediocre version of the above at best.
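The tier idea above, expressed as a plain config table a backend could enforce; the numbers just mirror the rough example (the Elite checkmark is omitted for brevity), and None stands for "unlimited":

    TIERS = {
        "basic":    {"price": 0,  "max_follows": 50,   "max_bookmarks": 100,
                     "char_limit": 140, "max_video_min": 0,  "ads": "standard"},
        "pro":      {"price": 5,  "max_follows": 500,  "max_bookmarks": None,
                     "char_limit": 280, "max_video_min": 10, "ads": "customized"},
        "business": {"price": 10, "max_follows": 2500, "max_bookmarks": None,
                     "char_limit": 280, "max_video_min": 10, "ads": "customized"},
        "elite":    {"price": 15, "max_follows": None, "max_bookmarks": None,
                     "char_limit": 420, "max_video_min": 30, "ads": "customized"},
    }

    def can_follow(tier: str, current_follow_count: int) -> bool:
        limit = TIERS[tier]["max_follows"]
        return limit is None or current_follow_count < limit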
Social networks work because people enjoy using them. As people tell their friends about the network, they use it, and network effects allow it to grow. As it grows, its brand is established.
Twitter is a distant fourth- or fifth-place network because, as it is, most people do not enjoy using it. Its brand is widely seen as synonymous with toxic parts of our culture.
And your plan is to make it worse and then charge people to use it.
It really depends on how you use it. Just based on your attitude it's clear you're engaging with bad stuff and then blaming that on the greater experience of the app/network as a whole. If you follow a bunch of politics stuff (which is inherently inflammatory) or seek out arguments with people you know you disagree with, yeah, you're not going to have a good time.
You can only follow bible quotes and motivation accounts and not once encounter any "toxic" behavior. It's 100% up to the user. Ignoring that is just ignoring reality (which like I said earlier in the thread, most people need to take responsibility for, but won't).
Usenet and IRC used to be self-moderated. The mods in each group or channel would moderate their own userbase, ban people who were causing problems, step in if things were getting too heated. At a broader level net admins dealt with the spam problem system wide, coordinating in groups in the news.admin hierarchy or similar channels in IRC.
This worked fine for many years, but then the internet got big. Those volunteer moderators and administrators could no longer keep up with the flood of content. Usenet died (yes, it's still around, but it's dead as any kind of discussion forum) and IRC is a shell of its former self.
Right, which is solved by the pay to play limits. This would essentially cut the problem off immediately and it would be of benefit to everyone. If it actually cost people to "do bad stuff" (post spam, vitriol, etc), they're far less-likely to do it as the incentives drop off.
The dragon folks seem to be chasing is that Twitter should be free but perfect (which is a have your cake and eat it too problem). That will never happen and it only invites more unnecessary strife between sociopolitical and socioeconomic factions as they battle for narrative control.
* even more so than https://news.ycombinator.com/item?id=33446064, which was also above the median for this topic.