
This is a very good way to pitch your afforestation startup accelerator in the guise of a talk on platform moderation. /s

I'm pretty sure I've got some bones to pick with yishan from his tenure on Reddit, but everything he's said here is pretty understandable.

Actually, I would like to develop his point about "censoring spam" a bit further. It's often said that the Internet "interprets censorship as damage and routes around it". This is propaganda, of course; a fully censorship-resistant Internet is entirely unusable. In fact, the easiest way to censor someone online is through harassment or DDoS attacks - i.e. have a bunch of people shout at them until they shut up. The second easiest is through doxing - i.e. make the user feel unsafe until they jump off the platform and stop speaking. Neither of these requires content removal capability, but both still achieve the goal of censorship.

The point about old media demonizing moderation is something I didn't expect, but it makes sense. This is the same old media that gave us cable news, after all. Their goal is not to inform, but to entice. In fact, I kinda wish we had a platform that explicitly refused to give them the time of day, but I'm pretty sure it's illegal to do that now[0], and even back a decade ago it would have been financial suicide to build a platform catering only to individual creators.

[0] For various reasons:

- The EU Copyright Directive imposes an upload filtering requirement on video platforms that requires cooperation with old media companies to implement. The US is also threatening similar requirements.

- Canada's Bill C-11 makes Canadian content (CanCon) must-carry for all Internet platforms, including ones that host user-generated content. In practice, it is easier for old media to qualify as CanCon than for actual Canadian individuals to do so.




> This is a very good way to pitch your afforestation startup accelerator in the guise of a talk on platform moderation. /s

Also, for anyone having déjà vu: he posted almost the same thread a few months ago (and probably a few more times before/after).

https://twitter.com/yishan/status/1514938507407421440


I've often pointed out that the idea of censorship as happening only or primarily through the removal of speech is antiquated, a holdover from a time before pervasive communications networks had almost effortlessly connected most of the world.

Censorship in the traditional sense is close to impossible online today.

Today censorship is most often and most effectively about suppressing your ability to be heard: flooding out the good communications with nonsense, spam, and abuse, or discrediting them by association (e.g. filling the forums of a political opponent with apparent racists). This turns the nigh-uncensorability of modern communications methods on its head and makes it into a censorship tool.

And, ironically, anyone trying to use moderation to curb this sort of censorious abuse is easily accused of 'censorship' themselves.

I remain convinced that the best tool we have is topicality: when a venue has a defined topic you can moderate just to stay on topic, without a lot of debatable value judgements (or bruised egos-- no reason to feel too bad about a post being moved or removed for being offtopic). Unfortunately, the structure of Twitter pretty much abandons this critical tool.

(and with reddit increasingly usurping moderation from subreddit moderators, it's been diminished there)

Topicality doesn't solve all moderation issues, but once an issue has become too acrimonious it will inherently go off-topic: e.g. if your topic is some video game, participants calling each other nasty names is clearly off-topic. Topicality also reduces the incidence of trouble coming in from divisive issues that some participants just aren't interested in discussing-- if I'm on a forum for a video game I probably don't really want to debate abortion with people.

In this thread we see good use of topicality at the top, with dang explicitly marking complaints about long-form Twitter threads as off-topic.

When it comes to spam, scaling considerations mean that you need to be able to deal with much of it without necessarily understanding the content. I don't think this should be confused with content blindness being desirable in and of itself. Abusive/unwelcoming interactions can occur both in the form (e.g. someone stalking someone around from thread to thread or repeating an argument endlessly) and in the content (continually re-litigating divisive/flame-bait issues that no one else wants to talk about, vile threatening messages, etc.)
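To make the "without necessarily understanding the content" point concrete, here is a minimal sketch of the kind of content-blind check I mean-- it gates posts purely on behavioral signals (posting rate and repetition, via hashing) rather than on what the text says. All of the names and thresholds (allow_post, MAX_POSTS_PER_WINDOW, etc.) are made up for illustration; this is not any platform's actual system.

    import hashlib
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60        # assumed sliding window for the rate limit
    MAX_POSTS_PER_WINDOW = 5   # assumed per-user posting cap
    MAX_DUPLICATES = 2         # assumed cap on repeats of the same text per user

    post_times = defaultdict(deque)                      # user -> recent post timestamps
    dupe_counts = defaultdict(lambda: defaultdict(int))  # user -> text hash -> count

    def allow_post(user_id, text, now=None):
        """Return True if the post passes the content-blind checks."""
        now = time.time() if now is None else now

        # Rate limit: discard timestamps outside the window, then check the cap.
        times = post_times[user_id]
        while times and now - times[0] > WINDOW_SECONDS:
            times.popleft()
        if len(times) >= MAX_POSTS_PER_WINDOW:
            return False

        # Repetition check: hash the normalized text; never interpret its meaning.
        digest = hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()
        if dupe_counts[user_id][digest] >= MAX_DUPLICATES:
            return False

        times.append(now)
        dupe_counts[user_id][digest] += 1
        return True

Nothing in that sketch knows or cares what a message means; it only reacts to the shape of the behavior, which is what lets it scale.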

Related to topicality is that some users just don't want to interact with each other. We don't have to make a value judgement about one vs the other if we can provide space so that they don't need to interact. Twitter's structure isn't great for this either, but more broadly the nature of near-monopoly mega-platforms isn't great for it. Worse, Twitter actively makes it hard-- e.g. if you've not followed someone who is network-connected to other people you follow, Twitter continually recommends their tweets (as a friend said: "No twitter, there is a reason I'm not following them"), and because blocking is visible, using it often creates drama.

There are some subjects on HN where I might otherwise comment but I don't because I'd prefer to avoid interacting with a Top Poster who will inevitably be active in those subjects. Fortunately, there are plenty of other places where I can discuss those things where that poster isn't active.

Even a relatively 'small' forum can easily have as many users as the population of many US states at the founding of the country. I don't think we really need mega-platforms with literally everyone on them, and I see a fair amount of harm from them (including the effects of monoculture moderation gone wrong).

In general, I think the less topic-constrained you can make a venue, the smaller it needs to be-- a completely topic-less social venue probably should have no more than a few dozen people. Twitter is both mega-topicless and ultra-massive-- an explosive mixture which will inevitably disappoint.

Another tool I think many people have missed the value of is procedural norms, including decorum. I don't believe that using polite language actually makes someone polite (in fact, the nastiest and most threatening remarks I've ever received were made with perfectly polite language)-- but some people are just unable to regulate their own behavior. When there is an easily followed set of standards for conduct, you gain a bright-line criterion that makes it easier to eject people who can't control themselves. Unfortunately, I think the value of an otherwise pointless procedural conformity test is often lost on people today, though such tests appear common in historical institutions. (Maybe a sign of the ages of the creators of these things: as a younger person I certainly grated against 'pointless' conformity requirements; as an older person I see more ways that their value can pay for their costs. I'd rather not waste my time on someone who can't even manage to go through the motions to meet the venue's standards.)

Early on in Wikipedia I think we got a lot of mileage out of this: the nature of the site essentially hands every user a loaded gun (the ability to edit almost everything, including elements of the site UI) and then tells them not to use it abusively, rather than trying to technically prevent them from using it abusively. Some people can't resist and are quickly kicked out without too much drama. Had those same people been technically prevented, they would have hung around longer and created trouble that was harder to kick them out over (and I think as the site added more restrictions on new/casual users, the number of issues from poorly behaved users increased).


I've had a slogan bouncing around my head for a while now without an outlet for it; here goes:

Censorship is policy, content removal is capability.


I love that he's for flame wars; go figure, that's all Reddit is.



