Hacker News

In the real world, when you're unhinged, annoying, or intrusive, you face almost immediate negative consequences. On social media, you're rewarded with engagement. Social media owners are thus asked to "moderate" the very behavior that maximizes the engagement they depend on, which makes it somewhat of a paradox.

It would be similar to a newspaper "moderating" its journalists into producing news that is balanced, accurate, fact-checked, and as neutral as possible, with no positive or negative slant. That wouldn't sell any actual newspapers.

Similarly, nobody would watch a movie where the characters are perfectly happy. Even cartoons need villains.

All these types of media exploit our psychological draw to the unusual, which is typically the negative. This attention bias evolved as a survival skill, but it is now triggered all day long for clicks.

Can't be solved? More like unwilling to solve. Allow me to clean up Twitter:

- Close the API for posting replies. You can have your weather bot post updates to your weather account, but you shouldn't be able to instant-post a reply to another account's tweet.

- Remove the retweet and quote-tweet buttons. This is how things escalate. If that's too radical, there are plenty of variations: a cap on retweets per day, or a dampener on how often a tweet can be retweeted within a period of time, to slow the network effect.

- Put a cap on max tweets per day.

- When you go into a polarized thread and rapidly like a hundred replies on your "side", you are part of the problem and don't know how to use the like button. Hence, a cap on max likes per day or per thread, so that likes become scarce and require thought. Alternatively, introduce shadow-likes: likes that don't do anything.

- When you're a small account spamming low-effort replies and the same damn memes on big accounts, you're hitchhiking. You should be shadow-banned for that specific big account only: people would stop seeing your replies in that context alone.

- Mob culling. When an account or tweet is mass-reported in a short time frame and it turns out the content was well within the guidelines, punish every user who made those reports: a strong warning first, and after repeated abuse a full ban or revoking the ability to report.

- DM culling. It's not normal for an account to suddenly receive hundreds or thousands of DMs. Where a pile-on in replies can be cruel, a pile-on in DMs is almost always harassment. Quite a few people are OK with it as long as the target is their (political) enemy, but we should reject it on principle. People joining such campaigns aren't good people; they are sadists. Hence they should be flagged as potentially harmful. The moderation action here is not straightforward, but surely something can be done.

- Influencer moderation. Every time period, comb through new influencers manually, for example those breaking 100K followers. For each, inspect how they came to power: valuable, widely loved content, or toxic engagement games? If the latter, dampen the effect, tune the algorithm, etc.

- Topic spam. Twitter has "topics", a great way to engage in a niche, but they're all engagement-farmed. Go through these topics manually every once in a while and use human judgement to tackle the worst offenders (and behaviors).

- Allow for negative feedback (a dislike button), but with a cap. In case of a dislike mob, revoke or cap the participants' ability to dislike.
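Most of the caps and dampeners above reduce to one pattern: count a user's (or tweet's) recent actions in a sliding time window and deny anything over a threshold. A minimal sketch of that pattern, with all names, limits, and windows invented for illustration (a real platform would back this with persistent storage and tuned thresholds):

```python
# Hypothetical sliding-window rate limiter; every cap and number here is
# an illustrative assumption, not Twitter's actual policy or API.
import time
from collections import defaultdict, deque

DAILY_CAPS = {"tweet": 50, "like": 100, "retweet": 25}  # invented per-day caps
RETWEET_WINDOW = 3600      # seconds over which one tweet's retweets are counted
RETWEETS_PER_WINDOW = 10   # max retweets of a single tweet per window

class RateLimiter:
    def __init__(self, now=time.time):
        self.now = now  # injectable clock, handy for testing
        # user -> action -> deque of timestamps
        self.user_actions = defaultdict(lambda: defaultdict(deque))
        # tweet_id -> deque of retweet timestamps
        self.tweet_retweets = defaultdict(deque)

    def _prune(self, dq, horizon):
        """Drop timestamps older than `horizon` seconds from the left."""
        cutoff = self.now() - horizon
        while dq and dq[0] < cutoff:
            dq.popleft()

    def allow_action(self, user, action):
        """Per-user daily cap (tweets, likes, retweets)."""
        dq = self.user_actions[user][action]
        self._prune(dq, 86400)  # 24-hour sliding window
        if len(dq) >= DAILY_CAPS[action]:
            return False
        dq.append(self.now())
        return True

    def allow_retweet(self, user, tweet_id):
        """Dampen virality: cap how often one tweet is retweeted per window."""
        dq = self.tweet_retweets[tweet_id]
        self._prune(dq, RETWEET_WINDOW)
        if len(dq) >= RETWEETS_PER_WINDOW:
            return False  # tweet is spreading too fast: dampen it
        if not self.allow_action(user, "retweet"):
            return False  # user hit their own daily retweet cap
        dq.append(self.now())
        return True
```

The same windowed counting, pointed at reports or DMs received per target instead of actions per user, would flag the mass-report and DM pile-on scenarios described above.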

Note how none of these potential measures addresses what you said; they address behavior: the very obvious misuse/abuse of the system. In that sense I agree with the author. Also, none of it requires AI. The patterns are incredibly obvious.

All of this said, the above would probably make Twitter quite an empty place. Because escalated outrage is the product.



