
I've always thought that Slashdot handled comment moderation the best. (And even that still had problems.)

In addition to that, these tools would help:

(1) Client-side: Being able to block all content from specific users and the replies to specific users.

(2) Server-side: If userA always upvotes comments from userB, apply a reduced weighting to that upvote (so it only counts as, say, 0.01 of a vote). Likewise with 'group-voting': if userA, userB, and userC always vote identically, down-weight those votes. (This would slow the 'echo chamber' effect; a rough sketch of this and (3) follows the list.)

(3) Account age/contribution scale: if userZ has been a member of the site since its inception, AND has a majority of their posts up-voted, AND contributes regularly, then give their votes a higher weighted value.
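
To make (2) and (3) a bit more concrete, something like this is what I have in mind (a rough sketch; every name, threshold, and weight here is invented and would need tuning):

    # Sketch only: the function name, thresholds, and weights are all invented.
    def vote_weight(voter, author, votes, accounts):
        """How much one of `voter`'s upvotes on `author`'s comment should count."""
        weight = 1.0

        # (2) Discount votes from users who almost always upvote this author.
        on_author = [v for v in votes[voter] if v["author"] == author]
        if len(on_author) >= 20:
            agreement = sum(v["dir"] == "up" for v in on_author) / len(on_author)
            if agreement > 0.95:
                weight *= 0.01          # the "0.01 of a vote" case above

        # (3) Boost long-standing, well-received, active accounts.
        a = accounts[voter]
        if a["age_days"] > 5 * 365 and a["upvoted_ratio"] > 0.5 and a["posts_per_month"] >= 4:
            weight *= 1.5

        return weight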

Of course these wouldn't solve everything, as nothing ever will address every scenario; but I've often thought that these things, combined with how Slashdot let you score comments from -1 to 5 AND let readers set the viewing threshold to 2+, 3+, or 4+, would help eliminate most of the bad actors.
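
The threshold side is easy enough to sketch (the scores follow Slashdot's -1 to 5 range; everything else is made up):

    # Slashdot-style browsing threshold: comments are scored -1 to 5 and each
    # reader chooses the minimum score they want to see.
    def visible(comments, threshold=2):
        return [c for c in comments if c["score"] >= threshold]

    comments = [{"id": 1, "score": 5}, {"id": 2, "score": -1}, {"id": 3, "score": 3}]
    print(visible(comments, threshold=3))  # only the comments scored 3 or higher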

Side note: Bad Actors and "folks you don't agree with" should not be confused with each other.




> I've always thought that Slashdot handled comment moderation the best.

A limited number of daily moderation "points". A short list of moderation reasons. Meta-moderation.
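
Mechanically it boils down to something like this (a toy sketch from memory, not actual Slashcode; the reason list and point handling may be off):

    # Toy sketch of the mechanics; details are from memory, not Slashcode.
    REASONS = {"Insightful", "Informative", "Interesting", "Funny",
               "Offtopic", "Flamebait", "Troll", "Redundant"}

    def moderate(moderator, comment, reason):
        if moderator["points"] <= 0:
            raise ValueError("no moderation points left")
        if reason not in REASONS:
            raise ValueError("not one of the canned reasons")
        moderator["points"] -= 1
        comment["moderations"].append({"by": moderator["id"], "reason": reason})

    def metamoderate(moderation, fair):
        # Other users later judge individual moderations; moderators whose
        # calls are consistently rated unfair get points less often.
        moderation["fairness"] = moderation.get("fairness", 0) + (1 if fair else -1)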


One thing that's easy to forget is that super-complex weighted moderation/voting systems can get computationally expensive at the scale of something like Twitter or Facebook.

Slashdot had a tiny population, relatively speaking, so it could afford to do all that work.

But when you're processing literally millions of posts a minute, it's a different order of magnitude I think.


> Slashdot had a tiny population...

Tiny, specific, non-generalist population. As soon as that changed, /. went down the drain like everything else. I still like /.'s moderation system better than most, but the specifics of how the system works are a second-order concern at best.


The real problem with ALL moderation systems is Eternal September.

Once the group grows faster than a certain rate, the new people never get assimilated into the group, and the group dies.

"Nobody goes there, it's too popular."


I wonder about your (2) idea... If the goal is to reduce the effect of bots that vote exactly the same, then OK, sure. (Though if it became common, I'm sure vote bots wouldn't have a hard time being altered to add a little randomness to their voting.) But I'm not sure how much it would help beyond that: if it's not just identifying _exact_ same voting, then you're going to need to fine-tune some percentage-the-same threshold or something like that. And I'm not sure the same fine-tuned percentage is going to work well everywhere, or even across different threads or subforums on the same site. I also feel like (ignoring the site-to-site or subforum-to-subforum differences) it would be tricky to tune to a point where upvotes still matter. (Admittedly I have nothing solid to base this on other than a gut feeling.)
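
For what it's worth, the knob being tuned is something like this (a sketch; the 50-vote minimum and 0.9 cutoff are arbitrary):

    # Sketch of the "percentage-the-same" knob; the cutoffs are arbitrary.
    def agreement(votes_a, votes_b):
        """Fraction of items both users voted on where they voted the same way."""
        shared = set(votes_a) & set(votes_b)
        if len(shared) < 50:   # too little overlap to conclude anything
            return 0.0
        same = sum(1 for item in shared if votes_a[item] == votes_b[item])
        return same / len(shared)

    # Pairs above some cutoff get down-weighted, but a 0.9 that works on one
    # subforum may just flag ordinary like-minded users on another.
    SUSPICIOUS = 0.9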

It's an interesting idea, and I wonder what results people get when trying it.


> Server-side: If userA always upvotes comments from userB, apply a reduced weighting to that upvote

This falls down, hard, in expert communities. There are a few extremely knowledgeable users who are always going to get upvoted by long-term community members who acknowledge that expert's significant contributions.

> Being able to block all content from specific users and the replies to specific users.

This is doable now client-side, but when /. was big, it had to be done server-side, which is where I imagine all the limits around friends/foes came from!
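
Client-side it really is just a filter over the thread; a sketch (the field names are invented):

    # Client-side sketch: drop posts from blocked users and direct replies to
    # them; field names ("author", "parent_author") are invented.
    BLOCKED = {"spammer42", "troll99"}

    def filter_thread(posts):
        return [p for p in posts
                if p["author"] not in BLOCKED
                and p.get("parent_author") not in BLOCKED]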

The problem here is that trolls can create gobs of accounts easily, and malevolent users group together to create accounts and upvote them, so they have plenty of spare accounts to go through.



