> Your participation in a community would involve reading, posting as desired once you've been in a community for a certain amount of time, taking a turn at evaluating N comments that have been flagged, and taking a turn at evaluating disputes about evaluations, with the latter 2 being spread around so as to not take up a lot of time (though, having those duties could also reinforce your investment in a community).
This is an interesting idea, and I'm not sure it even needs to be that rigorous. Active evaluations are almost a chore that will invite self-selection bias. Maybe we use sentiment analysis/etc to passively evaluate how people present and react to posts?
It'll be imperfect in any small sample, but across a larger body of content, it should be possible to derive metrics like "how often does this person compliment a comment that they also disagree with?", "relative to other people, how often do this person's posts generate angry replies?", or even "how often does this person end up going back and forth with one other person in an increasingly angry/insulting style?"
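To make the first of those metrics concrete, here's a minimal sketch of how a passive evaluator might score "respectful disagreement". The tiny word lists and the `respectful_disagreement_rate` helper are illustrative stand-ins, not a real sentiment model; an actual system would use a trained classifier rather than keyword matching.

```python
import re

# Toy stand-ins for real sentiment/stance models.
DISAGREE_WORDS = {"disagree", "wrong", "but", "however", "actually"}
POSITIVE_WORDS = {"thanks", "great", "good", "interesting", "fair"}

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def expresses_disagreement(text: str) -> bool:
    return bool(_tokens(text) & DISAGREE_WORDS)

def positive_tone(text: str) -> bool:
    return bool(_tokens(text) & POSITIVE_WORDS)

def respectful_disagreement_rate(replies: list[str]) -> float:
    """Fraction of a user's disagreeing replies that still read as civil."""
    disagreeing = [r for r in replies if expresses_disagreement(r)]
    if not disagreeing:
        return 0.0
    civil = sum(1 for r in disagreeing if positive_tone(r))
    return civil / len(disagreeing)

replies = [
    "Interesting point, but I disagree about the cause.",
    "You are wrong and this is dumb.",
    "Thanks, though actually I think the data says otherwise.",
]
print(respectful_disagreement_rate(replies))  # 2 of the 3 disagreeing replies are civil
```

Any single reply scored this way is noisy, which is the point of the surrounding argument: the metric only becomes meaningful averaged over a user's whole history.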
It still feels game-able, but maybe that's not bad? Like, I am going to get such a great bogus reputation by writing respectful, substantive replies and disregarding bait like ad hominems! That kind of gaming is maybe a good thing.
One fun thing is this could be implemented on top of existing communities like Reddit. Train the models, maintain reputation scores externally, offer an API to retrieve them, and let clients/extensions decide if/how to re-order or filter content.
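The client side of that could be very thin. Here's a hypothetical sketch: `fetch_scores` stands in for a call to the external reputation API (no such service exists; the endpoint and the score store are invented for illustration), and the extension just re-sorts a thread by author score, falling back to a neutral score for unknown users.

```python
def fetch_scores(usernames: set[str]) -> dict[str, float]:
    # Stub for a hypothetical external reputation service,
    # e.g. GET /scores?users=alice,bob in a real extension.
    fake_store = {"alice": 0.92, "bob": 0.35, "carol": 0.71}
    return {u: fake_store.get(u, 0.5) for u in usernames}  # 0.5 = unknown user

def reorder_thread(comments: list[dict]) -> list[dict]:
    """Re-order comments by author reputation, highest first (stable sort)."""
    scores = fetch_scores({c["author"] for c in comments})
    return sorted(comments, key=lambda c: scores[c["author"]], reverse=True)

thread = [
    {"author": "bob", "text": "hot take"},
    {"author": "alice", "text": "sourced reply"},
    {"author": "carol", "text": "measured comment"},
]
print([c["author"] for c in reorder_thread(thread)])  # ['alice', 'carol', 'bob']
```

Keeping the scoring service separate from the display logic is what lets different clients choose their own policy: one might filter below a threshold, another might only de-emphasize low-reputation comments.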