I was part of a social computing facility and burned my own and friends' money to offer basic services: hosting for plain HTML pages, a small Twitter-like service, one Nextcloud instance, and one Invidious instance. Within weeks, Nextcloud was full of pirated content, the HTML pages were hosting fake bank-login phishing pages, and moderation became a horror. Our colo provider sent us written warnings to sort it out immediately.
While I still aspire to launch a plain old, less-featured Twitter-1.0-style service, the thought of moderation kills my motivation immediately. :(
There are still public computing facilities around, like sdfeu, envs.net, etc. I'm not sure how they handle moderating spam and pirated-content attacks.
Mastodon effectively democratizes the moderation to the huge group of server admins - who do a massive amount of unpaid labor. It definitely keeps some people away, and burns others out.
Sorry, "effectively" was an ambiguous word to use. I meant it in the sense of "for all practical purposes; in effect," rather than implying it was good.
To be clear, I think the admins are mostly white knuckling the moderation problem.
>the other part has always been endemic to the Internet (and communities!).
Indeed. But unlike machines (and AI, in the future), humans don't scale well. Managing a potential audience of 10,000 users with a few hundred commenters concentrated in a few countries is an order of magnitude different from millions of users with thousands of commenters and well-established malicious actors with no respect for the community.
It's more work, for more people, for less appreciation. It's also harder than ever to pay for labor as a small community, unless you are very well off.
It's definitely a thing, but IME hosting a small instance, it's really not that bad. My advice for new admins would be to: 1. find a small, decently up-to-date block list as a starting point (I can recommend FediNuke [0]); 2. check the #FediBlock hashtag (via any Mastodon UI, or the automatically generated RSS feed) once a day or so for warnings about bot attacks or anything else you'd like to be proactive about; 3. promptly respond to the (likely very infrequent) reports you get from your users.
I could see why this might not scale as well to instances with thousands of users, or if a user is a frequent target of harassment (where opt-in federation might be the only solution), but for our lab, the existence of all the nazi or whatever instances has been resolved via a simple "Oh, these again. click, click, click There, the whole instance is banned." a few times a year. I have a suspicion that Dunbar's number or lower might be the optimal size for any federated social media instance, since knowing every user on my instance makes it a lot easier to tailor the little effort I have to put in.
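For step 2, the once-a-day check is easy to script, since stock Mastodon serves a public RSS feed for any hashtag at `/tags/<tag>.rss`. A minimal sketch (the `example.social` host and the exact item fields are assumptions; servers vary in which RSS elements they populate, so missing ones fall back to empty strings):

```python
import urllib.request
import xml.etree.ElementTree as ET


def hashtag_feed_url(host: str, tag: str) -> str:
    """Stock Mastodon serves a public RSS feed per hashtag at /tags/<tag>.rss."""
    return f"https://{host}/tags/{tag}.rss"


def parse_feed(xml_text: str) -> list[dict]:
    """Pull link, date, and description out of each <item> in an RSS 2.0 feed.

    Servers differ in which elements they fill in, so missing ones
    come back as empty strings instead of raising.
    """
    items = []
    for item in ET.fromstring(xml_text).iter("item"):
        items.append({
            "link": item.findtext("link", default=""),
            "date": item.findtext("pubDate", default=""),
            "description": item.findtext("description", default=""),
        })
    return items


def fetch_fediblock(host: str) -> list[dict]:
    """Fetch current #FediBlock posts as seen from one instance.

    `host` is whichever instance you read through (placeholder here);
    needs network access, so the offline demo below doesn't call it.
    """
    with urllib.request.urlopen(hashtag_feed_url(host, "fediblock")) as resp:
        return parse_feed(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Offline demo against a tiny hand-written feed.
    sample = (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        "<title>#fediblock</title>"
        "<item><link>https://example.social/@mod/1</link>"
        "<pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>"
        "<description>spam wave from badhost.example</description></item>"
        "</channel></rss>"
    )
    for post in parse_feed(sample):
        print(post["date"], post["link"])
```

Drop `fetch_fediblock(...)` into a daily cron job and diff against yesterday's output, and step 2 is basically a one-minute skim.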