
As I've said for a long time, I don't mind moderation; I just want to be in charge of what I see. Give me the tools that the moderators have: let me filter out bots at some confidence level; let me see "removed" posts and banned accounts; don't mess with my searches unless I've asked for that explicitly.
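
A sketch of the kind of settings I mean, in Python (every name here is hypothetical; none of this is a real Twitter API):

    # Hypothetical per-user moderation preferences, applied client-side
    # rather than centrally by the platform.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        bot_confidence: float   # 0.0-1.0: the platform's bot score for the author
        removed_by_mod: bool
        author_banned: bool

    @dataclass
    class ModerationPrefs:
        max_bot_confidence: float = 0.8  # hide accounts scored above this
        show_removed: bool = True        # render mod-removed posts anyway
        show_banned: bool = True         # include posts from banned accounts

    def visible(post: Post, prefs: ModerationPrefs) -> bool:
        """The user's settings, not the platform's, decide what gets shown."""
        if post.bot_confidence > prefs.max_bot_confidence:
            return False
        if post.removed_by_mod and not prefs.show_removed:
            return False
        if post.author_banned and not prefs.show_banned:
            return False
        return True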

Power to the people.




I don't think that really deals with beheading videos, incitement to terrorism, campaigns to harass individuals and groups, child porn, and many cases where online communities document or facilitate crimes elsewhere.


Child porn is illegal. Are beheading videos illegal? Incitement to terrorism is probably a crime (though I'd argue that it should be looked at under the imminent lawless action test[1] as it's speech). So all of these would be removed and are not part of a moderation discussion.

As to "many cases where online communities document or facilitate crimes elsewhere", why criminalise the speech if the action is already criminalised?

That leaves only "Campaigns to harass individuals and groups". Why wouldn't moderation tools as powerful as the ones employed by Twitter's own moderators deal with that?

[1] https://mtsu.edu/first-amendment/article/970/incitement-to-i...


The problem here is that the default assumption is that everyone on the internet is under the jurisdiction of US law, when the majority in fact are not.

These are global platforms with global membership, simply stating that “if it is free speech in America it should be allowed” isn’t a workable concept.


How about saying that if it is free speech in America it should be allowed in America, but censored in countries where it is against the law? It seems very easy to say.


So different users aren’t able to see full threads based on their location? You’re seemingly randomly able to respond in some circumstances and not others?

When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads?


> "So different users aren’t able to see full threads based on their location? You’re seemingly randomly able to respond in some circumstances and not others?"

Of course. That is what they've demanded, so that is what they get.

> "When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. "

On the contrary: You must have this. As a matter of law. There is no alternative, other than withdrawing from those countries entirely and ignoring the issue of people accessing your site anyway (which is what happens in certain extreme situations, states under sanctions, etc)

> " It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads? "

Here are the options:

1) Do not do business in those countries.

2) Provide different services for those countries to reflect their legal requirements.

There is no way to provide a globally consistent experience because laws are often in mutual conflict (one state will for example prohibit discussion of homosexuality and another state will prohibit discriminating on the basis of sexual preference)


They already do this for copyrighted works, e.g. region-locking Disney movies or World Cup games.


Why not? Sounds workable to me.


That's correct, and that's actually how it works right now (Germany has different speech laws and Twitter attempts to comply with them[1]). However, it is an American company and it's not unreasonable for it to follow American law in America. I would also think it's quite possible to use the network effect of the service to bully places like Germany into allowing greater expression, or simply to provide it on the sly by making it easy for Germans to access what they want. Although, I do see the EU is trying to do the same in reverse, probably (as is its wont) to create a tech customs union that allows its own tech unicorns to appear (something it has failed miserably at, in part because of its restrictive laws).

If I had a tool that could (at least attempt to) filter out anti-semitism or Holocaust denial, then Germany could have that set to "on" to comply with the law. I'm all for democracies deciding what laws they want.

[1] https://www.reuters.com/article/us-germany-hatecrime-idUSKBN...
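
To make the per-jurisdiction toggle concrete, a sketch (filter names hypothetical; assumes a filter can be forced on by local law while staying a user choice elsewhere):

    # A democracy can mandate a filter within its borders;
    # everywhere else, it remains the user's own setting.
    MANDATED_FILTERS = {
        "DE": {"holocaust_denial", "antisemitic_abuse"},  # NetzDG-style rules
    }

    def effective_filters(country_code, user_choices):
        """User-selected filters plus whatever local law forces on."""
        return set(user_choices) | MANDATED_FILTERS.get(country_code, set())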


'X is illegal' is a cop-out (albeit often an unintentional one) and I wish people would stop using it. Anything can be made illegal; are you just going to roll over if expressing an unpopular idea becomes a crime? Conversely, illegality doesn't deter a lot of people, and many are skilled at playing with the envelope of legality, so absent any moderation you'll get lots of technically/arguably legal content that is designed to degrade or disrupt the operation of a target forum.

It's unhealthy to just throw every difficult problem at courts; the legal system is clumsy, unresponsive, and often tends to go to unwanted extremes due to a combination of technical ignorance, social frustration, and useless theatrics.


We're talking about a social media service adhering to one of the most liberal sets of speech laws and norms in the world, not the imposition of an unjust law on a population. Tell me I can't say the word "gay" on threat of imprisonment and I'll say it more, but that's not relevant to this discussion.


It's the "documentation of the crime" aspect of child pornography that makes it illegal. It is still technically illegal in parts of the US to possess, say, drawn illustrations of pornography featuring minors (what the Japanese call "lolicon"), but the legal precedents are such that it can't really be prosecuted.

That is, it's not clear that in the US you can ban something on the basis of it being immoral; you need the justification that it is "documentation of a crime".


What makes child porn illegal is the argument that anyone who views or distributes it is re-abusing the victim. Otherwise, it would be justifiably illegal to create but not to view, possess, or distribute. Yet all are illegal in the USA.

This does not stop the FBI from being a major child porn distributor, despite that meaning the FBI is re-abusing thousands of victims under this rubric.


> What makes child porn illegal is the argument that anyone who views or distributes it is re-abusing the victim.

That's what makes it illegal? What if it's done on a private forum that the victim never finds out about? What if the victim is, say, dead? I don't think those change the legality.


Here there's a major difference between US and EU law, and I daresay culture as well: how private information is viewed.

As far as I understand in the EU private information is part of the self. Thus, manipulating, exchanging, dealing with private information without the person's consent is by itself a kind of aggression or violation of their rights. Even if the person never finds out.

In the USA however private information is an asset. The aggression or violation of right only happens when it actually damages the victim's finances. So if the victim never finds out about discussions happening somewhere else in the world, well… no harm done I guess?

Both views are a little extreme in my opinion, but the correct view (rights are only violated once the victim's own life has been affected in some way) is next to impossible to establish: in many cases the chain of events that can eventually affect a person's life is impossible to trace. Because of that I tend to suggest caution, and lean towards the EU side of the issue.

Especially if it's the documentation of a crime as heinous as child abuse.


That's the rubric courts and legislatures in the USA have used.

It is, in general, really really difficult to pass speech laws in the USA because of that pesky First Amendment -- even if they're documentation of a crime. Famously, Joshua Moon of Kiwi Farms gleefully hosted the footage from the Christchurch shooting even when the actual Kiwis demanded its removal.

But if you can argue that procurement or distribution of the original material perpetuates the original crime, that is, if it constitutes criminal activity beyond speech -- then you can justify criminalizing such procurement or distribution. It's flimsy (and that makes it prone to potentially being overturned by some madlad Supreme Court in the future with zero fucks to give about the social blowbacks), but it does the job.

In other countries it's easy to pass laws banning speech based on its potential for ill social effects. Nazi propaganda and lolicon manga are criminalized in other countries, but still legal in the USA because they're victimless.

If this makes you wonder whether it's time to re-evaluate the First Amendment -- yes. Yes, it is.


I'm in favor of the First Amendment remaining at least this strong. None of the above things strike me as nearly as dangerous as "the ruling party being able to suppress criticism and opposition by claiming that their opponents' words have potential for ill social effects".


Well, based on https://cbldf.org/criminal-prosecutions-of-manga/, it seems you probably can beat the charges, but it will take years and an expensive legal defense. People have been prosecuted and usually take plea bargains, so some amount of jail time can be expected. Simple cases of "manga is child porn! yadda yadda" can probably be overlooked, but if the police don't like you for some reason, getting arrested is definitely a risk. Although there is supposed to be "innocent until proven guilty", even getting arrested can disqualify you from many jobs.


> even getting arrested can disqualify you from many jobs.

That's something that I think is seriously wrong with the USA right now: the idea of an "arrest record", or at least the idea of it being accessible by anyone other than the police.

There are a number of situations where it is perfectly reasonable to arrest innocent people, then drop all charges. Let's say the cops arrive at a crime scene: there's a man on the ground lying in a pool of blood, and another man standing with a smoking gun holstered at his hip. Surely it would be reasonable to arrest the man who's still standing and confiscate his gun, at least for the time necessary to establish the facts?

But then, once all charges have been cleared (say the dead guy had a knife and witnesses identify him as the aggressor), that arrest should be seen as nothing more than either a mistake or a necessary precaution. It's none of a potential employer's business. In fact, I'd go as far as to make it illegal to even ask for arrest records, or to discriminate on that basis.

Criminal records of course are another matter.


That's genuinely interesting (have an upvote) but a social media site's responsibility in a situation such as this is to assess legality, not prosecutability, hence it would be removed.


Laws don't enforce themselves.

Anime image boards are not in a hurry to expunge "lolicon" images because they don't face any consequence from having them.

I wouldn't blame Tumblr for banning ero images a few years back, because ero images of real people are a lot of trouble: you have child porn, revenge porn, etc. Pornography produced by professionals has documentation about provenance (every performer showed somebody their driver's license and birth certificate, and probably got issued a 1099); if this were applied to people posting images from the wild, they would say people's privacy is being violated.


I'm not here to debate the legality of child porn or lolicon images, and I fail to see the relevance of what you've written to the provision of moderation tools to the users of Twitter.

> Laws don't enforce themselves.

What has that got to do with Twitter? Please try to stay on track.


> It's the "documentation of the crime" aspect of child pornography that makes it illegal.

And protection of victim rights, I suppose.


Exactly.


The vast majority of moderator removed comments and posts on Reddit have nothing to do with the illegal activities you mention.

The vast majority of removed comments are made to shape the conversations.

I think most people would be ok with letting admins remove illegal content, while allowing moderators shape content, as long as users could opt-in to seeing content the mods censored.

This is a win-win. If people don't want to see content they feel is offensive, they don't have to.

Let the user decide.


Legal vs illegal cannot be enforced on a private platform because the truth procedure for "legal vs illegal" involves a judge, lawyers, and often years of waiting.

What you can enforce is "so and so says it is illegal" (accurate 90% or 99% or 99.9% of the time but not 100%) or some boundary that is so far away from illegal that you never have to use the ultimate truth procedure. The same approach works against civil lawsuits, boycotts and other pressure which can be brought to bear.

I think of a certain anime image board, with content so offensive it can't even host ads for porn, that stopped taking images of cosplayers or any real-life people because doing so eliminated moderation problems that would otherwise be difficult.

There is also spam (should spam filters for email be banned because they violate the free speech of spammers?) and other forms of disingenuous communication. When you confront a troll, inevitably they will make false comparisons (e.g. that banning Kiwi Farms is like banning talk to the effect that trans women could damage the legitimacy of women's sports just when people are starting to watch women's sports).

On top of that, there are other parties involved. The anime site I mention above has no ads and runs at very low cost, but it has sustainability problems because it used to sell memberships and got cut off by payment providers. You might be happy to read something many find offensive, but an advertiser might not want to be seen next to it. The platform might want to do something charitable, but hosting offensive talk isn't it.


> (should spam filters for email be banned because the violate the free speech of spammers?)

I submit that spam filters should be under the sole control of their end users. If I'm using a Yahoo or Gmail account (I'm not), I should have the option to disable the spam filter entirely, or to use only personal parameters trained on the mail that I alone received, and no email should ever be summarily blackholed without letting me know in some way. If an email bounces, the sender should know. If it's just filtered, it should be in the recipient's spam folder.
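
A sketch of that policy (hypothetical names, just encoding the rules above):

    # Nothing is ever silently blackholed: rejects bounce back to the sender,
    # and filtered mail lands in a spam folder the recipient can inspect.
    def notify_sender_of_bounce(msg):
        ...  # stub: send a bounce so the sender knows the mail didn't arrive

    def route_message(msg, user):
        if not user.spam_filter_enabled:        # the user may opt out entirely
            return "inbox"
        score = user.personal_model.score(msg)  # trained only on this user's mail
        if score > user.reject_threshold:
            notify_sender_of_bounce(msg)
            return "bounced"
        if score > user.spam_threshold:
            return "spam_folder"                # filtered, but still visible to me
        return "inbox"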


> because the truth procedure for "legal vs illegal" involves a judge

This part is not correct. Private companies block what they believe to be illegal activities in their systems constantly - in order to limit the legal liability of being an accomplice to a crime. This is the case in all industries - and is standard practice from banking, to travel, to hotels, to retail... it's commonplace for companies to block services.

For spam, I would recommend that it gets a separate filter-flag allowing users to toggle it and see spam content, separately toggled from moderated content.


Yes, and why should they? Social media is not a tool to reduce crime; the government can fund its own social media if it so desires.


All of those are legal except for child porn. Not even your insane nanny state agrees with you.


> let me see "removed" posts, banned accounts

This is what HN does with "showdead".


I find them unnecessarily hard to read without tweaking the styles, because of the low contrast.


If you click on a timestamp for such a comment, you're taken to the comment page where it remains readable.


Hey, thanks for this tip! I didn’t know this. :)


You can also just select the text; the selection style has decent contrast.


You can't reply to dead comments, effectively killing the conversation, so no, it's not the same thing.


That's what the vouch link on the comment page is for.


Vouching for a comment makes it visible for everyone else. That means that dang reviews whether the comment breaks the rules, and if it does, he'll take away your ability to vouch in the future. Therefore, vouching is not a way to continue commenting on a sub-thread that breaks the rules.


There's "breaking the rules" and then there's "breaking the rules according to a certain interpretation" and then there's "I don't like what he said, so I'm going to interpret the rules in a way that justifies removing his post".

It all comes down to some guy telling me how to talk. I don't like it. Anybody who likes it has rocks in his head.


You're free to create your own site and talk on it however you like.

What you want is someone else's audience, and I'm not exactly sure what makes you think you have a right to that.


Do you like to be told how to talk?


Ah, all the nuance of a bull in a china shop.

Since I was not born with a language, yes, I've been told how to talk for a sizeable portion of my life.

In fact learning things like tact and politeness, especially as it relates to the society I live in, has been monumental in my success.

Do you go to your parents' house and tell them to screw off? Do you go to work and open your mouth like a raging dumpster fire? Do you have no filter talking to your husband/wife/significant other? Simply put, your addition to the discussion is that of an impudent child. "I want everything and I want to give up nothing" is how an individual becomes an outcast, and I severely doubt this is how you actually live outside the magical place known as the internet, though I may be surprised.


I think it's possible to know how to control one's tongue and still not like being told what to say or how to speak, quite possibly precisely because one knows how to do that and has a mind of one's own.


I mean, I also don't like that I'm not infinitely wealthy, have a limited lifespan, and am subject to the cruelties of entropy. My like or dislike of these has little to do with addressing the problems posed by all the above situations in a rational and realistic manner, while considering the outcomes of granting myself unlimited godlike power to do so.


And by moving the goalposts you aren't addressing my comment, but still…

What "godlike power" are you referring to? The ability to moderate what turns up in your own social media feed? The ability to respond to comments someone else has deemed to break the rules? I would hope for a bit more than that from godlike power.


That's some well-lubricated evasion.

Let's assume that you are not a child, that you are confident in your ability to manage your snark and, most of all, highly value your conversation.

I'm going to conclude that yes, you definitely dislike being told how to talk.


> you definitely dislike being told how to talk.

I may be unique in this regard, but I am aware of the fact that sometimes I make mistakes, and I don't highly value all of my conversation; sometimes I just rant, or enjoy engaging in more superfluous conversation. Sometimes the best conversations aren't "highly valuable"!


>that you are confident in your ability to manage your snark

Oops.


> It all comes down to some guy telling me how to talk.

Nobody is "telling you how to talk." People are free to choose their terms for voluntary social interactions. You don't have a right to inflict yourself on others who wish not to interact with you.


I like it even less when 100 IQ people are not told how to talk, repeatedly - they render coherent conversation and decision-making impossible.

I'll give up some of my freedom, to limit everyone else's freedom every day of the week - the only concern I have is roughly the IQ of the people doing the limiting.


I completely agree with you, but that's the way The Orange Website works. As lowly users, there is nothing we can do about it.


You can actually email dang/the mods and make your case. Make sure to read up on the extensive documentation he shares for how he moderates and how he thinks about moderation right here on HN first, plus any discussion on past suggestions. Mastering the search box helps. A lot of modern HN policy and functionality started as recurring suggestions that became experiments.

https://hn.algolia.com/help

https://hn.algolia.com/settings

dang often references past discussions with search links, so here's a good starting point: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://hn.algolia.com/?dateRange=all&page=7&prefix=true&que...


If it's possible to train models that show people the exact ads they like, then I have absolutely no doubt one could do the same to only show the content you like. Learning from downvoted and reported posts, etc.

On second thought... I suppose that's TikTok.


So, basically, you want moderation as it's described in the article. So, arguably, you mind about moderation as much as its author does, since that's their point.


I read the article and second it, but I want more than that: I want the tools the current moderators have, not just the ones they provide me. If they give me a tool, I want it to be as powerful as their tool, not a compromise. If I'm to moderate my own feed, then why do I not get the tools of a moderator?


Love the idea. Not sure if I understand though.

So you want a moderator to moderate, but then you also want tools to see what has been moderated away and to unlock those? Right? So moderation, yes, but also unmoderation by the users.

Power to the people!


I don't know if this is what the OP meant, but I really like your interpretation.

Mods exist and can ban/lock/block people and content, but users can see everything that was banned, removed or locked, as well as the reason why: what policy did the user violate?

I think the only exception would be actually illegal content; that should be removed entirely, but maybe keep a note from the mods in its place stating "illegal content".

That way users can actually scrutinise what the mods do, and one doesn't have to wonder whether the mods removed a post because they are biased or for legitimate reasons. Opinions are not entirely removed, as they are still readable, but you can't respond to them.
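
A minimal sketch of that model (all field names made up):

    # Removed posts keep their text and a removal reason; rendering depends on
    # the viewer's opt-in. Illegal content is the one hard exception.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Post:
        body: str
        removed: bool = False
        removal_reason: Optional[str] = None  # which policy was violated
        illegal: bool = False                 # hard-deleted; body not retained

    def render(post: Post, show_moderated: bool) -> str:
        if post.illegal:
            return "[illegal content removed]"
        if post.removed and not show_moderated:
            return f"[removed: {post.removal_reason}]"
        if post.removed:
            # still readable for scrutiny, but locked: no replies allowed
            return f"{post.body}\n[removed: {post.removal_reason}]"
        return post.body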


So where does spam fit in this?

In addition, crap floods? If I submit half a billion posts, do you really want that handled by moderation?

Being a server operator, I've seen how much the internet actually sucks; this may be something the user base doesn't have to experience directly. When 99.9% of incoming attempts are dropped or ban-listed, you start to learn how big the problem can be.


Spam can work the same way — that's how our email spam filters work. To use the OP's "censorship vs moderation" dichotomy, the current "censorship" regime would be like if your email filter marked an email as spam, and not only did not give you the option to disagree with the filter (i.e. mark as "Not Spam"), but didn't even give you the option to see the offending message to begin with.

Spam may still leak into our inboxes today, but the level of user control over email spam is generally a stable equilibrium; the level of outrage around spam filters — and to be clear, there are arguments to be made that spam filters are increasingly biased — is much, MUCH lower than that around platform "censorship".


Spam is advertising, right? That doesn't need special protection. Flooding is like the heckler's veto, so that could also be against the rules; it doesn't need special protection either.

As to moderation, why not be able to filter by several factors, like "confidence level this account is a spammer"? Or perhaps "limit tweets to X number per account", or "filter by chattiness". I have some accounts I follow (not on Twitter, I haven't used it logged in in years) that post a lot; I wish I could turn down the volume, so to speak.
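
Something like this, say (every knob hypothetical):

    # User-side feed filtering: a spam-confidence cutoff, a per-account rate
    # cap, and a "chattiness" volume knob rolled into one pass.
    def filter_feed(posts, max_spam_confidence=0.9, max_per_account=20):
        counts = {}
        for post in posts:
            if post.spam_confidence > max_spam_confidence:
                continue                  # probable spammer, per my own setting
            counts[post.author] = counts.get(post.author, 0) + 1
            if counts[post.author] > max_per_account:
                continue                  # turn down the volume on chatty accounts
            yield post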


>Spam is advertising, right?

What is spam... exactly? Especially when it comes to a "generalized" forum. I mean, would talking about Kanye be spam or not? It's this way with all celebrities: talking about them increases engagement and drives new business.

Are influencers advertising?

Confidence systems commonly fail across large generalized populations with focused subpopulations. Said subpopulations tend to be adversely affected by moderation because their use of communication differs from the generalized form.


> would talking about Kanye be spam or not?

No.

> Are influencers advertising?

That spam is advertising does not make all advertising spam.

We already have confidence-based filters for spam via email, with user feedback involved too, so I don't need to define it; users of the service can define it for me.


Simply put, we keep putting more and more filtering on the user with complete disregard for physical reality, and we ignore the costs.

The company that provides the service defines the moderation because the company pays for the servers. If you start posting "bullshit" that doesn't eventually pay for the servers and/or drives users away, money will be the moderator. There are no magic free servers out there in the world capable of unlimited space and processing power.


> Simply put we keep putting more and more and more filtering on the user

Who would be put upon by this? The average user doesn't have to be; they could use the default settings, which are very anodyne. The rest of us get what we want; that's what the article stated. Who's finding this a burden?

As to the reality of things, Twitter's just been bought for billions and there's plenty of bullshit being posted there. That's the reality, and several people who've made a lot of money by working out how to balance value and costs think it can do better.


The old Something Awful forums did something similar. If someone posted something that was unacceptable, the comment would generally stay and get a tag saying "the user was banned for this post". They also had a moderation history, so you could go back and see mod comments on why they gave bans/probations.


Here's the thing: this has been discussed on the Fediverse before, and the general consensus is that if a post is deleted by a mod, it's "gone". They record a log of the deletion, but not what was deleted, because it should no longer exist.


> Not sure if I understand though.

My apologies.

> So you want a moderator to moderate.

I don't care whether they continue to moderate centrally but it would suit those who do.

> but then you also want to have tools

Yes.

> to see what has been moderated away and unlock those?

Yes.

If an app you download has settings but they are either:

a) only available to the developers or company

b) the defaults always override your settings

would you be happy? Why, you might ask, do you not get access to the settings, and the ability to set them as you wish?


I would say something more akin to spam scoring would be good.

Contextual filters/scanners would score a piece of content based on whatever categorizations are being filtered (NSFW, non-inclusive language, slurs, disinfo, etc.).

Then both the creator and the consumer would be able to see the score in a transparent manner, with the consumer able to set a threshold to filter out any post that scores higher than what they choose.

Free speech absolutists could set it to the maximum, the default could be 50, and you go from there.
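
A sketch of the scheme (categories and numbers made up):

    # Each post carries transparent per-category scores (0-100), visible to
    # both creator and consumer; the consumer picks a threshold per category.
    DEFAULT_THRESHOLDS = {"nsfw": 50, "slurs": 50, "disinfo": 50}

    def passes(scores, thresholds=DEFAULT_THRESHOLDS):
        """Hide a post if any category score exceeds the viewer's threshold."""
        return all(scores.get(cat, 0) <= limit for cat, limit in thresholds.items())

    # Setting every threshold to 100 shows everything; the default of 50
    # filters anything scored above the midpoint.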


I agree. The only part I'm doubtful about is whether I should be able to see an individual's score, as it might create a prejudice against that individual that is felt by them ("I see you're a bot/troll/like hate speech…"). It also makes me wonder whether individual centralised mods should be able to see more than that, but I digress.

Scores across a range of measures would be best, in my view.


Yes! It's surprising that federated networks don't do moderation / antispam using tagging/voting and user-managed filters.


Mastodon decided to go the guilt-by-association route instead:

Instead of just leaving it to users or admins to block the users or servers they didn't want to deal with, a large subset decided to block, on behalf of everyone, any server that didn't itself block certain other servers.


Which still makes sense, a lot.


I agree with the spirit, but I think we have to consider the structure.

>> Give me the tools that the moderators have

Whatever tools a site like Twitter or YouTube gives you, (A) most people will never use them and (B) the site still controls how the tools work. These two are enough to achieve any censorship goal you might have, and enough to make censorship inevitable.

I don't think we get power to the people while Alphabet/Elon/Whatnot own the platform. It's a shame that FOSS failed on these fronts. But the internet has produced powerful proofs of concept: the WWW itself, for the first 20 years; Wikipedia; Linux/GNU. Those really did give power to the people, and you can see how much better they were at dealing with censorship, disinformation and other 2020s infopolitics.


Wikipedia is a terrible example for the "zero censorship" crowd because stuff gets deleted or locked all the time. It's an example of how you can produce something useful despite a raging ideological battleground over all sorts of topics.


I didn't say "zero censorship." I don't even know what that means for an encyclopedia.

Wikipedia has a model for user-generated content. It's much more resilient, open, unbiased and successful than social media. This isn't because they have some super-nuanced distinction between moderation and censorship; they never really needed to split that hair.

They have a model for collaboratively editing an encyclopedia, including lots of details and special cases that deal with disagreement, discontent and ideological battlegrounds.

They also have a different organisational and power structure. Wikipedia doesn't exist to sell ads, or some other purpose above the creation of an encyclopedia. Users/editors have a lot of power. Things happen in the open.

Between those two, they've done much better than Alphabet/FB/Twitter/etc. Wikipedia is the ultimate prize for censorship, narrative wars, disinformation, campaigning, activism and such. Despite this, and despite far fewer resources, it outperforms the commercial platforms. I don't think that's a coincidence.


I can point to particular pages that have failed to provide an accurate representation of their subject and are under the control of activists or interested groups.

I also get a bit tired of looking someone up and it has "so and so says this person is <insert bad thing>", claims that usually stack up about as well as that SPLC claim against Maajid Nawaz[1] did.

Given this, I find it hard to see how they're doing better than the other companies you mention.

[1] https://en.wikipedia.org/wiki/Majid_Nawaz#Claim_by_Southern_...


I agree. If Twitter reopened its API (properly) then "userland" moderation tools would (could) be easier to implement and that might tackle this problem.


I disagree. Comments with a certain tone set the mood for the entire forum, generally leading to a poor overall user experience.


Is your concern answered by ensuring that the default experience of a forum is moderated, until someone explicitly takes off the covers? Otherwise I can't fathom where you disagree with GP.


What does that mean? Comments with a certain tone set the mood anyway; that's how it works in general. It's just that you'd prefer "certain" moods over others.


> It's just you'd prefer "certain" moods over others

Don't trivialize it as some personal preference around moods. It's much more than that.

Stuff like death threats, doxxing, child porn, harassment are not just "moods you don't like".


Sure. I am simply opposing the use of "certain". The way it's written, it could mean literally anything.


Sounds like things for the legal system to handle.


In tree-style conversations (Twitter, Reddit) they are contained to subtrees of already banned posts.


> the entire forum

That is the wrong view on a global communication platform. It's like saying "a certain tone sets the mood for the entire telephone system".

These things should be seen more as silos, subcultures or whatever.

Unless you expose yourself to the firehose of globally popular content.


Telephones are 1:1, forums are 1:many.


Then pick a different medium. A single tabloid doesn't tarnish every single news industry. Rude mailing lists don't invalidate private email conversations. Also, conference calls are a thing.

Anyway, analogies are imperfect, please look in the direction where I am gesturing, not at my exact words.

The point here (and of the entire conversation) is that you shouldn't judge a medium by its worst imaginable actors as long as you're given the right tools that allow you to use that medium undisturbed, effectively putting them into a different silo. Today twitter allows a very crude, imperfect approximation of this by following people that post decent content and setting the homepage to "latest posts" instead of "top tweets". Ideally we'd have better tools than that.


But the thing is, there's no "outrage Twitter" that's distinct from "calm Twitter." There's just Twitter. Since the value of a social network is in its population, the natural inclination will be towards a reduction of networks, not a proliferation of them.


If you don't see those comments then how could they set a tone?


I'm of the same view.

If moderation must be done then let me do it for myself. Give me the tools.

A central moderating authority cannot be trusted at all.



