I think this gets almost all the way there but not quite — there is one more vital point:
How we act depends on our environment and incentives.
It is possible to build environments and incentives that make us better versions of ourselves. Just like GPT-3, we can all be primed (and we all are primed all the time, by every system we use).
The way we got from small tribes to huge civilizations is by figuring out how to create those systems and environments.
So it's not about "reaching for the stars" or complaining about how humanity is too flawed. It's about carefully building the systems that take us to those stars!
But there are communities that make it work, and I believe these are the ones negatively affected by the blanket rules we try to establish for social media through such systems.
I don't believe any system can be the solution, and a lot of communities don't require one either. I don't know what differentiates these groups from others; probably more detachment from content and statements. There is also simply a difference between people who embraced social media to put themselves out there and ghosts who post under multiple pseudonyms. Content creators are a different beast: they have to be more public on the net, but that comes with its own problems again.
I believe it is behavior and education that would make social media work, but not through the usual approaches. I don't think crass expressions with forbidden words or topics are a problem; on the contrary, they can be therapeutic. I mention this only because it will be the first thing some people try to change: ban some language, ban some content, the usual stuff.
- by “failure of the algorithm”, the vocal minority actually means “a lack of algorithmic oppression and treatment according to how well a piece of speech aligns with academic methodologies and values”.
- average people are not “good”; many are collectivist, with varying capacity for understanding individualism and logic. They cannot function normally where constant virtue signaling, prominent display of self-established identities, and the alignments described above are required, as on Twitter. In such environments, people feel and express pain, and make efforts to recreate their default operating environments, overcoming the systems if need be.
- introducing such normal but “incapable” people (in fact honest and naive, just not post-grad types) into social media is what caused the current mess, described by the vocal minority as algorithm failures and echo-chamber effects, and by mainstream people as elitism and sometimes conspiracy.
Algorithmically oppressing and brainwashing users into alignment with such values would be possible, I think (and sometimes I consider trying it for my own interests; imagine a world where every pixel seems to have had 0x008000 subtracted from it: it's a weird personal preference of mine that I dislike highly saturated greens), but I also think an important question of ethics has to be discussed before we push for it, especially with respect to political speech.
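(To make that pixel arithmetic concrete: subtracting 0x008000 from a 0xRRGGBB value is just taking 0x80 off the green channel. A minimal sketch, assuming per-channel clamping rather than raw integer subtraction, which would underflow into the red byte whenever green is below 0x80; the function name is made up for illustration.)

    # Toy sketch: dim the green channel of a 0xRRGGBB pixel by 0x80,
    # clamping at zero so the subtraction cannot borrow from the red byte.
    def desaturate_green(pixel: int, amount: int = 0x80) -> int:
        r = (pixel >> 16) & 0xFF
        g = (pixel >> 8) & 0xFF
        b = pixel & 0xFF
        g = max(0, g - amount)  # clamp instead of raw "pixel - 0x008000"
        return (r << 16) | (g << 8) | b

    assert desaturate_green(0x00FF00) == 0x007F00  # pure green is dimmed
    assert desaturate_green(0x112233) == 0x110033  # low green clamps to zero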
How do you go about determining what is collaborative or "bridging" discourse, though? That seems like a tricky task. You have to first identify the topic being discussed and then make assumptions based on past user metrics about what their biases are. Seems like you would have to have a lot of pre-existing data specific to each user before you could proceed. Nascent social networks couldn't pull this off.
This also seems gameable. Suppose you have blue and green camps, as described in the linked paper. If content gets ranked highly when it receives approval from both blue and green users, then one camp may decide to promote its own opinion by purposefully engaging negatively with the opposing content in order to bury it.
This seems no different from "popularity-based" ranking mechanisms (e.g. Reddit), where the downvote functionality can be used to suppress other content.
Maybe the assumption is that both camps will be abusing the negative interactions? But you can always abuse more.
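To make the attack concrete, here is a toy sketch. The scoring rule is my own strawman (the lower of the two camps' average approval); the linked paper's actual mechanism is surely more sophisticated, and all names here are made up for illustration.

    # Toy model, NOT the paper's algorithm: rank by the approval of the
    # less-approving camp, so content needs support from both sides.
    def bridging_score(blue_votes: list[int], green_votes: list[int]) -> float:
        """Votes are +1 (approve) / -1 (disapprove); the worse camp sets the score."""
        blue = sum(blue_votes) / len(blue_votes)
        green = sum(green_votes) / len(green_votes)
        return min(blue, green)

    # An item genuinely liked by majorities of both camps:
    print(bridging_score([1, 1, 1, -1], [1, 1, -1, 1]))     # 0.5, surfaced

    # The same item after the green camp coordinates to brigade it:
    print(bridging_score([1, 1, 1, -1], [-1, -1, -1, -1]))  # -1.0, buried

Telling organic disapproval apart from coordinated disapproval would itself require per-user history, which circles back to the data problem above.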
Yes, the algorithm is not the problem alone, but a good algorithm can help fix the problems — since it creates the "loss function" (the incentives) for the humans using the platform (I go into that in more detail here https://twitter.com/metaviv/status/1529879799862378497 and here https://www.belfercenter.org/publication/bridging-based-rank... for those who are curious).