It’s no secret that social media has devolved into a toxic cesspool of disinformation and hate speech.
Without any meaningful pressure to come up with effective guardrails and enforceable policies, social media platforms quickly turned into rage-filled and polarizing echo chambers with one purpose: to keep users hooked on outrage and brain rot so they can display more ads.
As detailed in a yet-to-be-peer-reviewed study, coauthors Petter Törnberg, an assistant professor of AI and social media, and research assistant Maik Larooij simulated a social media platform populated entirely by AI chatbots, powered by OpenAI's GPT-4o large language model, to see whether anything could stop social media from turning into echo chambers.
They tested six specific intervention strategies — including switching to chronological news feeds, boosting diverse viewpoints, hiding social statistics like follower counts, and removing account bios — to stop the platform from turning into a polarized hellscape.
To their dismay, none of the interventions worked to a satisfactory degree, and only some showed modest effects. Worse yet, as Ars Technica reports, some of them made the situation even worse.
For instance, ordering the news feed chronologically reduced attention inequality but floated extreme content to the top.
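To make that trade-off concrete, here is a toy sketch — not the study's actual model, and every name in it is hypothetical — contrasting a chronological feed with an engagement-ranked one, plus a Gini coefficient as one common way to measure how unevenly attention is spread across posts:

```python
from dataclasses import dataclass

@dataclass
class Post:
    timestamp: int   # higher = newer
    engagement: int  # likes/reshares the post has attracted

def chronological_feed(posts):
    """Newest first, ignoring engagement entirely."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts):
    """Most-engaged first, the default ranking on most platforms."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def gini(values):
    """Gini coefficient: 0 = attention spread evenly, ~1 = concentrated on one post."""
    vals = sorted(values)
    n = len(vals)
    total = sum(vals)
    cum = sum((i + 1) * v for i, v in enumerate(vals))
    return (2 * cum) / (n * total) - (n + 1) / n

# A fresh, low-engagement post tops the chronological feed even if it is extreme,
# while the engagement feed concentrates attention on already-viral posts.
posts = [Post(timestamp=1, engagement=100), Post(timestamp=2, engagement=10),
         Post(timestamp=3, engagement=1)]
print(chronological_feed(posts)[0].timestamp)   # newest post leads
print(engagement_feed(posts)[0].engagement)     # most-engaged post leads
```

The point of the sketch is only the mechanism the researchers describe: chronological ordering flattens the attention distribution (lower Gini across posts), but it also gives brand-new extreme posts the same top slot as everything else.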
It’s a sobering reality that flies in the face of companies’ promises of constructing a “digital town square” — as billionaire and X owner Elon Musk once called it — where everybody coexists peacefully.
With or without intervention, social media platforms may be doomed to devolve into a highly polarized breeding ground for extremist thinking.
“Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?” Törnberg asked Ars.
Törnberg admitted that using AI isn’t a “perfect solution” due to “all kind of biases and limitations.” However, he said, the tech can capture “human behavior in a more plausible way.”
Törnberg explained that it isn’t just provocative pieces of content that produce highly polarized online communities.