Over at the Atlantic, Jonathan Haidt has a piece on Why the Past 10 Years of American Life Have Been Uniquely Stupid. I don’t agree with everything in it, and some of the examples feel like a bit of a reach, but the central premise – namely, that social media (and especially algorithmic social media) is rapidly eroding the social fabric and making us collectively dumber – feels pretty spot on, and he backs it up with citations from research on the matter.
But gradually, social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell, they became more adept at putting on performances and managing their personal brand—activities that might impress others but that do not deepen friendships in the way that a private phone conversation will.
Once social-media platforms had trained users to spend more time performing and less time connecting, the stage was set for the major transformation, which began in 2009: the intensification of viral dynamics.
Further:
By 2013, social media had become a new game, with dynamics unlike those in 2008. If you were skillful or lucky, you might create a post that would “go viral” and make you “internet famous” for a few days. If you blundered, you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers, and you in turn contributed thousands of clicks to the game.
This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment, and their prediction of how others would react to each new action. One of the engineers at Twitter who had worked on the “Retweet” button later revealed that he regretted his contribution because it had made Twitter a nastier place. As he watched Twitter mobs forming through the use of the new tool, he thought to himself, “We might have just handed a 4-year-old a loaded weapon.”
As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.
And let’s not forget what current trends in AI may soon enable:
Now, however, artificial intelligence is close to enabling the limitless spread of highly believable disinformation. The AI program GPT-3 is already so good that you can give it a topic and a tone and it will spit out as many essays as you like, typically with perfect grammar and a surprising level of coherence. In a year or two, when the program is upgraded to GPT-4, it will become far more capable. In a 2020 essay titled “The Supply of Disinformation Will Soon Be Infinite,” Renée DiResta, the research manager at the Stanford Internet Observatory, explained that spreading falsehoods—whether through text, images, or deep-fake videos—will quickly become inconceivably easy. (She co-wrote the essay with GPT-3.)
It’s a bit of a long read, and it’s okay if you don’t agree with every point, but there’s a lot there worth considering. He does end the piece on a somewhat hopeful note, offering some suggestions on things that could be done to help the situation. I’m a bit less optimistic that we’ll be able to implement any of those reforms, but unfortunately I have no other ideas for how to come back from our current state.