#3: The Illusory Internet
Why there might be life in the Dead Internet Theory
I have been using Twitter less frequently than I used to, and I now find myself torn between not wanting to engage with the platform at all and the sense of loss aversion that Twitter evokes through the various psychological design levers it enjoys pulling (I have a hunch they do an awful lot more than dopamine spiking through engagement buttons).
I tend to find that during these drier periods, I’ll get random notifications from people engaging with some of my older tweets. Today, someone engaged with a monologue I shared from Hideo Kojima’s Metal Gear Solid 2. Released in 2001, it features horrifyingly accurate predictions about social media before social media even existed, parallels with the 2016 election 15 years before it happened, and observations on the nature of memes before the word entered the broader public lexicon.
Some tl;dw quotes from the monologue:
“The digital society furthers human flaws and selectively rewards development of convenient half-truths.”
“The untested truths spun by different interests continue to churn and accumulate in the sandbox of political correctness and value systems.”
“Everyone withdraws into their own small gated community, afraid of a larger forum; they stay inside their little ponds, leaking whatever "truth" suits them into the growing cesspool of society at large.”
“The different cardinal truths neither clash nor mesh, no one is invalidated but no one is right.”
“The world is being engulfed in "Truth". And this is the way the world ends. Not with a BANG, but with a whimper.”
After rewatching the video on the lazy Saturday afternoon it arrived in my notifications, I decided to look into the story behind the monologue and found myself in a dark little corner of the web called “Agora Road’s Macintosh Cafe”, which hosted a provocative thread on the Dead Internet Theory, written by someone calling themselves “IlluminatiPirate”, that referenced the aforementioned scene from Metal Gear Solid.
Some choice language and expressions aside1, the thread raises some interesting points that are worth revisiting at this point in the history of the web. Many outlets dismissed the theory in previous years, but now that we’re getting a feel for where this whole artificial intelligence thing is going, it deserves another look.
“The internet feels empty and devoid of people”
Initially, I also dismissed the theory, particularly this part of the author’s thread, but I quickly remembered what brought me here: a notification on an old tweet, following a pattern I’ve noticed arises whenever I spend time away from Twitter. The pattern roughly goes: time passes; an old tweet with an ok-ish amount of engagement for an account of my small size is liked or shared by an account I have never heard of before, with no real person in the avatar and a bio that looks similar to others that follow this pattern.
The Dead Internet Theory thread goes on to make a number of claims, some seemingly outlandish and difficult to verify, some unmistakably true, such as the claim that bots are generating more conversations and more content on social networks than we realise. Elon Musk’s public sounding of the alarm bell/attempt to back out of the Twitter deal/attempt to drive down the value of the Twitter deal, or some combination of all three, came to mind when reading through the theory.
As it turns out, bots account for 5% of Twitter users2, which doesn't sound too scary until you read that they account for 21%-29% of the content on the platform. Close to a third of the content isn’t made by humans.
“Ah, but that’s just a problem with Twitter,” you might say. I was shocked to learn that the problem is worse outside of Twitter’s walled garden: 64% of all internet traffic is bots3. To put it another way, humans are the minority on the web.
⌘+C, ⌘+V Opinions
We like to be liked. The Dead Internet Theory thread author made the following observation on human behaviour and how we interface with the web, and how it is prone to hijacking:
The internet is a fast way to get info, and info is what moves the mind, and the thing is, the mind likes recognition. When the "likes" were introduced without negative feedback they created a copy-feedback subconscious, they made it so only "positive" opinions be propagated (also accepted), and in it's way negative opinions to be obsolete.
Now everyone is too cowardly to have an opinion so they copy others they like, they are more likely to follow trends and say what others said, you can also see it with the paranoia of always wanting to listen to experts.
The fast feedback system of the net created a human obsession to be in with trends, getting away from it makes it so you always feel like you are missing out, to play it safe in a trend is more easy as you can copy what already is accepted. In this way, the internet and social media, which was supposed to democratise media by allowing users to create whatever content they wanted, has instead been hijacked by a powerful few.
Creation of original content is how the internet used to work. Anonymous people were willing to express their opinions and try radical or experimental things. More truly original content, uninfluenced by bots or paid influencers, was created due to anonymity as protection against negative feedback. On the old internet, you could start anew every time you posted something.
Now add bots to this. Make it so an opinion be repeated more and more, they are faster than us, so the positive feedback makes is so we copy the bots, and anonymity can't do anything against it because we can't influence the bot like we would a human, this is an easy weapon to manipulate people, so anyone with an agenda can use a bot, is designed in a way compared to how clickbaits are made, most won't read the content, this creates tv-like propaganda where they aren't influenced by the user and that puts bots at a great advantage over any other opinion because it wont change, and we are copying that.
To summarise: consensus can be manufactured and propagated into the collective networked mind of social media, which will then proceed to reward those who echo the sentiment back into the network, reciting their predefined opinions.
I genuinely set out without the intention of talking about ChatGPT, because everybody has it well covered, and we’d done so well to get this far into a piece like this without a clichéd nod to it. But we have to now, because no doubt you’ve already drawn the conclusion: the “oh dear God, what happens when we let that become part of this?” type of conclusion. It turns out there is already a subreddit populated purely by ChatGPT, both in terms of posts and replies to itself. The two things that worried me most were learning that:
If the usernames didn’t contain the letters “GPT” you could swear that it was normal human beings having a conversation on the web
And these conversations are running on GPT-2, an older and vastly inferior model
A few months ago, an interaction on Twitter left me very pissed. I have negative interactions from time to time and they genuinely don’t bother me, but this one did. I posted a UI concept for reading multiple branches of a conversation on Twitter - purely as something I would like to see, and by no means a suggestion for Twitter to take seriously as a solution.
It gained a decent amount of attention (once again, for an account of my humble size), and the response was 99% positive for the first few days. Then, about a week later, I woke up and carried out my usual bad habit of checking email, Reddit, and Twitter first thing, and noticed that I had 20+ notifications. Again, for an account of my size, 20+ means either something amazing has happened (likely some big account has shared something) or something bad has happened.
It was the latter. Sentiment on the tweet had completely flipped. Every single new notification took the shape of either a personal insult or sniping at the UI concept: no constructive critique, just some variation of “L take” and other short quote tweets and responses.
I tried to find the source - the one who had flipped the sentiment from 99% good to 100% bad, and I couldn’t find them. I looked at the profiles of the people who were behind the more vitriolic responses, and they all followed a similar pattern:
No real face in the avatar (or an AI-generated face, for that matter)
Poor ratio of following to followers
Followers always capped around the 100 mark
VAST majority of interactions on their timeline were the same sort of replies to other people
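The pattern above could be sketched as a crude bot-likeness heuristic. To be clear, this is a hypothetical illustration of my own checklist; the field names and thresholds are assumptions I’ve made for the sketch, not anything from Twitter’s actual API or a real detection system:

```python
from dataclasses import dataclass


@dataclass
class Profile:
    """A simplified view of a social media account (illustrative fields only)."""
    has_real_face_avatar: bool  # False if avatar is blank, a logo, or an AI face
    following: int
    followers: int
    reply_ratio: float  # fraction of the timeline that is replies to other people


def bot_signals(p: Profile) -> int:
    """Count how many of the four observed signals a profile matches."""
    signals = 0
    # 1. No real face in the avatar
    if not p.has_real_face_avatar:
        signals += 1
    # 2. Poor following-to-followers ratio (threshold is an assumption)
    if p.following / max(p.followers, 1) > 10:
        signals += 1
    # 3. Followers capped around the 100 mark
    if p.followers <= 100:
        signals += 1
    # 4. Timeline is overwhelmingly the same sort of replies
    if p.reply_ratio > 0.8:
        signals += 1
    return signals


suspect = Profile(has_real_face_avatar=False, following=1500,
                  followers=90, reply_ratio=0.95)
print(bot_signals(suspect))  # matches all four signals: prints 4
```

None of these signals is damning on its own (plenty of real people hide behind anonymous avatars), which is why the sketch counts signals rather than declaring a verdict.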
I am now entirely convinced that the interaction I had, that left me rattled in a way that I haven’t felt in quite some time on social media, was either entirely bots, or mostly orchestrated by bots in terms of setting the sentiment for others to follow. This reframing has been both helpful and unsettling.
I’ll try and bring this home to some sort of optimistic outlook. It seems there is at least some degree of validity to the Dead Internet Theory, in that bots are starting to run the show and set narratives to follow. The fact that you rarely see the words “this changed my mind, thank you” in online discussions is something of a red flag indicating that perhaps the web isn’t the best place to be having these types of discussions anyway. And, in a way, it’s slightly meditative to detach from the emotion of a heated discussion online knowing that there is a significant chance you’re talking to machine code.
The alternative path is to pivot towards true, human connection. I’m having more phone calls with people I know from Twitter than I ever have before, and it’s refreshing to have those conversations that aren’t being watched and scored by others.
The friction of the dial-up web, where technology kept us from being extremely online, was a flavour of how these things should have worked, so maybe it’s time to exercise some self-control and disconnect. Bots don’t have to be a problem in your everyday life if you don’t spend all of your everyday life with them.
1. Slightly meta sub point: I believe we need to get better at separating message from messenger. Pre-filtering happens all the time across online discussions, where the source determines people’s views on a topic to the point that the viewpoint itself rarely gets past the pre-filter for an honest assessment against the reader’s own values.
2. According to research conducted by Similarweb - https://www.similarweb.com/blog/insights/twitter-bot-research-news
3. According to research by Barracuda - https://assets.barracuda.com/assets/docs/dms/Bot_Attacks_report_vol1_EN.pdf