(Photo credit: Gilles Lambert on Unsplash)
As someone who spends a fair amount of time online and follows the discourse and digital trends that surface there, I believe two current realities will lead to an off-ramping of people from their social networks: people will spend fewer hours online or abandon their accounts altogether.
First is the seemingly never-ending cascade of negative news related to the second Trump administration. I have already watched a number of friends say “goodbye” and share final messages on their feeds—citing an inescapable barrage of stories related to Donald Trump as the motivation—and invite their contacts to stay in touch by email.
Second is a broader realization that endless hours online—and the often negative, divisive interactions people have while there—are robbing us of our attention spans and harming our mental health. This realization, and the anxiety around it, are growing. A recent study led by the University of Cambridge found that teens with mental health challenges spend more time online, and the video below by data journalist John Burn-Murdoch of the Financial Times posits something even more ominous: that social media and smartphones are lowering our intelligence.
If this exodus from—or dialing back of—social media continues and accelerates, it will pose an existential threat to the companies that connect us digitally. At this early stage, they should take strategic, forward-looking steps to address this potential outcome before it snowballs into a full-blown reality.
The Problems
Companies need to address the growing number of fake profiles on their platforms. With the wider adoption of generative AI, many networks have seen a rise in bots and fake accounts that convincingly behave and converse like humans. But awareness of these accounts (and of how to spot them) is increasing, and they are driving users to abandon their social media feeds for more meaningful forms of engagement.
Social media’s DNA was authentic human connection, and fake accounts run afoul of this in the most egregious way. They sow discord, scam users, spread misinformation, and harvest data, often for dubious ends. Unfortunately, there is little incentive to purge these fake accounts because they inflate the engagement numbers that drive value for companies. But there is no point in remaining on a platform increasingly polluted by bots and fraudsters, and companies must realize this to safeguard their long-term survival. A house built on a hollow foundation will ultimately collapse, and as an advertiser, I would prefer a smaller number of real users to a larger number of fake ones.
Algorithms must also curate content better, offering suggestions that leave the user feeling inspired, empowered, and positive. Right now, default “For You” settings based on imprecise algorithms seem to be doing the opposite, and an increasing number of young adults are experiencing mental health issues as a result.
Discourse is often heated, negative, and divisive. In this environment, relying on time-on-page and engagement metrics isn’t always an accurate measure of preference: people watch or interact with content for a variety of reasons, and enjoyment and genuine interest are not always among them. Ultimately, ensuring users log off feeling inspired and hopeful is the best way to keep them coming back.
The Solutions
If “For You” feeds are to remain the default, one fix for the algorithm could be a Tinder-like left/right swipe function that tells the platform what you’re interested in seeing—and what you’re not. This simple reflex, popularized by a dating app, takes very little intellectual investment and comes naturally to many people. Currently, “liking” a post is the easiest way to signal interest, but there are reasons someone may be reluctant to like a post even when they enjoy the content and want to see more of it, and there is no equivalent way to signal the opposite when you come across content you don’t like. An even simpler solution would be to make the default feed show content from accounts you follow rather than recommendations, and make the latter available only when explicitly chosen.
To address both problems outlined above, a verification system that confirms a user—and their representation online (i.e., their profile photo)—is real could filter out fake accounts (problem one) and help keep online conversations civil and, in most cases, positive, because it adds an element of accountability (a solution to problem two). On the latter, consider how much more positive posts on LinkedIn are compared to those on X/Twitter. Anonymity brings out the worst in human nature; transparency does the opposite. This one change could reshape the social media experience for millions of people and clean up these often dark alleys of the internet.