When Twitter faces Elon Musk in court this week in its lawsuit over his attempt to buy the company, the issue of bot accounts will likely be front and center. Musk has been trying to back out of the $44 billion deal for months, claiming the platform is filled with bots. A few months ago, in a series of tweets, Twitter CEO Parag Agrawal gave a detailed description of the complexity of estimating spam on the platform.
Musk’s response: “Have you tried just calling them?”
Twitter has a deep bench of engineers and access to trillions of data points on its more than 300 million monthly active users. Why has a company this size struggled to clean up its platform?
Back in 2020, I led a project at RAND that developed tools to detect Russian interference in U.S. elections on Twitter. Our team was small (fewer than 10 people), and we had access to only 2.2 million tweets from 630,391 unique accounts. Within a few months, we were able to detect patterns of Russian bots and trolls on the platform that appeared to be interfering with American elections. If RAND could pull this off in a few months, why couldn’t Twitter do the same at a larger scale?
Here’s one hypothesis: Twitter might not want to look too closely at this problem, because doing so would mean removing accounts and reducing the number of reported “active users” on the platform.
Most of Twitter’s revenue comes from advertisers. And it is probably safe to assume that most of these advertisers are paying Twitter to display ads to real human beings, not bots or Russian trolls masquerading as Americans. If Twitter removed more of these inauthentic accounts, it would ding its “active user” metrics, which drive advertising revenue, the source of value for the platform.
Twitter is not the only social media company with this problem. Back in 2017, Facebook claimed that ads on its platforms could reach 41 million Americans between the ages of 18 and 24. The problem: the U.S. Census Bureau reported that only 31 million Americans in this age group existed. Facebook is now facing a class-action lawsuit related to exaggerating its audience.
Put simply, social media companies like Twitter and Facebook are not incentivized to look too closely at the problem of bots, trolls, and inauthentic accounts. The latest claims from whistleblower Peiter “Mudge” Zatko, Twitter’s former head of security, appear to support this point.
So, what could be done? Putting aside Musk’s critiques of Twitter, he may have a legitimate point when he calls for greater transparency about who is actually a real person on social media.
Third-party auditors could provide this transparency, ensuring that estimates of active users on these platforms are accurate and helping investors and advertisers alike make informed decisions. Independent researchers could also help inform public policy for the platforms, if they had access to quality data and the freedom to publish their results.
This colorful showdown between Elon Musk and Twitter could prompt a discussion about developing a more systematic and transparent method for ensuring that everyone knows how many real people are hanging around our digital town squares.
Marek N. Posard is a military sociologist at the nonprofit, nonpartisan RAND Corporation and an affiliate faculty member at the Pardee RAND Graduate School.
Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.