Late last week, the X social media platform rolled out a new “location indicator” tool, plans for which had first been announced in October. Suddenly, it became much easier to get information on where in the world the site’s users are actually posting from, theoretically helping to illuminate inauthentic behavior, including attempted foreign influence.
“It is clear that information operations and coordinated inauthentic behavior will not cease.”
As the tool started to reveal accounts’ information, the effect was like watching the Scooby-Doo kids pull one disguise after another off the villain of the week. Improbably lonely and outgoing female American GI with an AI-generated profile picture? Apparently based in Vietnam. Horrified southern conservative female voters with surprising opinions about India-Pakistan relations? Based somewhere in South Asia. Scottish independence accounts? Weirdly, many appear to be based in Iran. Hilarious and alarming though it all was, it is just the latest indication of one of the site’s oldest problems.
The tool, officially unveiled on November 22 by X’s head of product Nikita Bier, is extremely simple to use: when you click the date in a user’s profile showing when they signed up for the site, you’re taken to an “About This Account” page, which provides a country for where a user is based, and a section that reads “connected via,” which can show if the account signed on via Twitter’s website or via a mobile application downloaded from a specific country’s app store. There are undoubtedly still bugs—this is Twitter, after all—with the location indicator seemingly not accounting for users who connect using VPNs. After user complaints, Bier promised late on Sunday a speedy update to bring accuracy up to, he wrote, “nearly 99.99%.”
As the New York Times noted, the tool quickly illuminated how many MAGA-supporting accounts are not actually based in the U.S., including one user called “MAGA Nation X” with nearly 400,000 followers, whose location data showed it is based in a non-EU Eastern European country. The Times found similar accounts based in Russia, Nigeria, and India.
While the novel tool certainly created a splash—and highlighted many men interacting with obviously fake accounts pretending to be lonely, attractive, extremely chipper young women—X has struggled for years with issues of coordinated inauthentic behavior. In 2018, for instance, before Musk’s takeover of the company, then-Twitter released a report on what the company called “potential information operations” on the site, meaning “foreign interference in political conversations.” The report noted how the Internet Research Agency, a Kremlin-backed troll farm, made use of the site, and uncovered “another attempted influence campaign… potentially located within Iran.”
The 2018 report was paired with the company’s release of a 10 million tweet dataset of posts it thought were associated with coordinated influence campaigns. “It is clear that information operations and coordinated inauthentic behavior will not cease,” the company wrote. “These types of tactics have been around for far longer than Twitter has existed—they will adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge.”
“One of the major problems with social media is how easy it is to create fake personas with real influence, whether it be bots (fully automated spam) or sockpuppet accounts (where someone pretends to be something they’re not),” warns Joan Donovan, a disinformation researcher who co-directs the Critical Internet Studies Institute and co-authored the book Meme Wars. “Engagement hacking has long been a strategy of media manipulators, who make money off of operating a combination of tactics that leverage platform vulnerabilities.”
Since 2018, X and other social media companies have drastically rolled back content moderation, creating a perfect environment for this already-existing problem to thrive. Under Musk, the company stopped trying to police Covid misinformation, dissolved its Trust and Safety Council, and, along with Meta and Amazon, laid waste to teams who monitored and helped take down disinformation and hate speech. X also dismantled the company’s blue badge verification system and replaced it with a version where anyone who pays to post can get a blue checkmark, making it significantly less useful as an identifier of authenticity. X’s remaining Civic Integrity policy puts much more onus on its users, inviting them to put Community Notes on inaccurate posts about elections, ballot measures, and the like.
While the revelations on X have been politically embarrassing for many accounts and the follower networks around them, Donovan says they could be a financial problem for the site. “Every social media company has known for a long time that allowing for greater transparency on location of accounts will shift how users interact with the account and perceive the motives of the account holder,” she says. When Facebook took steps to reveal similar data in 2020, Donovan says, “advertisers began to realize that they were paying premium prices for low quality engagement.”
The companies “have long sought to hide flaws in their design to avoid provoking advertisers.” In that way, X’s new location tool, Donovan says, is “devastating.”