Is that political post real—or a money-making scam?
Election cycles in the United States see tremendous surges in political content and engagement on social media, a phenomenon that deceptive networks exploit, using inauthentic content to advance political goals or simply to make money.
A new study has shed light on how fake social media accounts fabricated by deceptive groups can reach—and potentially influence—millions of users with political posts.
Through an academic collaboration with Meta, the parent company of Facebook and Instagram, Stanford researchers examined nearly 50 misleading online networks that operated during the 2020 United States national elections. The deceptive networks targeted real users through posts from fake accounts using bogus names and profile pictures, among other tactics.
A deep analysis of the deceptive networks yielded novel and surprising findings about their reach, origins, and motives. First, the networks’ reach proved considerable: At least 37 million unique Facebook users and 3 million unique Instagram users—representing 15% and 2% of active adult users on each respective platform—viewed content generated by deceptive accounts. The networks themselves turned out to be global in origin, operating in countries spanning six continents. As for motive, only a third of the identified networks had evident political influence goals; the remaining two-thirds were financially motivated networks—essentially scams—that produced political content as a lure to capture users’ attention, and thus profit, during election season.
By learning more about these networks, the study authors hope to inform efforts to combat political disinformation and to shore up trust that social media users are real people expressing genuine views.
“Social media have obviously become an integral part of political discourse in the United States and around the world, but users on these platforms are vulnerable to duplicitous actors,” said Jennifer Pan, professor of communication at the Stanford School of Humanities and Sciences (H&S) and senior corresponding author for the study, published April 6 in Nature Human Behaviour. “By exposing the workings of some of these deceptive online networks, we can help address their potential distortion of public preferences.”
Special access
The new Stanford-led study was born out of a larger research effort called the US 2020 Facebook and Instagram Election Study (FIES), which continues to analyze the political impact of the platforms on adults in the United States during the 2020 national elections. During this time, Meta removed 49 networks for “inauthentic behavior.” That behavior included using fake accounts to mislead users and platforms about content popularity, the true purposes of community pages and events, and the identities of the people involved.
Many academic studies have sought to gauge the political influence that social media have on modern online audiences. However, nearly all previous research on this topic has had to rely on inferences from publicly available user activity, because only a platform owner, such as Meta, has access to full user behavior. For FIES, Meta anonymized this user data to protect privacy and granted authorized access to Stanford and other scholarly researchers.
“The unique thing with our study as part of FIES is that we have actual data on user exposure to this deceptive type of content,” said Pan, who is also a senior fellow at the Freeman Spogli Institute for International Studies. “Usually, we have to approximate exposure based on if a user is following a certain account. But with this data, we know what’s getting seen, and that enables us to more precisely measure behavior, activity, and potential impact.”
Honest sharing of dishonest content?
Based on this analysis, the study found, for the first time, that wide dissemination of the disingenuous posts depended on real users interacting with the content. The networks’ simply posting voluminous content was not sufficient to drive broad reach.
“The networks that get more outsized reach have to gain traction among ordinary users to reshare and circulate their content,” Pan said. “If we care about containing or constraining these types of networks and their activities, it doesn’t seem sufficient to just focus on the networks themselves. We also need to figure out why certain users spread this information.”
As for the users who interacted most with this deceptive content, the study found they tended to be older, more conservative, and more frequently exposed to untrustworthy content. They also spent more time on Facebook than average users did.
An open question is how these users and others might react if they learned that the supposedly bona fide political content they are sharing is not a real person’s opinion but, often, profit-driven trickery, in many cases from organizations of unknown repute located outside the United States.
“Would users’ behavior change if they realize this may be a scam network?” Pan asked. “If users are resharing this content, maybe it’s because they think it’s genuine content that resonates with their political beliefs. Maybe these users would approach the content differently if they recognized it is not from a politically interested actor but is clickbait produced by someone whose motives are commercial rather than political.”
Another significant takeaway from the study is that very few of the deceptive networks actually succeeded in capturing large audiences on social media. Of the 49 identified networks, just three collectively reached more than 70% of the affected users. Examples of these “successful” deceptive networks banned by Meta include a financially motivated, Kosovo-based network that shared copied content from Fox News, and the right-leaning political marketing firm Rally Forge, based in Arizona, which created thousands of inauthentic posts through fake profiles.
Looking ahead to the 2026 midterm elections and 2028 national elections, Pan noted that the scale of this activity warrants attention. “We tend to think of disinformation as ideologically motivated, but a significant share of it may simply be financially motivated content that happens to exploit political divisions,” Pan said. “Greater public awareness of how these accounts operate seems like a reasonable step to take.”