Jasmine Crockett, Taylor Swift, and Bots
Both targets of foreign-influence campaigns people keep falling for
I’ve studied a lot of high-level smear, propaganda, and disinformation campaigns over the past five years — the kind that researchers in digital forensics, OSINT, and computational social science have been warning governments about since at least the early 2010s.
While these campaigns are often strictly political — the 2016 Russian election-interference operations to support Donald Trump being the most heavily documented example (and confirmed by the U.S. intelligence community, the bipartisan Senate Intelligence Committee report, and the Mueller investigation) — many are deeply personal. They frequently target individuals, especially women, and often still carry political or ideological overtones.
You’re probably aware of some of the biggest, overtly personal smear campaigns: the coordinated harassment and mass-amplification ops that targeted Meghan Markle; the bot-driven and troll-driven influence networks that fueled hatred against Amber Heard during the Depp–Heard trial (which researchers, including Christopher Bouzy, analyzed as one of the largest coordinated harassment campaigns on record); the waves of inauthentic accounts pushing narratives about Taylor Swift being a Nazi; and the sustained targeting of Blake Lively.
These campaigns share a pattern recognizable to anyone who studies online manipulation: disproportionate scale, manufactured outrage, and algorithmic exploitation.
Hell, even I was the target of a multi-million-dollar smear campaign — documented through Florida public records, vendor contracts, and data forensics — while I was just a scientist working in mid-management at the Florida Department of Health. The campaign against me involved bot amplification, sock-puppet networks, paid influence accounts, coordinated right-wing media narratives, and state-level officials weaponizing disinformation to discredit a whistleblower.
While take-down campaigns are most common, there are also operations designed to artificially build people up.
In the lead-up to his humiliating and short-lived 2024 presidential run, for example, Ron DeSantis made what Christopher Bouzy (CEO of Bot Sentinel, a social-media integrity and data-analysis firm) publicly identified as the single largest purchase of foreign bots Bot Sentinel has ever recorded. The goal wasn’t only to boost engagement numbers — it was to create the illusion of grassroots enthusiasm and inevitability where none existed.
And then there are campaigns meant to sow division, which intelligence agencies have long described as “chaos operations.” These are ops that attack all sides of an issue, both for and against a policy or a person, with the goal of manufacturing conflict, eroding public trust, and tearing communities apart.
Russia used this tactic extensively on Facebook in 2016; Iran and Saudi Arabia have run similar ops; countless smaller state and non-state actors replicate the strategy today because it works and because the platforms make it cheap.
The point of these campaigns isn’t just to flood the zone — though “flood-the-zone-with-shit,” as Steve Bannon famously put it, is part of the strategy. The deeper goal is to persuade enough real people, through repetition and emotional salience, that the propaganda appears self-sustaining. Once genuine users start repeating the message for free — and with conviction — the operation becomes nearly unstoppable.
We can identify these campaigns by analyzing clusters of accounts posting similar messaging; the timing and rhythm of their posts; their creation dates (often mass-generated in short windows); their posting histories; language patterns; what other topics they engage with; geolocation metadata when available; and in some cases, their links to real individuals or firms that specialize in influence operations.
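None of this requires exotic tooling. As a rough illustration, assuming you have already pulled posts into simple records with hypothetical fields like author, text, created_at, and account_created_at, a few of those signals can be combined into a crude per-account flag count using nothing but the Python standard library:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical export format: one dict per post, e.g.
# {"author": "...", "text": "...", "created_at": datetime(...),
#  "account_created_at": datetime(...)}

def red_flags(posts, now=None):
    """Count crude per-account red flags: brand-new account, implausible
    posting rate, and a feed made mostly of copy-pasted talking points."""
    now = now or datetime.utcnow()
    by_author = defaultdict(list)
    for p in posts:
        by_author[p["author"]].append(p)

    # How many distinct accounts posted each normalized text?
    text_authors = defaultdict(set)
    for p in posts:
        text_authors[" ".join(p["text"].lower().split())].add(p["author"])

    flags = {}
    for author, items in by_author.items():
        score = 0
        if now - items[0]["account_created_at"] < timedelta(days=30):
            score += 1                                  # created very recently
        span = max(p["created_at"] for p in items) - min(p["created_at"] for p in items)
        if len(items) / max(span.days, 1) > 50:
            score += 1                                  # implausible posting rate
        copied = sum(1 for p in items
                     if len(text_authors[" ".join(p["text"].lower().split())]) > 5)
        if copied / len(items) > 0.5:
            score += 1                                  # mostly duplicated phrasing
        flags[author] = score
    return flags
```

No single flag proves anything; the signals only become meaningful in combination, which is why professional analysts layer in many more of them (and far better data) than this sketch does.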
Anyone who’s even briefly studied inauthentic bot campaigns recognized the “Taylor Swift is a secret Nazi” operation immediately.
It exhibited all the markers: sudden mass amplification from recently created accounts, identical phrasing, manufactured outrage cycles, cross-platform seeding, and engagement bait designed to draw in real users. Yet despite being transparent nonsense, it still trended — and it still caused reputational harm to one of the most famous women on earth.
While I’m not going to lose sleep over a minor dent in a billionaire’s portfolio, these kinds of campaigns — especially those that disproportionately target women, often with misogynistic or racist undertones — should be illegal. Funding them should result in prosecution. The U.S. has virtually no regulatory framework addressing private-sector influence operations, even though other jurisdictions (most notably the EU) have begun passing laws requiring platforms to detect and disclose inauthentic behavior.
Despite many of these past campaigns being publicly identified, dissected, and reported, people continue to fall for them.
A new operation targeting Rep. Jasmine Crockett is currently underway. It appears to be a classic “both-sides” divisive campaign — not designed simply to attack Crockett, but to generate controversy around her, make Democratic voters distrust her, and frame her as too polarizing to be a viable Senate candidate in Texas.
Accounts expressing legitimate concern about her voting record related to Israel’s ongoing genocide of Gazans are being accused of being bots. Meanwhile, accounts attacking those people — myself included — are also being accused of being bots.
The truth is most likely somewhere in the middle — a hallmark of mixed-authenticity influence ops.
Israel is known to operate large, well-funded disinformation and psy-ops networks. The Israeli government and affiliated contractors have repeatedly been exposed for running covert influence operations (including the recently uncovered “Team Jorge” operation) targeting foreign audiences.
The DeSantis-aligned firm involved in smearing me was also Israeli-based, mirroring a growing trend of U.S. political actors outsourcing online manipulation to foreign digital-ops firms. Saudi Arabia, Nigeria, the UAE, Russia, China, and several Eastern European firms also run influence-ops for hire, a booming industry that researchers now track closely.
On all matters related to the ongoing genocide Israel is committing in Gaza, we expect extremely high levels of inauthentic account activity. With Israel publicly announcing plans to invest $750 million into expanded global propaganda and influence operations, this is not going to stop — and it is almost certainly going to intensify.
It’s hardly surprising that a wave of nameless, faceless accounts appeared overnight to champion Crockett’s record of supporting genocide and to silence legitimate criticism by wielding accusations of racism and/or misogyny as a shield.
Simultaneously, a separate swarm of inauthentic accounts, likely financed by conservative operatives, is deploying explicitly racist narratives to discredit Crockett’s qualifications and character.
There is a clear distinction between legitimate scrutiny of a Congresswoman’s voting record on a deeply controversial topic, and the blatantly racist attacks targeting Crockett’s IQ and “class.” But the infusion of false narratives is already alienating both enthusiastic supporters and those only loosely engaged.
Until we teach social media users how to recognize coordinated fake campaigns, these operations will continue to ruin careers, wreck lives, and sow widespread discord.
So here is my best effort to educate users on how to spot these campaigns.
How to Spot a Coordinated Disinformation or Smear Campaign
A practical explainer for anyone who wants to understand how these operations actually work — and how to recognize them before they go viral.
Coordinated disinformation campaigns aren’t always sophisticated. Most aren’t funded by intelligence agencies or run out of covert bunkers. Many are farmed out to private firms in Israel, Eastern Europe, Nigeria, or the Gulf states for a few thousand dollars, while others are run by political campaigns, dark-money PACs, or wealthy individuals with a grudge.
Despite their differences, these campaigns share core behavioral patterns, and once you know what to look for, they become almost impossible to miss.
Here’s the breakdown.
1. The Messaging Arrives All at Once
The first sign of an influence operation is sudden uniformity.
Real people don’t talk in sync. Bots do.
Look for:
Many accounts using the exact same phrasing within minutes or hours
A narrative that seems to appear out of nowhere
Repetition of a specific talking point that no one was discussing the day before
Dozens or hundreds of accounts circling the same screenshot, quote, or video fragment
Researchers call this narrative seeding — planting a storyline across multiple accounts simultaneously so it looks like a spontaneous grassroots conversation.
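If you want to check for this yourself, a rough sketch (using the same hypothetical post records as above) is to group posts by normalized text and flag any phrase that many distinct accounts pushed inside a short window:

```python
from collections import defaultdict

def find_seeded_phrases(posts, min_accounts=10, window_hours=6):
    """Flag normalized post texts shared by many distinct accounts
    within a short time window (a crude narrative-seeding check)."""
    groups = defaultdict(list)
    for p in posts:
        key = " ".join(p["text"].lower().split())   # collapse case and whitespace
        groups[key].append(p)

    seeded = []
    for text, items in groups.items():
        authors = {p["author"] for p in items}
        if len(authors) < min_accounts:
            continue
        times = sorted(p["created_at"] for p in items)
        span_hours = (times[-1] - times[0]).total_seconds() / 3600
        if span_hours <= window_hours:
            seeded.append((text, len(authors), round(span_hours, 1)))
    return sorted(seeded, key=lambda t: -t[1])
```

Genuine conversations rarely produce dozens of accounts typing the same sentence inside the same afternoon. Fuzzy or embedding-based matching catches paraphrased seeding too, but exact duplicates alone are usually telling.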
2. The Accounts Were Created Recently — or in Suspicious Clusters
Most bot networks are created in batches.
Signs include:
A sudden wave of new accounts created on the same day or within the same week
Profiles with very few followers but very high posting frequency
Accounts with randomly generated usernames or recycled profile pictures
A timeline of activity that suggests automation (posting every few minutes, 24/7)
Even a handful of recently created accounts clustered around a single narrative is a red flag. When hundreds appear, it’s almost certainly coordinated.
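Creation-date clustering is just as easy to rough out. A sketch, assuming each account record carries a hypothetical account_created_at timestamp: count sign-ups per day among the accounts in a conversation and flag the statistical outliers.

```python
from collections import Counter
from statistics import mean, pstdev

def creation_bursts(accounts, z_threshold=3.0):
    """Flag days on which an unusually large share of the accounts
    in this conversation were created (a sign of batch registration)."""
    per_day = Counter(a["account_created_at"].date() for a in accounts)
    counts = list(per_day.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), pstdev(counts) or 1.0
    return sorted(
        (day, n) for day, n in per_day.items()
        if (n - mu) / sigma >= z_threshold
    )
```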
3. The Accounts Post About Almost Nothing Else
Authentic users have varied interests. Bots don’t.
Inauthentic accounts tend to:
Post almost exclusively about one topic (e.g., immigration, a celebrity smear, a political scandal)
Amplify each other and only each other
Avoid organic interactions (jokes, personal updates, photos, replies not tied to the narrative)
Swarm a topic intensely and then go silent
Look for a narrow “content bandwidth.” It’s one of the easiest ways analysts identify influence operations.
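One crude proxy for content bandwidth, assuming the same hypothetical records grouped by author: the share of an account’s posts that mention any term from a single topic keyword list. The keywords below are placeholders you would tailor to the narrative being studied.

```python
def topic_share(posts_by_author, topic_terms):
    """Fraction of each account's posts that touch one topic.
    Values near 1.0 across many accounts = suspiciously narrow bandwidth."""
    terms = [t.lower() for t in topic_terms]
    shares = {}
    for author, posts in posts_by_author.items():
        hits = sum(1 for p in posts if any(t in p["text"].lower() for t in terms))
        shares[author] = hits / len(posts)
    return shares

# Example with placeholder keywords, given posts grouped as {author: [post, ...]}:
# topic_share(by_author, ["crockett", "senate", "primary"])
```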
4. Engagement Ratios Make No Sense
If an account with six followers, no photo, and no bio, created last Thursday, is getting hundreds of likes, retweets, or replies, something is wrong.
Bots amplify bots. Then real people accidentally amplify both.
This is how narratives get artificially boosted into Trending sections.
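That mismatch can be expressed numerically. A sketch, assuming each post record also carries hypothetical likes, reposts, and followers counts:

```python
def engagement_anomalies(posts, ratio_threshold=20.0, min_engagement=100):
    """Flag posts whose engagement is wildly out of proportion
    to the author's follower count."""
    flagged = []
    for p in posts:
        engagement = p.get("likes", 0) + p.get("reposts", 0)
        followers = max(p.get("followers", 0), 1)
        if engagement >= min_engagement and engagement / followers >= ratio_threshold:
            flagged.append((p["author"], engagement, followers))
    return flagged
```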
5. The Narrative Aligns Perfectly With a Political Agenda
Most disinformation campaigns are not random — they serve a purpose:
attacking a candidate
destroying a whistleblower’s credibility
muddying public opinion on a policy
deflecting attention from wrongdoing
undermining trust in journalism or science
manufacturing a sense of controversy where none exists
If a storyline appears suspiciously helpful to a politician, corporation, or government — that’s not a coincidence.
6. The Tone Is Extreme, Emotional, and Designed for Outrage
Influence operations rely on affect manipulation — provoking strong emotions to override critical thinking.
Common emotional triggers:
manufactured moral panic
exaggerated threats
dehumanizing language
racially or sexually charged attacks (especially against women)
memes designed to humiliate or mock
“breaking news” claims without sources
Outrage spreads faster than nuance, so emotional manipulation is part of the business model.
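There is no reliable automated outrage detector, but even a crude lexicon count makes the pattern visible when a whole swarm of accounts scores high at once. The word list below is purely illustrative; real analyses use validated affect dictionaries or trained classifiers.

```python
# Purely illustrative lexicon; substitute a validated moral-emotional word list.
OUTRAGE_TERMS = {"traitor", "evil", "disgusting", "destroy", "corrupt",
                 "shameful", "sick", "vile", "enemy"}

def outrage_score(text):
    """Share of a post's words drawn from an outrage lexicon."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in OUTRAGE_TERMS for w in words) / len(words)
```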
7. Both Sides Start Accusing Each Other of Being Bots
This is a classic sign of a division operation.
It means:
the campaign is mixed-authenticity
real humans are now entangled in it
the operation has succeeded in confusing the public
Once people start shouting “bot!” at every disagreement, the attackers have essentially won. The debate collapses into chaos, and the target loses control of the narrative entirely.
8. Foreign Origin Signals Start Showing Up
Disinformation doesn’t respect borders.
Things to look for:
timestamps that align with foreign working hours (e.g., a big spike when U.S. users are asleep)
language errors typical of automated translation
accounts claiming to be American using non-American spelling/idioms
metadata patterns that point to foreign hosting
profile photos traced to stock-image databases or unrelated individuals
Many bot farms operate out of:
Israel
UAE
Saudi Arabia
Nigeria
Russia
China
Serbia
North Macedonia
Albania
Moldova
ALL of which have been publicly tied to covert influence operations in the last decade.
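The timing signal above (a big spike while U.S. users are asleep) can be roughed out from timestamps alone. A sketch, assuming post times are stored as UTC datetimes and the imitated audience is U.S.-based:

```python
from collections import Counter

def overnight_share(posts, quiet_hours=range(6, 12)):
    """Share of an account's posts published during hours when most U.S.
    users are asleep (roughly 01:00-07:00 Eastern, i.e. 06:00-12:00 UTC).
    A consistently high share hints the operator keeps other working hours."""
    hours = Counter(p["created_at"].hour for p in posts)   # assumes UTC datetimes
    total = sum(hours.values())
    if total == 0:
        return 0.0
    return sum(hours[h] for h in quiet_hours) / total
```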
9. There’s Always a Financial or Political Beneficiary
These campaigns cost money.
Someone is paying for:
bot networks
mass account creation
content generation
graphic design
targeted ads
consulting firms
data analytics
amplification services
If you follow the trail, you typically find:
a political campaign
a dark-money PAC
a wealthy individual
an ideological nonprofit
a foreign government
a PR firm
a digital-ops contractor
In other words: disinformation rarely appears out of thin air.
10. The Campaign Keeps Going Even After the Truth Comes Out
This is the final—and most important—sign.
When a narrative continues to spread even after it’s debunked, you’re no longer dealing with misinformation (accidental), but disinformation (intentional).
Coordinated campaigns rely on volume, not accuracy.
The goal is not to persuade everyone — only enough people so that real humans continue spreading the lie on their own.
Once that happens, the operation becomes self-sustaining.
The Bottom Line
Most influence operations follow a predictable pattern:
Seed the narrative using inauthentic accounts
Amplify it until it looks real
Draw in real people through outrage
Exploit platform algorithms to spread it further
Let humans continue the campaign for free
Once you know the signs, you can see these operations forming before they go viral — and you can avoid becoming part of the machine.
Side note: I recommend following Jackie Singh and Jim Stewartson if you’re interested in learning more about how the far-right manipulates public discourse.

