Emergency managers have spent decades preparing for hurricanes, floods, and wildfires. But how many of you have prepared for the possibility that someone is actively working to make those disasters worse?
And I don't mean worsening the physical impact of the disaster itself, although that does happen. I mean injecting information designed to make the response fail.
The traditional view of disaster misinformation treats it as a communications problem: confused citizens share bad information, rumors spread faster than facts, and agencies scramble to correct the record. Heading into 2026, that framing is now incomplete. What happened during Hurricane Helene in September 2024, during the Valencia floods a month later, and during the LA wildfires in early 2025 suggests something far more deliberate. Misinformation during disasters is no longer just a byproduct of chaos. It has become a weapon, and in some cases, adversarial actors are pulling the trigger and degrading your ability to respond.
What I've learned in recent weeks of travel and workshops across different countries is that the emergency/crisis management profession needs to stop treating this as a public affairs challenge and start recognizing it for what it is: a gray zone operation that exploits the most vulnerable moments in civilian governance.
The pattern is becoming hard to ignore
When Hurricane Helene struck the southeastern United States in September 2024, the usual post-disaster rumors appeared within hours. False claims that FEMA would seize survivor property. Reports that disaster relief funds had been diverted to support migrants. A widely circulated narrative that the agency would only provide $750 to affected families and nothing else, full stop.
These weren't organic misunderstandings. According to the Institute for Strategic Dialogue, 33 posts on X containing claims already debunked by FEMA and the White House had generated more than 160 million views by mid-October. The volume was significant. But what made Helene different was the operational impact. FEMA had to pause door-to-door outreach in parts of North Carolina after credible threats against aid workers. FEMA's acting associate administrator for response and recovery acknowledged that misinformation was "reducing the likelihood that survivors will come to FEMA with a trusted way to register for assistance." For the first time in recent memory, misinformation didn't just confuse the public. It physically stopped responders from doing their jobs.
And in 2026, that word “trust” is more important than ever.
A month later, catastrophic floods killed more than 230 people near Valencia, Spain. The Spanish fact-checking organization Maldita identified 112 separate disinformation narratives circulating about the disaster, from claims that HAARP weather weapons caused the flooding to false reports that hundreds of bodies were hidden in underground parking garages. Spain's 2024 national security report later confirmed what investigators already suspected: pro-Kremlin actors had exploited the disaster to erode trust in democratic institutions. The initial distribution of key false narratives was traced through Pravda-linked channels and websites connected to Russian propaganda operations.
The objective, according to Maldita's analysis, was not to spread confusion about the floods themselves. It was to say: "Look, your King and your democracy don't work."
Then came the LA wildfires in January 2025, where false claims about government lasers causing the fires circulated alongside misleading narratives about firefighter failures and political blame. In July 2025, Texas floods were falsely attributed to cloud seeding companies, resulting in death threats against executives of those firms.
Each disaster. The same playbook. Exploit the information vacuum. Amplify institutional distrust. Turn natural catastrophe into political crisis.
Now, government entities have their own challenges with keeping the public's trust, and they most definitely need to improve services and delivery to the public. But this is deliberate targeting of the one thing institutions need in order to serve the public: trust.
Why disasters are ideal gray zone terrain
The concept of the gray zone, that ambiguous space between peace and open conflict where adversarial actors apply pressure without crossing military thresholds, has typically been discussed in terms of cyber operations, election interference, and critical infrastructure sabotage. But disasters create conditions that gray zone actors can only dream of manufacturing on their own.
Consider what a major disaster delivers: populations in genuine distress, authorities stretched beyond capacity, communication infrastructure degraded, trust already fragile, and intense public demand for immediate answers that governments cannot yet provide. The information vacuum that follows a catastrophe is not a gap to be filled. It is a vulnerability to be exploited.
This is what separates adversarial disinformation from ordinary rumor. Organic misinformation tends to be local, self-correcting as facts emerge, and driven by genuine (if misguided) concern. Weaponized disinformation during disasters is designed to prevent self-correction, to embed false narratives so deeply that official corrections become evidence of cover-up rather than clarification.
The EU has recognized this dynamic at the strategic level. The European External Action Service's Foreign Information Manipulation and Interference framework explicitly identifies Russia and China as actors who exploit crises to destabilize democratic societies. France tracked nearly 80 Russian disinformation campaigns between August 2023 and March 2025. The Pravda ecosystem alone was found to operate over 190 websites targeting 45 countries with automated pro-Russian content, with particular intensity in countries of geopolitical interest to Moscow.
But recognition at the strategic level hasn't translated into preparedness at the operational level. Emergency managers are still working with communication plans designed for correcting honest mistakes, not countering deliberate information operations. The tools are wrong because the threat model is wrong.
The gap between what happened and what was planned for
Here's the operational reality. Most emergency communication procedures or protocols follow a reactive model: monitor social media, identify false claims, issue corrections. FEMA's Hurricane Rumor Response page during Helene was a good example of this approach. Professionally executed. Well sourced. And completely outpaced by the scale of what it was facing.
The problem isn't execution. It's architecture. Fact-checking works when false information is accidental and correctable. It fails when the objective of the disinformation campaign is to make people distrust the fact-checkers themselves. When corrections are framed as government propaganda, the standard playbook doesn't just underperform. It feeds the cycle it's trying to break.
This is the same structural mismatch identified in our earlier analysis of gray zone operations across European civilian infrastructure: single-domain response systems overwhelmed by multi-domain pressure. An emergency management agency trying to fight an information warfare campaign with press releases and rumor response pages is operating in the wrong domain with the wrong tools at the wrong speed.
The World Economic Forum named misinformation and disinformation as the number one global risk in its 2025 Global Risks Report, for the second consecutive year. Meanwhile, Meta announced the end of its third-party fact-checking programs, and X's Community Notes feature has had only partial impact. And platform algorithms continue to reward engagement over accuracy, creating economic incentives for the production and distribution of false content during precisely the moments when accurate information matters most.
What this demands from those responsible for crisis management
Addressing disinformation as a gray zone threat doesn't mean emergency managers need to become intelligence analysts. It means the profession needs to integrate information environment awareness into the same planning frameworks that already address logistics, communications, and resource allocation.
And here I would offer that three shifts matter most:
First, threat modeling needs to change. Pre-disaster planning should assume that disinformation will be actively injected into the information space, not that it will emerge organically. This changes everything from staffing plans to social media monitoring protocols to the speed at which official information must be released.
Second, trust must be treated as pre-positioned infrastructure, not a resource that can be generated during a crisis. Communities with strong institutional relationships before a disaster are measurably more resistant to false narratives during one. The work of building that trust happens during what emergency managers call "blue sky" days. It's not glamorous, it's not easy, and it rarely makes it into budgets or performance metrics, but the grind of public and community engagement and relationship-building is what creates much-needed resistance.
Third, emergency management cannot address this alone. The convergence of natural disasters and information warfare sits at the intersection of public safety, national security, and digital governance. No single agency or profession controls all three. The coordination models that emergency management already uses for physical response need equivalents for the information domain, bringing together emergency managers, intelligence professionals, platform analysts, and community leaders in unified planning structures.
For example, Sweden's Total Defence model, which integrates civilian and military crisis response across all levels of government and society, offers one template for this kind of cross-sector coordination. But it requires something the emergency management profession has historically resisted: acknowledging that civilian crisis management is not just a response and relief function. It is a component of national security, and adversaries treat it accordingly.
The uncomfortable question for every leader responsible for crisis management is rather straightforward: if the next major disaster in your jurisdiction is accompanied by a coordinated disinformation campaign, designed to turn public fear into institutional paralysis, what is the plan?
If the honest answer is that there isn't one, the time to build it is before the next storm makes landfall.
Complex crises don't wait for professions to catch up. Crisis Lab helps senior professionals build the cross-sector thinking that modern threats demand, through applied learning, strategic analysis, and practitioner-led research. The Forum at Crisis Lab brings together senior leaders from emergency management, national security, business continuity, and governance for the kind of ongoing peer exchange and cross-sector dialogue these challenges require. Learn more at crisislab.io.
References
[1] Institute for Strategic Dialogue, "Hurricane Helene Brews Up Storm of Online Falsehoods and Threats," Digital Dispatch, November 2024. https://www.isdglobal.org/digital_dispatches/hurricane-helene-brews-up-storm-of-online-falsehoods-and-threats/
[2] Axios, "FEMA Reps 'Stood Down' in N.C. Amid Threats Over Hurricane Misinformation," October 14, 2024. https://www.axios.com/2024/10/14/fema-threats-hurricane-recovery-misinformation
[3] International Journalists' Network (IJNet), "Floods in Spain Highlight Disinformation's Appeal During Natural Disasters," December 2024. https://ijnet.org/en/story/floods-spain-highlight-disinformations-appeal-during-natural-disasters
[4] Espreso Global, "Spain Blames Russia for Disinformation Campaign Following 2024 Valencia Floods," May 22, 2025. https://global.espreso.tv/spain-blames-russia-for-disinformation-campaign-following-2024-valencia-floods
[5] Center for Countering Digital Hate (CCDH), "Extreme Weather," 2025. As reported by ASIS International, Security Management Magazine, July 2025. https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2025/july/disaster-misinformation/
[6] European External Action Service (EEAS), "Information Integrity and Countering Foreign Information Manipulation & Interference (FIMI)." https://www.eeas.europa.eu/eeas/information-integrity-and-countering-foreign-information-manipulation-interference-fimi_en
[7] France 24, "Russia Behind Dozens of Disinformation Campaigns Targeting Ukraine and Allies, France Says," May 7, 2025. https://www.france24.com/en/europe/20250507-russia-disinformation-france-ukraine
[8] DISA, "Russian Disinformation Campaigns Pervasively Targeting Emerging Europe: New Study," April 15, 2025. https://disa.org/russian-disinformation-campaigns-pervasively-targeting-emerging-europe-new-study/
[9] World Economic Forum, "Global Risks Report 2025," January 15, 2025. https://www.weforum.org/press/2025/01/global-risks-report-2025-conflict-environment-and-disinformation-top-threats/
[10] NPR, "Fact-Checking Falsehoods About FEMA Funding and Hurricane Helene," October 7, 2024. https://www.npr.org/2024/10/07/nx-s1-5144159/fema-funding-migrants-disaster-relief-fund

