As Hurricane Melissa churned across the Caribbean this week, social media platforms were flooded with something almost as dangerous as the storm surge itself: AI-generated misinformation designed to deceive millions of viewers.
One viral video appeared to show four sharks swimming through a Jamaican hotel pool, allegedly swept in by hurricane floodwaters. Another clip purportedly depicted Kingston airport completely destroyed by the storm. Both videos accumulated millions of views across X, TikTok, and Instagram—and both were entirely fabricated using artificial intelligence.
“I am in so many WhatsApp groups and I see all of these videos coming. Many of them are fake,” Jamaica’s education minister Dana Morris Dixon warned on Monday. “And so we urge you to please listen to the official channels.”
This represents a troubling evolution in disaster misinformation. While hoax photos and videos have always surfaced during natural disasters, they typically get debunked quickly. However, new AI video generation tools—particularly OpenAI’s recently launched Sora platform—have dramatically lowered the barrier for creating convincing fake footage. These synthetic videos now appear alongside genuine resident-shot content, creating unprecedented confusion about what’s real.
The timing is significant: Hurricane Melissa marks the first major natural disaster since OpenAI released the latest version of Sora last month, providing a real-world test case for how AI-generated misinformation spreads during crisis situations.
Artificial intelligence has fundamentally changed how fake content gets created and distributed. Traditional photo manipulation required technical expertise and time-intensive editing. Modern AI video generators can produce realistic footage from simple text prompts in minutes, making sophisticated misinformation accessible to anyone with an internet connection.
“Now, with the rise of easily accessible and powerful tools like Sora, it has become even easier for bad actors to create and distribute highly convincing synthetic videos,” explains Sofia Rubinson, a senior editor at NewsGuard, a company that analyzes online misinformation. “In the past, people could often identify fakes through telltale signs like unnatural motion, distorted text, or missing fingers. But as these systems improve, many of those flaws are disappearing.”
This technological leap creates particular challenges during natural disasters, when people desperately seek real-time information about threats to their safety and property. The emotional urgency of these situations makes viewers more likely to share content without verification, amplifying the reach of synthetic footage.
Most AI video generation platforms, including Sora, automatically embed a visible watermark in their output, typically in a corner of the frame. Third-party tools can easily strip these markers, however, so look for suspicious blurs, pixelation, or discoloration where a watermark should appear. Such artifacts often indicate that someone has tried to hide the content's artificial origins.
AI-generated videos often break down under scrutiny. The viral shark pool footage, while initially convincing, reveals problems upon closer examination—one shark displays an unnaturally distorted shape that betrays its synthetic nature. Look for objects that blend together unnaturally, garbled text on signs, or missing brand logos (AI systems often avoid reproducing specific company branding to prevent legal issues).
Social media monetization creates financial incentives for viral content creation. On X, users earn money based on engagement metrics, while YouTube creators profit from ad revenue. A video garnering millions of views can generate thousands of dollars with minimal effort, according to AI expert Henry Ajder.
Check the account sharing suspicious footage. Profiles with histories of posting clickbait content, sensationalized claims, or engagement-farming material should raise red flags. However, remember that some creators openly experiment with AI tools for artistic or attention-grabbing purposes without malicious intent.
Take a moment to consider whether what you’re viewing makes logical sense. The Poynter Institute, a journalism organization, advises skepticism toward situations that seem “exaggerated, unrealistic or not in character” for the claimed circumstances. This includes audio elements—while early AI videos featured obviously synthetic narration, newer tools produce synchronized sound that can seem convincingly realistic.
X’s Community Notes feature—a user-powered fact-checking system—often flags suspicious content with crowd-sourced corrections. One version of the shark pool video includes a community note stating: “This video footage and the voice used were both created by artificial intelligence, it is not real footage of hurricane Melissa in Jamaica.” While not foolproof, these warnings provide valuable additional context for questionable content.
Rather than relying on social media for disaster information, prioritize official channels. The Jamaican government regularly posts storm updates, as does the National Hurricane Center, the U.S. federal agency responsible for tracking and forecasting tropical weather systems. Government emergency management agencies, meteorological services, and established news organizations maintain verification standards that social media accounts typically lack.
The financial incentives driving AI-generated misinformation extend beyond simple ad revenue. Content creators use viral videos to rapidly expand their follower bases, then monetize these audiences through product promotion, affiliate marketing, or sponsored content. The shark pool video, for instance, carries watermarks linking to Yulian_Studios, a TikTok account describing itself as a “Content creator with AI visual effects in the Dominican Republic.”
While a watermark alone does not prove who created the shark footage, the account illustrates the pattern: AI-generated disaster content mixed with other synthetic media designed to capture attention and build audience engagement. This business model turns natural disasters into content opportunities, regardless of the potential harm of spreading false information during a crisis.
The wave of misinformation around Hurricane Melissa represents more than a handful of isolated fake videos; it signals a fundamental shift in how false information spreads during emergencies. As AI video generation tools become more sophisticated and accessible, the challenge of distinguishing authentic disaster footage from synthetic content will only intensify.
For businesses, this evolution carries significant implications. Companies operating in disaster-prone regions must now contend with AI-generated content that could falsely depict damage to their facilities, potentially affecting stock prices, insurance claims, and customer confidence. Emergency response organizations face the additional burden of countering realistic-looking false information that could misdirect resources or create unnecessary panic.
The Hurricane Melissa case demonstrates that we’ve entered a new era where artificial intelligence doesn’t just assist in information creation—it actively participates in information warfare during society’s most vulnerable moments. Developing robust detection skills and maintaining healthy skepticism toward viral disaster content has become an essential digital literacy requirement for navigating our increasingly synthetic media landscape.