AI war fakes are fooling millions

Welcome back. All three of today's stories show how quickly AI is reshaping the world. At MWC, Qualcomm sketched a future in which AI wearables work together as a body area network rather than a single device. Yann LeCun is challenging the industry’s fixation on AGI, proposing “superhuman adaptable intelligence” as a more practical goal. And in the U.S.–Iran conflict, AI-generated videos are flooding social media, showing how quickly fake videos and images can distort the information landscape. But there's something everyone can do. The technologies advancing AI are powerful; we can all agree on that. The bigger question is how well we can adapt with them.
Jason Hiner

IN TODAY’S NEWSLETTER

1. AI video fakes are war’s newest weapon

2. Yann LeCun argues AGI is the wrong goal

3. AI wearables move toward body area networks

GOVERNANCE

AI video fakes are war’s newest weapon

An onslaught of AI-generated fake videos is warping the information landscape in the U.S.–Iran conflict. 

We knew that AI video models have been getting more realistic over the past year, but the consequences have unfolded in real time during the current war. Multiple players in the conflict are passing off both AI-generated and manipulated videos as news reports, claiming they show the current state of hostilities in the Middle East. 

News organizations such as BBC World Service, The New York Times, US News & World Report, Financial Times, and Associated Press have been racing to debunk false reports spreading rapidly across social media platforms such as X and Instagram. 

The issue has gotten so bad on X that the platform issued a statement on March 3, warning that any creators in the revenue-sharing program would have monetization immediately suspended for 90 days if they posted "AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI." If they do it a second time, they will be permanently banned from the program. 

That removed the financial incentive to create dramatic AI-generated videos that spread quickly and get amplified. Of course, the geopolitical incentives for spreading false narratives remain. 

Several AI-generated videos have already been debunked and become notorious in the week since the war began:

  • A video reportedly showing a U.S. fighter jet being shot down in Iran got 70 million views but turned out to be footage from a video game. 

  • A video showing the world's tallest building, the Burj Khalifa in Dubai, on fire after supposedly being hit by an Iranian missile was one of the more obvious AI-generated fakes. 

  • Another fake video showing missiles hitting a densely populated downtown area in Tel Aviv got 20 million views and turned out to be AI-generated, but the chatbot Grok falsely claimed it was real when users tried to verify it.

  • According to a report in Rolling Stone, the U.S. is creatively editing videos to spread propaganda aimed at convincing the Iranian people to rebel against their government, though the report stopped short of accusing the U.S. of using AI to create outright fakes.

“The volume of AI content is starting to just pollute the information environment in these kinds of crisis settings to a really terrifying degree,” Melanie Smith, senior director of policy and research at the Institute for Strategic Dialogue, told U.S. News & World Report. “The inability to get access to verified and credible information in times like this [is] getting harder and harder.”

Spreading false reports to instill fear and push people toward despair, surrender, or protest was part of warfare long before AI. The difference now is the speed and scale at which such content can be created and spread. The capabilities are also distributed among the populace, creating a new layer of potential chaos. What we can all do to improve the situation is pause before texting, retweeting, or amplifying any reports. Use free sites such as AP Fact Check and BBC Verify to make sure the reports are real. And use free tools such as Compass from Blackbird AI (email registration required) to verify the truth of a report, video, image, meme, or article. If you want to follow my updates on the AI space in real time, you can find me on X/Twitter at x.com/jasonhiner.

Jason Hiner, Editor-in-Chief

TOGETHER WITH CRUSOE

Crusoe: move from maintenance to momentum today

Is your team spending more time fighting infrastructure fires than building models?

The operational burden of downtime and maintenance shouldn't be slowing you down — that's why we built Crusoe Command Center to deliver more uptime, less triage. Command Center replaces fragmented monitoring with a single source of truth, ensuring every GPU in your cluster is visible and accountable.

AutoClusters detects performance degradation and evicts compromised nodes automatically, while out-of-the-box telemetry tracks individual GPU health, storage, and network metrics so you can build without the resource blind spots that lead to inefficiency.

RESEARCH

Yann LeCun argues AGI is the wrong goal

Everyone has a different opinion about what AGI will look like. Some don’t believe in it at all. 

In a recent paper, AI godfather Yann LeCun cast doubt on the concept of Artificial General Intelligence, or AI that can match human capabilities in any given domain. Instead, LeCun and a team of researchers propose a new goalpost: Superhuman Adaptable Intelligence. 

The paper suggests that generality should not be a requirement for intelligence to be extremely useful. So instead of focusing on a single model that can do everything, Superhuman Adaptable Intelligence emphasizes how long it takes for a model to adapt to new tasks and the range of tasks it can learn. 

LeCun and his co-authors argue that not only is there no consensus definition of AGI in industry or academia as it stands, but humans themselves are not general intelligences either. 

  • Think of it this way: Humans have the capability to learn anything, but no one knows everything. So, the paper questions, why should we expect something different from our AI? 

  • “Awareness of human limitation gives rise to a critical realization: humans may be specialized creatures, but are nonetheless capable of accomplishing or quickly learning a wide range of incredible things,” the paper said.

The paper also suggests two mechanisms for training an AI to achieve this goal: self-supervised learning, a machine learning paradigm in which a model generates its own training signal from unlabeled data (for example, by predicting masked or future parts of its input), to acquire generic knowledge; and world models, internal models that predict how the world changes in response to actions, for planning tasks. 
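To make the self-supervised idea concrete, here is a minimal toy sketch of how raw, unlabeled data can supervise itself: each character's "label" is simply the character that follows it. This is an illustration of the general paradigm, not code from LeCun's paper; all names are invented.

```python
from collections import Counter, defaultdict

def self_supervised_pairs(text):
    """Derive (input, target) pairs from raw text alone: each character's
    'label' is the next character. No human annotation is involved --
    the data supervises itself."""
    return [(text[i], text[i + 1]) for i in range(len(text) - 1)]

def train_bigram(pairs):
    """Count next-character frequencies for each context character."""
    counts = defaultdict(Counter)
    for ctx, nxt in pairs:
        counts[ctx][nxt] += 1
    return counts

def predict(counts, ctx):
    """Predict the most frequent continuation seen during training."""
    return counts[ctx].most_common(1)[0][0]

corpus = "hello hello hello"
model = train_bigram(self_supervised_pairs(corpus))
print(predict(model, "h"))  # → 'e' (the only character ever following 'h')
```

Real self-supervised models replace the frequency counts with a neural network and the single-character context with long spans of text, images, or video, but the core trick is the same: the objective is manufactured from the data itself.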

LeCun’s ideas stand in opposition to many major voices in AI. OpenAI’s Sam Altman has predicted that AGI could arrive before 2030, and that large language models will be the key to getting there. Anthropic’s Dario Amodei, meanwhile, has long questioned whether these machines are capable not just of generalized intelligence but also of consciousness, claiming last week in an interview with The New York Times that Claude, its chatbot, is exhibiting signs of anxiety and that “we don't know if the models are conscious.”

Though these AI heavy hitters are debating what AI can do, the question remains of what it should do. A model that knows everything is, of course, dangerous, especially if it knows how to disable any kill switches that could counteract its existence. Similarly, a model that can adapt to any situation is, in theory, one that can learn to wiggle out of its handcuffs. And despite the speculative debates around AGI, there’s still the matter of what exists right in front of us: AI that’s more capable than we know what to do with.

TOGETHER WITH RESOLVE

AI SRE: How Coinbase makes investigations 72% faster

Writing code is no longer the bottleneck. Instead, engineering orgs spend over 70% of their time triaging alerts, investigating incidents, and debugging prod.

Managing production doesn’t have to be so painful: Coinbase, DoorDash, and Zscaler use Resolve AI to make investigations 72% faster and pull in 30% fewer engineers per incident.

When it comes to production, teams need solutions that correlate code, infrastructure, telemetry, and tribal knowledge to provide real-time root cause analysis and prescriptive remediation.

Download the free AI SRE buyers guide to learn more about the ROI of AI SRE and six criteria for evaluating their effectiveness.

Get the AI SRE buyers guide →

CONSUMER

AI wearables move toward body area networks

Some of the largest crowds at MWC queued outside the Meta, Google, and Qwen booths to try on AI smart glasses, highlighting again that 2026 is likely to be a huge year in AI wearables.

While smart glasses may have dominated the AI wearable space at MWC, many other form factors are possible. Qualcomm, the semiconductor company whose chips power the most popular smart glasses on the market, the Meta Ray-Bans, launched its Snapdragon Wear Elite Platform, enabling AI wearables across a range of form factors, including pins, rings, pendants, earbuds, smartwatches, and more. 

I sat down with Qualcomm veteran Alex Katouzian, GM of Mobile, Compute & XR, to ask which form factor has the most potential. It turns out that I may have been thinking too narrowly. 

“We are moving towards a world where it's not just one device, it's multiple,” said Katouzian. 

A device ecosystem serves people in ways a single wearable can't. It accommodates personal preferences, distributes computing needs, and expands the capabilities of each device.

Photo credit: Sabrina Ortiz

For instance, Katouzian gave the example of riding a bike and telling your glasses to look for an address within a WhatsApp conversation on your phone. Then, the address and directions automatically populate on the glasses to guide you to your destination. This could also apply to other actions and devices, including your car.

“So you see, two or three devices, the ones you already have around you, can give you information that you otherwise wouldn’t have [on] one device. And I think if you have just one device, all you do is go to the cloud, and all it does is record or transcribe your audio,” added Katouzian.

In this future, actions taken on each device might also depend on the needs of the task. For example, Katouzian explained that there might be a hierarchy in which immediate needs are addressed on the wearable, while more demanding tasks are offloaded to a phone or puck. He added that this wouldn't compromise the user experience: even if retrieving the answer takes longer, the system can notify the user and deliver a good answer once it's ready.
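The hierarchy Katouzian describes can be sketched as a simple routing rule: send each task to the least capable device that can handle it, and let heavier work fall through to bigger devices. The device names, cost numbers, and thresholds below are invented for illustration and are not from Qualcomm.

```python
# Toy offload router, assuming three tiers of on-body compute.
# All capacities and task costs are made-up illustrative numbers.
DEVICES = [
    ("glasses", 1),   # on-body wearable: cheapest, most limited
    ("phone", 5),     # mid-tier compute carried in a pocket
    ("puck", 10),     # dedicated offload device
]

def route(task_cost):
    """Return the first device in the hierarchy whose capacity covers
    the task. Anything too heavy for local hardware goes to the cloud,
    and the wearable is simply notified when the answer comes back."""
    for name, capacity in DEVICES:
        if task_cost <= capacity:
            return name
    return "cloud"

print(route(1))   # → glasses (immediate need, handled on the wearable)
print(route(7))   # → puck (too heavy for glasses or phone)
print(route(50))  # → cloud (nothing local can handle it)
```

The design choice this illustrates is the one Katouzian hints at: latency-sensitive requests stay on the body, while the network of devices quietly absorbs the demanding ones.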

Having multiple wearable devices on your body at all times makes sense. Ultimately, AI is only as good as the information you provide it, and more devices mean more input and context, enabling more opportunities for assistance. However, that requires people to be comfortable wearing more devices than they already do, and that may be the biggest obstacle to adoption. It took years for smartwatches to take off, and still, many people (myself included) refuse to wear one for reasons of comfort and aesthetics. There is also the challenge of paying for connectivity for each device, another major barrier on top of the cost of the hardware itself. Making this vision work will require something like a Body Area Network or a Personal Area Network.

LINKS

  • Codex Security: OpenAI’s new application security agent, able to identify complex vulnerabilities using your context. Now available in research preview. 

  • Claude Marketplace: Users can now use their existing Claude budget on “Claude-powered solutions” such as Lovable, Harvey or GitLab.

  • Test Sprite: An autonomous AI testing agent to eliminate the bottleneck of testing. 

  • Stitch with Google: Turn your ideas into user interface designs for mobile and web applications, powered by Gemini 3.0 Flash.

GAMES

Which image is real?


A QUICK POLL BEFORE YOU GO

Do you believe AGI should be the goal the frontier AI labs are working toward?


The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The light pattern looked realistic compared to other picture.”

“[This image] had a more aged look, as I thought it should.”

“The large block in the bottom right has some drilled holes that don't serve a purpose for an AI-generated image.”

“The light in [this image] was too perfect.”

“Rock edges are too sharp on [this] image, considering the age of the place.”

“[This image] looked like a still from a Lara Croft video game: too pristine, too perfect on the coloring, too perfect on the lighting.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.