The fight over who controls AI just began

Welcome back. Google’s AI glasses are preparing to shake up the AI device market. We got a demo of the latest hardware at Mobile World Congress, and it showed how seamlessly Gemini can blend the digital and physical worlds. Anthropic is courting open-source developers with six months of Claude Max access as an olive branch that’s generous, strategic and notably not open-source. And the biggest story of all: Anthropic’s standoff with the Pentagon has ignited a public debate over who controls frontier AI. The Deep View readers overwhelmingly sided with safety. The power struggle now unfolding will shape contracts, policy, and the future of AI itself.
—Jason Hiner
1. The fight over who controls AI just began
2. Google AI glasses prepare to take center stage
3. Open-source community gets a Claude-sized gift
GOVERNANCE
The fight over who controls AI just began
Everyone’s got an opinion on Anthropic’s face-off with the Pentagon.
The past few days have brought a deluge of news and updates as the government continues to blacklist the AI firm, with its technology now being shut out of the US Treasury and the federal housing agency, along with the military. The company’s designation as a supply chain risk marks an unprecedented retaliatory move by the government, and risks chilling future contracts between tech companies and federal agencies.
If you’re trying to sort out the situation, The Deep View has picked three of the most pointed and widely shared essays published in recent days:
“Clawed,” by Dean W. Ball for Hyperdimensional. In this long-form piece, Ball makes the argument that the fight between Anthropic and the U.S. government is indicative of a “death rattle of the old republic.” The fight also marks one of the first times the question of who should control AI has been debated in the public eye, and the government got off on “extraordinarily bad footing” in the argument.
“Anthropic and Alignment,” by Ben Thompson for Stratechery. Thompson argues that Anthropic’s insistence on designating how its models can be used is “fundamentally misaligned with reality,” claiming that it is intolerable for corporate executives to supersede the decisions of elected officials.
“A Few Observations on AI Companies and Their Military Usage Policies,” by Sarah Shoker for fishbowlificiation. Shoker, former leader of OpenAI’s geopolitics team, takes a broader view of the subject, pointing out that frontier AI labs don’t have coherent policies around military AI use, which has allowed these firms to live in a vague grey area of “optionality.” Additionally, the use of AI in military action is largely opaque due to policy, disinformation and the fog of war creating “black boxes” all around.
Even Deep View readers have strong thoughts on the topic: In our daily poll, we asked “Should Anthropic have acquiesced to the Pentagon’s request to remove safety restrictions?” 78.8% responded “No.”

This situation has made one reality abundantly clear: AI will impact our society and future in monumental ways. The fight between the US government and Anthropic is about more than just one company or one contract. Instead, it boils down to power, for the first time putting on public display “the nexus of control over frontier AI,” as Ball’s essay notes. In the end, the one who controls the technology effectively controls the future.
TOGETHER WITH TINES
Workflow clarity: Where AI fits in modern automation
Are you confident you're using AI in the right places, or could your workflows be faster, simpler, and more secure?
On March 12th, join Tines and The Hacker News to explore how to strike the right balance between speed, flexibility, and security in modern AI-driven workflows.
In this webinar, you’ll learn:
How to identify where human-led, rules-based, and agentic workflows fit best
How to avoid over-engineering with AI
How to design secure, auditable workflows that improve real-world outcomes
Practical examples of how leading teams are putting AI to work thoughtfully and at scale
Save your seat here.
CONSUMER
Google AI glasses prepare to take center stage
Nine months after my first demo, Google's AI glasses still feel like they could change everything. And my second demo at MWC 2026 this week only confirmed it.
I wasn't allowed to take photos during the demo since these were prototypes and not the final product. Even so, the promise is clear: like the classic Meta Ray-Bans, they look strikingly similar to regular glasses. The final product will be produced in collaboration with popular eyewear brands Warby Parker and Gentle Monster, likely making them more stylish than the typical geek glasses.

Google demoed AI glasses at MWC 2026. Photo: Sabrina Ortiz
The in-lens display is the biggest highlight, as it opens up a whole new range of capabilities. Smart glasses are gaining momentum largely through AI integration and the ability to fuse the physical and digital worlds, but there are also practical, everyday wins, like reading messages or following turn-by-turn navigation without pulling out your phone.
The in-lens display is well-positioned and easy to read. During the five-minute demo, I asked Gemini multiple questions, watched my words get accurately transcribed and sent to the chatbot, and received responses in real time.
I also tried the Nano Banana integration, which let me ask Gemini to take a photo of what I was looking at and modify it. I asked it to add a space-themed background. While it wasn't the most practical everyday scenario, the image quality was impressive, and the processing was fast (around 15 seconds, I was told). Last time, I demoed Google Maps turn-by-turn navigation and came away equally impressed.
Following the surprise success of the classic Meta Ray-Bans, last year Google announced that it was re-entering the category with its own smart glasses. When worn, Google's AI glasses feel much closer to the original Meta Ray-Bans, which owe their popularity largely to their comfort and the fact that they look like normal glasses. However, Google's version is more functionally similar to the bulkier and more expensive Meta Ray-Ban Displays, which look less like normal glasses and more like a tech product.

There's already growing acceptance of AI glasses, and since Google's glasses are so similar to regular glasses and add so much functionality with the in-lens display, I think they are poised to push smart glasses adoption to the next level. Some important details that will play a major role in the appeal are still to be determined, such as battery life and speed. However, all the pieces may be coming together. Qualcomm just unveiled Snapdragon Wear Elite, a platform designed to power next-gen AI wearables with always-on, low-power, on-device AI processing. The next year will be pivotal for the AI wearable category, and Google’s take on smart glasses is likely to redefine it by making in-lens displays mainstream.
TOGETHER WITH CRUSOE
Your model. Our inference engine. Breakthrough performance.
Eliminate the "memory wall" causing inference bottlenecks. Crusoe’s inference engine is powered by MemoryAlloy™ technology to deliver larger shared memory capacity so you can serve more users at lower latency, with better throughput, and less wasted compute.
The result? Breakthrough time-to-first-token speed and up to 5x higher throughput. Work with our team to optimize performance for your own fine-tuned model so you can scale without compromise.
PRODUCTS
Open-source community gets a Claude-sized gift
Anthropic wants to recruit the top open-source developers and maintainers to its side. Unless they're in China, of course.
Anthropic has launched a new “Claude for Open Source” program that gives qualifying open-source maintainers six months of free access to its highest-tier, $200-a-month Claude Max 20x plan. The AI powerhouse is framing the move as both a thank-you to the open-source community and a way to harden the software ecosystem with AI-assisted development.
According to program descriptions circulating in the developer community, Anthropic is targeting primary open-source maintainers and core contributors of major projects that meet certain scale and activity thresholds. The eligibility criteria include projects with at least 5,000 GitHub stars or over 1 million monthly npm downloads, along with recent, ongoing activity such as commits, releases, or pull-request reviews in the last few months.
That said, when Matt Mullenweg, co-founder of WordPress, asked if he and WordPress's top ten developers were eligible, Lydia Hallie, a member of Anthropic's technical staff, replied on X that "We also accept maintainers for projects that don’t quite fit the criteria but still make a big impact."
In addition, Anthropic says maintainers of “critical infrastructure” projects that may not hit the headline metrics should "apply anyway and tell us about it."
The launch follows a string of moves by Anthropic to deepen its engagement with the open-source world.
For example, in a recent security update, Anthropic disclosed that its latest Claude Opus 4.6 model helped uncover more than 500 previously undetected bugs in production open-source projects.
And Boris Cherny, the creator and Head of Claude Code at Anthropic, credits open source for helping build Claude. "So much of what makes Claude Code great came from feedback from OSS developers,” Cherny said. “Excited we can give back a little."
Mind you, Anthropic LLMs remain some of the most closed-off models. Sure, its Model Context Protocol (MCP) is open, but the company doesn't offer any open-source versions of its flagship models. In short, don't mistake this for Anthropic getting ready to open up its models.
Opening its LLMs is simply not in the cards. This comes as no surprise since Anthropic has accused the Chinese open-source companies DeepSeek, Moonshot, and MiniMax of carrying out “industrial-scale” model distillation campaigns, exfiltrating Claude’s capabilities to improve their own models.
While Anthropic hasn't expressly said their new offer isn't available to mainland Chinese developers, it’s unlikely that they would be welcomed, as Anthropic banned “Chinese-controlled companies” from using Claude in September.

While this initiative may signify an olive branch by Anthropic, it also adds fuel to the ongoing debate over how frontier AI companies should repay the open-source projects on which their models are built. By underwriting AI access for open-source developers, Anthropic gives these programmers a taste of high-end frontier AI. At the same time, it is positioning Claude for Open Source as a tangible, albeit time-limited, pitch to make Claude the open-source community's go-to AI.
By Steven J. Vaughan-Nichols, Contributing Writer
LINKS

Reflection AI is seeking $2 billion in funding at a $20 billion valuation
Chinese AI firm MiniMax more than doubles its revenue
Nvidia to invest more than $2 billion in Lumentum, Coherent for AI processors
AWS plans to add more than $21 billion to its investment in Spanish data centers
Anthropic’s Claude faces a worldwide outage across all platforms
Meta tests AI shopping research feature to rival ChatGPT

Simple: After 40, the gym can break you. That’s why millions of people are turning to walking to lose weight. Find out your optimal step count. (sponsored)
Qwen: The company introduced its new Qwen 3.5 Small Model Series, which includes the Qwen3.5-0.8B, Qwen3.5-2B, Qwen3.5-4B, and Qwen3.5-9B. The advantage is that they require less compute.
KREA AI: The video and image platform added Voice Mode, which lets users speak as they draw and see changes in real time.
Grok: Users can now extend their own videos by up to 30 seconds using the Frame Extend option.
Telegram: The messaging app now supports response streaming, useful for interacting with OpenClaw agents in the app, according to TestingCatalog News.

Bytedance: Research Scientist, AI Foundation
Amazon Web Services: Sr. Applied Scientist, AWS Agentic AI
Nvidia: Senior Research Scientist, Efficient Deep Learning
xAI: Member of Technical Staff - Reasoning
A QUICK POLL BEFORE YOU GO
Do you think AI will create more jobs than it displaces over the next 5 years?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Hard to tell but [this image] had more realistic shadowing. [The other image] had an unrealistically blue sky for London.” “The different clocks show the same time.” “This image had more clear details.”
“The clocks in [this image] are showing different times.” “Uneven clock hands and flags are meaningless.” “The scale is off in [this image]. The giveaway is the street lamp.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
