AI companies court Pentagon, Anthropic resists

Welcome back. AI is colliding with government power, enterprise pressure, and the tool of the moment, all at once. SpaceX, xAI, and OpenAI are chasing lucrative Pentagon contracts while Anthropic resists loosening safety guardrails without specific assurances. CIOs are pouring money into AI but struggling to prove ROI, shifting the conversation from experimentation to accountability. And the rise of personal AI agents like OpenClaw is becoming the defining trend of 2026, but they're not ready for everyone. If FOMO is tempting you to install OpenClaw, we've got recommendations.
—Jason Hiner
1. AI companies court Pentagon, Anthropic resists
2. Conflicting signals: AI investments vs. ROI doubts
3. Ready to install OpenClaw? Do it smart, or wait
GOVERNANCE
AI companies court Pentagon, Anthropic resists
AI companies are fighting over the Pentagon’s favor.
Elon Musk’s SpaceX and xAI have thrown their hats into the ring for a secretive Pentagon contest to win a $100 million contract to develop voice-controlled autonomous drone swarms, Bloomberg reported on Monday. It’s xAI’s latest effort to work with the department: the AI lab recently signed a contract with the Pentagon to bring Grok to government sites, as well as a $200 million deal to integrate its technology into military systems.
But the Musk companies aren’t the only ones seeking Pentagon work: OpenAI is backing Applied Intuition, an autonomous-machines startup, in its own submission to the contest, Bloomberg reported.
However, the news comes as a rival’s relationship with the military is reportedly on the rocks: According to Axios, the Pentagon may cut ties with Anthropic over the company’s refusal to relax the safety restrictions in place on its flagship chatbot, Claude.
The military currently uses Claude only on its classified systems. Though Anthropic is willing to loosen some safety restrictions, it wants assurances that its chatbot won’t be used to surveil U.S. citizens or to develop autonomous weapons that can kill without human oversight.
Defense Secretary Pete Hegseth is reportedly considering designating the company a “supply chain risk,” which would force military contractors that use Anthropic’s models to drop the company as well.
One senior official told Axios that the agency is “going to make sure they pay a price for forcing our hand like this.” Department of War spokesperson Sean Parnell told Axios that its relationship with Anthropic is “being reviewed.”
Anthropic’s tension with the Pentagon is a stark reversal from a year ago, when the company, along with practically every other major AI lab, sought to get into the government’s good graces by offering its services at steep discounts to support early adoption.

It adds up that xAI and OpenAI are competing for the Pentagon’s hand: government contracts, particularly defense-related ones, can be a massive cash cow, and these AI labs need big recurring revenue streams if they want to pay down their debts and eventually turn a profit. Anthropic, by contrast, has had a far easier time earning revenue than its rivals. Standing firm on its principles is a characteristic move for a company that has long prided itself on prioritizing AI safety, but Anthropic’s strong base of enterprise customers may also make it easier for the company to walk away from the massive paydays the government can offer than it would be for its competitors.
TOGETHER WITH PIGMENT
How AI Is Changing Forecasting, Planning, and GTM Execution
AI is transforming how modern Go-To-Market teams forecast, plan, and execute.
On February 26, 2026, join sales and revenue leaders from OpenAI, Spotify, and Pigment for a live conversation on how AI is changing forecasting, planning, and GTM execution in practice.
You’ll learn how high-performing sales teams use AI to improve alignment, make faster decisions, and drive better outcomes. Save your spot and future-proof your GTM strategy.
ENTERPRISE
Conflicting signals: AI investments vs. ROI doubts
AI investment continues to surge, as headlines, fundraising rounds, stock rallies, and product launches make clear. But executives are having second thoughts.
AI company Dataiku released a new global report, based on a Harris Poll survey of more than 800 CIOs worldwide, which found that CIOs not only regret some of their AI investments but are also anxious about what AI’s performance means for their organizations’ futures and for their own jobs.
A majority of CIOs (74%) said their role will be at risk if their company does not deliver measurable business gains from AI within the next two years. Yet many are not seeing those results, even as they are pressed to account for them. The report found that:
74% say they regret at least one major AI vendor or platform decision made in the last 18 months.
62% say their CEO has directly questioned or challenged those decisions.
Nearly one-third (29%) say they have repeatedly been asked to justify AI outcomes they could not fully explain.
“ROI is a real question, but the honest answer is that we're early. It's normal that measurement frameworks haven't caught up with a technology whose application is still being defined,” Kurt Muehmel, Head of AI Strategy at Dataiku, told The Deep View. “The pressure is real. But the answer isn't to stop investing, it's to stop investing badly.”
To maximize the value of AI investments, Muehmel recommends avoiding reliance on a single model provider. The advantages of this approach include the ability to switch to better models as they rapidly evolve, the option of cheaper alternatives if the AI bubble pops, and no need to rebuild your entire system when swapping out a model.
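To make that concrete, here is a minimal sketch of the kind of abstraction layer that avoids single-provider lock-in: the application depends on one small interface, each vendor sits behind an adapter, and swapping models becomes a configuration change rather than a rebuild. The class and vendor names are illustrative assumptions, not any real SDK.

```python
# Minimal sketch of a provider-agnostic model layer (hypothetical names,
# no real vendor SDKs). The application codes against one interface, so
# switching providers means adding an adapter, not rebuilding the system.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """The only surface the rest of the application depends on."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class VendorAModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # Call vendor A's API here and return plain text.
        raise NotImplementedError


class VendorBModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # Call a cheaper or newer vendor's API here.
        raise NotImplementedError


def build_model(provider: str) -> ChatModel:
    """Choose the provider from configuration, not from code scattered across the app."""
    adapters = {"vendor_a": VendorAModel, "vendor_b": VendorBModel}
    return adapters[provider]()
```

Because every caller goes through the same interface, trying a new model is a one-file change plus an evaluation run, which is the flexibility Muehmel is describing.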
A Gartner survey of more than 300 CFOs and finance leaders found a similar willingness to increase AI spending in 2026 on the strength of AI’s future promise: 60% of CFOs plan to increase AI investments in the finance function by 10% or more in 2026, while another 24% plan increases of 4% to 9%.
“This investment surge is driven by a 'Return on the Future' mindset, which prioritizes long-term strategic disruption and competitive parity over immediate financial gains,” said Nauman Raja, Director Analyst at Gartner. “After all, 88% of CFOs view AI as a critical mandate for future efficiency, so its potential of being a disruptive force is too great for CFOs not to invest in and try to achieve gains with.”

The theme running through both reports is a shift from experimentation to accountability. While business leaders and budgets have focused on AI adoption in recent years, the emphasis has now moved to demonstrating measurable ROI. But it is still too early to see clear returns, leaving leaders in a difficult position: making significant investments without proof that they will pay off. Ultimately, the best advice for maximizing ROI is to apply AI to problems your organization already faces, rather than chasing benefits too abstract to pin down.
TOGETHER WITH JETBRAINS
Databao sees shared context as the foundation of AI trust
There’s a quiet problem underneath most AI-powered analytics tools: they can query your data, but they don’t truly understand what it means.
Metrics get defined differently across teams. Dashboards contradict each other. Add AI, and inconsistency scales.
Databao solves this with a governed semantic context layer. Its context engine captures business logic and definitions, while a data agent generates trusted SQL, visualizations, and insights across your existing stack.
Built for platform flexibility and evolving toward SaaS self-service, Databao lets companies run a proof of concept to define needs, establish governed context, and connect data agents to real use cases from day one.
PRODUCTS
Ready to install OpenClaw? Do it smart, or wait
When OpenAI hired OpenClaw founder Peter Steinberger, it officially turned personal AI agents into the hottest trend of 2026.
If you're feeling FOMO and are about to spend $500 on a Mac mini to install OpenClaw and spin up your own personal AI assistant, there are a few factors you may want to consider first.
Despite what you may have heard, installing OpenClaw is a fairly technical and time-intensive process. There are four-hour YouTube videos walking you through the setup, and that doesn't include the prep and planning it takes to do it right: creating separate accounts for email, texting, GitHub, Slack, Apple, and any other services you want your personal AI assistant to work with.
That brings us to the second caveat: You shouldn’t install this on your main computer or allow it to use your logged-in accounts for email, file storage, text messaging, etc. That would be like hiring a new employee and giving them the password to your laptop and your email on the first day.
Remember that AI agents are non-deterministic systems: they don't follow a fixed set of step-by-step instructions. An agent can delete all of your files simply because it decides that cleaning up your storage would be a helpful way to keep you organized.
So instead of installing OpenClaw yourself right now, consider one of these options:
Use a cloud-based OpenClaw service like Hostinger, MyClaw.ai, or V2Cloud. You can spin up one of these almost instantly for $5-$10 per month, and they give you a secure sandbox that keeps your personal AI assistant from accessing other machines and data on your network.
Wait until OpenAI and Steinberger release their personal AI assistant product that will "bring agents to your mom and everyone else," as OpenAI's CMO Kate Rouch said. We should expect that to happen relatively quickly.
Give Anthropic's Claude Cowork a try. It's another AI agent, but it has more guardrails to keep you from getting in trouble. For enterprises, there's also the lesser-known Amazon Quick Suite, which offers many of the benefits of personal AI assistants within the confines of a traditional IT environment.

Don't take this advice the wrong way. It's clear that 2026 is shaping up to be the year of the personal AI assistant, and the purpose of this technology is to save you time and amplify your work. That's why OpenAI wants to co-opt the idea and build a much simpler version that's easy to deploy and use. If you don't want to wait for the OpenAI product, consider one of the hosted OpenClaw services to get started quickly and safely. And if you're already highly technical and have a spare machine to install OpenClaw on, treat it like a virtual employee with access restrictions rather than just another app install.
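What do "access restrictions" look like in practice? Here is a minimal, hypothetical sketch of the principle, not OpenClaw's actual API: read-only actions are allowed by default, destructive or outbound actions require explicit human approval, and anything unrecognized is denied.

```python
# Hypothetical sketch of gating an agent's tool calls, the way you would
# restrict a new employee's access on day one. Tool names are illustrative;
# this is not OpenClaw's real API.
from dataclasses import dataclass


@dataclass
class ToolRequest:
    tool: str    # e.g. "read_file", "send_email", "delete_file"
    target: str  # a path, an address, a repo, etc.


ALLOWED_BY_DEFAULT = {"read_file", "list_directory"}   # low-risk, read-only
NEEDS_HUMAN_APPROVAL = {"send_email", "delete_file"}   # outbound or destructive


def authorize(request: ToolRequest, approved_by_human: bool = False) -> bool:
    """Return True only if the agent may perform this action."""
    if request.tool in ALLOWED_BY_DEFAULT:
        return True
    if request.tool in NEEDS_HUMAN_APPROVAL:
        return approved_by_human  # pause and ask a person; never assume consent
    return False                  # anything unknown is denied outright


print(authorize(ToolRequest("read_file", "~/notes.txt")))    # True
print(authorize(ToolRequest("delete_file", "~/Documents")))  # False until a human approves
```

The same idea applies whether you run the agent on a spare Mac mini or in a hosted sandbox: give it its own accounts, keep its permissions narrow, and put a human in the loop for anything it can't undo.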
LINKS

Anthropic opens Bengaluru office, its second office in Asia
Sony may push back upcoming PlayStation release on AI memory chip demand
ByteDance to curb Seedance generations after Disney legal threat
India has 100 million weekly active ChatGPT users, according to Altman
AMD, India’s Tata Consulting announce AI infrastructure partnership
EU Parliament blocks AI features on lawmakers’ devices over privacy fears

Lockdown Mode for ChatGPT: An optional security setting for “higher-risk users, businesses, and enterprises” that disables certain tools that can be exploited by hackers.
Qwen-3.5: The latest iteration of Alibaba’s open source model family, now with “visual agent” capabilities.
Kimi Claw: Chinese AI lab Moonshot has brought OpenClaw to its browser, Kimi.com.
Agent Bar: A macOS app that lets you run Claude Code from your menu bar. Review messages, approve or deny tool actions and keep sessions organized.
Emdash: An open source agentic coding environment that lets you run multiple coding agents in parallel.

Applied Intuition: Research Scientist - Reinforcement Learning, Self-Driving
Apple: Speech & Audio ML Algorithm Engineer
Peregrine: Lead of Applied AI
Meta: Partner Engineer, Generative AI
A QUICK POLL BEFORE YOU GO
Have you tried OpenClaw yet?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The waiting area looks more real with vendors, etc. and the background also seems less fictional.”
“[This image] looks a little too much like a fairyland. Those palm trees look out of place and the creek's perspective is off. The train in the fake one is a little too clean.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
