Anthropic passes $30B ARR after Pentagon clash

Hello, friends. Robotics is entering its pretraining era, as Generalist shows how data and scaling laws could unlock more capable machines. At the same time, AI is increasingly being framed as infrastructure akin to electricity, pointing to a future where intelligence becomes always-on and rarely noticed. And Anthropic has surged past $30B in ARR, signaling that enterprise demand, not consumer hype, is shaping the winners of the AI boom. All in all, AI's center of gravity is moving toward wins that are under-the-radar, operational, and measured by outcomes.

Jason Hiner

IN TODAY’S NEWSLETTER

1. Anthropic passes $30B ARR after Pentagon clash

2. Why AI may be more like electricity than software

3. Robotics startup is on a quest for physical AGI

MARKETS

$30B ARR shows Anthropic’s strategy is working

While rivals zigged, Anthropic zagged, and the bet keeps paying off.

On Monday, the company announced that its revenue run-rate, an annualized projection based on current revenue, has surpassed $30 billion. This figure is more than triple the run rate at the end of 2025, which came in at $9 billion, and more than double that of mid-February, which the company reported was $14 billion. By the end of February, CEO Dario Amodei confirmed annual revenue had exceeded $19 billion.
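For readers unfamiliar with the metric, run-rate is simple arithmetic: take revenue over a recent short period and annualize it. A quick sketch below; the $2.5 billion monthly figure is a hypothetical back-of-envelope number implied by a $30 billion run-rate, and only the run-rate totals come from the article.

```python
# Run-rate annualizes current revenue: monthly revenue x 12.
def run_rate(monthly_revenue_usd: float) -> float:
    return monthly_revenue_usd * 12

# A $30B run-rate implies roughly $2.5B in monthly revenue.
assert run_rate(2.5e9) == 30e9

# Growth multiples cited in the article:
end_2025, mid_feb, current = 9e9, 14e9, 30e9
print(round(current / end_2025, 1))  # growth since end of 2025
print(round(current / mid_feb, 1))   # growth since mid-February
```

This is also why run-rate figures can outpace actual annual revenue: they extrapolate a hot month across a full year.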

This exponential growth results from an equally notable increase in customer demand. In February, Anthropic disclosed that more than 500 business customers were each spending over $1 million; it now reports that the number exceeds 1,000 businesses, a two-fold increase in less than two months.

The rapid growth can be mostly attributed to Anthropic’s laser focus on enterprise. For instance, Claude Code, a go-to coding tool for many developers, alone generated a run-rate revenue of over $2.5 billion by February. 

In that same month, weekly active users had also more than doubled since January 1, and business subscriptions to Claude Code had quadrupled since the beginning of 2026. Those figures are likely much larger now. This success has allowed the company to narrow the gap with the much larger OpenAI, the company that kick-started the AI race as we know it.

At the end of February, OpenAI topped $25 billion in annualized revenue, according to a report from The Information, citing a person familiar with the figure, representing a 17% increase from the annualized revenue it generated at the end of last year. The comparison isn't perfectly apples-to-apples, as the two companies use different methods to calculate revenue, as noted by the WSJ. But with an IPO in the works, the ChatGPT maker has recently made moves to pivot towards enterprise, just like Anthropic.

Last month, it shut down its Sora generative AI video platform, ended its $1 billion content partnership with Disney, and put its ChatGPT “adult mode” on hold, all while OpenAI CTO Sarah Friar acknowledged that enterprise “is a very profitable business at scale” and that it's how OpenAI will “build a sustainable business model.” 

To meet growing demand and secure the computing power needed to serve it, Anthropic also announced a new agreement expanding its existing partnership with Google Cloud and Broadcom. The resulting multiple-gigawatt TPU capacity is expected to come online beginning in 2027, according to the blog post.

Anthropic has had a couple of challenging months, making headlines for reasons it has typically avoided in the past: controversy and accidental data exposures. It found itself in a standoff with the Pentagon, which demanded the ability to use Claude for all lawful purposes, including autonomous weapons and mass surveillance, and, upon Anthropic's refusal, labeled the company a "supply chain risk." Anthropic also suffered two accidental data leaks within five days of each other in late March.

Yet through it all, the company is reportedly considering going public as soon as October, putting it in a race with rival OpenAI for a public listing. Against that backdrop, continuing to invest in and report results from what it does best, enterprise, is a sound strategic move; that focus remains the core driver of its growth and revenue.

TOGETHER WITH DESCOPE

Ship AI agents and MCP servers with identity built-in

Every company has an AI/MCP project, but identity becomes a blocker. Following the MCP spec, debating DCR vs. CIMD, issuing short-lived credentials… the list goes on.

Descope Agentic Identity Hub is an identity provider for AI agents that lets developers easily add auth, access control, and credential management to their AI systems.

  • WisdomAI got MCP auth running in 1.5 days

  • Daylight Security shipped an MCP server with auth built-in

  • Cequence uses it under the hood in their AI Gateway

BIG TECH

Why AI may be more like electricity than software

If you think AI is the world's next smartphone or cloud, Nvidia would like to change your mind. 

Instead, Nvidia thinks AI equates more to underlying infrastructure like electricity or the internet. That was the topic of the opening keynote at HumanX in San Francisco on Monday, an event that brings together 6,000 people from across the AI industry to discuss how to use AI to solve business problems.

I'd tweak this idea to say that today's AI will evolve into intelligence so ubiquitous that we'll rarely mention it. But it's likely to power the next generation of tools, as well as the next set of technological and scientific breakthroughs.

Those promises are why the AI industry is in the midst of "the largest infrastructure buildout in human history," as HumanX CEO Stefan Weitz called it during Monday's keynote.

That's because all that intelligence requires a lot of hardware to run, and it's going to require a lot more in the years ahead as the number of people using AI continues to grow and those who are using it today keep finding more things to do with it. 

"The connection between compute and intelligence is stronger than ever," Bryan Catanzaro, VP of applied deep learning research at Nvidia, said on Monday during the opening panel.  

Nvidia has distilled all the technology it takes to power AI into the phrase "AI is a five-layer cake," which also served as the title of HumanX's opening keynote on Monday. Here's how the layers break down:

  1. Energy: Everything starts with power, and right now, this is the greatest constraint to scaling up AI to meet future demand.

  2. Chips: Today's AI workloads need GPUs to run tasks in parallel at a massive scale, high-bandwidth memory to move data at breakneck speeds, and fast interconnections between all the pieces.

  3. Infrastructure: Here's where the physical components come together, from land to construction to power delivery to networking to cooling to server racks. This is where AI factories are emerging.

  4. Models: The AI models understand topics across a ton of different domains, and now that's expanding to other kinds of models as well, from scientific discoveries to autonomous systems to robotics.

  5. Applications: The place where almost all of the value is created remains the application layer. It's where we get daily tools, agents, coding helpers, self-driving cars, industrial robots, and lots of other things being invented day by day.

In the months and years ahead, all these advances will depend heavily on driving costs as low as possible and performance as high as possible. "When it comes to inference, it's not just about latency and cost, it's about quality," said Lin Qiao, CEO of Fireworks AI, another member of the panel. By "quality," Qiao meant that accuracy and the elimination of hallucinations will also be huge factors in advancing intelligence to the point where it becomes a utility. Think of electricity getting refined and optimized so it doesn't throw breakers, or the internet bringing widespread broadband with 99.9% uptime to most of the population.

Qiao also described a future where everyone has their own model, optimized for their needs, interests, and daily tasks. That kind of individualization is where intelligence could go far beyond electricity and the internet as a general-purpose technology.

Jason Hiner, Editor-in-Chief

TOGETHER WITH TIGER DATA

ghost: free postgres for your agents

your agent needs a database. not tomorrow, not after a config file, not after a 12-step setup wizard. now.

ghost gives your agent unlimited, free postgres databases in seconds. one CLI command. no credit card. spin up, build, fork, tear down, repeat. databases at the rate of ideas, ephemeral and scalable.

stop wiring infrastructure. start shipping the thing you actually want to build.

HARDWARE

Robotics startup is on a quest for physical AGI

Can a single model enable a robot to do anything? One company is on a quest to find out. 

Generalist AI, a startup aiming to build “physical AGI,” debuted GEN-1 last week, its latest attempt at creating a hardware-agnostic, general-purpose robotics model. The company said that GEN-1 has passed a new performance threshold, demonstrating the ability to master simple physical tasks with reliability, speed and improvisation. 

According to Generalist, its model has a 99% success rate on tasks such as folding clothes, packaging items and folding boxes, whereas previous models saw only a 64% success rate. It also completes those tasks three times faster and requires only one hour of "robot data" per task, enabling "commercial viability across a broad range of applications."

“While it cannot solve all tasks today, it is a significant step towards our mission of creating generalist intelligence for the physical world,” the company said in its announcement. 

Generalist, which reached a $440 million valuation last year following a $140 million funding round, is one of several budding companies seeking to capture the momentum in physical AI. While many focus on specific form factors, like industrial arms, delivery bots or humanoids, Generalist's model is designed not to be locked into any one of them.

At Nvidia GTC in March, Lucy Licht, Generalist’s partnerships lead, told The Deep View that the strength of the company’s model is that it’s “totally hardware agnostic.” By collecting an “incredibly diverse” set of pretraining data — one that the company has claimed is the world’s largest pretraining dataset for robotics — any use case, whether it be industrial, commercial, manufacturing or home, “exists inside of the model and just needs to be awoken by fine-tuning data.” 

In a demo at the conference, a two-armed robot fitted with a Generalist model was able to delicately pick up a smartphone, place it in a small box and replace the lid. 

“We see strong evidence and signs of life that the model works on a wide variety of embodiments,” said Licht. 

Jamie Lee Solimano, the company’s applied AI lead, told me that the company’s work to-date has “ushered robotics into its pretraining era.” 

“We've never before seen real scaling laws in robotics, because we've never had enough data,” said Solimano. “What we are seeing is an absolute revolution in robotics: To be able to show that pretraining works and completely reduce the amount of time that it takes to train up a new task or a new capability.”

Data is one of physical AI's biggest roadblocks. Ken Goldberg, a professor at UC Berkeley, wrote a paper last year that detailed this problem, calling it the 100,000-year data gap: These models need far more data than is actually available. Generalist is on a mission to close that gap using pretraining, allowing robots to learn without the need for large simulation datasets. By vastly speeding up the process of training and deploying robots, those robots could then collect more data, get more training and become smarter. The result is the kind of data flywheel that, in recent years, let LLMs grow as powerful as they are today.

Nat Rubio-Licht

LINKS

  • Genspark: Next-gen Genspark AI Workspace 4.0 launches on April 7 at 8 PM PT

  • Manus: Meta’s AI agent is available to use in Slack 

  • Google AI Edge Gallery: The new app ranks #8 on the iOS App Store for productivity

  • Google Photos: “AI-enhance” feature is now available to Android users worldwide

  • Anthropic: Research Engineer, Cybersecurity Reinforcement Learning

  • Scale AI: Research Scientist — Frontier Risk Evaluations

  • Visa: Staff Research Scientist- GenAI Foundational

  • LinkedIn: Senior Applied Scientist, AI/ML

GAMES

Which image is real?

Login or Subscribe to participate in polls.

POLL RESULTS

How do most of the people you know outside of the tech industry feel about AI?

Positive (13%)
Negative (30%)
Confused (51%)
Other (6%)

The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Not so uniform, not so perfect.”

“The different shapes of shadows behind the cups and the imperfections in the green wallpaper seemed real.”

“It feels like the imperfect one is generally the real one. AI is getting so good that the images look too good… and miss the messiness that humans bring.”

“The shadows of the mugs as well as the one-piece floating shelf tipped the scale of authenticity toward this image. I really like how these images are super similar down to the mug designs. Very tricky!”

“I've seen so many photos that look similar to this, but the second option looked 'too' clear and in focus. Turns out I was wrong!”

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.