OpenAI, Google staff back Anthropic stance

Welcome back. Apple’s new MacBook Pro M5 Max points to a potential reset in AI economics, making it possible to run larger models locally and reduce reliance on costly cloud inference. Microsoft is pushing deeper into agentic work with Copilot Cowork, but the early version looks less capable than more ambitious rival agents. And Anthropic’s clash with the Pentagon is turning into a broader story, as employees from OpenAI and Google publicly back the company’s red lines. Jason Hiner

IN TODAY’S NEWSLETTER

1. Rival AI workers line up behind Anthropic

2. Microsoft launches Cowork agent, with limits

3. MacBook Pro M5 Max could change AI economics

GOVERNANCE

Rival AI workers line up behind Anthropic

Anthropic’s saga with the Pentagon continues. Now, it has support.  

On Monday, the AI firm sued the Department of Defense and other federal agencies over its recent designation as a supply chain risk, a retaliatory move that could cost the company billions, according to WIRED. The company isn't fighting alone, however: a coalition of dozens of employees from OpenAI and Google filed an amicus brief supporting its fight against the government.

The brief, whose signatories include Google DeepMind chief scientist Jeff Dean, argues that the designation undermines competitiveness, stymies discussions of AI safeguards, and will have ripple effects across the industry at large.

“In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse,” the brief claims. 

It’s the latest sign that tech workers are not taking the actions of their employers lightly. 

  • On Saturday, Caitlin Kalinowski, a robotics leader at OpenAI, announced her resignation from the company after it struck a deal with the Pentagon following the agency’s falling-out with Anthropic. In a post on X, Kalinowski said surveillance of Americans and fully autonomous weapons “are lines that deserved more deliberation than they got,” adding that her resignation is “about principle, not people.”

  • In late February, a petition titled “We Will Not Be Divided,” backing Anthropic’s “red lines,” drew nearly 1,000 signatures from workers at OpenAI and Google. The petition calls on tech workers at frontier AI companies to stand together against the demands of the Department of War.

People like to think of tech companies as single, collective entities that all think and feel the same way about the work they do. The recent chaos around Anthropic has demonstrated that they are anything but. These employees clearly differ from their employers on the gravity of the technology they’re building and how it should be handled, and given the current premium on AI talent, they have leverage. Still, these examples are likely just the beginning. If the social movements and unionization campaigns of the last century are any indicator, real collective action will take more than a few hundred signatures. The question remains: How much support would it take to sway the decisions of the most powerful entities in AI?

Nat Rubio-Licht

TOGETHER WITH TELEPORT

Identity Infrastructure Built for Autonomous AI

AI agents deploy code, access infrastructure, and trigger workflows at machine speed. Legacy identity systems built for humans weren’t designed to govern non-human actors operating autonomously.

Without purpose-built controls, agentic workflows create credential sprawl, excessive privilege, and limited auditability.

Teleport’s Agentic Identity Framework defines what infrastructure requires to adopt AI securely at scale:

  • Short-lived credentials for non-human identities

  • Policy-based access for agentic workloads

  • Elimination of static secrets

  • Full auditability and clear provenance

PRODUCTS

Microsoft unveils Copilot Cowork, but with limits

In what Microsoft is calling the third wave of Copilot, the company has unveiled Copilot Cowork, a push deeper into agentic capabilities.

Copilot Cowork is powered by Work IQ, which the company describes as the “brain” behind Copilot: it draws on your data, context, skills, and tools to provide personalized assistance. The concept mirrors the viral Claude Cowork, allowing M365 users to offload tasks to Copilot, which then works in the background to execute them pending user approval.

Microsoft offers several examples of what it can do:

  • Triaging your calendar: It can review your Outlook schedule, flag low-value meetings and conflicts, and propose changes for the user to approve.

  • Generating content: It can create documents and presentation decks. 

  • Deep Research: Cowork can research across the company and web sources to provide a well-cited, well-organized summary. 

This launch is an attempt to stay competitive with agentic AI tools such as OpenClaw and Anthropic’s Claude Cowork. However, as Gartner research notes, Microsoft’s version has some key differences:

“Unlike Claude Cowork, it does not support local computer use, cannot interact directly with local files or applications, and lacks native integrations with third-party tools and services. These omissions constrain [Copilot] Cowork’s autonomy and limit its ability to operate end-to-end workloads outside Microsoft 365.”

Currently, Copilot Cowork is in Research Preview and being tested with a select group of customers. Microsoft said it will be more broadly available in the Frontier program in late March 2026.

When ChatGPT first exploded in popularity, Microsoft was quick to capitalize, offering free access to OpenAI's latest models through its tools, even when OpenAI itself was paywalling them. That strategy worked initially, but Microsoft has had trouble competing since. Without its own models capable of enabling unique experiences, catching up could be difficult, though not impossible. The talent is there: Microsoft AI CEO Mustafa Suleyman co-founded DeepMind and has been aggressively recruiting from rivals. Nevertheless, Microsoft still needs a way to appeal to consumers beyond bundling Copilot into laptops through OEM deals or its Microsoft 365 business licenses.

TOGETHER WITH CRUSOE

Turn blind spots into actionable insights

AI velocity shouldn't stall because of silent GPU underutilization or training jobs that fail without a clear root cause. Crusoe Command Center is the unified operations platform designed to eliminate friction through automated orchestration, observability, and support.

Command Center replaces fragmented monitoring with a single source of truth, ensuring every GPU in your cluster is visible and accountable — so your team can move out of maintenance mode and start building with confidence.

HARDWARE

MacBook Pro M5 Max could change AI economics

I've been testing Apple's MacBook Pro M5 Max for the past week, and the real story isn't how crazy fast it is, but how it's going to change the game for AI inference. 

The model I've been testing is a 16-inch MacBook Pro with an M5 Max chip (18-core CPU, 40-core GPU) and 128GB of unified memory. This is a $6,000 machine. Previously, that would have sounded nuts unless you were a video or audio producer or a 3D graphic artist.

But the AI boom has provided a new reason and a new clientele for a machine like this.

It's all about running AI inference locally. That's become especially urgent because cloud inference costs have been spiraling out of control, making it very difficult to run AI profitably and threatening to derail the AI revolution. The issue gained additional attention this week when entrepreneur Chamath Palihapitiya reported that AI inference costs at his startup had tripled over the past three months while productivity and profitability had not increased.

Apple did two things to make the M5 Max MacBook Pro a monster at running AI models locally: 

  1. Expanded memory bandwidth means the GPU can now access all of the onboard unified memory, so you can run much larger models (up to 70B parameters)

  2. The new "Neural Accelerators" in every GPU core will speed up the performance of local LLMs
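To see why 128GB of unified memory is the headline number, here's a rough back-of-envelope sketch. The rule of thumb (my own illustration, not Apple's figures) is that a model's weight footprint is roughly its parameter count times the bytes per parameter at a given quantization level, plus some overhead for the KV cache and activations, assumed here at 20%:

```python
# Rough rule of thumb for whether a model fits in unified memory:
# weight footprint ≈ parameter count × bytes per parameter, plus
# an assumed ~20% overhead for KV cache and activations.

def est_memory_gb(params_billions: float, bits_per_weight: int,
                  overhead: float = 0.20) -> float:
    """Estimated RAM needed to run the model, in GB (illustrative)."""
    weights_gb = params_billions * 1e9 * (bits_per_weight / 8) / 1e9
    return weights_gb * (1 + overhead)

# Common quantization levels for open-weight models on a 128GB machine
for params, bits in [(8, 4), (70, 4), (70, 8), (70, 16)]:
    gb = est_memory_gb(params, bits)
    fits = "fits" if gb <= 128 else "too big"
    print(f"{params}B @ {bits}-bit: ~{gb:.0f} GB ({fits} in 128GB)")
```

Under these assumptions, a 70B model at 4-bit quantization needs roughly 42GB, comfortably inside 128GB, while the same model at full 16-bit precision does not fit, which is why quantized open-weight models are the sweet spot for this hardware.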

This MacBook Pro could break one of the key pillars of the current AI ecosystem: paying for inference tokens for models hosted in the cloud. That's what Palihapitiya was waving the red flag about. These token costs get insanely expensive very fast. Entrepreneurs running AI agents like OpenClaw say that using it to build software and apps can end up costing more in monthly token fees than a human developer's salary.
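The salary comparison is easy to sanity-check with simple arithmetic. The figures below are hypothetical, chosen only to illustrate the shape of the math, not actual published prices or usage numbers:

```python
# Illustrative back-of-envelope math (all figures are assumptions,
# not published prices): how agent token bills can rival a salary.

def monthly_token_cost(tokens_per_day: float, price_per_mtok: float,
                       days: int = 30) -> float:
    """Monthly spend in dollars for a given daily token volume."""
    return tokens_per_day * days * price_per_mtok / 1e6

# Assume a busy coding agent burns 200M tokens/day at a blended
# $3 per million tokens (hypothetical numbers for illustration).
cost = monthly_token_cost(200e6, 3.0)
print(f"~${cost:,.0f}/month")  # at this rate, ~$18,000/month
```

At those assumed rates the agent's monthly bill lands in developer-salary territory, which is exactly the dynamic that makes free local inference attractive.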

On the MacBook Pro M5 Max, you can run very powerful models locally for free using several different options:

  • MLX is Apple's lesser-known open-source framework, with command-line tools, for running language models directly on Apple silicon chips

  • LM Studio is a popular GUI tool for downloading the latest models and running them from your own computer

  • Ollama is the preferred tool for developers to manage various models, and it's free when you run it on your local system 

All of these platforms allow you to freely download and locally run the latest open-source models, including Qwen, DeepSeek, GPT-OSS (OpenAI), Gemma (Google), Llama (Meta), Nemotron (Nvidia), MiniMax, GLM (Z.ai), Mistral, and more.

Beyond the cost savings, running these models locally also has important benefits for security, privacy, and data sovereignty.

Since the end of 2025, I've been hearing from executives and leaders about the out-of-control costs of AI inference and their efforts to reduce them by using smaller models, domain-specific models, and open-source models. Leaders reported that, when applied smartly to specific use cases, this approach can deliver better performance, fewer hallucinations, and lower costs. I now fully expect that running models locally on machines like the MacBook Pro M5 Max will become another part of the answer. I'll continue testing models locally on this machine and share what I'm learning. You can find me on X/Twitter at x.com/jasonhiner to follow my updates in real time.

Jason Hiner, Editor-in-Chief

LINKS

  • Claude Code: Anthropic’s coding agent got a new Code Review tool that sends a team of agents to hunt for bugs when a PR is opened. 

  • Pomelli: Google’s AI tool for creating on-brand content for businesses just expanded into 170 countries.

  • Freepik: The AI creative platform was ranked the #11 most-used Gen AI Product Worldwide by A16Z.

  • Runway: The video-generating platform introduced Runway Characters, which are real-time intelligent avatars that can be used for interactive applications.

  • EPM Scientific: Frontier ML Infrastructure Research Engineer

  • Amazon: Sr. Applied Scientist - Privacy, Privacy Engineering

  • Adobe: Senior Machine Learning Engineer

  • Snap: Staff Machine Learning Engineer, Ad Ranking, Level 6

GAMES

Which image is real?


POLL RESULTS

Do you believe AGI should be the goal the frontier AI labs are working toward?

Yes (28%)
No (56%)
Other (16%)

The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The water splash [on this image looked more real.]”

“[The other image] was more rich and detailed, leading me to believe it was AI generated.”

“This is the first time I got one of these wrong, mainly because the AI waterfall looks like a place I've actually been -- Silver Falls, OR.”

“Line of people on [the] trail are too evenly spaced.”

“The people in [this image] have too similar of posture/build.”

“[In this image] everything both close and far is perfectly focused.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.