Altman reframes who controls AI’s future

Welcome back. OpenAI is broadening its cloud strategy to include providers beyond Microsoft, a move that gives it greater cloud flexibility, broader enterprise reach, and a stronger IPO narrative. Mistral is tackling a different bottleneck: turning AI pilots into production workflows that enterprises can trust. And Sam Altman is moving AI decentralization into the industry spotlight. That’s the right conversation, but the real test is whether OpenAI and the other frontier labs change how they operate before market incentives push them toward more centralization, not less. We should hold them to it. —Jason Hiner
1. Altman elevates AI's centralization paradox
2. Mistral fixes enterprise AI’s last mile
3. OpenAI resets its cloud strategy beyond Microsoft
GOVERNANCE
Altman reframes who controls AI’s future
Last month, The Deep View raised the red flag about the risks of AI power centralizing in the hands of too few companies. On Monday, OpenAI CEO Sam Altman published a manifesto calling for the democratization of AI.
"Power in the future can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralized way by people," Altman wrote in his essay titled Our principles. "We believe the latter is much better, and our goal is to put truly general AI in the hands of as many people as possible."
This follows a policy document published earlier this month, in which OpenAI researchers shared a set of "ideas to keep people first": ambitious thought-starters aimed at inviting policymakers and government officials to play a larger role in the development of AI. At the time, Altman emphasized that the public and governments should have an extended period to debate these ideas and make good decisions long before AI precipitates a potential crisis.
Others have recently struck a similar tone on democratization and wider participation:
Greg Brockman (OpenAI president) told Alex Kantrowitz, "We need this broad conversation. We need lots of people to be aware that if this technology is going to come and change everything for everyone, people need to participate in that. It can't be something that's done off in secret by one centralized group."
Andrej Karpathy (former Tesla AI lead and early OpenAI cofounder) told Sarah Guo, "Centralization has a very poor track record in the past, in my view. There are a lot of pretty bad precedents [in economic and political systems]. So I want there to be a thing that's maybe not at the edge of capability because it's new and unexplored. But I want there to be a thing that's behind and is a common working space for intelligences that the entire industry has access to. That seems to me like a pretty decent power balance for the industry." Karpathy, of course, is talking less about policy and more about putting the benefits of open-source AI into the hands of a lot more people in the world, which will also help accomplish the democratization mission Altman and Brockman are extolling.
Beyond democratization, Altman's statement also elevates several other principles: empowerment, universal prosperity, resilience, and adaptability. These are the pillars of what Altman lays out as a path for the industry to build AI safely, minimize harm, and maximize the benefits for the broader public.

To be clear, Altman and others have mentioned the benefits of democratization and decentralization before. Still, I applaud the fact that OpenAI is placing so much emphasis on it now. The challenge is that all of this is still at the level of theory and dialogue. It's unclear how OpenAI will transform the way it operates to live up to these ideals. And it's even less clear if other AI labs will follow. But crucially, Altman reiterated, "We will resist the potential of this technology to consolidate power in the hands of the few." We need to hold him to that, and expect the same from the leaders of Anthropic, Google, Meta, xAI, Microsoft, Amazon, DeepSeek, and the rest of the current and future frontier labs. Because once these companies go public, all of the financial incentives will favor centralization. And it would take a very strong counterweight to balance it out.
TOGETHER WITH REDIS
Your agent isn't broken. Your context is.
Most AI agents don't fail because the model is bad. They fail because the model doesn't have the proper infrastructure to reason well.
Simba Khadder, Head of Engineering at Redis, lays out a four-pillar framework for building context systems that hold up in production, plus an architectural self-audit checklist you can run against your stack today.
Read the guide →
STARTUPS
Mistral fixes enterprise AI’s last mile
For many enterprises, the promise of AI stalls at the pilot phase, caught on a stubborn bottleneck: getting it deployed in ways that actually deliver value. Mistral believes it has an answer.
On Tuesday, the French AI lab launched Workflows in public preview, available in Studio. The orchestration layer lets enterprises run AI-powered processes reliably in production, handling tasks such as connecting tools, managing multi-step pipelines, detecting and recovering from failures, and pausing for human approval mid-execution without losing progress.
“Most of what's happening out there is about building things very easily for individual chatbots. We've taken the approach of tackling harder problems that need orchestration happening in different places where the execution is happening,” Elisa Salamanca, Head of Product at Mistral, told The Deep View.
With Workflows, a developer writes the workflow in Python and publishes it to Le Chat, Mistral's conversational AI assistant, so anyone in the organization can trigger it, according to the blog post. Every step is tracked and auditable in Studio, with built-in fault tolerance, durability, and the ability to pause mid-execution for human approval before resuming. Workflows itself is built on Temporal's durable execution engine, the same infrastructure used by Netflix, Stripe, and Salesforce.
A major aspect of Workflows is flexibility: meeting customers where their needs and work processes are. Mistral emphasizes mission-critical, flexible deployment, with some customers running hybrid workflows, others running entirely in their own virtual private cloud, and others running entirely on Mistral’s infrastructure.
In another example of that flexibility, Salamanca explained that since not every workflow needs to be agentic, Workflows is hybrid: parts are rule-based, and agentic tools are added only when needed.
“So Workflows has been built in a way that you can orchestrate things deterministically and inject agentic pieces whenever you want them to be done, so it combines deterministic code with agentic capabilities,” said Salamanca.
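Mistral hasn't published its Workflows API in this excerpt, so here is a minimal, hypothetical sketch in plain Python (the function names and rules are illustrative assumptions, not Mistral's SDK) of the pattern Salamanca describes: deterministic steps run first, an agentic piece is injected only for ambiguous cases, and escalations pause for human sign-off before the workflow resumes.

```python
# Hypothetical sketch of a hybrid deterministic/agentic workflow.
# NOT Mistral's actual Workflows API; names and thresholds are illustrative.

def rule_based_check(doc: dict) -> str:
    """Deterministic step: simple, auditable compliance rules."""
    if doc["amount"] <= 1_000:
        return "approve"
    if doc["amount"] > 100_000:
        return "escalate"
    return "ambiguous"  # hand off to the agentic step

def agentic_review(doc: dict) -> str:
    """Stand-in for an LLM call; a real system would invoke a model here."""
    return "approve" if doc.get("has_signature") else "escalate"

def run_workflow(doc: dict, human_approver=None) -> str:
    decision = rule_based_check(doc)
    if decision == "ambiguous":
        decision = agentic_review(doc)      # agentic piece, only when needed
    if decision == "escalate" and human_approver:
        decision = human_approver(doc)      # pause for human sign-off
    return decision

# Small docs clear deterministically; mid-size ones take the agentic path.
print(run_workflow({"amount": 500}))                                          # approve
print(run_workflow({"amount": 50_000, "has_signature": True}))                # approve
print(run_workflow({"amount": 500_000}, human_approver=lambda d: "reject"))   # reject
```

The design point is that most steps stay deterministic (cheap, testable, auditable), while the model is invoked only where rules run out; in Mistral's production version, durability and the mid-execution pause come from Temporal's execution engine rather than in-process calls like these.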
Mistral already has customers across different industries, with use cases including cargo release automation, document compliance checking, and customer support triage, according to the release. To get started, users can try Workflows in Studio.

When catering to enterprises, AI labs often focus on new features that optimize specific tasks. But deployment is where enterprises most commonly struggle, and solving it on their own would require building orchestration infrastructure from scratch. Workflows targets the infrastructure layer that powers real, high-stakes processes, and in doing so, Mistral shows a deep awareness of how to help its primary audience of ROI-focused enterprises.
TOGETHER WITH ATLAN
Atlan Activate 2026
Claude for one team. Cortex for another. Genie rooms. Copilot. Every AI tool in your org is rebuilding context from zero.
Different answers to the same question. Nobody knows what AI actually knows about your business.
That's the gap between an AI strategy and a production-ready AI stack. On April 29, join Atlan for a live demo of the context layer, a shared infrastructure every agent in your stack draws from.
Save your spot
BIG TECH
OpenAI resets its cloud strategy beyond Microsoft
ChatGPT's ascent boosted Microsoft's AI strategy by providing access to OpenAI's latest and greatest models. But what began as a tight-knit partnership has since loosened, and the relationship between the AI lab and its biggest investor just got less exclusive.
On Monday, OpenAI announced “the next phase of the Microsoft OpenAI partnership,” which involved an amendment to the agreement granting OpenAI greater independence from its lead investor.
The most notable change is that OpenAI can now offer its products to customers on any cloud provider, though Microsoft remains OpenAI’s primary cloud partner and products still ship first on Azure unless Microsoft chooses not to host them.
Other major changes include:
Product access: Microsoft will continue to have a license to OpenAI IP for models and products through 2032, but it is now non-exclusive.
Revenue share: Microsoft will no longer pay OpenAI a revenue share, while OpenAI will continue to pay Microsoft at the same percentage through 2030, subject to a total cap. Previously, Microsoft shared 20% of OpenAI's model sales on Azure with OpenAI, and OpenAI shared 20% of its total revenue with Microsoft.
Involvement: Microsoft “continues to participate in OpenAI’s growth as a major shareholder.”
The previous version of the agreement, from October 2025, was written so that Microsoft retained IP rights and Azure API exclusivity until OpenAI created AGI, or human-level intelligence. That clause was controversial, as AGI itself is a nebulous concept and its exact definition is highly contested; so much so that the agreement required AGI to be verified by an independent expert panel. The blog post cites “long-term clarity” as the motivation behind the amendment, and removing the AGI clause makes sense as a result.

Microsoft and OpenAI have long held different views of AGI, and while Microsoft has repeatedly said this is not a point of contention, distance between the two seemed inevitable given OpenAI's growth pace. More concretely, exclusivity terms that restricted OpenAI from selling to enterprises outside Azure were proving limiting, with OpenAI’s newly appointed revenue chief, Denise Dresser, saying in an internal memo that the Microsoft partnership has “limited our ability to meet enterprises where they are,” including on Amazon's AWS Bedrock. Non-exclusivity is a smart move for OpenAI, as it paves the way for major deals like the one with Amazon and clears a path to IPO by demonstrating diversified revenue and cloud flexibility to potential investors.
LINKS

China blocks Meta’s $2.5 billion acquisition of Manus AI
OpenAI works with MediaTek and Qualcomm to develop smartphone processors, report
Canva apologizes for and fixes Magic Layers bug that replaced “Palestine”
Meta looks to power AI data centers with solar energy collected in space
David Silver, researcher behind AlphaGo, lands $1.1B seed round for AI lab

Google: New five-day online AI agents intensive vibe coding course with Kaggle
Kimi K2.6: Ranks first on OpenRouter’s weekly LLM leaderboard
Telegram: Users can use AI bots to develop, launch, and manage other bots on the app
Lovable: A new mobile app that lets users build from anywhere

Anthropic: Events Lead, Brand
Meta: Marketing Technology Manager, AI
Basis AI: Revenue Operations Leader
CVS Health: Product Manager - AI
A QUICK POLL BEFORE YOU GO
Have you seen The AI Doc, the new film about the AI industry?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Light looked more real in [this image].” “[This image] seemed haphazard. I don’t think AI has a setting for that.” “The signs about the niche looked more real. The other image was too much contrast on the letters. They looked fake.”
“The objects on the shelf in [this image] just didn’t make sense to be together.” “The shadow of the tree on the wall wasn't quite right.” “The plushies in [this] image looked aligned too well.”


If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.













