The Deep View
Inside OpenAI's case for an AI New Deal

Hello, friends. As AI gets more powerful, the real story is whether institutions can keep up. Anthropic is holding back a top model because it may be too dangerous to release, a rare sign that caution can still beat competitive pressure. Meanwhile, some companies are using AI as cover for layoffs, even though the smarter long-term play is redesigning jobs, not cutting them. And in an exclusive interview with The Deep View, the lead researcher of OpenAI's new policy blueprint shared why the document argues that society needs a New Deal-like rethink of AI’s role in the world before the biggest disruptions begin. —Jason Hiner
1. Exclusive: Inside OpenAI's case for an AI New Deal
2. Anthropic built a model too dangerous to ship — yet
3. AI-based layoffs are a sign you’re doing it wrong
POLICY
Exclusive: The story behind OpenAI's policy moonshot

OpenAI’s stated mission is to ensure AGI benefits all of humanity. A group of the company’s researchers is putting extra focus on the "all" part.
In an exclusive interview with The Deep View, the lead researcher on OpenAI's ambitious new blueprint, "Industrial Policy for the Intelligence Age: Ideas to Keep People First," explained why this report is needed and what the company hopes it will accomplish.
Adrien Ecoffet, the OpenAI research scientist who spearheaded the initiative, told me that a group of around three dozen researchers at OpenAI came together to build the report in collaboration with the policy team. They formed working groups on the topics they wanted to cover and eventually settled on two main themes:
Building an Open Economy (or spreading the economic impact of AI)
Building a Resilient Society (or how we keep AI safe and under human control)
The company is on the verge of releasing its most powerful models yet, ones that could create crises ranging from economic dislocation to cybersecurity disasters. To preempt this, the researchers explored policies that could rein in the damage before it happens, creating a policy blueprint that could change the social contract in the biggest way since the New Deal. On the surface, this looks like a document aimed at policymakers and public officials, but the reality is broader than that.
"This document is meant as a conversation starter," said Ecoffet. "The ideas are not fully baked. There are probably a lot of problems with them. In many cases, they are not very specific. And so what I hope is for people to react to it, and frankly criticize it [and] even come up with rip‑offs of it. And to really have a good conversation about these ideas."
The thirteen-page document has a strong egalitarian bent to it and proposes several revolutionary concepts:
A Public Wealth Fund: The government creates a fund that invests in AI companies and shares the profits with citizens, so everyone benefits from what could be the greatest wealth creator in history, rather than just the wealthy. This would raise all kinds of practical challenges. But if the researchers' goal is to stimulate debate, this proposal certainly will.
32-hour/four-day workweek pilots tied to efficiency dividends: If AI makes workers more productive, companies should pass those gains on to employees in the form of a shorter work week, rather than pocketing all the extra profits.
"Right to AI" similar to universal internet access: Just as we made sure most people can access electricity and the internet, the government should make sure everyone can access affordable AI tools. Again, this keeps the benefits from accruing disproportionately to rich people.
Taxes on AI-driven corporate gains and automated labor: As AI replaces human workers and companies earn higher profits, those companies should pay higher taxes to help fund social safety nets that retrain and support displaced workers.
Ecoffet said the goal of these proposals is not to have them all accepted as-is. Instead, the goal is to bring the risks and impacts of AI into the public consciousness before AGI and superintelligence fully come to fruition. That’s something that OpenAI CEO Sam Altman has also advocated, telling Ecoffet and other researchers in a roundtable published Tuesday that the public and policymakers alike need a long period to debate these ideas and make good decisions well before AI triggers potential crises that force us to act.
Al Gore, former vice president and founding partner of Generation Investment Management, echoed this sentiment in his keynote speech at the HumanX conference in San Francisco on Monday. Referring to Anthropic publishing its constitution for Claude in January, Gore told a crowd of several thousand attendees that “I think there ought to be a public discussion and debate of the constitution written for … each of the [frontier] models.” So far, none of the other frontier labs, including OpenAI, has published a constitution like the one Anthropic has opened up for Claude.

OpenAI believes we will need a New Deal-style policy moonshot to reframe the social contract around AI. Doing that is going to require raising awareness about these issues in the public consciousness, so that we come to a consensus around them by the time superintelligence gets here. OpenAI isn’t the first to call attention to these ideas. As companies continue their push for AGI, ethics organizations have been calling for regulation and governance of AI to keep humans in the driver's seat and allow the gains to be distributed more broadly. What makes this different from those movements is that OpenAI is the one shouting from the rooftops. In essence, the company is calling itself out and holding itself accountable, along with other frontier labs and leading players in the AI industry. It’s also an example of the kind of forward-thinking ethical position that usually characterizes OpenAI’s No. 1 rival, Anthropic.
Senior reporter Nat Rubio-Licht also contributed to this story.
TOGETHER WITH LLAMAINDEX
Save Time (and Tokens) with the #1 Agentic Document OCR
Your AI agent can write production-ready code with the best of them, but somehow can't accurately read a simple PDF. Classic.
But before you give up and accept that the rest of your day (or week) will be spent transcribing documents, you need to give LlamaParse a shot. This Agentic Document OCR reads even the most chaotic PDFs (we’re talking 10-K reports, invoices, legal docs, even full-blown research papers) with 99%+ accuracy.
It gives your agents the context they need, and it saves you a whole lotta hassle (and money; did we mention the money?). Try LlamaParse right here and get 20K free credits with code DEEPVIEW20.
PRODUCTS
Anthropic built a model too dangerous to ship, yet
Anthropic's most advanced model is finally here, just not in the way you might expect.
On Tuesday, Anthropic unveiled Project Glasswing, an initiative to secure critical software in the age of AI, as attacks become increasingly sophisticated and prevalent. Anthropic pointed to Claude Mythos, a model more advanced than its current top-tier model, Opus, and the far-reaching implications it carries for cybersecurity, as the driving force behind the initiative.
“Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities,” said the company in the blog post.
For this initiative, leaders across industries are joining Anthropic, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The plans for Project Glasswing include:
Launch partners: Companies listed above will use Mythos Preview as part of their defensive security work, and Anthropic will share learnings
Additional access: 40 additional organizations that build or maintain critical software infrastructure can access Mythos Preview to scan and secure systems
Monetary support: Anthropic is committing $100 million in usage credits for Mythos Preview across the use cases above, and $4 million in direct donations to open-source security organizations.
Cross-industry collaboration is essential to achieving a comprehensive understanding of the threat landscape, a point underscored by Steve Schmidt, SVP & Chief Security Officer at Amazon.
“The software that needs to be examined is literally everything across all of the different industries, and so we have to work together, because we don't have it all,” Schmidt told The Deep View.
Mythos Preview has already found thousands of “high-severity vulnerabilities” across “every major operating system and web browser.” The model outperforms Claude Opus 4.6 in vulnerability reproduction, scoring 83.1% vs. 66.6% on CyberGym, driven by stronger agentic coding and reasoning abilities, as reflected across benchmarks such as SWE-bench Pro and Terminal Bench 2.0.
The Claude Mythos preview will not be made generally available, though the company says its goal is to one day release models of this scale to the public. Ultimately, the initiative reflects Anthropic's commitment to its core mission of responsible AI deployment, a sentiment echoed by Adam Meyers, SVP of Counter Adversary Operations at CrowdStrike.
“I think it's important to think about the fact that they [Anthropic] did this because they had a new model, and they understood there were implications to this model that they didn't fully appreciate, and they wanted to almost get a peer review to understand what are we dealing with here, and I think that that is a really responsible way to address that problem,” said Meyers.

The AI race earns its name: labs are locked in fierce competition to release ever more powerful models. The risk, however, is that these advances are consistently outpacing our ability to responsibly anticipate and apply the appropriate guardrails. Releasing models that the world isn't yet equipped to handle carries serious consequences, particularly when bad actors enter the equation, and the potential for unprecedented harm is real. That Anthropic chose to pump the brakes despite the significant enterprise revenue at stake offers a measure of reassurance that responsibility, not greed, can still win out.
TOGETHER WITH TINES
New infrastructure management guide for modern IT Ops teams
IT infrastructure today is more complex than ever. More tools, more systems, more pressure to keep everything running smoothly.
Yet, most IT teams are still managing it with manual processes, disconnected workflows, and reactive fixes. Can you relate?
Tines recently published this new essential guide that breaks down how to change that. Here’s a preview of what’s inside:
A clear framework for reducing hidden waste and delays caused by manual capacity management
Practical guidance to improve reliability by moving from alert-driven firefighting to automated response
Insight into scaling infrastructure predictably without sacrificing performance or governance
WORKFORCE
AI-based layoffs are a sign you’re doing it wrong
Experts are warning against cutting jobs in favor of AI. But companies are going to try anyway.
A survey of 2,400 C-suite leaders published by AI agent platform Writer on Tuesday found that 60% of enterprises intend to lay off employees who can’t or won’t use AI. AI is also spurring favoritism, with 92% of executives surveyed admitting that they are cultivating a class of “AI elite” employees, and 77% of executives claimed that those who don’t use AI won’t be considered for promotions.
The severity toward employees who resist AI might be driven by executives' own anxiety:
38% of CEOs interviewed reported experiencing high levels of stress related to their AI strategies, and 64% feared losing their position if they failed to properly guide their employees through the AI transition.
"Executives, who are so crippled by anxiety around not having delivered any results [with AI], are clinging to the AI-first people in their companies [and] creating a dual class structure," May Habib, CEO of Writer, told The Deep View’s Jason Hiner.
Though these executives believe that AI can supercharge work, with 87% claiming their “power users” are five times more productive on average, the actual returns are still miles behind: only 29% report significant returns from generative AI and 23% from agents.
Because these companies have yet to reap what they've sown, many are turning to the one surefire place they can save a few bucks fast: payroll. Additionally, many companies will likely “AI wash” their headcount reductions, making the bloodbath look even larger, Chad Seiler, KPMG U.S. Industry Leader for Telecom, Media and Technology, told The Deep View.
The gains made from cutting staff and replacing them with AI, however, are temporary, said Seiler. “The losers are going to be the ones that figure out how to eliminate jobs,” he said. “It's not going to be durable. As businesses grow, people continue to hire, and so you're going to have to backslide into hiring more people.”
The durable strategy comes when roles are reimagined rather than eliminated, said Seiler. If agents can handle the grunt work, whether administrative tasks or data analysis, it could free up brain space for employees to do much higher-value work. After all, time is money.
“People on the winning side of this are going to be [asking], how do I free up more time for my people, so they can add more value to my organization?” said Seiler. “Versus ‘I cut 12% of my people through automation.’ That's not a winning strategy for any company, especially if you're a growth-oriented company that has anything to do with innovation.”

The enemy of a successful AI metamorphosis is impatience. If you are a leader who isn’t patient enough to reskill hesitant or skeptical employees, instead sending them to the chopping block as a means of getting fast returns, then your gains may only be temporary. On the flip side, if you are taking the time to rethink what the roles in your company might be capable of with AI agents in tow, you may be more likely to stay afloat. The problem, however, is that enterprises are rarely patient, especially with stakeholders breathing down leaders' necks. AI is moving faster than ever, money is being spent at a rapid clip and FOMO is at an all-time high. In the end, something has to give.
LINKS

Gemini 3-based AI overviews are wrong 10% of the time, according to NYT
Suno, Universal music licensing talks have stalled
Google adds mental health tools, hotline support to Gemini after lawsuit
Spotify expands AI-prompted music playlists to podcasts
Intel joins Musk’s Terafab AI chip project to power data centers, humanoids
AI network infrastructure firm Aria Networks raises $125 million Series A

Clico: A browser extension that pulls context from your open tabs and writes right at your cursor, without ever leaving the page. (sponsored)
Acrobat Student Spaces: Adobe has launched a suite of AI-powered Acrobat tools for students, allowing students to create quizzes and presentations from study materials.
Google AI Enhance: Google Photos now lets Android users enhance photos using AI, rolling out to users gradually.
Marble: World Labs has rolled out two new updates to its flagship model, including Marble 1.1 for better lighting and contrast, and Marble 1.1-Plus for scaling environments.

Meta: AI Research Scientist - Voice AI Team, Meta Superintelligence Lab
PayPal: Staff ML Scientist, Agentic AI
Fireworks AI: Member of Technical Staff, Research
Amazon: Sr. Applied Scientist, AGI Info
A QUICK POLL BEFORE YOU GO
Are you worried about AI fueling job loss?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“[This image] is not as crisp as the [other image].” “[The other image] had too much of that tell-tale sepia tone.”
“Both sets of dandelion puffs looked too patterned to be real. I went with the realistic grass seed.”
“I don't know if AI is becoming better, or if photography is becoming worse, because the real one doesn't look all that real to me!”


If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.












