Can an AI agent become your Iron Man suit?

Welcome back. AI ambition is outrunning reality, and AI safety is in the crosshairs again. The AI skills gold rush continues: enterprises are pouring millions into training and talent, yet economists argue AI added “basically zero” to U.S. GDP growth in 2025. Even OpenAI’s Brad Lightcap says true penetration into business processes has just begun. Meanwhile, Anthropic is loosening parts of its Responsible Scaling Policy while holding firm against U.S. government pressure to relax its safety measures. And finally, Quill’s new “Chief of AI Staff” bets the next agent wave will feel less like a self-driving car and more like a superhero outfit. —Jason Hiner
1. Can an AI agent become your Iron Man suit?
2. Pressure mounts on Anthropic’s AI ethics stance
3. Reports: AI skills demand outweighs returns
STARTUPS
Can an AI agent become your Iron Man suit?
The idea of an AI Chief of Staff has quickly gained currency as AI agents have had their moment in early 2026. And at the pace the AI industry is moving right now, it's no surprise that the concept now has an official product.
On Wednesday, Quill launched Quilliam, its "Chief of AI Staff," built to power up knowledge workers rather than replace them, while preserving the security, sovereignty, and localization of their data. Previously known as Quill Meetings, a competitor to products such as Granola and Fireflies, the company is reinventing itself around making meetings more actionable with agentic AI.
At the same time, Quill announced a $6.5 million seed funding round and a new COO, Yacob Berhane, to pursue the new mission. The Deep View spoke with both Berhane and CEO Michael Daugherty about the launch of their Chief of AI Staff.
Here's what they say it can do:
Turn meeting action items into concrete actions: Automatically create or update project management tickets, Notion docs, and other systems of record, showing you a high‑level plan and then executing after you click approve.
Automate follow‑through beyond meetings: It can draft emails, memos, and summaries tailored to each recipient or use case, such as VC rejection emails, internal investment memos, or a recap email from a parent‑teacher conference. So you start with decision‑making, not recaps.
Use your entire meeting history as context: It lets you query across all past calls (e.g., “Catch me up on the last call” or “Summarize top security requests from my last three meetings”), and then it can spin those insights into structured work.
Keep you present in meetings while capturing what matters: It lets you mark highlights and take screenshots in real time; those cues are used to personalize notes and pull out examples you were thinking about, even if you didn’t fully verbalize them in the meeting.
Run privately on your own machine, even fully offline: The agent records and stores audio and transcripts locally, can enforce strict deletion policies, and can run against local models (e.g., OpenAI's GPT‑OSS 20B) and with Wi‑Fi off, so no meeting data has to leave the device.

In our discussion, Daugherty emphasized two different paradigms for agents. He said, “You either end up with a Waymo car, where you have an agent operating on its own… or you end up with an Iron Man suit where you are responsible for the outputs, you are actually in control, but it's making you much stronger and able to do a lot more." Daugherty portrayed Quill's agent as the Iron Man suit. That's a metaphor that will make a lot more professionals and enterprises comfortable. And taking action on your meeting notes is a great place for AI agents to start clawing back time for workers. Naturally, this will have the biggest impact on people who spend at least half their day in meetings, rather than on individual contributors who are heads-down all day working on projects.
TOGETHER WITH ENERGYX
AI has the lithium boom heating up
Thanks to growing demand across high-growth sectors like AI and robotics, lithium stock prices grew 2X+ from June 2025 to January 2026.
$ALB climbed as high as 227%; $LAC hit 151%; $SQM, 159%. But the real winner may be a stock not listed on public exchanges, EnergyX.
This $1B unicorn’s patented technology can recover up to 3X more lithium than traditional methods, earning investment from leaders like General Motors. Now they’re preparing for commercial production just as experts project 5X demand growth by 2040.
They’ve announced what could be one of the US’ largest lithium production facilities and have rights to ~150,000 lithium-rich acres across the Americas.
HOW TO AI
Talk instead of type

Wispr Flow is a voice dictation tool that actually works… you just talk naturally, and it outputs clean, punctuated text without the "ums," filler words, or the robotic delivery that built-in dictation requires. Say "let's meet at 4 pm, actually 3 pm," and it corrects itself in real-time. It works across any app (email, Slack, WhatsApp, docs, etc.) and supports 100+ languages. The Android app just launched on Monday, joining Mac, Windows, and iOS. On Android, it's a floating bubble; on iOS, it's a dedicated keyboard; on desktop, it works system-wide.
How to set it up:
Download the app (click here)
Sign in and grant permissions when prompted
Open any app with a text field, and Flow will automatically appear
Android: Hold the bubble to dictate, or tap to start/stop
iOS: Switch to the Wispr keyboard, then hold the mic
Desktop: Use the keyboard shortcut to start dictating
Speak like a human and get near-perfect transcription
GOVERNANCE
Pressure mounts on Anthropic’s AI ethics stance
Anthropic might be risking the thing that makes it Anthropic.
On Tuesday, the company announced changes to its Responsible Scaling Policy, the framework that prevents Anthropic’s models from being released without proper safety and security measures.
The biggest change? The company has struck its pledge to hold back models when it can’t guarantee proper risk mitigations in advance of release. The company has also dropped its commitment not to train models above a certain capability threshold without corresponding safety measures.
In an interview with Time, Anthropic’s chief science officer Jared Kaplan said that it “wouldn't actually help anyone” for it to stop training AI models. “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”
Some of the highlights from the new version of the policy include:
Meeting or exceeding the “overall risk reduction posture” of competitors
Delaying development if Anthropic is in the lead on AI development and its production models are considered to carry catastrophic risk
Releasing “risk reports” every three to six months to stay transparent about the safety issues its models may face
Introducing a “frontier safety roadmap” describing concrete plans for risk defenses across security, alignment, safeguards and policy
Even with the changes, Anthropic is standing firm in the ongoing dispute over military use of its models: After a meeting with CEO Dario Amodei on Tuesday, Defense Secretary Pete Hegseth is reportedly giving the company until Friday to roll back the AI safety guardrails on its chatbot, Claude. Anthropic has drawn two major ethical lines: Its models may not be used for fully autonomous targeting in military operations or for the surveillance of U.S. citizens.
If Anthropic continues to refuse, the Pentagon will label the firm a “supply chain risk” and invoke the Defense Production Act, giving the agency access to Claude “regardless of if they want to or not,” according to CNN.
But not all AI firms have the same reservations: On Monday, xAI struck a deal with the Pentagon to use Grok in classified systems, including weapons development and battlefield operations.

Though holding out against the use of its models in warfare still gives Anthropic the ethical high ground, with so much attention and cash flowing into the company, the loosening of its tight safety standards felt inevitable. As Harvey Dent says in The Dark Knight, “You either die a hero, or live long enough to see yourself become the villain.” While Anthropic isn’t a “villain” by any stretch of the imagination, it continues to face situations that challenge its moral compass, chipping away at its position as the poster child for ethical conduct in AI. In the interview with Time, Kaplan denied that the move was financially motivated. But fewer restrictions will allow the company to innovate faster and fit in better with the “move fast and break things” ethos of Silicon Valley. We just thought that was the mindset Anthropic was created to oppose.
TOGETHER WITH CLAWDTALK
Your AI agent could think. Now it can pick up the phone.
ClawdTalk just launched, and it solves something that's been quietly annoying every serious agent builder: AI agents have lived only behind a chat window.
ClawdTalk now gives your agent a real phone number, so you can call, text, or WhatsApp it from anywhere in the world, over real carrier infrastructure.
Your agent talks like a human and can execute tasks mid-conversation, triggering workflows, pulling data, or completing missions while you're still on the call.
It works with Clawdbot (OpenClaw) agents, it's free to start at $0 forever, and the fastest way to try it is to just call it.
Built by @Telnyx
→ Call the demo line: 301-MYCLAWD (301-692-5293)
WORKFORCE
Reports: AI skills demand outweighs returns
Executives and stakeholders are all in on AI skills. However, it’s unclear whether those skills are yielding any actual results.
Several reports published on Tuesday detailed an increasing demand for AI skills across job functions. That excitement may be intensified by a broader pressure on enterprise leaders to extract value from their massive AI investments.
Some recent findings include:
LinkedIn’s Skills on the Rise report indicates hot demand across occupations for skills including AI literacy, prompt engineering, responsible AI and more. Still, while two-thirds of executives feel confident that their employees will proactively learn new AI skills over the next six months, fewer than half feel supported in doing so.
KPMG’s AI Pulse Survey shows that in the technology, media and telecommunications sector, companies plan to invest an average of $156 million in AI over the next 12 months. These companies are also willing to pay more for employees with AI skills, and 62% expect to achieve measurable gains on their investments over the next year.
However, despite the exuberance, some in the industry are starting to question the reality of these expectations. On Monday, Goldman Sachs Chief Economist Jan Hatzius said in an interview with the Atlantic Council that AI investment contributed “basically zero” to U.S. GDP growth in 2025.
“I think there’s a lot of misreporting, actually, of the impact AI investment had on U.S. GDP growth in 2025, and it’s much smaller than is often perceived,” Hatzius said.
Still, it’s unclear whether that lack of meaningful impact will endure or if the industry is more nascent than many believe. For instance, at the India AI summit held last week in New Delhi, OpenAI COO Brad Lightcap said that AI adoption hasn’t truly taken off at scale in businesses. “We have not yet really seen enterprise AI penetrate enterprise business processes,” Lightcap said.

Whether or not these investments will have a real impact, it’s clear that many companies have FOMO, and as a result, are going all in. However, this AI skills fervor might give more companies an excuse to shed employees who lack those skills, such as what Accenture started doing last year. This is where the narrative of AI job replacement comes into play: What happens to the people who don’t have AI skills, don’t have the means to be reskilled on AI, or have jobs that can simply be automated entirely? In the end, it could still become a zero-sum game in which those with AI skills greatly benefit, while those without are left to struggle.
LINKS

Meta, AMD strike a deal to deploy 6GW of compute, worth $100 billion
Guide Labs debuts 8-billion-parameter interpretable LLM
Judge dismisses xAI’s trade secrets lawsuit against OpenAI
AI accounting firm Basis raises $100 million at $1.15 billion valuation
OpenAI appoints Arvind KC as chief people officer
Discord delays rolling out its verification tool

Mercury 2: Inception launched a new reasoning LLM that the company claims is 5x faster than leading speed-optimized reasoning models, including Claude 4.5 Haiku and GPT-5.2 Mini.
Cursor Agents: Cursor launched an update allowing its Agents to act more independently and “use the software they build and send you videos of their work,” according to the release.
ProducerAI: Google Labs now includes ProducerAI, a music creation partner that can help you create and optimize music using AI.
Notion: The workspace introduced Custom Agents that can carry out tasks for users and teams.

A QUICK POLL BEFORE YOU GO
If you had to choose: Meta AI glasses or Google AI glasses?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“I try to look at fine details, but lately this method of determination hasn't been working. I don't know what aspect to look at. Pictures from AI have become quite amazing.”
“The colors on [this] image seemed more natural. Fooled again! 😡”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Indicates sponsored content
*EnergyX Disclaimer: Energy Exploration Technologies, Inc. (“EnergyX”) has engaged The Deep View to publish this communication in connection with EnergyX’s ongoing Regulation A offering. The Deep View has been paid in cash and may receive additional compensation. The Deep View and/or its affiliates do not currently hold securities of EnergyX.
This compensation and any current or future ownership interest could create a conflict of interest. Please consider this disclosure alongside EnergyX’s offering materials. EnergyX’s Regulation A offering has been qualified by the SEC. Offers and sales may be made only by means of the qualified offering circular. Before investing, carefully review the offering circular, including the risk factors. The offering circular is available at invest.energyx.com/.
Comparisons to other companies are for informational purposes only and should not imply similar results.
Under Regulation A+, a company has the ability to change its share price by up to 20%, without requalifying the offering with the SEC.












