Nvidia throws its weight behind OpenClaw agents

Welcome back. OpenAI rolled out GPT-5.4 mini and nano to bring faster, cheaper AI into coding, tool use, and workflows. This is more proof that smaller models can be smarter. At GTC, Nvidia put OpenClaw agents squarely in the spotlight by unveiling NemoClaw, focusing on both performance and security. All in all, Nvidia is making its case for a spot in the coming wave of personal AI agents. Mistral introduced Forge, giving companies a way to build custom models on top of open-source foundations using their own code, data, and policies. The Deep View team will be on the ground at Nvidia GTC in San Jose for another day of reports from the frontier of AI advances. —Jason Hiner
1. Nvidia puts OpenClaw agents center stage at GTC
2. Mistral unveils Forge to build custom AI models
3. OpenAI's new GPT-5.4 cuts size, boosts speed
BIG TECH
Nvidia announces major OpenClaw agent focus
On stage at AI's first big event of 2026, Nvidia’s CEO spotlighted technology that didn’t even exist 4 months ago: OpenClaw's personal AI agents.
In his keynote at GTC on Monday, Jensen Huang officially announced NemoClaw, Nvidia’s addition to the claw agent ecosystem. NemoClaw is an agent platform that integrates its open-source Nemotron model family into OpenClaw “self-evolving” autonomous AI agents, popularly known as “claws.”
And Nvidia has high hopes, with Huang saying that OpenClaw “opened the next frontier of AI to everyone … This is the moment the industry has been waiting for: the beginning of a new renaissance in software.”
So what does NemoClaw do? The system bundles OpenClaw with Nvidia’s open models, uses Nvidia’s “agent toolkit” to optimize OpenClaw commands, and installs “OpenShell,” an open-source tech stack with built-in policy-based guardrails that adds a layer of privacy and security controls to these agents.
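OpenShell's internals aren't public, but policy-based guardrails for agents generally reduce to checking each proposed action against an explicit policy before it runs. Here's a minimal sketch of the idea in Python; every command name and policy field is invented for illustration and is not OpenShell's actual API:

```python
# Hypothetical policy-based guardrail for an autonomous agent.
# An allowlist of permitted commands plus a denylist of sensitive paths;
# any action that fails either check is blocked before execution.

ALLOWED_COMMANDS = {"read_file", "list_dir", "web_search"}
BLOCKED_PATHS = ("/etc", "~/.ssh")

def check_action(command: str, target: str) -> bool:
    """Return True if the proposed agent action passes the policy."""
    if command not in ALLOWED_COMMANDS:
        return False  # unknown or disallowed command
    if any(target.startswith(prefix) for prefix in BLOCKED_PATHS):
        return False  # touches a protected location
    return True
```

Real guardrail stacks layer on much more (prompt-injection filtering, audit logs, human approval for destructive actions), but the allow-by-policy, deny-by-default shape is the common core.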
“[OpenClaw] is the most popular open-source project in the history of humanity, and it did so in just a few weeks,” Huang said in his keynote. Huang was referring to the fact that OpenClaw has rapidly become the project with the most stars on GitHub, but it's a stretch to call it more popular than widespread open-source projects like Linux, Git, and Apache that make up the foundation of the internet.
OpenClaw is not Nvidia’s only bet on agents. Since agents are the most token-hungry AI systems yet, Nvidia is building a booster pack to accelerate them. Huang used part of his three-hour keynote to detail the company's new flagship Vera Rubin platform, a seven-chip, five-computer rack "AI factory" that Huang pitched as built to scale agentic AI.
The platform, available in the second half of this year, was initially announced at CES, but now includes the Groq 3 LPU, a chip designed specifically to run language models fast.
“The future is not just going to be about LLMs,” Huang said in a private press Q&A on Tuesday. “The future is about agentic systems. And [with] agentic systems, the problem space just expanded yet again. So when the problem space expands, you have a greater opportunity to find that big leap.”
Agents can do more, but can also cause more damage. That gives Nvidia a whole lot more to do, from making inference cheaper and more efficient to solving the security pitfalls. To put it simply: more problems, more money (for Nvidia).
GTC COVERAGE BROUGHT TO YOU BY IREN
IREN Increases secured grid-connected power to >4.5GW
This new data center campus is strategically located in Oklahoma, an emerging AI infrastructure hub, with 1,600MW of power secured across 2,000 acres and high-bandwidth, low-latency interconnectivity.
The grid-connected site boasts <6ms latency to network hubs, and with power scheduled to ramp up from 2028, it's an attractive solution for large-scale AI infrastructure. Explore the Data Center Campus.

It makes sense that tech giants such as Nvidia are jumping on the OpenClaw bandwagon. The momentum behind adoption is leaving major gaps to fill. With NemoClaw, Nvidia is meeting AI builders where they’re at, and providing them the resources to tackle one of OpenClaw’s biggest threats: security. Plus, personal AI agents like the various claw agents consume a ton of AI inference, so they push chips like Nvidia’s to the max and drive up demand for the fastest and most powerful systems. In short, by providing the tools to more easily deploy these agents, Nvidia is multiplying its pool of high-value customers.
TOGETHER WITH MERGE
How teams plan to use MCP this year
Most teams building AI agents plan to adopt the Model Context Protocol (MCP) this year. Most of those same teams have serious security concerns about it.
To understand how teams are navigating this tradeoff, we surveyed hundreds of AI leaders building AI agents for the first-ever state of agentic integrations report.
Their top concerns?
70% worry about credential leaks and malicious servers
56% say MCP doesn’t support enterprise search well
51% report ambiguous tool definitions causing incorrect tool calls
PRODUCTS
Mistral unveils Forge to build custom AI models

Mistral built its reputation on open-source AI, and now it's inviting users to build on top of it
On Tuesday, Mistral launched Forge, a system that enables customers to build AI models using their proprietary data, including codebases, internal documentation, compliance policies, operational processes, and more. As a result, these models are tailored to each organization’s unique applications, which, according to Elisa Salamanca, Head of Product at Mistral AI, holds many benefits.
“We've been seeing a couple of key areas where model customization is actually critical: It can be because you need to train it on your proprietary data that the models out there have never seen on the web or this is not publicly available information; it can be because you need very specific behaviors; it can be because you actually need to train models for the edge,” said Salamanca in an interview with The Deep View.
Forge supports training across several phases of model development, including pre-training, post-training, and reinforcement learning, allowing companies to feed domain-specific information, fine-tune behavior, and align models and agents with internal policies.
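Mistral hasn't published Forge's interface, but the phases described above suggest a training recipe along these lines. This is a hypothetical sketch; every field name and value here is an assumption for illustration, not Forge's actual API:

```python
# Illustrative training recipe covering the phases Forge reportedly supports:
# pre-training on proprietary data, post-training (fine-tuning) on task
# examples, and reinforcement learning to align behavior with policy.

forge_recipe = {
    "base_model": "open-weight-foundation",        # an open-source Mistral model
    "pretraining": {"corpus": "internal_docs/"},   # data no public model has seen
    "post_training": {"sft_data": "support_tickets.jsonl"},
    "reinforcement_learning": {"reward": "policy_compliance"},
    "deployment": "self_hosted",                   # own the weights, no vendor lock-in
}
```

The point of the sketch is the last line: because the foundation is open-source, the resulting model is an artifact the company keeps, rather than a fine-tune living inside someone else's API.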
Technically, though, competitors’ APIs already allow companies to create and fine-tune custom models. The biggest differentiator, according to Salamanca, is that Mistral’s models are open-source.
“AI is changing a lot of things, but customizing models and making sure that you are able to own that AI is what's going to get you the differentiation that you need,” said Salamanca. “Forge is actually the critical piece there. It's what's going to make you create your own AI that's not the one that your competitor is going to be able to create and that you are going to own it and not rely on another vendor.”

Model deprecation is a real issue to consider when building fine-tuned AI models. Oftentimes, companies will sunset a model because it is too costly to maintain, especially once a newer, more capable one has been released. But while a new model may be better, it will behave differently in ways that force changes to workflows built around the old one, as seen in the backlash OpenAI received when retiring GPT-4o. Open-source models have many benefits, but insulation from forced deprecation is one of the biggest, and a prime example of the gains that model transparency brings.
TOGETHER WITH ORACLE NETSUITE
23 Essential Financial KPIs
Take charge of your company’s success with the perfect mix of financial metrics.
Get started with a simple guide to core KPIs.
Download this guide book written by business owner and coach Bernie Smith to discover the KPIs that all-star finance teams use to fuel growth, delivered with case studies, formulas, definitions, and more!
PRODUCTS
OpenAI's new GPT-5.4 cuts size, boosts speed

With AI models, bigger isn’t necessarily better. Small models pack efficiency and speed in a lower-cost offering, and GPT-5.4 is joining the party.
On Tuesday, OpenAI shipped GPT-5.4 mini and nano, which the company calls its most capable small models yet. GPT-5.4 mini offers improvements across nearly every category, including coding, reasoning, multimodal understanding, and tool use, while being 2x faster than GPT-5 mini, according to OpenAI.
Notably, despite its smaller size, GPT-5.4 mini’s performance is comparable to that of its larger counterpart on SWE-Bench Pro and OSWorld-Verified, with a difference of around 3%. Meanwhile, GPT-5.4 nano is the smallest and cheapest version of GPT-5.4, particularly useful for tasks where speed and cost are the most important.
Overall, the benefits of using these models ultimately come down to high-volume workloads where speed is of the essence. OpenAI lists some examples in which they may be particularly useful:
Coding: The models can efficiently tackle coding workflows that need fast interactions, including tasks such as “targeted edits, codebase navigation, front-end generation, and debugging loops with low latency.”
Subagents: For systems that combine multiple models, GPT-5.4 mini works well as a subagent, tackling smaller tasks in parallel while the bigger model or agent orchestrates.
Computer use: GPT-5.4-mini is proficient in multimodal tasks related to computer use, such as quickly interpreting screenshots.
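The subagent pattern above boils down to a routing decision: delegate small, parallelizable subtasks to the cheaper model and reserve the larger one for orchestration. A minimal sketch, where the model identifiers and the complexity heuristic are assumptions based on the announcement, not confirmed API values:

```python
# Route subtasks between a cheap, fast subagent model and the larger
# orchestrator model. Task length stands in as a crude complexity proxy.

def pick_model(task: str, parallel: bool) -> str:
    """Return the model to use for a given subtask."""
    small_task = len(task) < 200  # crude proxy for task complexity
    if parallel and small_task:
        return "gpt-5.4-mini"  # fast, cheap subagent for parallel fan-out
    return "gpt-5.4"           # larger model handles the heavy lifting
```

Production routers usually classify tasks with the orchestrator model itself rather than a length check, but the cost structure is the same: many cheap calls fanned out, one expensive call coordinating them.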
GPT-5.4 mini is available in the API, Codex, and ChatGPT starting today. Meanwhile, GPT-5.4 nano is only available in the API.

Large Language Models (LLMs) were the primary focus as AI models first gained widespread popularity. However, as use cases continue to grow more specialized within everyday workflows, small language models offer compelling advantages in many scenarios, with some of the most notable being customization, cost efficiency, speed, and the ability to run on edge devices. This is especially true given the recent rise of computer agents that can take direct control of your machine, where running locally offers meaningful privacy and performance benefits. As a result, OpenAI's release of GPT-5.4 mini and nano is well-timed and addresses a genuine market need.
LINKS

Sears exposes customer conversations with chatbots to the web
Google expands Personal Intelligence access across Search, Gemini
Microsoft combines personal and workplace Copilot teams
Adobe, Nvidia partner to power the next generation of Firefly Models
Senator Elissa Slotkin introduced a bill to regulate the Pentagon’s use of AI
Anthropic seeks a weapons policy manager to avoid “catastrophic misuse”

Simple: The free printable plan seniors can't stop talking about. (sponsored)
Sana: Workday made its conversational AI experience, Sana, available to customers worldwide
Grok: The Text to Speech API is now available, allowing developers to build with voices
Lovart: The design agent introduced Voice Mode, which lets users talk to the agent when creating projects
Manus: Meta’s general AI agent launched support for the Google Workspace CLI, allowing users to manage entire workflows in Docs, Sheets, and Slides
NotebookLM: Users can use Yahoo Sports notebooks for help filling out Yahoo Fantasy Bracket Mayhem brackets

Anthropic: Engineering Editorial Lead
PWC: AI Evaluation Engineer
Lockheed Martin: AI Adoption Data Specialist
Deloitte: Oracle AI - Senior Consultant
POLL RESULTS
Is putting AI data centers in space a good idea or a bad idea?
Good idea (46%)
Bad idea (44%)
Other (10%)
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Shadows, skin tone, reflections and clarity of picture.” “Lighting looks more real.” “The details of the flooring, the way the clothing folded more naturally, and a more natural-looking face, were the key clues.”
“[This image] is too pristine, too posed.” “The chair feet show light reflection on the ground, which they should not, as they are in the shade. The floor looks weird under the take feet, with no obvious reason for it to be different.” “The floor in [this image] looked like chocolate, which was why I thought it was AI. Maybe I am just hungry.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
