Nvidia GTC: 3 trends to watch

Welcome back. A new Snowflake survey suggests AI is creating more jobs than it destroys, at least so far, even as hiring patterns shift for entry-level workers. The open-source NanoClaw project just got a major boost from Docker, allowing AI builders to run secure personal AI agents locally without buying a Mac mini. And this week we’re on the ground in San Jose for Nvidia GTC, where the biggest signals to watch include the battle for AI inference, Nvidia’s push into open models, and a new wave of robots queuing up. Jason Hiner

IN TODAY’S NEWSLETTER

1. Nvidia GTC this week: 3 things to watch

2. AI job takeover? The data says not yet

3. NanoClaw just removed the need for a Mac mini

HARDWARE

Nvidia GTC this week: 3 things to watch

Nvidia GTC 2026, the first marquee AI event of the year, has arrived. And with the "Moore's Law of AI" currently doubling every four months, we should expect plenty of news and announcements this week.

The Deep View will be on the ground at Nvidia GTC in San Jose, covering the most important developments in real time. That includes Nat Rubio-Licht, Faris Kojok, and yours truly.

These are the big trends we're tracking:

  1. Nvidia's chip strategy for inference: Nvidia has owned the lion's share of the AI market because of its technology advantage in GPUs. However, the market so far has been dominated by model training, and it's about to shift to inference, the compute needed to run AI day-to-day, as the number of people and organizations running AI grows dramatically. When it comes to inference, Nvidia has less of an advantage, and we're seeing companies like Cerebras swoop in and take market share because they can run inference faster and more cheaply. Nvidia made its $20B Groq deal to tackle inference, and there are reports of a big inference announcement coming at GTC.

  2. Robots and physical AI: Nvidia loves robotics. Jensen Huang is a wonderful storyteller, and robots are a tangible, physical manifestation of the current advances in AI. I'm sure Nvidia will trot robots out on stage during the main keynote on Monday, but will we learn more about practical advances in robots of various shapes and sizes, both consumer and enterprise? Robots, especially humanoid robots, are advancing much faster in China right now. Can Nvidia and its partners offer a counterpoint? Also, keep an eye on announcements for autonomous vehicles, another form of physical AI. The economics are getting a lot better, opening up new possibilities. Nat will be tracking developments across these topics throughout the week.

  3. Open models: Nvidia released its latest 120B-parameter model, Nemotron 3 Super, ahead of GTC and is also promising that Nemotron 4 Ultra, with four times as many parameters, is coming soon. Nvidia is quickly becoming a leader in this space with models that are more open and are outperforming competitors. It's worth following anything they announce about new models of any type, as it's likely to have downstream effects on making AI more accessible and customizable for enterprises and on lowering inference costs.

GTC COVERAGE BROUGHT TO YOU BY IREN

The Future of AI is Physical

For today's AI builders, time-to-compute is crucial. IREN is responding to that need, delivering the physical infrastructure required for high-performance AI training and inference at scale. We'll be on the ground in San Jose from Monday, March 16 – Thursday, March 19, connecting with developers, researchers, and business leaders to explore the next wave of AI innovation.

Visit us at Booth #1107 to learn how our vertically integrated AI Cloud platform is enabling high-performance AI training and inference at scale. Learn more.

Nvidia GTC will be a harbinger of what to expect across the AI space during the rest of 2026. Expect to hear a lot about AI agents, world models, robots, self-driving cars, and AI inference. Making the unit costs work for AI inference is one of the critical factors facing the industry, especially enterprises deploying production-ready AI projects that demand strong ROI and snappy performance. As we discussed in a recent episode of The Deep View: Conversations podcast, AI inference is shaping up to become one of the largest markets in the history of the world. And it needs to get much more efficient and much less expensive to fully realize that potential. You can follow my Nvidia GTC updates in real time on X/Twitter at x.com/jasonhiner.
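To make the unit-cost point concrete, here is a back-of-the-envelope sketch in Python. Every number is a hypothetical placeholder (the GPU price, throughput, utilization, and request size are invented, not vendor figures); the point is only how throughput and utilization drive cost per token.

    # Toy model of AI inference unit economics.
    # All numbers are hypothetical placeholders, not vendor figures.
    GPU_HOUR_COST = 2.50        # assumed all-in $/GPU-hour
    TOKENS_PER_SECOND = 1_200   # assumed sustained throughput per GPU
    UTILIZATION = 0.60          # fleets rarely run flat out

    tokens_per_hour = TOKENS_PER_SECOND * 3600 * UTILIZATION
    cost_per_million_tokens = GPU_HOUR_COST / tokens_per_hour * 1_000_000

    TOKENS_PER_REQUEST = 5_000  # assumed tokens in and out, combined
    cost_per_request = cost_per_million_tokens * TOKENS_PER_REQUEST / 1_000_000

    print(f"Cost per 1M tokens: ${cost_per_million_tokens:.2f}")
    print(f"Cost per request:   ${cost_per_request:.4f}")

Under these made-up assumptions, a request costs about half a cent, and doubling throughput or utilization halves it. That's why faster, cheaper inference hardware is such a prize.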

Jason Hiner, Editor-in-Chief

TOGETHER WITH ORCHIDS

The best way to build any app

The real tax on vibe builders right now isn’t effort, it’s cost. Every tool wants you paying for their model usage and hosting forever. Orchids.app lets you bring your own models and API keys, and use tools/SDKs that already exist elsewhere.

Plug in your ChatGPT, Claude Code, Gemini, Copilot, GLM, or any API key you already pay for.

You keep control of the bill. And the best part: you don’t get price-locked because you shipped with Orchids. Deploy your code straight to Vercel with one click.

WORKFORCE

AI job takeover? The data says not yet

As AI grows more capable by the day, headlines warning of mass job displacement have become impossible to ignore. But early evidence suggests the reality is far more complicated.

Snowflake recently published a report that found AI-driven job creation is outpacing job losses. In a survey of 2,050 business and technology leaders across 10 countries, far more reported AI creating jobs than eliminating them: 11% said AI had cut roles at their organizations, 42% said it had created new ones, and another 35% reported a mix of both.

“While automation will continue to advance, what we’re observing in the data is less about outsourcing entire roles and more about reshaping them to support AI-driven workflows,” Anahita Tafvizi, Chief Data Analytics Officer at Snowflake, told The Deep View.

For instance, 56% of respondents reported AI-driven job gains in IT operations while 40% reported losses there, reflecting how deeply AI can reshape workflows: it creates opportunities for expansion while also enabling complete automation.

Among the other jobs showing gains were cybersecurity and software development, fields where AI can perform the work proficiently and that rank among the highest in observed exposure, according to a new Anthropic report.

Observed exposure is a new measure Anthropic developed that combines theoretical LLM capabilities with real-world usage data to assess AI-driven job displacement more accurately.
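Anthropic's exact formula isn't spelled out here, but conceptually the measure blends two signals per occupation. Here is a toy sketch with invented weights and scores (none of these numbers come from the report), just to show the shape of such a blend:

    # Toy blend of "what models could do" (capability) with "what people
    # actually use them for" (usage). Weights and scores are invented for
    # illustration; this is NOT Anthropic's actual methodology.
    occupations = {
        # occupation: (capability_score, usage_share), both 0-1 and made up
        "legal": (0.85, 0.60),
        "software development": (0.90, 0.95),
        "nursing": (0.30, 0.05),
    }

    W_CAP, W_USE = 0.5, 0.5  # arbitrary equal weighting

    def observed_exposure(capability: float, usage: float) -> float:
        return W_CAP * capability + W_USE * usage

    for job, (cap, use) in occupations.items():
        print(f"{job:22s} exposure = {observed_exposure(cap, use):.2f}")

The reason for combining the two signals: a job a model could theoretically do but that nobody uses AI for scores lower than one where adoption is already widespread.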

Occupations with higher observed exposure were concentrated in white-collar roles, such as legal, computer and math, finance, arts and media, and education and library, and are projected by the BLS to grow more slowly through 2034.

Yet there has been no systematic increase in unemployment for highly exposed workers since late 2022. At most, the report found that hiring of younger workers has slowed in exposed occupations, likely a signal that companies are pulling back on hiring inexperienced workers whose roles AI can plausibly do faster and more cheaply.

“From a practical standpoint, the most important action is investing in adaptability,” said Tafvizi. “That doesn’t necessarily mean becoming an AI engineer. It means building data fluency, strengthening AI literacy, and deepening domain expertise so you can work effectively alongside these systems.”

While it may feel like AI news is everywhere you look, it is important to remember that the technology only began gaining momentum at the start of 2023, so it is still relatively new. It is therefore far too soon to determine what lasting impact it will have on jobs. Although company leaders have attributed layoffs to AI-related causes, AI often serves as a convenient scapegoat for broader cost-cutting decisions. This isn't to say that some jobs aren't being offloaded to AI, but be wary of sweeping statements. Company leaders often face pressure from shareholders to improve their financials, and reducing headcount is one of the quickest ways to make the numbers look better. What better cause to blame than an emerging technology that people already fear, and that makes companies appear to be on the cutting edge?

Sabrina Ortiz, Senior Reporter

TOGETHER WITH TWELVELABS

Search any video with natural language. Now on Amazon Bedrock

Finding specific moments across hours of footage is still painfully manual. Most AI models treat video as a series of frames, but TwelveLabs built two models that understand visuals, audio, speech, and motion together, so you can search hours of footage the same way you'd search text.

It's already being used across security, media, and enterprise workloads at petabyte scale.

This guide breaks down how it works, what real deployments look like, and how one customer cut highlight creation from 16 hours to 9 minutes.

Get the guide

PRODUCTS

NanoClaw just removed the need for a Mac mini

OpenClaw showed the world what was possible with personal AI agents. NanoClaw and Docker are showing how to make them trustworthy enough for real work.

NanoClaw is one of the most prominent open-source forks of OpenClaw. It's focused on creating a secure-by-default agent platform that's easier to install, easier to trust, and better for getting stuff done. And it just got a big boost from a traditional enterprise player.

On Friday, Docker announced an integration that made NanoClaw the first claw-based platform that "can be deployed inside Docker’s MicroVM-based sandbox infrastructure with a single command," according to Docker's statement. 

That means it's safe to run NanoClaw on your own machine. No Mac mini needed. Each agent runs in its own siloed Docker container by default, with its own filesystem and session history, invisible to every other agent. You get the benefits of OpenClaw (persistent memory, agent swarms, and communicating via a messaging app), and professionals can sleep much better at night because of the security and privacy controls.
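For a sense of what per-agent isolation looks like in practice, here is a minimal sketch using the Docker SDK for Python (pip install docker). It illustrates the pattern, not NanoClaw's actual implementation; the image, container names, and volume layout are invented:

    # One container per agent, each with a private volume and no network,
    # so agents cannot see each other's filesystem or session history.
    # Illustrative only; not NanoClaw's real code.
    import docker

    client = docker.from_env()

    def launch_agent(agent_id: str):
        return client.containers.run(
            image="alpine:3.20",              # placeholder image
            command=["sleep", "infinity"],    # stand-in for the agent loop
            name=f"agent-{agent_id}",
            detach=True,
            network_mode="none",              # no shared network between agents
            volumes={f"agent-{agent_id}-state": {"bind": "/state", "mode": "rw"}},
        )

    launch_agent("research")
    launch_agent("email")

NanoClaw's own integration goes further, using Docker's MicroVM-based sandboxes rather than plain containers, but the isolation principle is the same.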

“OpenClaw showed the way, showed what was possible,” Gavriel Cohen, cofounder of NanoClaw, told The Deep View. “And NanoClaw is coming now to provide the reliable, secure, production-ready implementation of that.”

NanoClaw has been having a moment ever since Andrej Karpathy shouted out the project on Twitter as one of the safer claw agents. In five weeks, it went from zero to 20,000 stars on GitHub and 100,000 downloads. It's drawn huge applause from Karpathy and other coders for taking OpenClaw's 434,453 lines of code and reducing it to under 4,000. 

NanoClaw has done this in part by tying itself closely to Claude Code, which it uses for setup, memory, and tool use. 

If you want to try NanoClaw and the new Docker integration, you can visit the website or GitHub and launch it with a single command in a terminal. The Deep View has done an in-depth interview with the cofounders of NanoClaw and will follow up with a full story on how the project came together.

What NanoClaw pulled off was already impressive: streamlining OpenClaw into a version that professionals could count on for enterprise-level work. Adding Docker integration to safely separate agents into their own sandboxes takes it to the next level. So if you've been wanting to try OpenClaw for its agent capabilities and ease of use in messaging, but have been understandably alarmed by the security issues, NanoClaw solves those problems. The one drawback: it's limited solely to Claude Code.

Jason Hiner, Editor-in-Chief

LINKS

  • Claude: Both Claude Code on desktop and Cowork have voice mode, allowing all users to voice chat about their needs. Separately, Opus 4.6 with a 1M context window is now the default Opus model for Claude Code users on Max, Team and Enterprise plans. 

  • Groundsource: Google open-sources the dataset for a new AI methodology that uses Gemini to identify 2.6M+ historical flash-flood events. 

  • Perplexity Computer: The agentic platform gets yet another availability upgrade and is now on mobile. You can easily hand off workloads between phone and desktop.

  • Google Slides: Users can now turn sketches into editable charts and notes using a Gemini-powered feature now in beta.

  • Capital One: Applied Researcher II (AI Foundations, LLM Core and Agentic AI)

  • SAP: Senior Customer Success Manager, SAP Data & AI

  • IBM: IBM Associate Partner - SAP AI Enabled Testing-as-a-Service Architect

  • Salesforce: Employee Success Lead Data Engineer

GAMES

Which image is real?

Login or Subscribe to participate in polls.

POLL RESULTS

Will humanoid robots have a key part to play in the AI revolution?

Yes (73%)
No (19%)
Other (8%)

The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“[This image] shows some wilted flowers. [The other image] shows a better layout and all flowers looking fresh: AI-generated.”

“[This image] looks like a typical botanical nursery: very cluttered and messy. [The other image] looked like Photoshop made it. AI created too pristine a picture.”

“The flowers were less perfect - some drooping, some not bloomed yet, which was a hint that was the real one.”

“Real life is not perfect placement, and all containers are not the same in appearance.”

“One basket seems to be floating, no support.”

“[This image] seemed too staged. I've been to a lot of farmers’ markets, and it's unlikely to see the plants arranged that way.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.