Why Nvidia chose open models to reshape AI

Hello, friends. Enterprises are starting to move past copilots and toward AI products that deliver measurable results inside real-world workflows. This is a shift Gartner says will define the next phase of enterprise AI adoption. A new study shows how easily humans defer to AI reasoning, raising deeper questions around how we can be more intentional about what we choose to do manually versus what we allow to be automated. Meanwhile, Nvidia’s push into highly open models is less about becoming a lab and more about expanding the AI ecosystem, reducing dependence on a few proprietary models. —Jason Hiner
1. Why Nvidia threw its weight behind open source AI
2. The cost of letting AI think for you
3. Pay for results, not AI tools, say enterprises
BIG TECH
Why Nvidia chose open models to reshape AI
If you're wondering why AI chips leader Nvidia is now building open models that compete with the Chinese open-source champs, and even proprietary models from OpenAI and Anthropic, then you're not alone.
Last month, Nvidia launched Nemotron 3 Super, a 120-billion-parameter reasoning model that outperformed expectations in benchmarks. This is a mixture-of-experts model with a 1-million-token context window. In other words, it's a serious model made to compete with the frontier labs. Meanwhile, the company promised that a model 4x its size, to be called Nemotron 3 Ultra, is coming soon.
And because Nvidia opens the weights, the datasets, and the training recipes, Nemotron 3 Super is among the most open models in the world, especially at this level of capability. Among the few models that could claim to be more open are those from MBZUAI, which The Deep View covered in depth in January. But Nvidia's releases come far closer to full-stack openness than most open-source models, many of which are open-weight only.
So why would the leading hardware company of the AI era make software that competes with its leading customers?
"We're not trying to control AI. We're trying to grow it," Bryan Catanzaro, VP of applied deep learning research at Nvidia, told The Deep View. "And so our incentives as a company, our business is aligned with open models and with supporting the ecosystem in a very direct way."
Kari Briski, VP of generative AI software at Nvidia, offered The Deep View another perspective: "The model is the byproduct. It is not core to our business, which allows us to just open up the data, open up the recipes, open up everything."
If we break it down, there are three benefits Nvidia gets from making its own models:
Extreme hardware co-design: Building its own models lets Nvidia optimize the heck out of its GPUs, CPUs and other hardware for running AI, without waiting for the latest models from the frontier labs to plan the next stage of optimizations.
Hedging against proprietary monopolies: If the frontier labs that need the latest and greatest hardware dwindle to a handful of players, Nvidia could end up at their mercy. When you rely on a small number of customers for huge numbers of orders, those customers gain more and more leverage over your prices. They can demand discounts because they know how much of your business depends on them.
Letting a thousand flowers (a.k.a. customers) bloom: Releasing open models gives other hardware and software makers a rapid on-ramp to build their own AI products and serve the industry's various niches. That powers up the ecosystem, helps companies with limited resources compete, and potentially creates many more future customers when those companies succeed and grow.
"You don't want one person winning [because] then they decide all the rules. You need a big open ecosystem for everybody to come along," said Briski.

Nvidia's open model strategy makes perfect sense from the perspective of being an ecosystem catalyst. The easier it makes the on-ramp for companies of all sizes to bring AI products to market, even if they can't afford to develop their own models, the more the whole ecosystem grows. And since Nvidia supplies roughly 90% of the GPUs powering the generative AI ecosystem, every increase in demand translates directly to Nvidia's bottom line.
TOGETHER WITH ATLASSIAN ROVO
The most critical move of 2026? Operationalizing your company’s shared knowledge
Most teams have the knowledge. They just can’t use it.
Atlassian Rovo connects your company’s docs, projects, code, decisions, and people into a single, permissions‑aware layer you can tap into instantly. And because Rovo lives where your teams already work, it doesn’t just help you find answers — it helps you do the work:
Ask Rovo to turn meeting notes into a project plan in Confluence and trackable action items in Jira
Ask Rovo to pull the right docs, decisions, and data to ramp onto a new project fast
Ask Rovo to summarize project updates across all your tools and send it to your team so everyone stays in sync without another status meeting
See how customers like Domino's and FanDuel are becoming AI‑native teams with Rovo.
RESEARCH
When AI thinks, humans stop questioning
AI might be causing us to forget how to think for ourselves.
Recent research from the University of Pennsylvania found that AI users were often willing to accept flawed AI reasoning, readily incorporating it into their decision-making with “minimal friction or skepticism.”
The research documents the rise of “cognitive surrender,” a phenomenon in which users adopt AI outputs while “overriding intuition… and deliberation.”
In a study of nearly 1,400 participants across 9,500 trials, researchers found that subjects accepted unsound AI reasoning more than 73% of the time and only overruled models' decisions about 20% of the time.
Additionally, participants with higher trust in AI and “lower need for cognition and fluid intelligence” tended to fall victim to this more often.
“Across domains, AI tools are not merely assisting decision-making; they are becoming decision-makers,” the research reads. “This shift opens new theoretical ground: How should we understand human cognition and decision-making in an age when we outsource thinking to artificial processes?”
The study adds to a growing body of research on how AI may be changing the way we think. One of the most commonly cited studies comes from the MIT Media Lab, in which test subjects were asked to write SAT essays under three conditions: with OpenAI’s ChatGPT, with Google search, or with no help at all. Consistently, the ChatGPT users “underperformed at neural, linguistic, and behavioral levels.”
Even some of AI’s biggest names are questioning its effects on our brains. Anthropic CEO Dario Amodei said in a March interview with podcaster Nikhil Kamath that deploying AI in the wrong ways could easily make people “become stupider,” but only if they choose to forgo learning entirely. “Even if an AI is always going to be better than you at something, you can still learn that thing. You can still enrich yourself intellectually,” Amodei told Kamath.
The researchers, however, posit that cognitive surrender may not inherently be a bad thing. If an AI model is generally better at reasoning and decision-making than the person using it, with fewer mistakes, “deferring to a statistically superior system may be adaptive or even optimal.”
The bigger issue, however, comes down to agency. The researchers noted that this trend could mark a profound shift in cognition itself, “one in which users may not know when or why they have deferred, and where the line between human and machine agency becomes blurred.”

We are not yet at a point where thought is entirely automated. AI, however, could bring that future about, turning the friction of human critical thinking into a slippery waterslide of accepting everything it gives us. Amodei is correct: Even if AI is someday capable of doing everything, the dividing line between reaping the benefits and losing ourselves is in what we let it do. Even if machines make our clothing, plenty of people still knit and sew as a form of enrichment. Even if laptops make writing easier, there is still value in writing in a journal by hand. And even if an AI model can take the work out of work, doing things ourselves is still vital to retaining our humanity and agency. Put simply: Don't be afraid to be bad at something, even if AI can do it better. Explore when there's value in handling it yourself.
TOGETHER WITH CRUSOE
Crusoe: deploy fine-tuned models with zero infrastructure headaches
Work with our team to deploy your fine-tuned model on a platform built for performance.
Use Crusoe Managed Inference to unlock breakthrough speed and throughput without the infra overhead.
Start deploying with Crusoe.
ENTERPRISE
Pay for results, not AI tools, say enterprises
In a crowded AI subscriptions market, enterprises are increasingly prioritizing tools that deliver measurable results.
A new study from research firm Gartner found that by 2028, over half of all enterprises will stop paying for assistive intelligence, including copilots and smart advisors, and instead favor platforms that deliver workflow results. This sentiment reflects a broader industry shift toward results-driven agentic solutions over chat-like interfaces that can only deliver advice.
“The market is moving away from standalone AI experiences and toward workflow-native,” Gartner analyst Vuk Janosevic told The Deep View. “Enterprises have made it clear that they value outcomes inside existing processes more than smart advice sitting off to the side.”
A defining characteristic of agentic AI, meaning AI that takes action on a user's behalf, is that it must be granted access to the same databases and context the user relies on daily. Any agentic solution expected to deliver meaningful outcomes must therefore be entrusted with proprietary, and often highly sensitive, data.
This reframes what enterprises are willing to pay for. The demand is not for a new category of AI so much as for AI with the authority to trigger the actions being requested. Gartner posits that the vendors who succeed will not be those that offer more AI, but those that facilitate agent orchestration: ensuring agents follow guardrails, securely access key company databases, and can identify and correct missteps.
“In practice, that means the real value is less about building one more AI platform and more about making business software capable of acting, deciding, and completing work in context and in line with compliance guardrails,” said Janosevic.
The growing shift toward automation may prompt an instinctive assumption of widespread job disruption. Janosevic, however, offered a more nuanced view: while some displacement is inevitable, entire professions are unlikely to vanish. Instead, certain roles will contract and be redesigned as new ones emerge around AI-led work.
“The deeper point is that agentic AI changes how work is organized, not just how fast it gets done,” he added.

Companies' expectations of what technology can accomplish are naturally bounded by what the technology is actually capable of, and as the AI space evolves at its current pace, it is only natural that the goalposts move with it. The report's findings, however, surface an important takeaway for vendors: the solution may not be more AI, but rather more deliberate products designed to address a specific need. That shift is already underway, perhaps most visibly in OpenAI's decision to sharpen its focus on enterprise and deprioritize peripheral projects such as Sora.
LINKS

Anthropic limits access to OpenClaw through Claude
Meta pauses its work with Mercor amid data breach, OpenAI investigating
Fidji Simo takes medical leave from OpenAI, Brad Lightcap shifting role
Startups, researchers lean on low-cost AI models over proprietary ones
Trump’s attempt to preempt state AI laws stalls among lawmakers
UK seeks to court Anthropic after battle with US Department of War

Simple: From desk to gym — rebuild your core, boost posture, and feel better with just 7 minutes a day. (sponsored)
AI Trinity-Large-Thinking: A new, 399 billion-parameter open source model from Acree for text-only reasoning
Steer AI: The latest release by Ramp Labs is a model that “can’t stop thinking about any concept you choose.”
ChatGPT: OpenAI’s flagship chatbot is now available on Apple CarPlay, letting you ask questions via voice while you drive.

Amazon: Applied Scientist, AWS Neuron Science Team
Capital One: Sr Distinguished Applied Researcher (World Models)
Anthropic: Research Engineer, Agents
Nvidia: Tech Engagement Lead - Model Builder
A QUICK POLL BEFORE YOU GO
How do most of the people you know outside of the tech industry feel about AI?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The eyes are more realistic.”
“[This image] felt too, I don’t know. Fluffy. Not like a real dog.”
“The look on [the other image’s] face was somehow more real, emotional than [this image].”


If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.













