New Nvidia AI chip: 5x faster, arriving early

Welcome back. The demand for Nvidia's AI chips is so intense, and the backorder queue so long, that the company doesn't actually have to worry much about the competition. But it was still great to see AMD roll out its Helios rack system, built on the world's first 2nm chips, to help feed the industry's appetite for AI compute. AMD's marathon CES 2026 keynote ran over two hours and covered topics as far afield as AI PCs, humanoid robots, health care innovation, and space exploration. But the most memorable moment was AMD CEO Lisa Su saying that with the launch of its new chips, AMD will have delivered a 1,000x increase in AI performance over the past four years.
Jason Hiner

IN TODAY’S NEWSLETTER

1. New Nvidia AI chip: 5x faster, arriving early

2. Audio AI emerges as new CES theme

3. AI moves into kids’ robots, questions emerge

HARDWARE

New Nvidia AI chip: 5x faster, arriving early

Nvidia continues to set the bar higher. In his CES keynote on Monday, the chip giant’s CEO Jensen Huang announced the launch of the Vera Rubin supercomputing platform, which delivers five times the compute power of the previous-generation chips and is arriving months ahead of schedule.

Huang said that the Vera Rubin platform is now in “full production” and will be available to customers in the second half of 2026. Vera Rubin is built from six chip types, with 1,152 GPUs in total across 16 server racks, all working in concert to cut training time and inference costs.

  • The platform is particularly well suited to agentic AI, advanced reasoning and massive-scale mixture-of-experts models, an architecture that routes each input to a small set of specialized expert networks rather than one monolithic model (a quick sketch follows this list). 

  • Huang touted the Rubin platform’s ability to do more with less: it cuts inference token costs by as much as 10x and needs only a quarter as many GPUs to train mixture-of-experts models compared with the current Blackwell platform. 
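
For readers who want a concrete picture of what “mixture-of-experts” means, here is a minimal, hypothetical sketch in Python. It is not Nvidia’s implementation, and the expert count, top-k routing value and toy linear “experts” below are all assumptions for illustration; the point is simply that a gating network scores many specialized experts for each token and only the top few actually run, which grows model capacity without a proportional jump in compute.

import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical number of specialized experts
TOP_K = 2         # each token runs through only this many experts
DIM = 16          # toy token dimension

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def route_token(token, gate_weights):
    # The gating network scores every expert for this token,
    # keeps the TOP_K highest-scoring ones and renormalizes their weights.
    scores = softmax(gate_weights @ token)
    chosen = np.argsort(scores)[-TOP_K:]
    weights = scores[chosen] / scores[chosen].sum()
    return chosen, weights

def moe_forward(token, gate_weights, experts):
    # Only the chosen experts run; their outputs are mixed by the gate weights.
    chosen, weights = route_token(token, gate_weights)
    return sum(w * experts[i](token) for i, w in zip(chosen, weights))

# Toy setup: each "expert" here is just a distinct random linear map.
gate_weights = rng.normal(size=(NUM_EXPERTS, DIM))
experts = [lambda x, W=rng.normal(size=(DIM, DIM)): W @ x
           for _ in range(NUM_EXPERTS)]

token = rng.normal(size=DIM)
output = moe_forward(token, gate_weights, experts)
print("routed to experts:", route_token(token, gate_weights)[0])
print("output vector length:", output.shape[0])

In a real deployment the experts are large neural sub-networks and the routing is spread across many GPUs, which is exactly the workload hardware like Vera Rubin is built to handle.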

“The amount of computation necessary for AI is skyrocketing,” Huang said in his keynote. “The demand for NVIDIA GPUs is skyrocketing. It's skyrocketing because models are increasing by a factor of 10, an order of magnitude every single year.” 

Nvidia has been talking about Vera Rubin for a while now. The company first teased the platform at the Computex conference in 2024, and laid out a clearer roadmap for the tech at its GTC conference in March 2025. However, the platform wasn’t expected to ship until mid-2026, which puts Monday’s full-production announcement roughly six months ahead of schedule.

Already the dominant force in the AI chip industry, Nvidia is now outdoing itself. Rolling out Vera Rubin early could tell us two things. One: The company is doing all it can to keep up with the frenzied demand for its chips. And two: It may be feeling pressure from competitors like Google and Amazon as they develop and release their own chips. AMD may have squeezed Nvidia further on Monday night, with CEO Lisa Su showing off Helios, its new rack for AI compute, and bringing out OpenAI president Greg Brockman and Dr. Fei-Fei Li to tout their partnerships. No other chip on the market holds a candle to Nvidia’s, as it stands. AI companies are hungry for more computing power, and with Vera Rubin, Nvidia just cooked them up a feast.

Nat Rubio-Licht

TOGETHER WITH YOU.COM

Stop Guessing. Prove AI ROI.

AI spend is rising, but are you measuring return on investment? We love this guide from You.com, which gives leaders a step-by-step framework to measure, model, and maximize AI impact. 

What you’ll get:

  • A practical framework for measuring and proving AI’s business value

  • Four essential ways to calculate ROI, plus when and how to use each metric

  • A You.com-tested LLM prompt for building your own interactive ROI calculator

Turn “we think” into “we know.” Download the AI ROI Guide

STARTUPS

Audio AI emerges as new CES theme

As the AI industry works on what comes next after chatbots, several startups are targeting audio AI as the new frontier. 

At CES, dozens of companies are showing off apps and gadgets that listen to their users a lot more closely. Often built on top of large language models like Gemini, ChatGPT and Claude, these products signal that audio is emerging as one of the next major use cases for AI tools.

The applications of fine-tuned voice AI range far and wide: 

  • Accessibility tech was a major point of focus at the trade show on Sunday, with companies like Cearvol and Elehear debuting hearing aid technology that uses AI to cut through background noise. 

  • Subtle Computing, a startup that emerged from stealth in November, showed off its new “voicebuds,” which feature fine-tuned “high-performance voice isolation models” for dictation in loud or quiet environments, co-founder Savannah Cofer Chen told The Deep View. 

  • And if in-ear tech isn’t your thing, Gyges Labs displayed Vocci, a note-taking AI ring that can understand 112 languages and uses an agent to summarize transcriptions, with an understanding of “implicit meaning and historical context,” chief scientist Siyuan Qi told me. 

  • Outside of personal devices, voice AI is also making its mark in the enterprise: French startup Airudit uses audio to control robots hands-free in manufacturing and industrial settings, and it showed off its capabilities at CES by making a small robotic dog sit and lie down with a few simple commands. 

The timing looks right for audio AI to explode. Industry voices are starting to question how useful large language models are when used solely as chatbots. And as consumers start to examine exactly how AI fits into their lives, audio-based models provide an easy way in.

While some industry thought leaders are targeting humanoid robots, world models and physical AI as the next steps forward, audio applications like these are far easier to develop and deploy and might provide a stopgap while those systems mature.

Facing intense competition on both the hardware and software fronts, startups like these may have to look over their shoulders. With Apple, Google and Samsung already providing cushy ecosystems that consumers are comfortable with, breaking through with a single consumer device isn’t easy. And as for the models themselves, the major developers have long had their noses to the grindstone on powerful voice models, and some (looking at you, OpenAI) are sharply ramping up those efforts. To survive, these startups will likely have to niche down and stay creative.

Nat Rubio-Licht

TOGETHER WITH ASAPP

Serve every customer like a VIP — with one intelligent platform

We're ready to unveil how ASAPP Customer Experience Platform makes every customer feel like a VIP by uniting your systems, data, workflows, and teams, so you can finally serve customers intelligently, not reactively.

Join us for a free webinar to see what ASAPP CXP means for you:

  • One platform to serve your customers

  • Lower cost to serve without lower quality service

  • AI that understands, speaks naturally, and resolves issues

  • Easy integrations with your tools—no rip and replace

  • Happier, more loyal customers

PRODUCTS

AI moves into kids’ robots, questions emerge

It turns out that AI is more fun than we thought — and I'm not talking about laughing at AI slop. I'm talking about the surprising number of AI products at CES 2026 that are aimed at entertaining kids. The products are cute, cuddly, and well-designed, but they also raise some serious questions. 

Here are three from CES 2026 that we'll use as examples:

  • Luka AI Cube and Luka Robot — Both products come from the same company that gave us the Jibo "social robot," a viral hit a decade ago. The Luka AI Cube is a small, ruggedized square tablet worn on a neck strap. It's a learning companion that kids can point at objects, whether in nature, at a museum, or elsewhere, to ask questions and get interactive content. The Luka Robot is a multilingual tool that can read stories to kids. The simpler reader version of the product has already been used by more than 10 million families over several years; the AI features unveiled at CES 2026 transform it from a passive read-aloud device into a conversational one.

  • Sweekar's AI Tamagotchi-inspired pet — This is a throwback to the 1990s virtual pet that kids had to tend to keep alive. The Sweekar version shown at CES builds on the same concept. The robot pet starts as an egg that hatches and then, as kids play with it, progresses through stages of development until it becomes an adult. The AI comes in because the virtual pet can learn to talk, recognize its owner's voice, and adapt to its owner's personality. The device is designed to create emotional attachment.

  • Cocomo robot pet by Ludens AI — Another robot pet that's focused on emotional support is Cocomo. Japanese startup Ludens AI has created an autonomous robot pet that can follow you around your living space and learn what comforts you, what makes you laugh, and what surprises you. It can then respond with cute-sounding hums and noises that are aimed at creating a personal connection. 

Across the halls at CES, in contrast to the AI companies launching emotionally complex toys, the Lego company unveiled smart bricks that simply light up and make fun sounds.

While all of the AI pets and toys unveiled at CES seem well-intentioned — we asked the companies about privacy, and they had good answers — it still feels risky to hand these kinds of devices to kids. The toys could become very good at adapting to their owners' emotions, but that same ability could be seen as emotionally manipulative. And we don't know what the unintended consequences of these technologies could be. That should be enough to make parents proceed with caution.

Jason Hiner, Editor-in-Chief

LINKS

  • Toma: An AI agent for handling conversations and operations in car dealerships.

  • Amie: An AI personal assistant that turns meeting insights into workflows. 

  • Alexa+: Amazon launched an Alexa-focused website to enable more AI-powered interactions for Alexa Plus users.

  • Invoce: A platform that uses AI to generate invoices for freelancers, using only your voice.

  • OpenAI: Researcher, Alignment

  • Amazon: Applied Scientist, LLM Code Agents, Kiro Science

  • Snap: Senior Research Scientist, Generative AI

  • Riot Games: Principal Enterprise AI Engineer

GAMES

Which image is real?


A QUICK POLL BEFORE YOU GO

Would you be comfortable giving an AI-powered toy to a child?


The Deep View is written by Nat Rubio-Licht, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

The Deep View team

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The first thing I noticed was the snow. Looked fake in [the other image]. Then I looked at the cars. [This image] has actual car brands and a readable license plate. All the cars in the background looked like cars I have seen before, and the branding looks accurate.”

“I thought you were trying to trick us with an AI version with perfect license and insignia.”

“The bricks on [this] photo were textured and colored slickly, the snow fallen too perfectly.”

“[This image] was more ‘imperfectly’ lit.”

“[This] image had no snow on the street, which made no sense to me.”

“[This image] is too perfect.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.