AI agents break free from the chat prompt

Welcome back. Mike Clark has seen tech go from zero to one hundred over the years. But now, as the director of product management for AI agents at Google Cloud, “none of those step functions compare to the agentic step function,” Clark told The Deep View. He said 2025 was the year of enterprises seeking to give agents more responsibility and control. But challenges remain in the market, which is still in its infancy. Clark sat down with The Deep View to discuss agent adoption challenges, the possibilities agents unlock, and whether we should be calling these systems our “digital coworkers.” This interview has been edited for brevity and clarity. Nat Rubio-Licht

IN TODAY’S NEWSLETTER

1. AI agents break free from the chat prompt

2. Should you trust your digital coworkers?

3. Why agents can’t do everything

AGENTS

AI agents break free from the chat prompt

Nat Rubio-Licht: What do you think enterprises are getting wrong, or right, as they build and deploy AI agents? 

Mike Clark: I’ll start with the in-between answer. For enterprises, or for anybody building agents today, it's the moment [where we're] trying to understand what agents really are. We don't have a clear definition of what agents are. The one thing that I'm watching a number of enterprises do is step back and actually define what is an agent to us, and what do we expect out of it. 

For Google Cloud and a lot of our customers, it's the four key pieces: The LLM that wowed us three and a half years ago; connecting tools to that model and being able to do things; orchestration – and this is where agents truly come to be – how do we do these in a multi-step phase; and then finally, like a runtime to build them at scale.

One of the big misses that I see is a lot of companies are focused on purely the chat side of the world. “Agents” have this meaning in human language that doesn't have the same meaning technically. Everybody thinking about it as chat first really limited the capabilities and scope. Think about background agents and things that are solving these traditional background tasks, not just replacing workflow, but, really think about the objectives of what your company is trying to do. 

Rubio-Licht: What challenges have you come across at Google Cloud as far as agentic adoption? 

Clark: The number one thing that enterprises care about is quality. So the number one challenge is that security and governance don't matter if I can't have a quality product. Getting to a place of trust and risk mitigation has meant getting to a quality place. 

I think as models continue to improve and that technological capability continues to grow exponentially – which Gemini 3 did for us. Watching the impact that those model improvements have on all the other pieces of agents was awesome. Watching Claude's advancements and OpenAI’s advancements have similar impacts on agents has been great for the whole industry. It's helped our enterprise customers find trust in the quality that they have. If people don’t trust it, they’re not going to scale it.

TOGETHER WITH NEBIUS

Nebius Token Factory — Post-training

Nebius Token Factory just launched Post-training — the missing layer for teams building production-grade AI on open-source models.

You can now fine-tune frontier models like DeepSeek V3, GPT-OSS 20B & 120B, and Qwen3 Coder across multi-node GPU clusters with stability up to 131k context. Models become deeply adapted to your domain, your tone, your structure, your workflows.

Deployment is one click: dedicated endpoints, SLAs, and zero-retention privacy. And for launch, fine-tuning GPT-OSS 20B & 120B (Full FT + LoRA FT) is free until Jan 9. This is the shift from generic base models to custom production engines.

AGENTS

Should you trust your digital coworkers?

Rubio-Licht: How do privacy and trust play a factor in Google Cloud’s agentic strategy?

Clark: Google has a strong reputation from a privacy, security and trust perspective, in the products that we build and ship. We try to lead the charge on that. But I want to uplevel to the industry for a moment. As an industry, when I think about agents, it introduces some new potential vulnerabilities – the ability for prompt injection, the ability for tools to do some things like that, and we've invested in tools customers can use and leverage in the Vertex AI ecosystem to look for those patterns of prompt injection. 

On our agent platform that we released, we've taken agents from acting like a user and having the identity like a user to having this principal identity as an object in [Google Cloud] that you can attach security to. You don’t have the exposure of managing it as a user, instead you’re actually managing it as part of the infrastructure. We've tried to take the approach of making least privilege a core part of how agents get built. 

Rubio-Licht: How do ethics and workforce comfort play a role in enterprise agentic adoption? 

Clark: Personally and anecdotally in having conversations with enterprises, the folks that are having the most success, the employees are the ones that have helped unlock some of these agent capabilities. And it's not replacing them. 

Some of the agents that companies have built that have some of the best interactions are those that users don't even interact with, but it helps inform them about their day. It helps inform them about decisions that are being made. It helps inform them about a number of other things in ways that they themselves wouldn't have had a mental context window big enough to handle.  It's actually helping them get more done.

From an ethics perspective, the companies that are most successful are really transparent about the agent products and the impacts of those. The goals of those projects have very specific meaning about growing the business and unlocking new lines of business.  

Rubio-Licht: Do you think agents should be treated as tools, or more so as digital coworkers, and why? 

Clark: So I grew up on a farm, where you have animals that work versus animals that are pets. Even the things that work, sometimes, you anthropomorphize. You give this cow a name, or that horse a name. This is an interesting challenge that we've always had: How do you humanize the things that you interact with, and what's the impact of humanizing it in that way? That's why you see such a strong contrast in organizations being really strong one way versus the other, because I don't think there's a clear industry foundation on humanizing. We build agents that talk to you in very humanized voices, but at the same time, are merely AIs operating as a model in the back end through a series of API's to make that happen. I don't think this is unique to AI. I think it's just the moment in time that we happen to be in. 

One clear observation is technological capabilities are happening on this exponential curve, but organizational changes are logarithmic changes. They happen on a much, much slower curve. For some organizations, humanizing the technology helps me get closer to it. It closes some of that gap from fear. For other organizations, dehumanizing it and keeping it just as an API running in the background has its same effect, because it's tied more to their culture as an organization than it is to agents or AI as a concept.

TOGETHER WITH CEREBRAS

20× Faster Inference, Built to Scale

Advanced reasoning, agentic, long‑context, and multimodal workloads are driving a surge in inference demand—with more tokens per task and tighter latency budgets—yet GPU‑based inference is memory‑bandwidth bound, streaming weights from off‑chip HBM for each token and producing multi‑second to minutes-long delays that erode user engagement.

Cerebras Inference shatters this bottleneck through its revolutionary wafer-sized chip architecture, which uses exponentially faster memory that is closer to compute, delivering frontier‑model outputs at interactive speed.

AGENTS

Why agents can’t do everything

Rubio-Licht: What do you think is the biggest misconception about agents today? 

Clark: The biggest misconception … is that they are there and ready to solve every single problem that we have. While they are very capable, and while a lot of the tools are happening, most of the tools, most of the interoperability, most of the interconnect between things, it’s very nascent. A2A, the first protocol to give interoperability between agents, was introduced by us eight months ago. MCP [Model Context Protocol] is only a year old. But also, I think everybody looks at the rate of change today, and a month in AI cycles is like five years in tech cycles 10 years ago. It's a misconception of the capabilities and where they are today. 

The second misconception is that it just replaces the same thing we're doing with just automation. A lot of our processes and governance and other things that we have in enterprises are built around processes that date back to the [1940s], 50s and 60s, where people would type in triplicate on a typewriter and take one thing and hand it to one person. And we've just added technology on top of those processes over the years. The misconception is we're just replacing the same workflows by putting an agent where there may have been a typewriter or a person or a computer. 

Rubio-Licht: What does the future look like for the agent market? 

Clark: I think a lot of our interactions with AI are going to be driven through agents. It's going to become less obvious when we're interacting with AI [and] when we're not, or what work’s being done by AI or not, because it's just going to become a blend. We're going to see more and more things happening and being done with agents. We're going to start to see agents defining contracts, making economic transactions, making transactions around assets between organizations. What that's going to do is that's going to unlock more and more capabilities for businesses around trade, around how they interact with one another, both from a data trade perspective and even physical asset management. Those are going to be some of the creative things that we're going to start to see [during 2026].

I also think it changes all of our careers. Myself as a product manager, how much like the role of a PM is going to change, the role of designer, the role of everybody. I can now do a bunch of these cross functional pieces and start to interact on a much deeper level, solving deeper problems. I think we're going to solve a lot of problems that have plagued society, and plagued the world.

LINKS

  • ConnectMachine: A private AI agent for networking and contact management.

  • Surgeflow: A browser extension that automates your web action.

  • Loki.Build: An AI-native landing page builder for studio-grade websites.

  • Google: Senior Product Engineer, Machine Learning Accelerators

  • Nvidia: Principal Enterprise AI Architect, Enterprise AI Platform - Agentic AI

  • Anduril: Senior Software Engineer, Platform Security

  • OpenAI: Protection Scientist Engineer, Intelligence and Investigations

GAMES

Which image is real?

Login or Subscribe to participate in polls.

A QUICK POLL BEFORE YOU GO

Do you expect to have an agent doing work for you in 2026?

Login or Subscribe to participate in polls.

The Deep View is written by Nat Rubio-Licht, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

The Deep View team

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The ONLY reason I went with [this image] over [the other image] is that the dog is not fully on the rug in [the other image]. I'm guessing the rug would be more comfortable [than] the hardwood to a real dog.”

“The reflections in the eyes were the deciding factor for me!”

“Color and richness of picture [made me pick this one].”

“No light in the dog’s eye, despite that side of it being lit.”

“Was a bit hard to tell. Both pix are very realistic, but the golden [retriever] was not looking up as dogs typically do when someone is getting close to take a picture.”

“Ok, from now on, when I think I have the correct one, I'm going to pick the other one! The real dog's right side [on the other image] didn't look right, which is why I got fooled.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.