Why OpenAI’s team agents raise new risks

Welcome back. Apple and Google’s AI alliance is moving from theory to reality, showing how high the stakes have become as the two rivals join forces to bring AI to the mainstream. At the same time, one of the least visible but most important AI battles is happening in the data layer, where Vast Data’s $1 billion raise signals that infrastructure, storage, and memory constraints could shape how far agents can scale. And OpenAI’s new Workspace agents make clear that team agents are the next frontier, but companies should move carefully, because shared context, autonomy, and business-critical data introduce a new class of risks. Jason Hiner

IN TODAY’S NEWSLETTER

1. OpenAI launches team agents: be ready for the risks

2. Google keynote cements Apple frenemy deal on AI

3. The next AI bottleneck is the data layer

PRODUCTS

Why OpenAI’s team agents raise new risks

Personal AI agents have been the hottest trend of 2026. OpenAI wants to turn it into a team sport.

On Wednesday, the company launched ChatGPT "Workspace agents," which now appear as a new tab in the left navigation bar and use Codex to let you build an agent once and quickly share it across teams to carry out all kinds of recurring tasks.

Here's how OpenAI described it in the announcement: "AI has already helped people work faster on their own, but many of the most important workflows inside an organization depend on shared context, handoffs, and decisions across teams. Workspace agents are designed for that kind of work: they can gather context from the right systems, follow team processes, ask for approval when needed, and keep work moving across tools." 

If you're ready to try ChatGPT Workspace agents on your team, there are several factors to keep in mind:

  • Workspace agents are not just set-it-and-forget-it. They will require deep integration with your company’s apps and data. Teams will need to connect them to systems like internal docs, messaging platforms, and databases. And that will immediately raise hard questions about data access, permissions, and governance.

  • OpenAI is pushing a model where agents can coordinate across multi-step workflows, not just single tasks. That means companies will need to think in terms of process design: what tasks should be automated, how agents hand off work, and where humans stay in the loop to review or intervene.

  • The biggest unlock, and the biggest risk, is that these agents can operate with persistent context inside a workspace, learning how a team works over time. That creates leverage for productivity, but also forces companies to get serious about accuracy, oversight, and how much autonomy they are willing to give AI inside core business operations.

The OpenClaw phenomenon showed us how powerful personal AI agents can be when they're given access to all our personal tools and context, and when we can easily interact with them from anywhere via messaging. The fact that tools like NanoClaw and NemoClaw have popped up to make the process safer and more secure is an indication of both how powerful agents can be and how dangerous they can become. That is amplified to a whole other level when you give them access to business-critical data. So if you want to implement Workspace agents, do it intentionally and in small steps. At a minimum, make sure you have data and agent oversight plans in place before you proceed.

Jason Hiner, Editor-in-Chief

TOGETHER WITH DESCOPE

Take AI agents and MCP servers from playground to production

Every organization is exploring how to adopt AI agents or MCP servers, but how many of them are in production?

And if they aren't in production, how likely is it that authentication, access control, and agentic identity concerns are the reason?

Descope's webinar covers:

  • Real-world MCP and agentic AI use cases

  • Identity challenges that prevent production-readiness

  • Actionable tips to build secure, scalable AI agents and MCP servers

Move fast on AI without breaking things. Watch the webinar now.

BIG TECH

Google keynote cements Apple frenemy deal

Google’s Cloud Next event in Las Vegas had fake snow, vibe-coded music, and flashy visuals. Yet, none of it drew my attention more than an Apple logo.

On Wednesday, Google Cloud CEO Thomas Kurian kicked off the opening keynote with a run-through of Google’s latest and greatest highlights, which, of course, featured a lot of AI. When discussing its latest AI models, Kurian noted that their true value is unleashed when “operationalized” to solve mission-critical problems, citing its deal with Apple as a prime example. 

“We are collaborating with Apple as their preferred cloud provider to develop the next generation of Apple foundation models based on Gemini technology,” said Kurian. “These models will help power future Apple Intelligence features, including a more personalized Siri coming late this year.” 

The new Siri would finally deliver on the long-standing promise of drawing on users' personal data, such as messages, notes, and emails, to better understand context, offer more useful assistance, hold more natural conversations, and take action directly within apps. 

Bloomberg’s Apple watcher, Mark Gurman, also reported that Siri would have a new look and a chatbot-like interface. Gurman reported earlier this week that the WWDC 26 teaser hints at Siri’s new look. The company is currently testing a Siri interface that sits in the Dynamic Island and expands and glows when triggered, similar to how the "26" is highlighted in the WWDC teaser.

While the Gemini-Siri news was announced in January, this marks the first time it has been featured at a big event, and likely the first of many mentions from both companies in the months ahead. On May 19-20, Google will hold its annual developer conference, Google I/O, where we may get a sneak peek at what's to come. Then, on June 8, Apple takes the stage at WWDC, where the launch of iOS 27, including the unveiling of the new Siri, is widely expected.

Following Apple’s typical announcement cycle, even if the new Siri is finally unveiled in June, users won’t be able to access it until the fall, with the public release of iOS 27 and the new iPhone lineup. Of course, The Deep View will thoroughly cover both events to bring you all the latest.

Apple and Google have been fierce competitors in recent years, vying to capture the same audience in the smartphone market and striving to draw users deeper into their respective ecosystems. The most visible example has been the infamous green-versus-blue bubble divide. However, collaboration between the two companies on AI is mutually beneficial. Apple needs help with foundation models, while Google needs to fend off upstart rivals OpenAI and Anthropic. If the two companies can deliver the best possible AI experiences across their ecosystems, they have an opportunity to bring these features to a much broader audience than the one using AI today.

Disclosure: Sabrina Ortiz's travel to Google Cloud Next was paid by Google. The Deep View's coverage is editorially independent from the companies we cover.

Sabrina Ortiz, Senior Reporter

TOGETHER WITH CRUSOE

Are You Hitting The “Memory Wall”?

Nobody likes a bottleneck… especially when that bottleneck holds up your AI’s ability to deliver. But with Crusoe’s inference engine, you’ll never have to worry about a chokepoint again. Powered by their MemoryAlloy™ technology, Crusoe delivers larger shared memory capacity – that means more users served at a lower latency, with better throughput, and less wasted compute. Not bad.

The result? Breakthrough time-to-first-token speed and up to 5x higher throughput. If you want to experience the optimized performance of Crusoe for yourself, connect with their team here to help fine-tune your own model.

ENTERPRISE

The next AI bottleneck is the data layer

Enterprises create a lot of data. And while that can be helpful for giving AI models richer context, using it can gum up the works. 

It’s a problem that Vast Data aims to solve. Vast offers an AI operating system that consolidates enterprise data with the systems that need it, unifying data, systems, and automation in one central place so that agents can reason about and act on a business’s context in real time.

And on Wednesday, Vast announced that it raised $1 billion in Series F funding, more than tripling its valuation to $30 billion. The round drew in investors such as Nvidia and Fidelity, reflecting growing demand to simplify the AI infrastructure stack as AI scales. 

Vast’s system is built on what Renen Hallak, founder and CEO of Vast, calls the DASE architecture, or “Disaggregated Shared Everything,” a framework built by the company. To put it simply, Vast’s system operates data storage and AI systems in parallel, connecting them with ultrafast links to reduce latency. Any server within the system can access the central well of data, making it easier to scale the AI systems. The result? Enterprises no longer need to choose between scale and simplicity, or performance and cost. 

Hallak told The Deep View that the idea came from improvements to neural networks at Google DeepMind in 2014. “The natural question in our minds was, what if we can give it even faster access to even more information?” Hallak said. “Will we get closer and closer to the human brain, and will we eventually surpass what the human brain can do?” 

Vast has grown rapidly over the past year, tripling its staff and generating more than $100 million in quarterly revenue, Hallak told me. The company counts Mistral, CoreWeave, JPMorgan Chase, Microsoft and Google Cloud as customers. 

“We focus on the most data-intensive organizations on the planet, and that's because they are the ones that need this level of scale,” said Hallak. “As we progressed through time, it went from being that small niche that we started from, to generative AI companies, all of the large language model builders and the frontier model builders.” 

However, Vast was around long before the AI boom, Hallak told me. The company was founded a decade ago, putting out its first product in 2019, at a time when a “very small niche” needed it, such as researchers in autonomous systems, medicine or quant trading. Now, as AI models and agentic systems produce more data than ever, the tables have completely turned.

“It's called generative AI for a reason,” said Hallak. “As they generate text and code and pictures and video, then all of that data needs a place to live.”

As agents scale, they generate more data than ever, and the problems that Hallak and Vast aim to solve with the architecture are poised to grow. However, the market faces other constraints as the data well goes deeper, including memory and storage bottlenecks, Hallak told me. So while innovation can happen at all layers of the stack, a bottleneck in one layer can have a domino effect on the rest. In short, until shortages on the components themselves are eased, every part of the market is going to feel the squeeze.

LINKS

  • X: Custom timelines, powered by Grok, allow users to pin up to 75 topics to the home tab

  • Qwen3.6-27B: Alibaba’s latest dense, open-source model 

  • Euphony: OpenAI’s open-source tool for visualizing chat data and Codex session logs

  • Odyssey-2 Max: Odyssey’s most powerful world model yet

  • ChatGPT: Workspace agents can handle complex tasks and long-running workflows

GAMES

Which image is real?


POLL RESULTS

Do you regularly use AI image generators?

Yes (32%)
No (64%)
Other (4%)

The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“This is the kind of photo I'd take, with the dog looking like he's going to make a snack of the human.”

“Looks more like a scene that would catch the eye of a professional photographer.”

“Composition has human tones and effects.”

“There are small puddles beneath the dog … but no reflections in them.”

“The 'Rock Path' looked too unnatural.”

“The main reason was perspective. The human in [this image] was too large for its position in the image.”

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.