OpenAI's GPT-5.5 sets up agentic superapp

Welcome back. Google showed what practical agentic AI looks like by rebuilding Workspace around context, making everyday tools smarter without getting too flashy. Meanwhile, Meta’s move to Amazon's Graviton chips shows the AI race is no longer just about GPUs but about the full stack needed to run agents at scale. And OpenAI’s GPT-5.5 looks like the clearest signal yet of where ChatGPT is headed: toward an agentic superapp that can plan, use tools, and handle longer, messier workflows.

Jason Hiner

IN TODAY’S NEWSLETTER

1. OpenAI's GPT-5.5 sets up agentic superapp

2. How Google rebuilt Workspace for agents

3. Meta’s Amazon deal widens the AI chip race

PRODUCTS

OpenAI's agentic superapp just got its new brain

The model for OpenAI's agentic superapp has arrived, even if the superapp itself isn't here yet.

On Thursday, the ChatGPT app maker took the wraps off its latest flagship model, GPT-5.5, just seven weeks after launching GPT-5.4. The OpenAI team said the model itself, along with the company's agentic coding tool Codex, was used to build GPT-5.5. This process of models helping to build themselves is called recursive self-improvement (RSI), and it has long been anticipated as a moment when AI would take a dramatic leap forward.

In a press briefing on the new model on Thursday, I asked OpenAI president Greg Brockman whether GPT-5.5 would power the company's much-anticipated 'superapp,' and he confirmed it would. He also confirmed that OpenAI would continue to roll out aspects of the superapp incrementally. 

When I asked him how GPT-5.5 would be a step toward the superapp, Brockman said, "We're going to be landing some new features, even today, really marching it from being this coding app to being an app that is for anyone doing work with the computer.” 

Here's what The Deep View identifies as the key features of the new model, based on what we learned from OpenAI:

  • Better at long-horizon tasks: That includes workflows that require planning, feedback, iteration, and tool use (see the sketch after this list)

  • More intuitive problem-solving: It can better interpret vague tasks and decide on next steps with less guidance, though the most effective workflows will still involve strong human oversight

  • Improved computer use: OpenAI says the model can now do "economically valuable work" across 44 occupations

  • Multi-step scientific workflows: It can now generate novel mathematical insights and handle messy real-world data analysis
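
To make the first bullet more concrete, here is a minimal, hypothetical sketch of the plan-act-observe loop behind long-horizon agentic work. None of this is OpenAI's API; call_model, run_tool, and the stopping condition are placeholders for whatever model endpoint and tools a real workflow would use.

# Hypothetical sketch of a long-horizon agentic loop: the model plans a step,
# a tool runs it, the result feeds back in, and the cycle repeats until the
# model decides it is done. call_model() and run_tool() are placeholders,
# not OpenAI's actual API.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)   # plans and tool feedback so far

def call_model(state):
    """Stand-in for a model call that returns the next action, e.g.
    {"tool": "search", "args": {...}} or {"done": True, "answer": "..."}."""
    raise NotImplementedError

def run_tool(name, args):
    """Stand-in for executing a tool: search, code execution, a spreadsheet edit, etc."""
    raise NotImplementedError

def run_agent(goal, max_steps=20):
    state = AgentState(goal=goal)
    for _ in range(max_steps):            # long horizon = many plan/act/observe turns
        action = call_model(state)
        if action.get("done"):            # the model decides the task is complete
            return action["answer"]
        result = run_tool(action["tool"], action["args"])
        state.history.append({"action": action, "result": result})  # feedback for next turn
    return "Stopped: step budget exhausted; a human should review."

The claimed improvements amount to the model sustaining more of these turns, with better tool choices, before a human needs to step in.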

All of this adds up to a model that is far better optimized for agentic workflows and for helping people carry out tasks. Brockman emphasized, "We are moving to a compute-powered economy."

In the press briefing, OpenAI chief research officer Mark Chen said that GPT-5.5 will allow people to "be the orchestrators, and let the model do the heavy lifting." 

At the same time, OpenAI says it has scaled up safety to match the model's growing capabilities. VP of Research Mia Glaese said the company put GPT-5.5 through third-party safeguard testing and red-teaming for both cybersecurity and bioweapons risks. As a result, the model ships with a stronger set of safeguards, and users will see more refusals for cyber-related misuse attempts.

One of the most impressive things about the GPT-5.5 release is that OpenAI reports zero impact on latency, even though the model has added or improved many capabilities and shot to the top of benchmarks. I'll test that claim, but if it holds, it's a clear win: one of the biggest problems I've seen with recent frontier models, Gemini 3.1 Pro and Claude 4.7 Opus in particular, is that they have become very laggy.

I'm also looking forward to testing GPT-5.5 in Codex to see how much it can accomplish beyond coding and app building. The promise of the superapp is that it will transform Codex from a developer tool into a consumer app anyone can use to automate tasks and run workflows. GPT-5.5 is clearly the engine OpenAI has built for that vision, and the frontend app likely isn't far behind.

For real-time updates on the latest developments in AI, you can follow me on X at x.com/jasonhiner.

Jason Hiner, Editor-in-Chief

TOGETHER WITH ATLAN

Why your AI agents keep getting the wrong answer

Most AI agent failures aren't model failures. They're context failures: missing definitions, stale metadata, two sources that contradict each other. Teams use multiple tools (Claude, GPT projects, Gemini, Copilot, Cortex, Genie), and most rebuild the same context from scratch for every agent they ship.

On April 29, Atlan is doing a live demo of the infrastructure layer designed to fix this:

  • Context Engineering Studio: bootstrap, test, and ship reusable context repos (just like code repos)

  • Context Agents: 9 AI teammates that write and maintain your context automatically (90% suggestion acceptance in production)

  • Context Lakehouse: open, interoperable infrastructure your entire agent stack can draw from

One hour. Live launch. See how the best AI teams solve the production context problem.

Can't make it? Register anyway; you'll get the recording. → atlan.com/activate

BIG TECH

How Google rebuilt Workspace for agents

Google Cloud may be known as the backbone of the company's enterprise offerings across manufacturing, finance, and other verticals, but it is also home to one of Google's most beloved consumer products: Google Workspace.

At Google Cloud Next, the company introduced its latest Google Workspace upgrade: Workspace Intelligence, a system that better understands the real-time semantic relationships among your content across Workspace apps such as Gmail, Docs, and Sheets, supercharging context and powering agentic workflows.

“If you take a single project, you're gonna have communications about it, you're gonna have Docs, if you have data, it might be in a spreadsheet, the information is actually scattered in many different places,” said Yulie Kwon Kim, VP of Product, Google Workspace, to The Deep View. “That's the reality of life and work. So that's the problem that we're trying to solve.” 

Workspace Intelligence can gather the information you need from across apps, use Gemini reasoning to understand what is most important, and tailor outputs to the user's communication patterns. 

This will power new features across the ecosystem, including: 

  • Google Chat: Ask Gemini in Chat provides users with daily briefings and uses its skills to complete complex tasks, such as scheduling meetings or generating slides.

  • Google Sheets: Users can build or edit entire spreadsheets in natural language, drawing on data from across their apps and the web. 

  • Gemini in Docs: It can now add infographics to Docs grounded in your business data.

  • Google Slides: Create complete, editable slide decks with a single prompt, grounded in your context. 

  • AI Inbox: A new inbox view that surfaces and summarizes what is most important in your primary inbox.

  • Drive: AI Overviews and Ask Gemini are now generally available to find what you need more easily, while Drive Projects introduces a new way to organize files and emails. 

During the keynote, Kwon gave a demo of the new Workspace Intelligence experience. There, she used Google Chat to get insights into urgent tasks, with links to relevant materials and suggestions on how to proceed. Then she was able to ask for a document she could not find, locate it, and even create an entire Slides deck from a prompt. 

The company also launched a new Rapid Enterprise Migration offering that makes it five times faster to move to Google Workspace from Microsoft 365.

Describing Workspace Intelligence during the keynote, Kwon said, "It just works." It's a simple statement, but that's exactly how AI in a popular consumer product should be implemented: simple, effective, and not flashy. I had the opportunity to go hands-on with the features during a lunch-and-learn session, and they were indeed incredibly intuitive and, most importantly, useful. My favorite is being able to get an AI Overview of my own Google Drive, an abyss where I lose things daily no matter how hard I try to stay organized.

Disclosure: Sabrina Ortiz's travel to Google Cloud Next was paid by Google. The Deep View's coverage is editorially independent from the companies we cover.

Sabrina Ortiz, Senior Reporter

TOGETHER WITH GRANOLA

The Deep View team is obsessed with Granola.

We're not talking about the food (although a few of us have an unhealthy interest in that too).

We're talking about the AI notepad we've been using in 2026. It works across our teams, summarizes every meeting, and saves us around 10 hours a week per person.

HARDWARE

Meta’s Amazon deal widens the AI chip race

Meta is looking beyond Nvidia GPUs to meet its future compute needs.

On Tuesday, the company signed an agreement with Amazon to deploy tens of millions of AWS Graviton cores, with the potential for future expansion. This deal builds on both companies’ long-standing cloud infrastructure partnership and makes Meta one of the world's largest Graviton customers. 

Meta will use the silicon to support “various workloads” across the company, including AI efforts, with a special focus on agentic workloads.

“As we scale the infrastructure behind Meta's AI ambitions, diversifying our compute sources is a strategic imperative,” said Santosh Janardhan, head of infrastructure at Meta, in the announcement. “AWS has been a trusted cloud partner for years, and expanding to Graviton allows us to run the CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale.”

AWS Graviton processors are ARM-based CPUs, the kind of general-purpose silicon that handles the orchestration layer of agentic systems: managing workflow state, routing calls between agents, and coordinating the complex, multi-step tasks those agents carry out.
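
To make that concrete, here is a rough Python sketch of what an orchestration layer does. Every name in it is hypothetical (it is not Meta's or AWS's code); it simply shows the kind of state-tracking and routing work that runs on general-purpose CPUs rather than GPUs.

# Rough illustration of the CPU-bound coordination work described above:
# keeping per-workflow state and routing each step to the right agent.
# Everything here is hypothetical, just the general shape of an
# orchestration layer that runs on general-purpose CPUs.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Task:
    workflow_id: str
    step: str        # e.g. "retrieve", "summarize", "verify"
    payload: dict

@dataclass
class WorkflowState:
    steps_done: list = field(default_factory=list)
    results: dict = field(default_factory=dict)

class Orchestrator:
    """Routes each step of a multi-step workflow to the right agent and keeps
    per-workflow state so later steps can see earlier results."""

    def __init__(self, agents):
        self.agents = agents     # maps step name -> callable agent
        self.state = {}          # workflow_id -> WorkflowState
        self.queue = deque()     # pending Tasks

    def submit(self, task):
        self.state.setdefault(task.workflow_id, WorkflowState())
        self.queue.append(task)

    def run(self):
        while self.queue:
            task = self.queue.popleft()
            agent = self.agents[task.step]        # routing calls between agents
            wf = self.state[task.workflow_id]     # workflow-state management
            result = agent(task.payload, wf)
            wf.steps_done.append(task.step)
            wf.results[task.step] = result

# Example wiring with trivial stand-in agents:
if __name__ == "__main__":
    agents = {
        "retrieve":  lambda payload, st: f"docs for {payload['query']}",
        "summarize": lambda payload, st: f"summary of {st.results['retrieve']}",
    }
    orch = Orchestrator(agents)
    orch.submit(Task("wf-1", "retrieve", {"query": "Graviton5 launch"}))
    orch.submit(Task("wf-1", "summarize", {}))
    orch.run()
    print(orch.state["wf-1"].results["summarize"])

GPUs still do the heavy model inference; the point of Janardhan's quote is that this surrounding coordination work is CPU-intensive, which is where Graviton fits.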

The Graviton5 chip, in particular, features 192 cores, a significantly larger cache than its predecessor, and support for Elastic Fabric Adapter (EFA), which enables low-latency, high-bandwidth communication between instances, according to the release. Amazon also touted the chips’ “leading energy efficiency,” with Graviton5 delivering up to a 25% boost over previous generations.

When thinking about compute demands in the AI race, it's easy to focus only on Nvidia and GPUs. But two things complicate that picture: the AI industry is deeply compute-constrained, and no single company can meet that demand alone; and GPUs are not the only important component of the AI stack, a fact that only becomes truer as AI evolves. Nvidia competitors such as AMD, Intel, Google, and Amazon need to keep stepping up to fill the gaps, and with demand showing no signs of slowing, the pressure on them to do so is only growing. Look for more compute deals that feature Nvidia competitors.

LINKS

  • Copilot: Word, Excel, and PowerPoint agentic features are generally available

  • Kling: 4K Mode is now live in the Video 3.0 series for higher-quality generations

  • Tencent Hy: Hy3 preview (295B A21B) is now open source

  • Claude: New connectors include AllTrails, Instacart, Audible, TripAdvisor, and more

Need domain experts for AI training work? Athyna Intelligence connects AI labs with financial analysts, economists, and quant researchers from Latin America who can evaluate models on:

  • Financial modeling & valuation

  • Investment analysis & portfolio theory

  • Risk assessment & derivatives pricing

Same US timezone. Ready in days. 40–60% cost savings.

(sponsored)

GAMES

Which image is real?


A QUICK POLL BEFORE YOU GO

Do you use AI to help prioritize your email and chat messages?


The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The dimensions in [this one] were more realistic for an image that was created by a human.”

“Reflections seemed more real.”

“Circular buildings around an enclosure is more likely than a circular building around an oval enclosure.”

“The clouds reflected in the windows seemed more real.”

“I've been wrong so many times that I now choose the one I think is too whimsical to be real.”

“Something about the shape lines of the building looked odd.”

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.