Cursor-SpaceX deal signals coding agents' victory

Welcome back. OpenAI’s Image 2.0 signals a shift from novelty to utility with thinking tools and real-world applications for generative AI images. At Google Cloud Next, the company is trying to bring order to the enterprise agent mess with a broader Gemini Enterprise platform built to manage data and governance at scale. And coding agents are now officially the center of gravity in the AI ecosystem, as the $60 billion Cursor-SpaceX deal shows how far challengers will go to stay in the race. It is a bold but risky bet to catch leaders Claude Code and Codex. —Jason Hiner
1. Cursor's SpaceX deal is a sign of what's to come
2. OpenAI's Image 2.0 model shifts from art to tools
3. Can Google tame the enterprise agent mess?
BIG TECH
Cursor-SpaceX deal signals coding agents' victory
If there was any doubt that coding agents have become the beachfront property of the AI industry, the Cursor-SpaceX deal just put it to rest.
In an unorthodox move on Tuesday, SpaceX signed a deal with Cursor that would allow Cursor and xAI (now a SpaceX subsidiary) to jointly develop a coding agent to better compete with the reigning champ Claude Code and the hard-charging Codex from OpenAI.
For now, Cursor and xAI will see if they can make the collaboration work. If they can, SpaceX will acquire Cursor later this year for $60 billion. If it doesn't work out, SpaceX will pay $10 billion for the opportunity to learn from the coding company.
That's a great deal for Cursor since the whole company was valued at $10 billion just 12 months ago. Plus, while Cursor was an early leader in the AI coding space, it's now facing an uncertain future as frontier labs make coding tools their top priority. Without guaranteed access to compute and leading-edge models, Cursor was going to have two hands tied behind its back.
So why did both sides do this deal?
What Cursor gets: Access to SpaceX's "Colossus" supercomputer will provide Cursor with the compute to match OpenAI, Anthropic, and Google, while access to xAI's frontier models means Cursor doesn't have to beg its biggest rivals for access to their most powerful models.
What xAI gets: Jumping on the fast track to create a coding agent will get xAI in the game against Anthropic and OpenAI. The two leading labs are putting their compute and their teams' focus into their coding agents right now because they're also betting it will create a pathway to build general agents that eventually empower all knowledge workers.
It's telling that in the tweet announcing the move, SpaceX described the collaboration as building "the world's best coding and knowledge work AI." It's chasing the exact same goals that OpenAI and Anthropic are sprinting toward.

Make no mistake: xAI is using its new big brother, SpaceX, to throw a Hail Mary pass in a desperate attempt to catch up with its coding agent rivals. You have to admire the chutzpah. But we have to see this for what it is: a lower-percentage pass. Still, both xAI and Cursor are upstarts with a lot of talent. Even if they learn they can't compete with Claude Code or Codex for a leadership role in coding agents, I wouldn't be surprised if they carve out a niche. Either way, this is more evidence that coding agents have moved to the epicenter of the AI revolution in 2026.
TOGETHER WITH CRUSOE
Solve Your Infrastructure Headaches… Forever
You want the best possible performance out of your LLM, but there’s one big issue holding you back: Your infrastructure. The good thing is, you aren’t alone – businesses everywhere are being limited by what their infrastructure can handle, which is where Crusoe comes in.
Crusoe Managed Inference doesn’t just unlock breakthrough speed and throughput for your models – it does it all without any of the infrastructure overhead that can hold you back. That means you’re ready to deploy fine-tuned models without any of the traditional headaches. Sounds pretty perfect to us. See how Crusoe can help right here.
PRODUCTS
OpenAI's Image 2.0 model shifts from art to tools
OpenAI's latest AI image generation model doesn't just create pictures: it thinks, browses the web, and outperforms its predecessor in nearly every way.
On Tuesday, OpenAI unveiled ChatGPT Images 2.0, a state-of-the-art image generation model that the company claims produces precise results for even complex visual tasks, including detailed instruction following, iconography, text rendering, object placement, and more.
“Image 2.0 is more than just an image generator for fun purposes; it has visual intelligence, and it's really designed for utility, beauty and real-world creative work,” said Adele Li, Product Manager at OpenAI, in a press briefing.
One of the biggest updates of the new model is that it now has thinking capabilities, allowing it to search the web, double-check its outputs, and act as a visual thought partner, according to the blog post. The thinking feature also allows it to produce multiple distinct images at once, up to eight from one prompt, a first for ChatGPT, while keeping character and object continuity.
Other upgrades include:
Languages: Stronger multilingual understanding in non-Latin text rendering, such as Japanese, Korean, Chinese, Hindi, and Bengali, an area where it has typically struggled
Real-world intelligence: It has a more up-to-date understanding of the world, with a knowledge cutoff of December 2025
Aspect ratios: There is support for more ratios, including as wide as 3:1 and as tall as 1:3
Style fidelity: It can better identify visual style characteristics and, as a result, faithfully recreate them, including subtle nuances in texture, lighting, and composition
In a pre-release demo, I watched several renderings unfold in real time, and while much of it was impressive, the standout moment was seeing the tool generate posters and infographics densely packed with text, all rendered with accuracy and detail. This seems to be an application of image generation that holds unique value for producing educational materials, a sentiment echoed by Li.
“Focusing on education is a huge priority for me in the company, and I'm really excited about releasing this to educators [and] teachers, in order to improve the ability for them to bring their students along in the journey, and also create more customized and personalized assets for [teaching] younger children,” said Li.
ChatGPT Images 2.0 is available to all ChatGPT and Codex users, while advanced outputs with thinking are available only to ChatGPT Plus, Pro, Business, and Enterprise users. The underlying model, gpt-image-2, is also available via the API for developers and businesses.

This release comes nearly a year after OpenAI launched GPT-4o image generation, which the company says has become one of its most popular features, with over one billion images created per week. That stat alone is telling. While the popular consensus might suggest people are growing tired of AI slop and aren’t finding real applications, the demand clearly tells a different story. It also raises an interesting tension: while more realistic AI-generated images make it easier to mislead people with content that looks realistic but isn't, they may actually help ease the fatigue that comes with seeing content that is so obviously AI-generated and detached from reality.
TOGETHER WITH ATTIO
The Next-Gen CRM You Need To Try
Customer Relationship Management tools have been game-changers for businesses from the very start… but in the constantly evolving AI landscape, even these are due for an upgrade. And that’s exactly what Attio has just done with their new AI CRM.
Attio doesn’t just build a complete picture of every deal and customer with zero manual logging or missing context – it also plans your next move, from prepping for meetings and running research on prospects to flagging pipeline risks and keeping you on top of every deliverable. It’s all powered by Universal Context, their proprietary intelligence layer which keeps things moving and takes your game to the next level… and you can try it right here.
ENTERPRISE
Can Google tame the enterprise agent mess?
One major tech conference has barely packed up in Las Vegas before another begins. Google Cloud Next, Google's enterprise-focused event, is now underway.
The conference centers on Google's cloud computing business, which spans AI, machine learning, infrastructure, and productivity offerings across nearly every industry, including the widely popular Google Workspace. As a result, it often serves as the stage for the company's biggest and latest AI announcements, and this year was no different, with the new Gemini Enterprise stepping into the spotlight.
The Gemini Enterprise portfolio is now intended to be an end-to-end platform for agentic systems, enabling organizations to connect their apps, data, and processes to build, manage, orchestrate, and deploy agents. The expanded portfolio now includes the Gemini Enterprise Agent Platform, which Google calls the heart of the new Gemini experience.
The Gemini Enterprise Agent Platform is an evolution of Vertex AI, offering Google's full suite of models, tuning services, and other tools to help businesses maximize their agent deployments. It includes an updated Agent Development Kit and Agent Runtime, along with new capabilities: Memory Bank and Memory Profiles, which give agents longer memory, and Agent Identity and Agent Gateway, which give IT teams greater control over the agent fleet.
The Gemini Enterprise app is also new and allows teams to collaborate on agents within a single, secure environment. It is built on the Agent Platform, so the same enterprise data and protections are accessible.
Some new capabilities include:
Agent Designer: Users can create an agent with no code, using natural language or a visual interface
Inbox in Gemini Enterprise: Users can get real-time status alerts about agent behavior directly through email and chat
Projects: A shared workspace for teams and agents that maintains the context of a topic
Canvas: A built-in interactive editor that teams can use to co-create and edit in Google Docs and Slides
Gemini Enterprise also has an open partner ecosystem, so users can access third-party agents from industry leaders. In particular, the new Agent Gallery brings a full catalog of partner-built agents from Google Cloud Marketplace. Google also introduced support for the Bring Your Own Model Context Protocol (BYO-MCP), enabling professionals to connect Gemini Enterprise to third-party business tools.

The year of the agent continues, as every major AI conference of 2026 so far has put agents at the center of the agenda. The OpenClaw momentum continues to cascade, especially in the enterprise, where concerns about the governance, security, and manageability of agents have reached a fever pitch. It's no surprise that Google wants to plant its stake in the ground, since its Google Workspace systems are often where key enterprise data and context are stored. Google does not have the same level of agent buzz as OpenClaw, Claude Code, or Codex, so it needs to generate interest to match its rivals.
LINKS

OpenAI enables cost-per-click ads in ChatGPT
Bezos's AI lab nears $38B valuation in funding deal, report
Anthropic reportedly began requiring some customers to provide a photo ID and a selfie
New social media platform, Bond, uses AI to curtail doomscrolling habit
The New York Times dove into the rise of AI-driven job cuts on Wall Street
Codex hits 4 million active users two weeks after hitting 3 million

ChatGPT Images 2.0: OpenAI’s latest and greatest image generator
Google AI Studio: It is now included in Google AI Pro and Ultra subscription plans
Microsoft Copilot: Users can text or forward an email to Copilot to get things done
Claude Code: Terminal shows recaps when refocusing a session

ServiceNow: AI Solution Architect
Stevens Institute of Technology: Forward Deployed AI Engineer
Teradata: Director - AI Engineering
University of Chicago: AI Application Developer
A QUICK POLL BEFORE YOU GO
Do you regularly use AI image generators?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Not as enhanced or sharp, so hands have taken this photo.” “The crooked horizon line was a dead giveaway.” “Initially, I thought the green was too vibrant, but the depth of field is right.”
“AI graphics tend to take reality to the next level so it's easy to spot a photo that is beyond the normal reality that we see in the world today.” “The excessive high contrast on the background rocks gave away its artificial nature.” “Out of focus rocks in [this image] don’t look real. Depth of field in [the other image is] more realistic.”


If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.