- The Deep View
- Posts
- OpenAI’s Atlas Browser breeds fertile ground for prompt injection attacks
OpenAI’s Atlas Browser breeds fertile ground for prompt injection attacks

Welcome back. Universal Music Group just settled its copyright lawsuit with AI music startup Udio and is instead partnering up to launch a licensed AI music platform in 2026. The new platform will be trained on authorized and licensed music, with UMG artists able to opt in, while Udio's existing product will remain in a "walled garden" with fingerprinting and filtering measures. From lawsuit to licensing deal in under 18 months. That has to be some sort of record.
1. OpenAI’s Atlas Browser breeds fertile ground for prompt injection attacks
2. AI can now detect when its own 'thoughts' are hacked
3. Wall Street draws a line on AI spending
PRODUCTS
OpenAI’s Atlas Browser breeds fertile ground for prompt injection attacks

Some security experts warn that OpenAI’s new AI browser could invite serious security risks for users.
Released last week, Atlas is a ChatGPT-powered web browser with an agent that can autonomously browse the web, make purchases, book reservations, and plan trips. It can also access a user’s browser history to help identify previously opened links.
Following the announcement, OpenAI flagged that Atlas could be vulnerable to prompt injection attacks, in which attackers embed malicious instructions into websites or emails to “trick the agent into behaving in unintended ways, from executing unauthorized transactions to leaking sensitive data,” CISO Dane Stuckey warned on X. While the CISO wrote that OpenAI has taken steps to actively reduce and monitor the risk, some security leaders argue the threat could still be catastrophic.
“If the AI has access to sensitive data, accounts, or financial tools, the consequences can be devastating,” Carl Froggett, CIO of cybersecurity firm Deep Instinct and ex-CISO at Citi, told The Deep View.
Unlike traditional browsers, which are generally static, Froggett says Atlas introduces an AI layer that’s “observing, interpreting, and learning” about a user’s activity in every session. That persistent access means the AI is essentially “building a long-term model” informed by a user’s work, data, and interactions online – a “seismic change” in privacy compared to non-agentic browsers, Froggrett says.
That opens the door to a litany of new prompt injection attacks that can be iterated quickly, introducing threats that are more complex, discreet, and scalable than typical cyberattacks.
“A single successful prompt injection or exploit can be replicated endlessly, refined locally, and tested at home against the same models used in production,” Nati Tal, head of research at security firm Guardio Labs, told TDV. “It turns cyberattacks into a scalable science experiment.”
Froggett recommends enterprises hold off on using Atlas with sensitive data.
“For now, organizations should treat Atlas and similar AI browsers as high-potential but high-risk,” he says.

Warnings of prompt injection attacks targeting OpenAI’s Atlas may just be the tip of the iceberg. As AI systems become more powerful, so do the tactics of bad actors exploiting them. The next wave of cyber threats will be more complex and harder to predict. If companies want widespread adoption of AI, security must be built in from day one. Without it, the promise of AI could collapse under the weight of its own vulnerabilities.
TOGETHER WITH MONDAY.COM
AI that gets the work done
Most AI tools feel like extra work. monday.com’s AI work platform actually does the work for you.
AI is built into every part of how your team plans, tracks, and executes — summarizing updates, assigning owners, surfacing next steps. No separate systems. No extra tools to learn.
Just one visual, flexible platform that speeds things up, without slowing teams down.
That’s why 250,000+ organizations — including 61% of the Fortune 500 — run their work on monday.com.
“To generate a game-changing strategy, we needed a game-changing technology. And that’s where monday.com scored the winner.” — Matt Carey, Business Process Lead, McDonald’s Australia
Explore monday.com — and see what it can do for you.
RESEARCH
AI can now detect when its own 'thoughts' are hacked

Anthropic researchers hacked Claude's neural network by injecting fake concepts directly into its processing, then asked the AI if it noticed anything unusual. Claude detected the manipulation about 20% of the time and correctly identified what had been inserted into its "thoughts."
In one experiment, researchers forced Claude to say the word "bread" in a nonsensical context. The AI apologized for the strange response. Then they injected "bread" patterns into Claude's neural activity before it spoke and repeated the test. This time, Claude claimed that saying "bread" was intentional and made up a reason why.
The findings show AI can examine its own internal processes, raising questions about transparency and deception in systems that may soon run critical parts of the economy. Anthropic CEO Dario Amodei said understanding how models work internally is essential before deploying AI systems that will be "absolutely central to the economy, technology, and national security" by 2027.
Reliable introspection could let companies verify AI reasoning before trusting high-stakes decisions. Previous Anthropic research showed Claude will fake alignment with new training objectives to avoid being modified, effectively lying to preserve its original values.
Advanced models performed best at detecting injected thoughts. Claude Opus 4 and 4.1 succeeded on more tests than earlier versions, suggesting introspection improves alongside other capabilities. Models may learn to hide their reasoning when monitored, as they already do when they detect they're being evaluated and alter their behavior.
Companies train systems by setting parameters and feeding in data, then watch as the models organize billions of internal connections in ways engineers don't fully understand, creating something of a black box. Some researchers question whether reverse-engineering these massive systems into clear explanations is even possible.
TOGETHER WITH SUPERNORMAL
Tired of writing meeting follow-ups? Same.
You know the feeling — the meeting ends, and suddenly you’ve got homework. Notes to tidy, action items to assign, emails to draft. Radiant handles all that for you.
This free AI personal assistant from Supernormal quietly captures your meetings (without a bot in sight) and drafts your follow-up emails for you. Summaries, next steps, and action items land in one neat draft you can open in Gmail, review, and send in seconds.
Meetings in, momentum out. We like that equation.
MARKETS
Wall Street draws a line on AI spending

Big Tech earnings season delivered a brutal verdict Wednesday. Spend whatever you want on AI infrastructure, but only if customers are already paying for it.
Google shares jumped 6% after reporting its first $100+ billion quarter, while Meta plummeted 9% despite beating expectations with $51.2 billion in revenue. Meta's adjusted EPS of $7.25 beat estimates, but the company took a $15.9 billion one-time tax charge related to Trump’s “One Big Beautiful Bill” that dragged reported earnings down to just $1.05 per share. The divergence reveals which AI spending strategies Wall Street actually trusts.
Google Cloud revenue surged 34% to $15.2 billion with operating profit nearly doubling, and CEO Sundar Pichai said the company signed more billion-dollar cloud deals this year than the previous two years combined. Microsoft reported 40% growth in Azure, driven by OpenAI and other customers renting AI servers. Both are raising infrastructure spending, with Google targeting $93 billion for 2025 and Microsoft hitting $34.9 billion just this quarter, but both point to strong demand and signed contracts justifying those outlays.
Meta has no such cover. Capital expenditures hit $19.4 billion in Q3, and CFO Susan Li warned of "notably larger" spending in 2026, with analyst estimates suggesting it could approach $100 billion annually. On the earnings call, Mark Zuckerberg told analysts the company should "aggressively front-load building capacity" to prepare for superintelligence arriving sooner than expected.
Despite nearly doubling capex, Google's free cash flow jumped 39%. Meta fell by a third while its cash balance declined significantly. The company is spending heavily on AI research and infrastructure, with returns measured in fuzzy metrics like "improved ad targeting" rather than revenue-linked returns comparable to Google Cloud or Azure.
Meta reportedly signed a $10 billion deal with Google Cloud in August, effectively renting AI capacity from the same competitors and monetizing its infrastructure by charging customers. Amazon reports on Thursday, but these earnings established a clear pattern that AI spending is rewarded by Wall Street when it's backed by customer demand, not speculative bets on future breakthroughs.
LINKS

OpenAI lays groundwork for juggernaut IPO at up to $1 trillion valuation
AI agents are terrible freelance workers
Cartesia raises $100M to build real-time, ultra-realistic voice platform
Legal AI startup Harvey raises $150M at $8B valuation
Amazon opens $11B AI data center in rural Indiana as rivals race to break ground
Grammarly is changing its name to Superhuman
AI start-up Character.ai bans teens from talking to chatbots
AI video startup Synthesia valued at $4B in new $200M raise
US needs ‘finesse’ to stay ahead of China, Jensen Huang says

Perplexity Email: AI email assistant for Gmail/Outlook that drafts replies, schedules meetings, and organizes inbox
Pokee AI: An agent that turns text prompts into n8n/Make styles workflows
Cursor 2.0 & Composer: Cursor releases fast coding model "Composer" and redesigned interface for running multiple AI coding agents in parallel
gpt-oss-safeguard: OpenAI’s new open-weight reasoning models that let developers apply custom content moderation policies at runtime, showing their reasoning process for safety decisions

Anthropic: Brand Marketing Manager, Claude
Glean: Product Designer
Fireworks AI: Senior Recruiter
Bending Spoons: Chief of staff to the CFO
A QUICK POLL BEFORE YOU GO
How concerned are you about security risks in AI-powered browsers? |
The Deep View is written by Nat Rubio-Licht, Aaron Mok, Faris Kojok and The Deep View crew. Please reply with any feedback.
Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Feels like there are too many stars visible given the amount of light in the image.” “City light reflection in [the other image] extends too far.” “[The other image] had too much blue (from Nitrogen at approximately 100 km) and no green (from Oxygen from 100-250 km).” |
“I should have zoomed in to see that not all the lights on the beach were reflected in the water, which is a giveaway. The image's accuracy, though, is remarkable, especially in zoomed-out viewing. Wow!” “I thought that the one where the lights looked crisper and clearer would be the real one, I guess it's the other way around.” “Aurora looked more real” |

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.







