OpenAI's AWS deal is bigger than it looks

Welcome back. Today’s issue looks at AI’s trust problem from three angles. Cisco is offering a “DNA test” for AI models, giving companies a way to verify where models came from before putting them near sensitive systems. Anthropic is aiming Claude Security at enterprise codebases, even as its more powerful Mythos model raises alarms about cyber risk. And OpenAI’s new AWS partnership is bigger than just cloud access. It gives AWS customers OpenAI models, Codex, and managed agents inside infrastructure they already trust. —Jason Hiner
1. OpenAI's AWS deal is bigger than it looks
2. Claude cybersecurity tool steps into a minefield
3. Cisco launches "DNA Test" for AI models
BIG TECH
How the AWS-OpenAI deal fills gaps for both
OpenAI’s new partnership with Amazon Web Services is a much bigger deal than just officially marking the end of its exclusive Microsoft pact.
As the world’s largest cloud platform, AWS is trusted by millions of organizations, from startups to enterprises to government agencies. They often store their data there, and they already have corporate security policies and governance in place across AWS systems.
That’s one of the biggest reasons Amazon Bedrock has rapidly become a preferred platform for piloting and launching AI projects. But at the What's Next with AWS event on Tuesday in San Francisco, AWS CEO Matt Garman tacitly admitted that Amazon Bedrock has been missing something: customers have been asking for OpenAI models.
That’s why both companies were eager to pound their chests about the new partnership. On Tuesday, the two teams revealed that they’ve been collaborating for eight weeks to launch a set of solutions for the enterprise.
"We are co-developing an agent platform from the ground up,” said OpenAI CEO Sam Altman. “This is day one. We're on a journey to give customers transformative agentic solutions."
For now, they announced three programs in limited preview:
OpenAI models on Amazon Bedrock: One of Bedrock’s biggest draws is its wide selection of open-source and proprietary models from Anthropic, Meta, Nvidia, Mistral, DeepSeek, Cohere, and others, which makes it simple to deploy and test different models. Customers can access all of them through the Bedrock APIs and use their existing controls to manage security and costs. Adding OpenAI’s models fills the product’s biggest gap.
Codex on Amazon Bedrock: Having just eclipsed 4 million weekly users, Codex will now be available to an even broader set of business customers. Enterprise software teams can use Codex to automate workflows from within their existing AWS environment, using their AWS credentials and infrastructure.
Amazon Bedrock Managed Agents from OpenAI: Because agents can become so complex and risky, there’s growing appeal in running them in a cloud like AWS, where identity management, auditing, persistent memory, and production-ready enterprise standards are already in place. That makes this the most turnkey agentic announcement from the two companies, and it matches Anthropic’s recent Claude Managed Agents offering.
In this deal, AWS customers don’t just get access to the latest OpenAI models; they also get OpenAI’s agent harness, which has quickly gained momentum as a rival to Claude Code. A harness is simply the software layer that makes LLMs agentic. “The harness is the playbook for the model,” said Anthony Liguori, AWS distinguished engineer, at the event.
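To make the "harness" idea concrete, here is a minimal sketch of the loop such software runs around a model. Everything below is illustrative: the stub model, tool names, and message format are invented for the example and are not the AWS or OpenAI API.

```python
# Minimal sketch of an agent harness: the loop around a model that turns
# single-shot completions into agentic behavior. The "model" is a stub;
# in a real harness it would be an LLM call.

def stub_model(history):
    """Pretend model: requests one tool call, then finishes."""
    if not any(h.startswith("observation:") for h in history):
        return {"action": "tool", "name": "add", "args": (2, 3)}
    return {"action": "finish", "answer": history[-1].split(": ")[1]}

TOOLS = {"add": lambda a, b: a + b}

def run_harness(model, max_steps=5):
    history = ["task: compute 2 + 3"]
    for _ in range(max_steps):
        step = model(history)
        if step["action"] == "finish":
            return step["answer"]
        # The harness, not the model, executes tools and feeds results back.
        result = TOOLS[step["name"]](*step["args"])
        history.append(f"observation: {result}")
    raise RuntimeError("step budget exhausted")

print(run_harness(stub_model))  # prints 5
```

Real harnesses add the pieces Liguori's "playbook" implies: tool schemas, permissions, retries, and memory, but the control flow is this same observe-act loop.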

OpenAI only made its hard pivot toward the enterprise in the last couple of months, but it continues to make deliberate moves that show a much stronger commitment to becoming a business partner. That's a welcome development after OpenAI drew broad criticism for spreading itself across too many bets in late 2025. And as rival Anthropic continues its mission to become the preferred model provider for enterprises—and positions itself for a soaring valuation because of it—any strategy that can give OpenAI a leg up among businesses matters. Enterprises, for their part, will always welcome the competition and additional options.
Disclosure: Jason Hiner's travel to What's Next With AWS was paid by Amazon. The Deep View's coverage is editorially independent from the companies we cover.
TOGETHER WITH MODE MOBILE
Investors are watching this fast-growing tech company
No, it's not Nvidia… It's Mode Mobile, 2023’s fastest-growing software company according to Deloitte.
Their EarnPhone has helped users earn and save over $1B, driving $115M+ in revenue and an eye-popping 32,481% revenue growth. And having secured partnerships with Walmart and Best Buy, Mode’s not stopping there…
Like Uber turned vehicles into income-generating assets, Mode is turning smartphones into an easy passive income source. The difference is, investors like you still have a chance to invest in Mode’s pre-IPO offering at $0.50/share.
They’ve just been granted the stock ticker $MODE by the Nasdaq and over 59,000 investors participated in their previous rounds.
GOVERNANCE
Claude cybersecurity tool steps into a minefield
AI has the power to make or break cybersecurity. Aiming to prevent the worst-case scenarios that their ever-more-powerful models could cause, providers are scrambling to find solutions.
On Thursday, Anthropic released Claude Security, a dedicated offering for weeding out code vulnerabilities, in public beta for Enterprise users. Though not as powerful as Claude Mythos, which Anthropic said can surpass “elite human experts” at finding holes in software, the product is powered by Opus 4.7, the company’s most powerful generally available model, and can be leveraged by a much wider set of organizations.
With AI models becoming more capable of discovering vulnerabilities, it’s only a matter of time before their capabilities are put to use to exploit them, too, Anthropic said in its post. “Now is the time for organizations to act to improve their security, preparing for a world in which working software exploits are much easier to discover.”
The product, previously Claude Code Security, was released in a limited research preview in February and has since been tested by hundreds of organizations, the company reported.
Rather than searching for known patterns, Claude Security reasons about code the same way a security researcher would, Anthropic says.
Claude Security offers scheduled and targeted scans of codebases, integration with audit systems, and the ability to track triaged findings.
The tool also doesn’t require API integrations or custom agent buildouts. Any enterprise that uses Claude can leverage Claude Security.
Anthropic is also partnering to integrate Claude Security into several existing software tools, including CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne, TrendAI, and Wiz.
Claude Security is being released as the White House seeks to stop Anthropic from widening the reach of Claude Mythos, its most powerful model yet. On Thursday, the Wall Street Journal reported that the administration opposes Anthropic’s plan to release Mythos to around 70 more organizations, which would bring the total to 120 entities. Sources told WSJ that the opposition is largely due to security concerns.
However, OpenAI may be breathing down Anthropic’s neck. Research released on Thursday by the AI Security Institute found that OpenAI's GPT-5.5 has cybersecurity capabilities comparable to Mythos Preview, calling it “one of the strongest models [they] have tested” based on its cyber capabilities.

Anthropic and OpenAI both seem to be playing with fire. Anthropic is muzzling Mythos because it is a seemingly double-edged sword, capable of both supporting and completely upending cybersecurity and even national security. Offering Claude Security on Opus 4.7 is a bit of a consolation prize for those who aren’t the elite few given access to Mythos. OpenAI, it seems, is taking a similar path, with CEO Sam Altman announcing on Thursday that GPT-5.5-Cyber, its frontier security model, will be available only to “critical cyber defenders” for now. At the end of the day, both companies are aiming to get ahead of problems that their models are effectively poised to worsen. Whether their solutions will be stitches or Band-Aids is uncertain until these powerful models are out in the world. And that's the scary part.
TOGETHER WITH ATTIO
Your CRM Should Be This Smart.
Customer Relationship Management tools have been game-changers for businesses from the very start… but as AI reshapes how teams operate, most CRMs are still catching up. The gap between what these tools promise and what they actually deliver has never been more obvious. Attio is the first to close it.
Attio is the AI CRM that builds a complete picture of every deal and customer with zero manual logging or missing context – and it doesn't stop there. Attio plans your next move too, from prepping for meetings and running prospect research to flagging pipeline risks before they become problems. It's all powered by Universal Context, their proprietary intelligence layer that keeps every relationship, every deal, and every deliverable in full view… and you can try it right here.
PRODUCTS
Cisco launches ‘DNA Test’ for AI models
Deploying AI in any organization means granting it access to sensitive data and processes. Cisco wants companies to know exactly what they're letting in before they do.
On Thursday, Cisco launched a Model Provenance Kit, which it describes as a “DNA Test for AI models” that verifies a model's origins and checks whether it has been tampered with. The tool is ultimately supposed to give organizations more confidence about the models they deploy.
"Model provenance will underpin AI governance and AI security by making it possible to trace how systems are built, how they evolve, and how their outputs can be reliably attributed in high-stakes environments,” said Amy Chang, head of AI threat intelligence and security research at Cisco, told The Deep View.
Model Provenance Kit analyzes the model’s identity using architecture metadata, tokenizer structure, and learned weights to produce a “rich fingerprint” for each one, as well as a single provenance score indicating whether two models share a common origin or training lineage. There are two modes:
Compare mode: Takes any two models and produces a score reflecting how much lineage they share.
Scan mode: Takes a single model and matches it against a database of known fingerprints to return the closest lineage candidates.
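As a toy illustration of how the two modes relate, the sketch below reduces each "model" to a numeric fingerprint and scores agreement with cosine similarity. This is only a stand-in for the idea: Cisco's actual fingerprint combines architecture metadata, tokenizer structure, and learned weights, and its scoring method is not public; all names and data here are invented.

```python
import math

def fingerprint(weights):
    # Real systems derive a compact signature from many model properties;
    # here we simply use the raw weight vector.
    return list(weights)

def provenance_score(fp_a, fp_b):
    """Cosine similarity in [-1, 1]; a score near 1 suggests shared lineage."""
    dot = sum(a * b for a, b in zip(fp_a, fp_b))
    norm_a = math.sqrt(sum(a * a for a in fp_a))
    norm_b = math.sqrt(sum(b * b for b in fp_b))
    return dot / (norm_a * norm_b)

def scan(fp, database):
    """Scan mode: return the known fingerprint closest to fp."""
    return max(database, key=lambda name: provenance_score(fp, database[name]))

base = [0.2, -1.1, 0.7, 0.05]
finetuned = [0.21, -1.0, 0.69, 0.04]   # small drift: likely same lineage
unrelated = [1.4, 0.3, -0.9, 0.8]      # different origin

# Compare mode: score two models directly.
print(provenance_score(fingerprint(base), fingerprint(finetuned)))  # near 1.0
# Scan mode: match one model against a fingerprint database.
print(scan(fingerprint(finetuned), {"base-model": base, "unrelated": unrelated}))
```

The point of the sketch is the workflow, not the math: compare mode is one pairwise score, and scan mode is that same score ranked across a database of known models.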
The company evaluated the tool’s accuracy against a 111-pair benchmark; only 4 of the 111 pairs were misclassified, and those involved “extreme architectural transformations.” Model Provenance Kit is available today, with the repository accessible via GitHub and the model fingerprint database available on Hugging Face.
This targets a major issue in the AI space: People are downloading open-source model repositories from platforms such as Hugging Face without being able to verify a model's exact origins. For instance, Cisco’s announcement highlights how a developer can claim a model was trained from scratch when it is actually a copy of another model.
Other possible issues include biases in the training data, vulnerabilities, licensing caveats, or models that have simply been modified during development without those changes being accurately logged. Cisco warns that this exposes companies to poisoned or compromised models, as well as licensing and regulatory, supply chain integrity, and incident response risks.

Cisco’s goal is to help enterprises adopt AI safely and securely by providing the tools to do so. A prime example is the recently released LLM Security Leaderboard, which ranks top models based on their responses to adversarial attacks and associated security risks. Its Model Provenance Kit is another. Together, these tools put Cisco in a strong strategic position: By helping clients understand the safety and integrity of the models they evaluate, Cisco becomes a natural partner for the infrastructure needed to deploy them.
LINKS

Elon Musk confirms xAI uses OpenAI models to train Grok
Anthropic considers fresh funding round that bumps valuation to $900 billion
Spotify rolls out a badge confirming an artist is a human and not AI
Anthropic developed BioMysteryBench, a bioinformatics benchmark
US FDA to use AI to monitor clinical trial data in real time
OpenAI addresses why models increasingly mentioned goblins, gremlins

Google AI Studio: multi-chat and web search are live in build mode
World Labs: Expand feature is now available to everyone
Google Photos: New wardrobe feature lets users build a digital closet
GPT-5.5: OpenAI dropped a new prompting guide
Freepik: The AI creative platform rebrands to Magnific

Build a strong team, without the usual hiring headaches
Get AI-assisted matching and human vetting tailored to your needs in days.
Save up to 70% on salary costs while still hiring high-skill professionals.
Scale with a repeatable process that keeps quality high and effort low.
(sponsored)
POLL RESULTS
Have security concerns held you back from using agents more?
Yes (77%)
No (20%)
Other (3%)
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.



If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
*Mode Mobile Disclaimers: Please read the offering circular and related risks at invest.modemobile.com. This is a paid advertisement for Mode Mobile’s Regulation A+ Offering.
Mode Mobile recently received their ticker reservation with Nasdaq ($MODE), indicating an intent to IPO in the next 24 months. An intent to IPO is no guarantee that an actual IPO will occur.
The Deloitte rankings are based on submitted applications and public company database research, with winners selected based on their fiscal-year revenue growth percentage over a three-year period.
Pro forma revenue and EBITDA, includes full year numbers of the businesses acquired throughout 2025.













