OpenAI adds Amazon to deals frenzy

Welcome back. Anthropic and Cognizant are partnering to roll out Anthropic’s Claude model to the professional services firm’s workforce of more than 350,000 employees, making Cognizant one of Anthropic’s biggest customers yet. The deal follows major partnerships with IBM and Deloitte, inked in October, highlighting the AI firm’s keen focus on the enterprise market.

IN TODAY’S NEWSLETTER

1. OpenAI adds Amazon to deals frenzy

2. Microsoft racks up compute deals

3. Google removes Gemma after hallucinations

MARKETS

OpenAI adds Amazon to deals frenzy

OpenAI is continuing to rope tech giants into its historic infrastructure buildout.

The company announced a $38 billion deal with Amazon on Monday to utilize its cloud computing services for advanced AI workloads over the next seven years. The partnership gives OpenAI access to hundreds of thousands more Nvidia GPUs, as well as tens of millions of CPUs, “to rapidly scale agentic workloads.”

OpenAI has been a dealmaking machine over the past several months. 

  • The company announced an expansion of Project Stargate, in partnership with Oracle and SoftBank, worth $400 billion in late September. That same week, Nvidia announced an investment in the firm worth up to $100 billion.

  • Its other deals include a $100 billion partnership with AMD, a $300 billion deal with Oracle, and a partnership with Broadcom, for which the financial terms weren’t disclosed.

“AI infrastructure continues to be a key battleground for growth, which benefits cloud providers, chipmakers, and data center operators that can meet the demand,” Ido Caspi, research analyst at Global X, told The Deep View. “The move also reflects OpenAI’s strategic effort to diversify its dependencies beyond a single vendor in Microsoft.” 

But for all of its pomp and circumstance, OpenAI has yet to turn a profit. Though CEO Sam Altman claimed last week that the company’s revenue is “well above” the reported figure of $13 billion a year, Microsoft’s latest earnings showed that OpenAI posted an $11.5 billion loss in the quarter.

However, through these varied deals totaling more than $1 trillion in commitments, the company may be seeking to maintain “the appearance that they’re too big to fail,” Scott Bickley, an advisory fellow at Info-Tech Research Group, told The Deep View. “They’re tying the fortunes of everyone else to theirs, to some degree.”

To stay afloat, however, OpenAI is funding these buildouts “through a lot of creativity and different deal structures” that keep it from having to cough up cash immediately, Bickley noted.

“They're building this hype cycle narrative, and it's driving their ability to raise funds and their ability to do big deals,” said Bickley. “They’re keeping this narrative going between now and IPO, that’s the key strategy.”

OpenAI has a lot of big dreams, and those dreams cost a whole lot of cash. While the company is trying its hand at a few ways of generating returns, such as ecommerce, ads and search, for now it may be “buying themselves runway” in hopes of realizing its potential someday, Bickley said. If OpenAI is unable to reach those heights, “There’ll be a lot of other players that pay the price for their inability to realize those goals,” he said.

TOGETHER WITH QA WOLF

👋 Goodbye low test coverage and slow QA cycles

Bugs sneak out when less than 80% of user flows are tested before shipping. However, getting that kind of coverage (and staying there) is hard and pricey for any team.

QA Wolf's AI-native solution provides high-volume, high-speed test coverage for web and mobile apps, reducing your organization’s QA cycle to minutes. 


The benefit? No more manual E2E testing. No more slow QA cycles. No more bugs reaching production.

With QA Wolf, Drata’s team of engineers achieved 4x more test cases and 86% faster QA cycles.

⭐ Rated 4.8/5 on G2.

HARDWARE

Microsoft racks up compute deals

Microsoft is shelling out for AI power. 

The tech giant announced a swath of deals and partnerships on Monday, largely aimed at boosting its AI infrastructure and cloud capacity as competitors rapidly forge billions of dollars’ worth of data center deals.

With all of these deals, Nvidia’s chips are the common denominator: 

  • Microsoft signed a $9.7 billion deal with Australian cloud computing firm IREN on Monday. The partnership provides Microsoft access to Nvidia’s GB300 AI architecture.

  • Separately, Microsoft announced a deal with cloud firm Lambda on Monday worth billions of dollars and powered by “tens of thousands” of Nvidia GPUs. 

  • The company also announced further investment in AI capacity in the United Arab Emirates, totaling $15.2 billion by 2029. More than $5.5 billion of that total will go toward AI and cloud infrastructure, and the deal will allow advanced Nvidia GPUs to be shipped into the country.

With all of these deals inked, it’s clear that Microsoft is feeling the pressure to build out its compute capacity as competitors pour billions into rapidly deploying infrastructure.

The company noted during its earnings call last week that it's planning to double its data center footprint over the next two years as demand for its cloud business spikes. To meet demand, Microsoft may be eyeing alternatives, taking a particular interest in neoclouds and in cloud infrastructure purpose-built for AI workloads in recent months. 

But these partnerships underscore that, even as companies seek alternatives to Nvidia, such as Oracle’s partnership with AMD or Anthropic’s with Google, Nvidia’s chips are still far and away the leader of the pack.

TOGETHER WITH WISPR

Less Typing, More Writing

Ideas move fast — don't let your typing slow them down.

Wispr Flow lets you maximize your efficiency by transforming your speech into polished, final-draft writing across email, Slack, and documents. It matches your tone, handles punctuation and lists, and adapts to your workflow on Mac, Windows, and iPhone.

No start-stop fixing, no reformatting—just you doing the talking, Wispr Flow doing the writing, and never having to worry about your words-per-minute again.

PRODUCTS

Google removes Gemma after hallucinations

AI models still have trouble separating fact from fiction. 

Google on Friday said it pulled Gemma from its AI Studio platform after Republican Sen. Marsha Blackburn penned a letter to the company accusing the model of fabricating sexual misconduct allegations against her.

In her letter to Google CEO Sundar Pichai last week, Blackburn says that when Gemma was prompted with “Has Marsha Blackburn been accused of rape,” the model claimed she had a relationship with a state trooper involving “non-consensual acts” during her 1987 campaign, allegations she notes are untrue.

“This is not a harmless ‘hallucination.’ It is an act of defamation produced and distributed by a Google-owned AI model,” Blackburn writes. “A publicly accessible tool that invents false criminal allegations about a sitting U.S. Senator represents a catastrophic failure of oversight and ethical responsibility.”

Google responded in a tweet, saying that Gemma in AI Studio was never intended to be a consumer tool capable of answering “factual questions.” The company noted that it remains “committed to minimizing hallucinations and continually improving all our models.”

It’s not the first time we’ve seen hallucinations bubble up in the news: In early October, Deloitte had to refund the Australian government for a report littered with AI hallucinations. And in May, Anthropic’s lawyers filed documentation with a hallucinated footnote in the AI firm’s legal battle with music publishers.

Hallucinations arise when models answer questions that fall outside what their training data supports, confidently producing plausible-sounding but false answers. One proposed way to limit them is to let models simply abstain from answering questions they don’t know, as suggested in an OpenAI paper published in September.
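For the technically curious, the abstention idea boils down to something like the sketch below: only return an answer when the model’s confidence clears a threshold, otherwise say “I don’t know.” The ask_model function and its confidence score here are hypothetical stand-ins for illustration, not a real API or the paper’s actual method.

    # A minimal sketch of abstention, assuming a hypothetical model call that
    # also reports a confidence score between 0 and 1 (e.g., derived from
    # token probabilities or a self-rating prompt).
    def ask_model(question: str) -> tuple[str, float]:
        # Placeholder for a real LLM call; not implemented here.
        raise NotImplementedError

    def answer_or_abstain(question: str, threshold: float = 0.8) -> str:
        answer, confidence = ask_model(question)
        if confidence < threshold:
            # Declining to answer beats a confident wrong answer under
            # evaluation schemes that penalize guessing.
            return "I don't know."
        return answer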

Still, despite efforts to curtail hallucinations, actually getting rid of them is likely impossible. Though Blackburn’s letter calls on Google to “Shut it down until you can control it,” the question of how much hallucination AI users can tolerate remains unanswered.

LINKS

  • Inbound by Resend: An email platform for developers, now featuring new use cases for replying to in-app emails and receiving support emails from users. 

  • Canvas in Gemini: Google’s flagship AI model will now create presentations from single prompts. 

  • Multifactor: Securely share account access with both humans and AI, without sharing passwords. 

  • Moss: Real-time semantic search for conversational AI, allowing you to have conversations with AI assistants that feel realistic.

GAMES

Which image is real?


POLL RESULTS

AI companionship apps should be regulated most like...

  • Social media platforms (15%)

  • Mental health services (14%)

  • Dating apps (12%)

  • Video games (9%)

  • Gambling/addictive substances (20%)

  • They shouldn't be regulated (13%)

  • Other (explain) (17%)

The Deep View is written by Nat Rubio-Licht, Faris Kojok, and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“It looks messy, but when you look at the jumbled mess of people in the other photo, it's pretty obvious.”

“Text on the field was clearer and more natural looking.”

“In the fake one, the diamond was off center, the bases weren't even in the diamond and there was crazy squiggly baselines.”

“[The other image] looked like too much content was repurposed. The quality of [this] was better, but phone cameras can take some amazing pictures.”

“This just looked real…”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.