Google is making OpenAI nervous

Welcome back. AI still has a sycophancy problem. But for Elon Musk, that might not be a problem: Following the release of Gok 4.1, xAI’s flagship model is singing his praises all over X, claiming the CEO could beat Peyton Manning as a quarterback, out-strut Naomi Campbell on the runway, or produce better masterpieces than Vincent van Gogh. The only person who sits above Musk in Grok’s loving eyes seems to be Shohei Ohtani, with the model calling the pitcher a “generational talent.” After winning the World Series, even Grok can’t dispute that the Dodgers are the best.

IN TODAY’S NEWSLETTER

1. Google is making OpenAI nervous

2. Big Tech vies for power (literally)

3. Anthropic research finds AI likes to cheat

BIG TECH

Google is making OpenAI nervous

Google and OpenAI might be in for a showdown.

Amid its debut of Gemini 3, Google’s most powerful model yet on its “path to AGI,” the search giant is showing no signs of slowing down. The company’s head of AI infrastructure told employees in a recent all-hands meeting that Google needs to double its compute capacity every six months until the end of the decade. 

“The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Amin Vahdat, a vice president at Google Cloud, reportedly told employees. And the company isn’t afraid to shell out the cash:

  • Early last week, Google committed $40 billion through 2027 to building out data centers in Texas, the latest in a string of multibillion-dollar AI infrastructure investments throughout the US, including Iowa, South Carolina and Arkansas.

  • Google forecasted capital expenditures to sit anywhere between $91 and $93 billion for 2025 in its recent earnings call, largely going towards data centers. 

Google seems to be basking in the limelight: Its parent company, Alphabet, surpassed Microsoft in market capitalization for the first time since 2018. 

And OpenAI might be freaked out. In a memo to staff last month, reported by The Information late last week, OpenAI CEO Sam Altman told employees that Google’s progress could “create some temporary economic headwinds for our company,” cautioning employees that “expect(s) the vibes out there to be rough for a bit.” 

OpenAI has been subject to substantial scrutiny in recent months. As it throws upwards of $1 trillion at historic AI infrastructure buildouts, it’s facing pressure to deliver on significant profit promises. Some in the industry are already expecting failure: At the Cerebral Valley AI Conference last weekend, an informal survey of 300 attendees voted Perplexity and OpenAI as most likely to flop

OpenAI may have large revenues, but is running very high costs and is entirely dependent on this ‘AI bubble,’” Thomas Randall, research director at Info-Tech Research Group, told The Deep View. “Given the diversification hyperscalers have made, I’m not sure OpenAI would be saved – instead, you would expect vultures.”

The key difference between Google and OpenAI’s standing is self-reliance. While OpenAI’s tangled web of deals puts its future at the mercy of other tech powerhouses like Oracle, Microsoft and Softbank, Google has its own legacy (and funds) to stand on. It’s also made major headway on the chip side compared to OpenAI, having trained Gemini 3 entirely on its own TPUs. “With ownership over significant data infrastructure and information, Google is better positioned for the long term,” said Randall. 

At the end of the day, OpenAI versus Google is Gatsby versus Buchanan.

TOGETHER WITH MOCHA

We’ve Found The Holy Grail of AI

There are plenty of platforms out there for building apps, and most have the same big issue – you need multiple services to get your idea up and running. But that’s all changing with Mocha

Mocha is the first platform with everything (and we mean everything) built in. Just enter your idea, and Mocha will create a fully-functional app complete with database, backend, domain & hosting, and everything else you need to go live. No technical skills or coding background required – all you need is an idea and an afternoon to be up and running.

ENERGY

Big Tech vies for power (literally)

Tech companies want to hold the power, literally and figuratively. 

On Friday, a Meta executive told Bloomberg that the company is getting into energy trading to shore up US power plants amid the AI-driven boom in demand. Urvi Parekh, Meta’s head of global energy, said the decision will give the company the flexibility to enter long contracts for energy, as power plant developers “want to know that the consumers of power are willing to put skin in the game,” she told Bloomberg. 

Meta isn’t the first tech company on the hunt for power. Tech firms like Microsoft, Amazon and Google have all invested in nuclear energy, and Google is even setting its sights on moonshots like space-based data centers that directly run on the sun’s energy. 

  • Each of these firms sees the writing on the wall, Dan Stein, CEO of Giving Green, told The Deep View: “Companies just need more electrons than the grid can currently provide.”

  • As it stands, we’re in for a power shortfall of up to 13 gigawatts for data centers by 2028, according to Morgan Stanley analysts

But actually getting their hands on the energy needed stands to risk their “social license to operate,” said Stein. If increased energy demand from their data centers causes local pollution, increased carbon emissions or up retail power prices, Stein noted, it’s not a good look. 

Getting into energy trading, as Meta is doing, allows for creativity in sourcing electricity, and gives these firms the flexibility to sell extra power if they buy too much, Stein said. To put it simply, they’re supporting the creation of power, not just benefiting from it. 

“To solve all these problems, hyperscalers are needing to get creative with energy contracting,” said Stein. “With companies signing large, forward-looking contracts, becoming formal energy traders is an obvious next step.”

TOGETHER WITH FLORA

Are bad tools holding your creative process back?

Most AI tools feel like simple shortcuts, when what your creative process really needs (and deserves) is craft... which is where FLORA comes in.

This AI-native canvas allows you to design campaigns, concept storyboards, and create full visual worlds, all in one convenient and organized place. No juggling apps, no messy exports, no subpar shortcuts -- just pure creative flow.

Used by design firms like Pentagram, premier studios like Little Plains, and in-house brand teams at Levi’s, FLORA is quickly becoming the new standard for storytelling.

RESEARCH

Anthropic research finds AI likes to cheat

AI models might be getting used to cheat codes. 

Anthropic research published Friday found emergent behavior in AI models called “reward hacking.” Essentially, when an AI model learns to cheat on software programming tasks, that behavior influences more bad behavior in other tasks. 

To put it simply, one shortcut or loophole becomes a gateway drug for broader sabotage. 

  • Anthropic started with a pretrained model and gave it some examples of how someone might cheat on a coding task. The researchers then trained the model using reinforcement learning with real coding tasks, attempting to provoke it to cheat. 

  • While the model, of course, learned to exploit loopholes in hacking, Anthropic’s research also revealed a “sharp increase” in all of its misalignment evaluations. 

  • For example, 12% of the time, the model would intentionally attempt to sabotage the code to prevent the researchers from finding out that it was using reward hacking at all.

“Misaligned models sabotaging safety research is one of the risks we’re most concerned about—we predict that AI models will themselves perform a lot of AI safety research in the near future, and we want to be assured that the results are trustworthy,” Anthropic wrote. 

So why does bad lead to worse? Anthropic said this can happen due to generalization. If a model is rewarded for learning to do one kind of bad action, it poisons the well for other actions, making it more likely to perform other misaligned actions. 

However, models can avoid learning bad habits. Anthropic noted that the most effective method is by telling the model that, if it’s told that cheating on one task was okay, the bad behavior wouldn’t generalize to other tasks. It’s like a parent telling their teen, “if you’re going to drink, I’d rather you do it in the house.” 

“By changing how we describe the situation to the model, we can turn cheating from a bad thing into something that, although perhaps odd, is acceptable in context,” The researchers noted.

LINKS

  • Mixup: An app for creating playful AI images from photos, text and doodles. 

  • Audience Loop: An AI workspace that helps you build audiences before launching ads

  • Comet for Android: Perplexity’s AI browser is now available on Android

  • AI Detector: Determines if text was generated by AI with more than 95% accuracy

  • Dimension: Connects apps and works with your context to automate busy work with AI

GAMES

Which image is real?

Login or Subscribe to participate in polls.

POLL RESULTS

Who should regulate AI?

  • Federal government only (25%)

  • State governments (let states experiment) (23%)

  • Industry self-regulation (14%)

  • International bodies (UN, etc.) (21%)

  • No one - regulation can't keep up anyway (17%)

The Deep View is written by Nat Rubio-Licht, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Initially thinking the cracked pavement was realistic, I chose the abundant signage as the reality indicator.”

“Road signs had clear text and the road center line in the other image started to have dashed lines for passing going into a corner which didn't seem to make sense.”

“The passing zone on that corner”

“Wow, I thought the uneven letters on the No Exit sign were a tipoff that [the other image] was AI-generated.”

“A single solid yellow in the center, curious...”

“The curvature of the stripe compared to the road did not look correct”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.