AI's utopian promise masks a race for power

When I think about the grand promise of AI, I think about a TikTok that recording artist Grimes posted in 2021.
In the video, the musician makes the bombastic argument that AI will enable the “fastest path to communism,” painting the picture of a utopia in which no one has to work, farming and production are automated, and corruption is rooted out, asserting that the tech could “solve for abundance.”
When I first watched this in 2021, I thought what everyone else thought: That’s just Grimes being Grimes. Five years later, I look at an industry that has completely taken over the zeitgeist and realize her off-the-cuff post is not that far from the narrative driving the tech industry’s AI fever.
Though the cyber-utopian vision that Grimes describes is optimistic to the point of fantasy, the core idea (if you take out her references to communism) holds a similar promise to the one touted by the biggest names in tech today: That AI is going to be the most important catalyst for humanity’s advancement.
It’s the pledge that the entire industry rests upon, the rationale used to explain every dollar, every data center and every gigawatt of power. The more these companies and tech leaders justify their efforts in this way, the more power they manage to accumulate, both figuratively and literally.
Let’s take OpenAI’s recent historic $110 billion funding round as an example. A few weeks ago, the company raised more money in one funding round than the GDP of 121 countries at a valuation that’s more than the GDP of 168 countries. The round was supported by some of the biggest kingmakers in technology, including $30 billion from Softbank, $30 billion from Nvidia and $50 billion from Amazon.
The stated purpose of this astronomical sum? “Scaling AI for everyone,” according to OpenAI.
“Leadership will be defined by who can scale infrastructure fast enough to meet demand, and turn that capacity into products people rely on,” OpenAI said in its announcement. “This funding and these partnerships let us do both, and move faster on our mission to ensure AGI benefits all of humanity.”
In statements, the leaders took the opportunity to wax poetic about all the good their technology is capable of bestowing upon humanity. OpenAI CEO Sam Altman said the funding will allow OpenAI to “turn real scientific progress into systems that deliver meaningful benefits for people at global scale.” Nvidia CEO Jensen Huang, meanwhile, said the partnership will enable the companies to scale the benefits to “industries and societies worldwide,” and Masayoshi Son, CEO of SoftBank, said the round will allow it to advance its own “ASI [Artificial Super Intelligence] strategy,” which itself aims to evolve humanity with AI “ten thousand times more intelligent than human wisdom.”
The $5 trillion race to scale AI
This same set of players is involved in the biggest infrastructure buildout in history, a buildout that has filled the coffers of one company above all others: Nvidia. Jensen Huang, CEO of the gaming chipmaker-turned-AI behemoth, stood before an audience of 30,000 eager attendees in a stadium in San Jose in mid-March and announced that the company was well on its way to $1T (yes, trillion with a T) in data center revenue through 2027. Huang later told the press that the figure only refers to sales of its Grace Blackwell and Vera Rubin chips, and that the final result will likely exceed that.
To put one trillion dollars in context: that figure alone puts Nvidia’s chip sales roughly on par with the GDP of Taiwan.
Nvidia’s strategy has long been to feed the ecosystem. Provide the chips, the software platform (CUDA), the open models and whatever else an AI builder needs to be successful. But in doing so, Nvidia has baked itself into the very foundation of the industry. As The Deep View editor Jason Hiner wrote last week, “Nvidia doesn’t just want to be a strong competitor in the AI space, it wants to be the game maker.”
Nvidia’s success caps 12 months studded with multi-billion-dollar data center deals that have also benefited the company, inked by the likes of OpenAI, Oracle, Amazon, Google, Meta, Microsoft and Softbank, all aimed at deploying massive amounts of infrastructure in service of a single goal: scale. And while the Stargate deal, a lofty $500 billion project announced at the start of 2025 that aimed to bring about 10 gigawatts of data center power, is no longer on solid ground, it’s far from the only project in development, with demand predicted to nearly triple by 2030.
As OpenAI said, “Leadership will be defined by who can scale infrastructure fast enough to meet demand.”
OpenAI, however, has a foil: Anthropic. Founded by OpenAI defectors, Anthropic’s stated purpose is to build AI in the most responsible way possible. Yet even after years of painting itself as the safest AI lab, the company loosened its long-held ethical standards in recent weeks, changing its Responsible Scaling Policy to remove pledges to hold back models when it can’t guarantee proper risk mitigations ahead of release, and dropping a commitment not to train models beyond a certain capability level without safety measures in place.
In an interview with TIME about the changes, Anthropic’s chief science officer Jared Kaplan said “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.” Kaplan said that holding itself back from training models “wouldn't actually help anyone.”
Though Anthropic still has the moral integrity to say it doesn’t want its models used for acts of literal warfare, the point remains: Anthropic has loosened its own guidelines in the name of building bigger, better and faster models that advance us towards an AI-enabled future that promises greater prosperity.
And why does Anthropic need to get there so quickly? To beat the other AI companies to it. The company believes that if it controls cutting-edge AI, it will be a more benevolent leader than its rivals.
But at the end of the day, OpenAI, Anthropic and every other major AI firm want the same thing: To be at the helm of the technology they believe will advance humanity to heights unachievable by human minds alone.
And with every funding round, every event that boosts share price or revenue, every data center deal, more and more power centralizes into a small set of hands connected to a smaller set of companies. And while these companies are fighting amongst themselves over customers, power, and cash, they largely control an ecosystem that’s potentially worth multiple trillions of dollars.
And with trillions of dollars at stake, no one really does anything for free.
As such, we must question their motivations. Can tech companies do things primarily for the good of humanity? Or are these grandiose aspirations of human evolution and progress used as a justification to make more money, centralize their power and ultimately impose their will?
Let’s get one thing straight: There is no such thing as financially motivated altruism.
Because these corporations are either owned by shareholders or soon will be, supercharging humanity can never be the primary goal. The goal will be to assert their influence and impose their vision. To do so, they have to maintain the narrative that it’s all for an important purpose and that the ends will justify the means.
As Luis Lastras, director of language technologies at IBM, told me in a recent interview for The Deep View: “There's a ton of risk if very few people decide what this machine should be. There's a tremendous amount of risk … We don't want our values to be concentrated into very few decision-makers. We're definitely at risk of that happening, because I think it’s the way capitalist societies [naturally] work.”
The struggle for control, legacy and influence
We’ve seen this scenario play out in the social media industry. Facebook, for instance, was originally built on the promise of fostering global connection. And as it expanded into a social media empire that reaches billions of people, it’s undeniable that it achieved that goal. But the point was never solely to connect people; it was to make money off of them. And along with earning billions upon billions, Facebook, now Meta, has concentrated so much power that it can sway elections and get children addicted to its algorithms.
This is not to say that the AI these companies develop cannot be used for good. AI is being used to identify the genetic drivers behind some of humanity’s most pervasive diseases. It’s being used to reduce medical errors in clinics. It’s helping scientists improve climate forecasting, assisting farmers in adopting regenerative agriculture practices, and supporting astronomers in tracking cosmic events.
And for the sake of argument, let’s consider a scenario in which AI is used solely and entirely for this good. Even in this win-win scenario, consider who wins more. In controlling the models, the data centers, and now the energy that may be the foundation for human progress, these companies automatically attach themselves and their names to every win of humanity, thereby centralizing innovation, power and progress entirely in their hands.
Naturally, the other side of the coin is all the heinous outcomes that could come as a result of AI, and the risks that come with centralizing power in the hands of organizations whose primary responsibility will soon be to make money, grow rapidly, and please their shareholders.
The warnings have long been piling up that AI can be used to create bioweapons, develop and drive misinformation campaigns with little effort, enable widespread cybercrime and fraud, diminish critical thinking skills, lead to mental health crises and cause widespread impacts to the labor market.
We also have to be mindful of the perverse incentives that could develop as these companies create technologies far more powerful than the computer, the internet, and the mobile phone combined.
Right now, a handful of companies are vying for control, legacy and influence over a generational shift.
While their goals may sound neutral or even noble, the outcomes could still be catastrophic. That's why we have to continue to ask the tough questions about how power is consolidating in the AI market, not race to crown a handful of winners.
This doesn’t mean that AI can’t live up to its potential, but that potential doesn’t inherently benefit the public good. Naturally, it first benefits those in control of the infrastructure, capital and technology. Take Google, which was built on the promise of democratizing information, and then became the world's largest advertising company. The AI industry is making far bigger promises with far more at stake.
If we accept the narrative that this technology is too important to slow down, we should also ask: too important for whom?