Anthropic's growth may outpace risk control

Welcome back. Though data has always been the backbone of AI, enterprises are still struggling to get a grip on what’s at their disposal. Meanwhile, workforces that use AI face the challenge of losing certain skills. And as enterprises navigate their own AI strategies, Anthropic is navigating how its powerful models could affect society as a whole. Nat Rubio-Licht

IN TODAY’S NEWSLETTER

1. Anthropic growth may outpace risk control

2. AI's cognitive trade-off nobody is talking about

3. Why data is still enterprises’ number one AI pitfall

RESEARCH

Can Anthropic's safety keep up with scale?

Anthropic is growing like wildfire. The company is still sorting out what that means. 

On Thursday, the AI giant released the four primary areas of focus for The Anthropic Institute, a research effort it introduced in March to tackle some of the most pressing societal challenges that AI could present. These four areas each span several ways that frontier models can impact security, the economy and society broadly.

In the announcement, Anthropic said that its own “internal economy” has started to shift, new threats have emerged from the systems it’s building, and it’s seeing early signs that AI is helping speed up its own research and development. “In order to realize the full benefits of AI progress, we want to share as much of that information as we can.”

The research agenda includes: 

  • Economic diffusion: This branch will study how powerful AI systems could alter the economy, including who adopts them, how they affect productivity and, of course, their impact on the job market and the shifting role of paid labor.

  • Threats and resilience: This area takes a wide view of the word “threat,” studying the double-edged sword of AI in things like health and education, its impact on geopolitics, its use in surveillance, and the “offense-defense balance” of its deployment in cybersecurity and bioweapons.  

  • AI systems in the wild: This branch will study how society interacts with this technology, aiming to decipher how AI is impacting critical thinking skills, behavior, values and beliefs. This also takes aim at where AI agents fit into existing laws, governmental frameworks and systems of accountability. 

  • AI-driven R&D: This branch studies what happens when AI carries out research and innovation with less human oversight, including how AI R&D is governed, how AI’s acceleration can be controlled, and what happens when these models start to build themselves.

This set of research priorities comes as Anthropic’s popularity explodes. At the company’s developer day event on Wednesday, CEO Dario Amodei told press that the company had planned to grow around tenfold this year, but could now see growth of as much as 80 times. The company’s breakneck growth has led it to partner with SpaceX to use all of the compute capacity available at its Colossus 1 data center, a deal that will be null and void if Anthropic’s AI engages in “actions that harm humanity,” SpaceX CEO Elon Musk said in an X post on Wednesday.

Amodei told reporters that this rate of growth is “too hard to handle,” and he’s “hoping for some more normal numbers.” Though the company’s growth is undeniable, reported revenue figures can be muddied by cloud partnerships with hyperscalers like AWS, Google and Microsoft, revenue that Anthropic reportedly may overcount, according to The Information.

Taking all of this into consideration, Anthropic is walking a tightrope. The company has long staked its reputation on putting safety, ethics and responsibility first. Still, AI’s risks have already started to bubble up, from the security vulnerabilities of vibe-coded apps and websites to the mental health crises and cognitive atrophy that can come from over-relying on this technology. The question remains: even as the company studies these models’ risks to humanity, could the technology’s meteoric rise make those risks materialize faster than our ability to combat them?

Nat Rubio-Licht

TOGETHER WITH CHECKSUM

Your AI writes code faster. Who's testing it?

63% of engineering teams now ship code faster with AI. 72% have already had a production incident from AI-generated code. The bottleneck didn't disappear. It moved downstream.

Checksum is an AI-native continuous testing platform that auto-generates and self-heals your E2E test suite, runs inside your existing CI/CD pipeline, and keeps pace with the velocity you're already getting from AI coding agents.

Clearpoint Strategy saves $500k a year. Postilize ships 30% faster. Stellic reduced manual testing time by 40%.

WORKFORCE

AI's cognitive trade-off nobody is talking about

Knowing how to use ChatGPT at work is not enough to succeed on its own. 

Companies are pushing AI literacy, or the ability to integrate AI tools into workflows, as essential for staying competitive in the workplace. But Jayney Howson, chief learning officer at ServiceNow, says that directive misses something more fundamental. 

As workers offload more thinking to AI, some are starting to outsource their judgment, losing the ability to read people, make calls under uncertainty, and trust their own instincts, which can weaken decision-making and degrade the quality of their work.

“I really do fear that we’re sleepwalking into losing those [soft] skills,” Howson told The Deep View at the ServiceNow Knowledge 2026 event.

ServiceNow is betting heavily on upskilling as its strategy for adapting to AI. Through ServiceNow University, a learning platform launched a year ago, more than 2 million people have enrolled in courses designed to teach workers how to use AI in their day-to-day roles.

Howson sees AI upskilling as essential for productive workplaces. But she is also starting to see how that shift is changing how employees think on the job.

In some cases, workers are relying on AI to structure their thinking rather than using it to support their own judgment, making it harder to operate without it. That dynamic is most visible in roles that depend on human instinct.

In sales, for example, AI is used to generate meeting briefs, summarize customer histories, and prep for calls, helping teams move faster and land bigger deals. But it can also make it easier to default to AI-generated answers instead of responding in the moment.

A strong salesperson might sense from a customer’s reaction that it’s not the right time to push a deal, even when the data suggests otherwise, and hold back. That kind of judgment is easy to lose if it’s no longer practiced.

“I sit in meetings with sales professionals who don’t have the same confidence in understanding the customer because they’re leaning on AI,” Howson said.

What Howson describes extends beyond sales. Every department is at risk of cognitive offloading. 

To walk the line between AI and human capabilities, Howson says workers need a mix of three skills: the core skills required for their role, the ability to use AI effectively, and the judgment to know when not to rely on it.

“You have to build that human ability just to be okay being a bit messy,” Howson said. “If you don't have that confidence in yourself, you will not build those AI skills.”

There are clear limitations to AI literacy. As companies push large language models into the workplace through mandates and upskilling requirements, workers may be quietly forfeiting their skills to AI tools. Even if overall productivity improves, cognitive offloading can leave workers with a weaker grasp of how to do their jobs, turning AI into a crutch. If those systems fail or are unavailable, employees may struggle to rely on their own judgment when making decisions, weakening their performance. Companies that want to reap the benefits of AI will need to balance adoption with the preservation of human judgment and instinct.

Aaron Mok

TOGETHER WITH ORACLE NETSUITE

Guide: Financial storytelling meets AI

Your job isn’t just presenting numbers—it’s getting leaders to interpret what matters and decide what to do next. Don’t let strong analysis end with a polite nod.

Download Financial Storytelling in the Age of AI to apply the “Made to Stick” SUCCES(S) framework—plus AI-assisted drafting—to make messages clearer, more credible, and easier to remember.

Get Your Guide

GOVERNANCE

Why data is still enterprises’ number one AI pitfall

Teradata wants to remove one of the biggest roadblocks keeping companies from getting AI from pilot to production: their data. 

In the pilot phase, enterprises can curate data in very specific ways that don’t always translate to the live data used in production-grade AI, Steve McMillan, president and CEO of Teradata, told The Deep View. This was the driving force behind the company’s Autonomous Knowledge platform, unveiled Thursday, designed to serve as a hub for customers' data and AI tools, with a focus on supporting agentic solutions.

The product addresses a gap Teradata identified through more than 150 proof-of-concept runs in 2025 and a study of more than 1,000 customers.

“Once you are dealing with your live enterprise data, it can be very messy, and so we help our customers curate that data in such a way that we can take the messiness away, structure it in the right way, so that you can have a trusted data platform feeding your agentic solutions,” McMillan told me at the company’s launch event. 

Though enterprises are aware that data is the foundation of AI, the problem has evolved. Now, companies recognize the value of their data, but aren’t structuring and governing it well enough to actually deploy AI at scale.

Here are a few things that enterprises can do to help themselves: 

  • Trusting your tooling is key. “We have an approach in Teradata that says, keep your enterprise data together, have it well governed, so that you can trust it, and then deploy tools on that data so that you're not replicating it, you don’t have to take security risks,” said McMillan.

  • Cost efficiency is all about context. “In the realm of tokenomics, context is important, because when you have the right context, the right experience, you will (add) nuance to your questions, your queries, which basically means you will use fewer tokens, right? Because a lot of the stuff is already provided,” said CPO Sumeet Arora. 

  • AI systems are smart, but not wise. “I'm often found describing them as like a fifth grader with a PhD, they [AI models, LLMs] are incredibly intelligent in the sense that they have all of these frameworks trained into them…but they know nothing about your company…so you have to understand how the data relates to how you work and choices you make, because that is the context that you must give to the fifth grader in order to have them wield all of the frameworks and tools and generalized knowledge to make choices and to guide decisions,” said CTO Louis Landry.

It's interesting that nearly four years after AI’s explosion, data remains one of the biggest bottlenecks for companies. I remember hearing the refrain "garbage in, garbage out" from the outset, and Landry mentioned he first encountered it in college; given that he's been in the industry for more than twenty years, it has long been a guiding principle. Teradata joins myriad companies tackling the problem with products specifically designed for the agentic era. That gives customers more options to get ahead of it before increased autonomy arrives, which will only add further pressure on enterprises to shore up the quality and governance of their data.

LINKS

  • Save to Spotify: The audio app’s new command-line tool lets agents upload AI-generated podcasts   

  • Gpt-oss-20b-tq3: A new 20 billion parameter model for Apple Silicon, good for chatbots, creative writing and coding assistants

  • Amazon Bedrock AgentCore Payments: A set of features that enables agents to access and pay for things they use, including web content, APIs, MCP servers, and other agents

  • OpenAI Voice Models: OpenAI just launched three new voice models, GPT‑Realtime‑2, GPT‑Realtime‑Translate, and GPT‑Realtime‑Whisper

Build a strong team, without the usual hiring headaches

(sponsored)

GAMES

Which image is real?


A QUICK POLL BEFORE YOU GO

Do you think that AI is evolving faster than our ability to contain the risks?


The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The overexposed background sky looked like classic photography.”

“Using my logic, [this image] looked like a really good picture. [The other image] was too perfect to be real.”


“Gradual change in distance focus was much better in this photo.”

“[This image] is too perfect in every way. Have you ever tried to get a puppy to pose, or the beach to glisten?”


“You would get the dog framed like that and it looking at the lens and pin sharp for the lens used. Do dogs even look at camera lenses?”


“[This image] puts everything in focus, background, foreground, shadows etc. etc. So fake.”

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.