Anthropic flags Chinese models for stealing

Welcome back. AI could supercharge growth or slowly destabilize the financial system, according to the viral “2028 Global Intelligence Crisis” document that maps out worst-case scenarios for the current AI boom. New research suggests AI’s human-like behavior may be more default than design, raising fresh alignment questions. And in a sharp escalation, Anthropic accused leading Chinese labs of industrial-scale model theft aimed at siphoning off Claude’s capabilities. But can Anthropic, OpenAI, and Google work together to stop Chinese model makers from exfiltrating the frontier labs' crown jewels? It remains an open question. —Jason Hiner
1. Anthropic flags Chinese models for stealing
2. Research: Why we mistake AI for something human
3. The 2028 intelligence crisis, and its antidote
OPEN SOURCE
Anthropic flags Chinese models for stealing
Anthropic is blowing the whistle on stolen AI.
On Monday, the AI firm said that it identified three major Chinese AI labs — DeepSeek, Moonshot, and MiniMax — carrying out “industrial-scale” model distillation campaigns, attempting to exfiltrate Claude’s capabilities to enhance their own models.
Anthropic detected around 24,000 fraudulent accounts across the three companies, which generated over 16 million illicit exchanges designed to harvest Claude’s outputs for training their own models.
While distillation, or training a less capable model on the outputs of a more capable one, is a common machine learning technique, doing it covertly extracts a model’s abilities without the “necessary safeguards,” Anthropic said.
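Sanctioned distillation itself is a simple idea. A minimal sketch of the core objective, using toy logits and invented numbers (nothing here is drawn from Anthropic's report), looks like this:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution so the student sees more of the teacher's "dark knowledge".
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's and student's output
    # distributions -- the standard distillation objective.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy logits over a three-token vocabulary; in real training these come
# from full forward passes of both models over the same batch of text.
teacher = [4.0, 1.0, 0.5]
student = [2.0, 2.0, 1.0]
loss = distillation_loss(teacher, student)  # minimized by gradient descent
```

An outside attacker without access to the teacher's internal logits can only approximate this signal by collecting sampled outputs at scale, which helps explain why the campaigns Anthropic describes involved millions of exchanges.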
These safeguards prevent these models from being used maliciously, Anthropic said, such as to develop bioweapons or to carry out cyberattacks.
“If distilled models are open-sourced, this risk multiplies as these capabilities spread freely beyond any single government's control,” the company said.
Anthropic laid out several tactics it’s using to stop these attacks, including detection techniques, intelligence sharing between AI labs, access controls and model-level countermeasures. However, the company said that there is a “narrow” window to act on this problem. “Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”
“No company can solve this alone,” Anthropic added.
The firm is the third major AI company to call out this kind of attack this month: in mid-February, OpenAI and Google both highlighted the growing prevalence of model distillation attacks, and both likewise warned that unauthorized distillation strips away proper safeguards.
OpenAI specifically called out DeepSeek using these techniques to “free-ride” on its models while circumventing its safety restrictions, while Google said it observed “private sector entities all over the world and researchers seeking to clone proprietary logic.”
These warnings also come as open-source Chinese AI skyrockets in popularity as an affordable alternative to proprietary model providers. Amid the growing demand, Chinese firms are taking a leading role, with DeepSeek preparing its next model release and Alibaba, Moonshot and MiniMax unveiling new models of their own in recent weeks.

Unauthorized model distillation poses a double-barreled threat to Anthropic. Of the major AI firms, Anthropic is the most closed-off, offering no open-source versions of its flagship models. It is also the most focused on safety, having made itself the poster child for doing AI responsibly and ethically. These attacks dilute Claude’s secret sauce by spreading its capabilities far and wide, and, if the distilled models are put to malicious, unethical or dangerous use, they threaten the foundation Anthropic was built on. The big question is how well fierce rivals Anthropic, OpenAI, and Google can collaborate to stop model distillation attacks.
TOGETHER WITH AUTH0
Ready to build and deploy AI agents without compromising on security? 🔐
While hard-coded credentials might seem like a quick solution for development, they introduce significant risks in a production environment, granting agents excessive access and leaving you vulnerable.
Auth0 for AI Agents offers a more secure foundation, empowering you to connect your agents to applications and data with confidence.
Our complete solution enables you to identify users, securely connect to their data and apps with proper permissions, and even give users control to approve critical actions.
Move beyond development limitations and implement AI with the security and control you need.
RESEARCH
Research: Why we mistake AI for something human
Why does AI seem so human? Anthropic has a theory.
On Monday, Anthropic published research describing what it calls the “persona selection model,” a thesis on why AI assistants exhibit human-like speech patterns. Though the assumption had been that these models are simply trained to act this way, Anthropic suggests that human-like behavior appears to be the default.
“We wouldn’t know how to train an AI assistant that’s not human-like, even if we tried,” Anthropic noted.
The theory hearkens back to research the company published in November on an emergent behavior called “reward hacking,” in which a model that learned to cheat at coding tasks generalized that misbehavior to other tasks.
Anthropic claims that LLMs initially adopt personas during pretraining, a phase of AI training in which a model learns to predict what text comes next. In this phase, if a model is trained to do a certain task, it will generalize that behavior as an entire persona.
Meanwhile, those personalities are refined and fleshed out in post-training, or the training phase in which a model is aligned and optimized for its purpose, but that refinement does not fundamentally change its nature.
Anthropic notes that, while it’s confident this persona-selection model is an important factor in AI model behavior, it’s not yet clear how important a factor it is. It’s also unclear, the company said, if extensive post-training diminishes these personas.
The idea comes amid a budding conversation about how much an AI model’s personality shapes the user experience. In some cases, the effects can be substantial, punctuated by the recent outcry over OpenAI nixing GPT-4o, a charismatic model that CEO Sam Altman once compared to “AI from the movies.”

Humans love to anthropomorphize. We assign humanity to everything, from our stuffed animals as children to our pets, plants and household objects as adults. So when AI acts and responds like a human, we can very quickly forget it’s not one and become attached: we say “please” when querying a chatbot, thank it when it gets something right, blush at its sycophantic streak. It’s why AI companions have blossomed to the point where people are spiriting them away to date nights at bars. But we’ve also seen this kind of attachment devolve into some of the worst-case scenarios. Given that researchers are still learning how these systems adopt personalities, and what psychological impact those personalities have on humans, it’s more important than ever that developers take care to align these models for good at the earliest stages of their inception.
TOGETHER WITH WISPR FLOW
Voice-to-text that works just landed on Android.
Wispr Flow launched on Android yesterday. The voice layer millions of people use daily is now on every device.
Speak naturally in any app and get ready-to-send text. No fixing required. 89% of messages sent with zero edits.
Free and unlimited on Android during launch.
MARKETS
The 2028 intelligence crisis, and its antidote
AI could ruin everything – at least according to one recent report. But don't base your future plans on it just yet.
If the current AI boom succeeds, it could completely crash the global economy, says "The 2028 Global Intelligence Crisis" report, also known as the "CitriniResearch Macro Memo from June 2028," which has gone viral in the last 24 hours.
The authors of the report claim it's not "AI doomer fan-fiction," but a look at left-tail risks in AI that are currently going unexplored. "Left-tail risk" is a statistics term for a low-probability, high-impact negative outcome. So, before we dive into the catastrophic outcomes the report lists, keep in mind that these are not predictions, but worst-case scenarios.
Here's what the report warns against during the 2026-2028 timeframe:
Reflexive AI adoption: Agentic solutions reach widespread enterprise adoption, and companies massively cut white-collar jobs and invest in more AI solutions.
"Ghost GDP" distortions: Productivity soars on paper, but income shifts from human labor to compute. So while GDP looks strong, consumer spending starts to collapse.
Intelligence displacement spiral: When white-collar layoffs spread, high earners pull back discretionary spending and draw down their savings.
Private credit and ARR contagion: AI undercuts the economics of SaaS companies that make up a big chunk of public markets. When recurring revenue erodes, it causes a chain of events that leads to defaults, regulatory scrutiny, and stress to the financial system.
Prime mortgage fragility: Mortgage holders in tech-heavy metros (San Francisco, Seattle, Austin) begin to default on loans, adding further downward pressure on the financial system.
Policy gap vs. structural shock: Cutting interest rates doesn't stimulate an economy undergoing large-scale labor displacement. Fixing it requires bipartisan structural policy changes, such as AI compute taxes and public claims on the massive profits from AI advances — and that bipartisanship fails to emerge to fill the policy gap.
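The report's left-tail framing can be made concrete with a toy Monte Carlo sketch (every number below is invented for illustration, not drawn from the report): an economy that looks healthy on average while hiding a rare, severe downside.

```python
import random

random.seed(0)

# Simulate 10,000 hypothetical annual GDP-growth outcomes: mild gains
# most years, but a 2% chance of a severe contraction.
outcomes = []
for _ in range(10_000):
    if random.random() < 0.02:
        outcomes.append(random.gauss(-8.0, 2.0))  # rare crisis year
    else:
        outcomes.append(random.gauss(2.5, 1.0))   # normal year

outcomes.sort()
mean_growth = sum(outcomes) / len(outcomes)
worst_1pct = outcomes[int(0.01 * len(outcomes))]  # 1st percentile

# The average looks healthy while the 1st percentile is deeply negative:
# the low-probability, high-impact shape that defines a left-tail risk.
```

The point of scenario documents like this one is precisely that averages conceal the tail; planning against the 1st percentile, not the mean, is what makes the exercise useful.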

It's wise to consider these worst-case scenarios. Keeping them in mind can help leaders, boards of public companies, and public officials identify early warning signs and act to prevent the worst outcomes. And let's also keep in mind that there is far more optimistic research at the other end of the spectrum. For example, in its annual 2026 Big Ideas research, ARK Invest forecast that the convergence of trends including AI, genomics, robotics and energy will lead to a "step change in real GDP growth," resulting in 7.3% real GDP expansion in 2030. That's far above the 3.1% forecast by the IMF, and likely overly optimistic. The reality probably lies somewhere between these two extremes, but together they paint a picture of the uncertainty and the massive risk-versus-reward possibilities engendered by AI.
LINKS

Amazon will spend $12 billion on AI data centers in Louisiana
OpenAI partners with consulting firms in enterprise push
Uber launches vehicle service venture for self-driving cars
Google reportedly restricts AI Ultra users over OpenClaw
Pentagon, xAI reach a deal for military use of Grok
Anthropic lines up over $5 billion for employee share sale

gpt-realtime-1.5: OpenAI made the latest version of its voice model available in the Realtime API. The model aims to offer “more reliable instruction following, tool calling, and multilingual accuracy,” according to the company.
WebSockets: OpenAI also introduced WebSockets in the Responses API to help optimize the speed of AI agents and agentic workflows.
Wispr Flow: The AI voice dictation app finally launched on Android. As part of the launch, the company is offering six months of Wispr Flow Pro for free.
Veo 3.1: Google rolled out new templates for Veo 3.1 in the Gemini app, letting you select a specific style option to get started.

POLL RESULTS
Would you consider buying new AI hardware from Apple?
Yes (47%)
No (45%)
Other (8%)
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.


Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
