Claude Code leak: What happened, what's next

Welcome back. AI coding is reshaping hiring faster than expected, with most engineers now working alongside agents and shipping AI-generated code into production. And surprise, surprise: that shift is expanding demand for developers, not shrinking it. Apple, meanwhile, is tightening control over vibe-coded apps that bypass its review process, signaling how serious it remains about security and platform control. And Anthropic is dealing with fallout from the major Claude Code leak that exposed its secret sauce and future plans. As AI scales, process failures and trust gaps may matter as much as model breakthroughs. —Jason Hiner
1. Claude Code leak: What happened, what's next
2. Survey: AI coding shifts hiring trends
3. Why Apple forced a reckoning on vibe coding
BIG TECH
Understanding the Claude Code leak fallout
Anthropic has dominated headlines after the internal source code of its flagship product, Claude Code, leaked across the web. As the dust settles, here's a breakdown of what happened and what comes next.
On Tuesday, an X user revealed that Anthropic had leaked roughly 512,000 lines of Claude Code's internal source code via a map file included in its npm package. (npm is the public registry where developers download and share JavaScript software.) Anthropic has since clarified that no customer data was included in the leak and that the release was due to human error, not a security breach.
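For context on how this kind of slip can happen: npm publishes every file in a package directory that isn't explicitly excluded, so build artifacts like .map source map files ship alongside the compiled code unless the package whitelists what it publishes. A minimal sketch of the relevant package.json fields (the names here are illustrative, not Anthropic's actual configuration):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js"
  ]
}
```

With a `files` whitelist like this, `dist/**/*.js.map` files are left out of the published tarball, and running `npm pack --dry-run` before publishing lists exactly which files would ship.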
The fact that it was human error lends Anthropic some credibility, reassuring users that its systems were not compromised by external attackers. However, it highlights a different kind of vulnerability, one rooted in internal process and oversight, as noted by Amy Chang, Head of AI Threat Intelligence and Security Research at Cisco.
“These incidents highlight the universal struggle to mitigate the impact of human error in complex development cycles,” Chang told The Deep View. “These security incidents leave practical takeaways for consumers of any generative AI technology to treat these tools as untrusted collaborators, not trusted infrastructure.”
The leaked files reveal not only how the model operates, including a complex memory architecture, but also unreleased models and features planned for Claude Code, as reported by VentureBeat and The Information. Among them:
Kairos: A collection of updates that allow Claude to work in the background and message you on mobile, similar to OpenClaw
Buddies: Virtual duck avatars that help visualize coding agents and infuse the platform with more personality, with the hopes of getting “sustained Twitter buzz”
Capybara: The codename for a Claude 4.6 variant positioned a step above Opus in capability; Anthropic is already iterating on Capybara v8
While no customer data was leaked, the issue does give competitors, such as OpenAI, an opportunity to look at the proprietary technology and use it to build their own tools. A Claude Code competitor even told The Information that it may change its product development plans to beat Anthropic to the release. The leak also gives bad actors insight into how the model works, which could make it more vulnerable to future attacks.
As a result, it is in Anthropic’s interest to take the code down. According to a new Wall Street Journal report, by Wednesday morning Anthropic representatives had used a copyright takedown request to try to force the removal of over 8,000 copies and adaptations of the code shared on GitHub, later narrowing the request to just 96 copies. The original X post revealing the leak now has 33.2M views.

Some of the biggest risks this leak poses to Anthropic are reputational. This is the second incident of its kind in two weeks, following the accidental publication of a blog post detailing Claude Mythos, its next flagship model. For a company that has built its reputation, and differentiated itself from competitors, by prioritizing customer safety and privacy, this is not a good look. Hopefully, it serves as a reminder for Anthropic to shore up its security processes before it gets painted with Silicon Valley's "move fast, break things" reputation, an approach that could pose enormous risks in AI.
TOGETHER WITH CRUSOE
Crusoe: deploy fine-tuned models with zero infrastructure headaches
Work with our team to deploy your fine-tuned model on a platform built for performance.
Use Crusoe Managed Inference to unlock breakthrough speed and throughput without the infra overhead.
Start deploying with Crusoe
RESEARCH
Survey: AI coding shifts hiring trends
More developers than ever are relying on agents to do their work for them.
A recent survey of 450 US software engineers from CodeSignal found that 91% reported using agentic AI coding tools, such as Claude Code, Codex, and Cursor, in their day-to-day work. Additionally, more than three-quarters of those engineers have shipped AI-generated code into production over the past six months.
The data adds to the broader narrative that the role of an engineer is transforming as their task load shifts from software coder to AI orchestrator. And despite fears that AI will kill software engineering jobs, job postings for developers are up year-over-year, as novice-led vibe coding ushers in an era of custom software that requires more in-house expertise to maintain.
“Software development has fundamentally changed,” said Tigran Sloyan, co-founder and CEO of CodeSignal. “Engineers are no longer coding alone; they’re working with AI agents, and the best ones know how to get the most out of them.”
It’s why, for engineers, AI skills may become non-negotiable. According to CodeSignal’s survey, 73% of engineers reported that not adopting these tools puts them at risk of becoming less competitive, and 42% reported that they’d be hesitant to hire or work with a developer who doesn’t use them.
And as these skills become more in demand, CodeSignal debuted agentic coding assessments designed to gauge engineers’ AI readiness. The assessments evaluate whether engineers can use agentic tools to build working solutions and explain their technical decisions to reviewers, rather than simply testing whether they can build algorithms or write code by hand.
“The companies that figure out how to hire for—and develop—those skills will have a real advantage,” Sloyan said.
And one thing is clear: AI coding tools are accelerating development time and driving down the cost of building software. That's increasing, rather than decreasing, the need for organizations to hire more developers to connect the dots and manage the code.

As these models become increasingly powerful, demand for AI skills is also rising quickly. The problem, however, is the limited resources available for developers to learn how to leverage these tools. University curricula can’t always keep up with the pace of change, and reskilling is becoming more popular than ever. It’s the sentiment that drives some of AI’s leaders to tell young people to abandon college altogether and just start building. Case in point: OpenAI CEO Sam Altman told an attendee at a January Town Hall meeting that for AI builders, it’s “probably not the best use of your time to be in university right now.” The question remains, in the face of such a monumental shift, whether universities and educators will be able to make the shift, too.
TOGETHER WITH GHOST
ghost - the free database your agent is missing
traditional databases are designed to be permanent. you provision one, name it, size compute, choose a region, back it up, keep it running for years. that's fine for human workflows. it's wrong for agents.
ghost gives your agent postgres databases that are instant, ephemeral, and disposable. the mental model is git, not RDS. spin up a database the way a developer creates a branch. do work. keep it or throw it away. fork before a risky migration. merge or discard.
ghost is free. unlimited databases, unlimited forks, 100 hours of compute and 1TB of storage per month, no credit card required. your agent discovers it through MCP and starts provisioning immediately. and because every ghost database is just postgres, your agent already knows how to use it. every LLM has postgres in the weights. no SDK. no proprietary query language. just SQL.
a thousand databases for a thousand parallel sessions, free.
BIG TECH
Why Apple forced a reckoning on vibe coding
If you think you can vibe code iPhone apps and slip them through the back door, think again.
Apple has recently cracked down on vibe coding apps that were bypassing its review process to rapidly deploy on iOS. This has even included banning the app Anything, a vibe coding platform that launched in November 2025 and allowed users to create their own apps far more rapidly than with traditional software development tools.
Dhruv Amin, CEO of Anything, told The Information that the platform helped enable its users to publish thousands of apps in the App Store in its first couple of months, before Apple's App Store review team started rejecting its updates in mid-December for violating Guideline 2.5.2 of the App Store rules. That guideline basically states that apps need to be self-contained and shouldn't spawn other apps.
This has also impacted other vibe coding apps, including Replit, Vibecode, and Bitrig, though none of those three has been banned or removed from the App Store.
A growing phenomenon whose name was coined in an Andrej Karpathy tweet from February 2025, vibe coding has taken the world by storm over the past 14 months. It has dramatically increased the amount of software being created by removing the need to know complex coding languages, allowing anyone to create custom software by simply describing it in natural language.
In a statement to The Deep View, Apple emphasized:
It is not against vibe coding and has no official policy banning vibe coding apps
What it objects to is apps delivering unreviewed software within an existing app, which bypasses the App Store's privacy and security safeguards that protect users
When an app falls out of compliance, the App Store team explains the violation and works with the software vendor to bring the app into compliance; for example, in the case of Replit, the team has had multiple conversations in 2026 and is continuing to work toward a resolution
Regarding the Anything case, Apple had no comment at this time, but we will update this story as we learn more (you can check the web version for updates).

I have to expect Apple is moving toward an "if you can't beat 'em, join 'em" strategy when it comes to vibe coding. The company is already halfway there with tools like Swift that streamline the process of building apps. With its new Google partnership to use Gemini models, Apple will have the AI firepower needed to launch its own vibe coding tools built on Swift, and potentially even a lighter version of Xcode. Strategically, this would also be a perfect fit for Apple's brand mission of empowering creativity at the individual level. And if Apple does it through its own tools, I expect it would also streamline the process of submitting vibe-coded apps for App Store review. Let's hope a move like that would also extend to other vibe coding apps like Replit and Anything.
LINKS

Claude Code users hit usage limits faster than expected
SpaceX has filed confidentially for an IPO, beating OpenAI and Anthropic
Amazon’s Rufus AI ads drive less traffic than traditional ads
Perplexity AI accused in lawsuit of sharing user data with Meta, Google
Secondary market shifts from OpenAI to Anthropic
Mass robotaxi failure in China traps passengers, causes traffic disruptions

Codex: The GitHub plug-in allows users to review issues, commit changes, and more
Flow: Veo 3.1 Lite, Google’s newest model, is available in the Flow app
NotebookLM: A new featured notebook focuses on Benjamin Franklin
Notion AI: The iOS beta got new features and improvements

A QUICK POLL BEFORE YOU GO
If you could buy OpenAI stock in 2026, do you think it would be worth the investment?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.



If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.













