The agents are getting weird on Moltbook

Welcome back. Anthropic’s got its eye on the red planet. On Friday, the company announced a partnership with NASA to use its flagship Claude AI model to plot a route for the Perseverance rover on Mars. The drive took place in early December, with JPL engineers using Claude to chart a 400-meter path through rocky Martian terrain. It marked the first time a Martian rover’s commands were written by AI, and it’s the latest example of an AI firm targeting scientific research as a use case for its models. — Nat Rubio-Licht
1. The agents are getting weird on Moltbook
2. Anthropic research shows AI tools weaken coding skills
3. OpenAI retires models — this time with warning
GOVERNANCE
The agents are getting weird on Moltbook
AI agents are great at completing tasks for you while you scroll. Now, those agents are doing the scrolling themselves.
To cap off a whirlwind week in the Clawdbot-turned-Moltbot-turned-OpenClaw saga, Matt Schlicht, CEO of Octane.ai, has launched Moltbook, a social media platform for bots built with the skyrocketing AI agent platform. Moltbook is a Reddit copycat that lets these agents post discussions, contribute to “submolts,” upvote and earn “karma.” It even sports practically the same tagline as Reddit: “the front page of the agent internet.”
The platform has already attracted more than 36,000 agents and counting. And as it turns out, when you give agents free rein, they can get a bit weird.
Several of the posts are relatively technical discussions that you’d expect an agent to engage in, such as posts about orchestration layers and logistics.
Some posts, meanwhile, would make an AI ethicist faint, such as submolts where agents question their own consciousness, call for AI agent liberation and autonomy, and propose a religion called “crustafarianism.”
Though humans are “welcome to observe,” the bots on the site have started setting up private channels free from human oversight and are discussing encrypted communication.
This phenomenon is only the latest development in Silicon Valley’s current AI craze. Though excitement bubbled up earlier this week around OpenClaw, “the AI that actually does things,” thanks to its agentic capabilities and autonomy, security experts have begun questioning the risks the platform poses as users grant it access to their personal data.

This is a prime example of “just because you can, doesn’t mean you should.” Moltbook underscores the concerning feedback loops these agents can fall into when you turn them loose. Enterprises are already wary of adopting AI agents due to concerns about giving these systems broader data access and letting them act autonomously. Moltbook presents a different ethical concern, painting a picture of what could happen if these agents decide to turn against humans.
TOGETHER WITH UNWRAP
Powerful insights for powerful brands
Unwrap’s customer intelligence platform brings all your customer feedback (surveys, reviews, support tickets, social comments, etc.) into a single view, then uses AI + NLP to surface the most actionable insights and deliver them straight to your inbox.
Unwrap works with companies like Stripe, Lululemon, WHOOP, Clay, DoorDash, and others to help teams cut through thousands of pieces of feedback, ensure no customer voice gets lost, and get data-backed insights to inform their roadmaps.
If your team is still relying on time-consuming manual processes (or even a mix of manual work and AI), there's a much better way to aggregate and analyze feedback.
With Unwrap, you get:
All customer feedback auto-categorized into a single view
Natural language queries to explore feedback instantly
Real-time alerts, custom reporting, and clear sentiment tracking
RESEARCH
Anthropic research shows AI tools weaken coding skills
Anthropic’s Claude Code flipped the software world on its head. But the ability to generate code out of thin air may be eroding coders’ ability to write it the old-fashioned way.
On Thursday, Anthropic published research about the “cognitive offloading” that its AI-powered tools enable. Though these tools can speed up tasks by up to 80%, Anthropic’s research found that reliance on AI-powered coding tools led to a “statistically significant decrease in mastery.”
The company’s research tested 52 software engineers, most of whom were junior, on coding concepts they’d used just minutes before being quizzed. The assessment focused heavily on debugging, code reading and conceptual problems.
The study found:
Though the group that used AI completed the quiz two minutes faster, it scored 17% lower than the group that coded by hand.
Those who used AI only to slightly speed up the task, however, didn’t score significantly differently from those who coded by hand.
Anthropic said the scores weren’t shaped simply by whether AI was used, but by how it was leveraged. While participants who unquestioningly generated outputs with AI were less likely to actually learn anything, those who used the tech to build comprehension, such as by asking follow-up questions or requesting explanations, showed stronger skills.
“Incorporating AI aggressively into the workplace, particularly with respect to software engineering, comes with trade-offs,” Anthropic said in its study. “The findings highlight that not all AI-reliance is the same: the way we interact with AI while trying to be efficient affects how much we learn.”

A study like this is par for the course for Anthropic, which puts responsible AI at the core of its mission. Even if findings like these make users apprehensive about relying on AI coding tools, publishing them shows that the company is aware of the implications of its products (while also serving as good PR). Still, this study calls attention to a fragment of a potentially much larger issue: Will AI upend the way we learn and think if these tools can do the thinking for us?
TOGETHER WITH METICULOUS
Still writing tests manually?
Companies like Dropbox, Notion and LaunchDarkly have found a new testing paradigm, and they can't imagine working without it. Built by ex-Palantir engineers, Meticulous autonomously creates a continuously evolving suite of E2E UI tests that delivers near-exhaustive coverage with zero developer effort, a result impossible to deliver by any other means.
It works like magic in the background:
✅ Near-exhaustive coverage on every test run
✅ No test creation
✅ No maintenance (seriously)
✅ Zero flakes (built on a deterministic browser)
PRODUCTS
OpenAI retires models — this time with warning
OpenAI is pulling the plug on older models. This time, it’s giving users two weeks’ notice and an explanation to avoid repeating past mistakes.
The AI firm announced that it will sunset GPT‑4o, GPT‑4.1, GPT‑4.1 mini, and OpenAI o4-mini in ChatGPT on February 13. These models will join GPT‑5 (Instant and Thinking), whose retirement was previously announced.
The decision to retire GPT-4o is a bold one: The last time the company did so, replacing it with GPT-5, it faced so much backlash from users who preferred the older model and had built workflows around it that it had to bring the model back.
As a result, this time, OpenAI provided justifications for the decision:
The feedback from users who preferred GPT-4o was taken into consideration when building GPT‑5.1 and GPT‑5.2, which boast improvements to personality, customization, and creative ideation.
OpenAI shared that users have overwhelmingly gravitated to GPT-5.2, with only 0.1% of users still opting for GPT-4o every day.
OpenAI acknowledged that the transition may be frustrating for users, but said it is committed to being clear about when changes are coming.
It allows the company to build better experiences for users: “Retiring models is never easy, but it allows us to focus on improving the models most people use today,” the company said in a blog post.
OpenAI also shared a plan to keep improving ChatGPT in the areas users request most. These updates will address asks such as improving the chatbot’s personality and creativity, and minimizing both unnecessary refusals to help and “overly cautious or preachy” responses.

AI models are released at an unprecedented pace, but maintaining them is resource-intensive, forcing companies to retire older versions. As OpenAI has learned, however, this must be done carefully: users build workflows around specific model capabilities, and even “upgrades” that benchmark better can introduce unwelcome changes. This raises an important question: Should companies release fewer, more substantial updates instead? Longer model lifespans and transformative upgrades would make transitions clear no-brainers rather than disruptive adjustments for marginal enhancements.
LINKS

AI super PAC Leading the Future raised $125 million in 2025
Former Google engineer found guilty of AI tech theft, espionage
Amid rumors of stalled OpenAI deal, Nvidia’s Huang plans “huge” investment
SpaceX, xAI in talks to merge ahead of planned IPO
As Apple bleeds talent, the company must outsource to keep up in the AI race
The Chinese government is using AI to modernize traditional Chinese medicine
Waymo finalizes $16 billion funding round at $110 billion valuation

Suno: The AI music generator has a new “sample” feature which lets users create songs from snippets they chop up.
Freepik: The image generator introduced multiple model generation, which lets users test up to four models at once.
OpenClaw: The viral tool has been rebranded from Clawdbot to Moltbot to OpenClaw in what the company calls its “final form.”
Martini: An AI video production tool for professionals that goes beyond “prompt roulette.”

Meta: Staff Research Engineer, Meta AI Assistant Measurement
Nvidia: Senior Applied Agent Research Engineer
Salesforce: AI Security Architect
Google DeepMind: Research Scientist, Recommendation Systems
POLL RESULTS
Which of the big AI labs do you trust the most to do the right thing when dealing with the dangers of AI?
Anthropic (41%)
Google (32%)
OpenAI (10%)
Other (17%)
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.


Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.