OpenAI taps OpenClaw for personal AI agents

Welcome back. Personal AI agents just went mainstream. OpenAI is teaming up with OpenClaw founder Peter Steinberger in a deal that could define 2026. OpenClaw has proven to be among the most capable AI agents we've seen so far, while also pioneering an interface breakthrough. Meanwhile, IBM is rewriting the entry-level job playbook, tripling its hiring of junior roles and shifting them toward human-centric work AI can’t replicate. And Meta is reportedly exploring facial recognition for its AI glasses, raising new privacy concerns in the race to own wearable AI. —Jason Hiner
1. OpenAI bets big on personal AI agents with OpenClaw
2. IBM is changing entry-level jobs, not killing them
3. Meta’s AI glasses could soon identify people
CONSUMER
OpenAI bets big on personal AI agents with OpenClaw
OpenClaw may be AI's biggest inflection point since ChatGPT, and it now has a special relationship with OpenAI.
On Sunday, OpenClaw founder Peter Steinberger announced that he was "joining OpenAI to work on bringing agents to everyone." He also stated that the OpenClaw project itself would "move to a foundation and stay open and independent."
This doesn't come as a surprise. In an interview on the Lex Fridman podcast at the end of last week, Steinberger said that VCs had been chasing him, offering money to turn OpenClaw into a company. But Steinberger told Fridman that the alternative path was to work with one of the big AI labs, and that "Meta and OpenAI seem the most interesting." He said his condition was that the project remain open-source, perhaps following a model similar to Google's Chrome and Chromium.
"I think this is too important to just give to a company and make it theirs," Steinberger said.
OpenClaw has become the biggest story in AI so far in 2026, stealing the spotlight from Anthropic's Claude Code and Claude Cowork, two other agentic solutions that have also begun to change the way people work. But OpenClaw (formerly known as Clawdbot and Moltbot) has been a runaway freight train of momentum since it went viral at the end of January.
There have been two main reasons for OpenClaw's rapid popularity:
1. It's widely viewed as the most independent and capable personal AI agent. Once you set it up, it can figure out creative ways to accomplish the tasks you give it, and it can also learn about you and proactively suggest things it could help you with.
2. One of OpenClaw's biggest innovations is relatively simple: the ability to send it instructions from messaging apps such as iMessage, WhatsApp, and Slack and have it carry out those tasks even when you're not at your computer.
OpenAI is clearly thrilled with the Steinberger deal, as three of its top executives, Sam Altman, Greg Brockman, and Fidji Simo, all tweeted about it on Sunday night.
"I'm very excited to make this into a version that I can get to a lot of people, because this is the year of personal agents and I think that's the future," Steinberger said. "And the fastest way to do that is teaming up with the big labs."

I initially had serious concerns about OpenClaw because of its security implications, but those have largely been addressed as best practices have emerged: host it on a separate machine and assign it its own permissions, as you would for a new employee. Meanwhile, I keep hearing folks express sentiments similar to what Jason Calacanis said on his recent podcast: "OpenClaw is the most paradigm-shifting piece of AI software since ChatGPT… In our company, in the two weeks or so since we've been using it, it's been offloading 10% of our chores per week per knowledge worker. We think we'll be at 50-60% of our work being clawed by… April." Even beyond OpenClaw and Steinberger, personal AI agents are shaping up to be the highest-impact trend of 2026.
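For anyone trying that isolation approach at home, here's a minimal sketch of the idea in Python. The image name and resource limits are illustrative assumptions, not documented OpenClaw setup steps:

```python
# Hypothetical sketch: run a personal AI agent inside a locked-down Docker
# container, mirroring the "treat it like a new employee" advice above.
# "openclaw/agent" is an illustrative image name, not a real published image.
import subprocess

subprocess.run(
    [
        "docker", "run", "--detach",
        "--name", "agent-sandbox",
        "--read-only",                   # filesystem read-only outside mounted volumes
        "--cap-drop", "ALL",             # drop all Linux capabilities
        "--memory", "2g",                # cap RAM so a runaway task can't exhaust the host
        "--pids-limit", "256",           # cap the number of processes
        "--volume", "agent-work:/work",  # the only writable surface
        "openclaw/agent",                # hypothetical image
    ],
    check=True,  # raise if the container fails to start
)
```

The tooling matters less than the boundary: a spare machine or a dedicated, least-privilege user account achieves the same separation.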
TOGETHER WITH ANYSCALE
Scaling LLM Fine-Tuning with FSDP, DeepSpeed, and Ray
Hitting memory limits when fine-tuning LLMs is a sign you’re ready to scale. In this technical webinar, we’ll walk through how to fine-tune large language models across distributed GPU clusters using FSDP, DeepSpeed, and Ray.
We will dive into the real systems problems: orchestration, memory pressure, and failure recovery. You'll see how Ray, the open-source compute framework used by companies like Cursor, xAI, and Apple, integrates with PyTorch. We'll cover how to launch and manage distributed training jobs, configure ZeRO stages and mixed precision for better memory efficiency, and handle checkpointing.
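As a taste of what that looks like in practice, here's a minimal sketch combining Ray Train with a DeepSpeed ZeRO-3 config. The model name, worker count, and random-token training batches are placeholder assumptions, not the webinar's actual material:

```python
# Minimal sketch: fine-tune a causal LM across GPU workers with Ray Train
# and DeepSpeed ZeRO-3. A real job would plug in a tokenized dataset.
import deepspeed
import torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from transformers import AutoModelForCausalLM

DS_CONFIG = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
    "bf16": {"enabled": True},          # mixed precision cuts parameter/activation memory
    "zero_optimization": {"stage": 3},  # ZeRO-3 shards params, grads, and optimizer state
}

def train_loop_per_worker(config):
    model = AutoModelForCausalLM.from_pretrained(config["model_name"])
    # deepspeed.initialize wraps the model and builds the sharded optimizer.
    engine, _, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=DS_CONFIG
    )
    for _ in range(config["steps"]):
        # Random token IDs stand in for a real tokenized dataset.
        ids = torch.randint(0, 32000, (1, 512), device=engine.device)
        loss = engine(input_ids=ids, labels=ids).loss
        engine.backward(loss)  # DeepSpeed handles gradient scaling and sharding
        engine.step()

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"model_name": "gpt2", "steps": 10},
    # Ray sets up the distributed process group and restarts failed workers.
    scaling_config=ScalingConfig(num_workers=8, use_gpu=True),
)
trainer.fit()
```

Checkpointing and failure recovery, which the session covers, hook in through Ray Train's `ray.train.report` API.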
Seats are limited to keep the session interactive.
BIG TECH
IBM is changing entry-level jobs, not killing them

With even some of the best developers barely writing code anymore, what is there left for an entry-level tech worker to do? According to IBM, a lot.
Last week, the company announced plans to triple entry-level hiring in the US in 2026. However, these positions aren't going to look like the early-career jobs of the past, Nickle LaMoreaux, the company's HR chief, said at Charter's Leading with AI Summit.
IBM has overhauled its job descriptions for entry-level positions, shifting the focus from tasks that AI can automate to areas it can't: less coding and admin work, and more person-to-person work, such as customer engagement. Though IBM didn't reveal specific hiring numbers, the workforce expansion will be implemented across the board.
“The entry-level jobs that you had two to three years ago, AI can do most of them … you need to be able to show the real value these individuals can bring now,” LaMoreaux said at the summit. “And that has to be through totally different jobs.”
The decision runs directly counter to the common view that AI will demolish the job market for young and early-career workers. It also adds another piece of evidence to the growing pile of conflicting studies and research on AI-driven displacement. For instance:
A study from Harvard claims that AI tools actually intensify work, rather than lessen it, as people feel more capable of taking on a broader scope of tasks.
Meanwhile, MIT claims that AI can already automate thousands of hours of work, and make certain jobs obsolete.
And a study from Gartner splits the difference: while many will lose their jobs as a result of AI-enabled automation, 50% of those workers will be rehired to do similar work.
There’s no doubt that AI automation will have “extraordinary repercussions” for enterprises, Luis Lastras, director of language technologies at IBM, told The Deep View. However, businesses that are seeking to use AI to shave staff and boost the bottom line might be thinking about this technology the wrong way, he said.
If an individual can now do five times as much in one day as they previously could, enterprises shouldn't be looking at doing the same amount of work with fewer people. Rather, they should be looking for ways to empower people to do more: more exploration, more experimentation, more creation, he said.
“If I were a business owner, I would focus a lot on very strong people, not on fewer people,” Lastras told me. “Because I would want to scale my ability to experiment.”

The truth is that it may still be too early to call AI's impact on jobs. No one could have predicted the employment impact of the printing press, the calculator, the car, the internet, and so on. The difference, however, is AI's potential to automate work (and even, to some extent, thought) in its entirety. In a perfect world, all employees with automatable jobs would be given the opportunity to experiment, build, learn and try new things. However, we live in an economy dominated by public companies focused on both growth and profits. As shareholders breathe down enterprises' necks for returns, companies constantly feel the tug to cut costs. Many, if they see an opportunity to save money today by cutting staff, will take it, even if doing so compromises the opportunity to make more money tomorrow.
TOGETHER WITH CODER
Why enterprise security teams are blocking AI coding tools
Picture this: Your developers started using Cursor last month. Productivity shot up 40%. Pull requests doubled. The platform team started planning a company-wide rollout. Then your security team stepped in.
Not because they're innovation killers. Not because they don't understand the value. Because when AI agents run on local laptops with unrestricted access to your private repositories, your APIs, and the open internet, you've created a governance nightmare that no CISO can defend.
So the tools get blocked. Innovation stalls and developers go back to fighting their local environments while your competitors figure out how to ship faster with AI. There has to be a better way… and there is.
PRODUCTS
Meta’s AI glasses could soon identify people

Smart glasses bring AI into your world. They could also identify anyone in it.
Meta has dominated the AI smart-glasses market, with its Ray-Ban collaboration becoming the world's best-selling AI glasses, moving over 7 million units in the past year. The appeal lies in seamlessly integrating mics, cameras, and speakers into a lightweight design. However, a New York Times report reveals that Meta is exploring using those same cameras for a new facial recognition feature.
The feature, internally called "Name Tag," would allow the wearer to identify the people around them and surface relevant information about them through the Meta AI assistant, the same one currently used for general queries, according to four people involved with the plans who spoke to the NYT.
According to two sources, the feature would be limited in scope, potentially recognizing only people the wearer is connected to on Meta platforms or those with public Meta accounts, rather than identifying anyone indiscriminately.
An internal May document obtained by the NYT also reveals Meta planned to pilot the feature with attendees at a conference for the blind before rolling it out more widely, signaling it’d be first marketed as an accessibility feature. Beyond accessibility, the feature could deliver benefits for users and Meta alike.
“For consumers, facial recognition removes the barriers and embarrassment of being caught in a situation when you think you know someone, but aren’t 100% sure,” said Ramon Llamas, Research Director, Mobile Devices and AR/VR at IDC. “For Meta, facial recognition on the glasses can help strengthen the connections among its different products and services and drive longer usage of each.”
Notably, the document suggests Meta planned to leverage the "dynamic political environment" in the United States to distract from potential backlash from civil society groups. Meta has experienced similar scrutiny before.
In 2024, two Harvard students paired the glasses with a facial recognition service that allowed them to identify strangers and retrieve personal information. At the time, the company stated that the flashing light on the glasses serves as an indicator to the public that the camera is running. Meta also had to shut down its decade-old Facebook facial recognition technology in 2021 due to privacy concerns.
“It raises many questions as to what would be a reasonable approach to privacy regarding what information can be accessed, to what extent, how reliable that information is, and so much more,” added Llamas. “That’s where Meta has to come up with the right formula for reasonable usage.”
The feature's future is still not guaranteed, as the company is reportedly evaluating how it could be released in a way that addresses "safety and privacy risks," according to the documents. Meta similarly considered adding facial recognition to the original launch of its AI glasses in 2021, but decided against it.

Meta's dominance in AI smart glasses has created a formidable barrier for new entrants like Google and Samsung, both expected to launch glasses this year. But maintaining that lead requires keeping users on board, not driving them away with privacy-eroding features. That raises the question: why would Meta risk its market position with a controversial feature? The answer likely lies in the data itself. For Meta to gamble on such a feature, it may have data suggesting that facial recognition would deliver significant value and that it's a feature users want.
LINKS

Anthropic appoints ex-Microsoft CFO and Trump aide Chris Liddell to board of directors
Motion Picture Association decries Seedance 2.0 for ‘Massive’ infringement
Baidu lets users opt-in to access OpenClaw in search app
Anthropic, Codepath partner to bring Claude Code to students
OpenAI uses GPT-5.2 for breakthroughs in theoretical physics
Pentagon threatens to cut ties with Anthropic over AI guardrails
Salesforce wants to be the agentic AI platform for the world’s largest organizations

MiniMax M2.5: the latest model from Chinese AI lab MiniMax, capable of matching Opus 4.6 and GPT-5 on agentic coding benchmarks at a fraction of the cost.
Teamily AI: A "human + AI" social network for better collaboration and connection with agents.
Exa Instant: An AI-powered search engine that's 15 times faster than rivals, built to power real-time AI products.
Google Photos Custom Caricatures: Google Photos can now generate a custom representation of you based on your images, no prompts necessary.

Tencent: Hunyuan AIGC Algorithm Researcher (World Model Foundation Direction)
Samsung Research America: Staff GenAI Research Engineer, Digital Health
Cloudflare: Models Engineer, Developer Relations
Apple: Senior Video Standards Engineer
POLL RESULTS
If Apple dramatically improves Siri, would you switch to it as your primary chatbot?
Yes (23%)
No (69%)
Other (8%)
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.