Perplexity may have built a better OpenClaw

Welcome back. AI is coming for your pocket. Samsung’s Galaxy S26 may be the first true agentic AI phone, with Gemini now able to take multi-step actions in the background and then ask for your approval before executing key tasks. It’s an early but meaningful glimpse of ambient, task-driven AI. Perplexity has entered the personal agent race with Perplexity Computer, an orchestrator that can run up to 19 models, breaking down complex outcomes into coordinated sub-agents and running for hours within a secure sandbox. My first tests with Perplexity's new agent have been impressive. I think we can safely say 2026 is shaping up to be remembered as the year agents got real. —Jason Hiner
1. Perplexity may have built a better OpenClaw
2. Galaxy S26 becomes Samsung’s AI showcase
3. Agentic Gemini feature debuts on its first phone
STARTUPS
Perplexity may have built a better OpenClaw
Claude Code and OpenClaw have taken 2026 by storm by offering the first glimpses of personal AI agents. Perplexity just unveiled an agent that could prove to be more versatile and easier to use.
On Wednesday, the AI search firm launched Perplexity Computer, which it calls "a general-purpose digital worker that operates the same interfaces you do" and "a system that creates and executes entire workflows, capable of running for hours or even months."
It's live starting today at perplexity.ai/computer, on the web only for now — not in the Perplexity app. At launch it's limited to Perplexity Max subscribers ($200/month); Perplexity says it will roll out to Pro ($20/month) and Enterprise subscribers in the coming weeks. To access it from perplexity.ai, simply click the "Computer" icon/link in the upper left corner under the main Perplexity icon.
Perplexity Computer coordinates with tools, files, personal context, various AI models, deep research on the open web, agentic web access, coding capabilities, and file creation.
It draws from 19 models, open-source and proprietary, from all the leading labs. At the start, it "uses Opus 4.6 for orchestration and coding tasks, Gemini for deep research, Nano Banana for images, Veo 3.1 for video, Grok for speed in lightweight tasks, and ChatGPT 5.2 for long-context recall and wide search," according to Perplexity.
The agent runs in a secure development sandbox.
Perplexity has been using the agent internally since January and reports that its employees have used it to rapidly publish engineering documentation, build a 4,000-row spreadsheet overnight that would normally have taken a week, and create websites, dashboards, applications, analyses, and visualizations.
Because agents can rack up token costs so quickly, Perplexity has introduced per-token billing for consumers for the first time. Max users get 10,000 tokens as part of their plans and Perplexity is giving them an extra 20,000 tokens for the launch of Perplexity Computer so they can kick the tires on it.

There are two aspects of Perplexity Computer that could make it a breakthrough product. The first is the orchestration of various models that are best-in-class at different functions. The second is that you can tell Perplexity Computer the outcome you want to achieve, and it will break the work up into agents and sub-agents — matched to the strengths of the various models — then coordinate the work while those agents handle their tasks simultaneously. I have a Perplexity Max subscription, so I'll be testing it out and reporting back. You can also follow me on X/Twitter at x.com/jasonhiner, where I'll be posting updates on my experience with Perplexity Computer.
TOGETHER WITH CRUSOE
Crusoe: deploy fine-tuned models with zero infrastructure headaches
Work with our team to deploy your fine-tuned model on a platform built for performance.
Use Crusoe Managed Inference to unlock breakthrough speed and throughput without the infra overhead.
HARDWARE
Galaxy S26 becomes Samsung’s AI showcase
Smartphones are becoming AI-first devices at a rapid pace, and Samsung's latest flagship is further proof.
The S26 lineup, comprising the S26, S26+, and S26 Ultra, brought upgrades expected of a new smartphone generation, including improvements to form factor, camera system, and display. But the most significant hardware updates and the most exciting new features were united by a common theme: deeper integration of AI, especially agents.
Beyond the hardware, there is a plethora of new AI features. I listed the most noteworthy ones with brief descriptions in this article, but here are two of my favorites.
Gemini can now perform tasks for users, and it will handle the setup, so all you have to do is approve. For example, you can ask Gemini something like, “Call me an Uber to SFO,” and it will handle the rest.
A simple yet useful addition is the new Now Nudge, which provides real-time suggestions across any messaging app by working within the keyboard, feeding you proactive information based on the context of your conversation, such as contact information or calendar dates. Another is the simplified Document Scan, now as easy to access as pointing your camera at any document.
The Galaxy S26 lineup is available for pre-order today and will be generally available on March 11. The Galaxy S26 Ultra starts at $1,299.99, the Galaxy S26+ at $1,099.99, and the Galaxy S26 at $899.99.

Since generative AI soared in popularity, phone manufacturers have been racing to add new AI integrations and features. However, these additions need to be done tastefully, as adding features that aren't truly helpful, just for the sake of it, has led companies such as Apple to face significant backlash. Google's Pixel 10 raised the bar for what an “AI phone” should look like. With the S26 launch, Samsung picked up where Google left off, adding AI features in subtle yet helpful ways that should improve users' everyday experiences.
TOGETHER WITH AUTH0
Stop letting auth complexities slow down your app development 🛠️
We've overhauled our B2B plans to give you what you need to ship faster.
Get started for free with non-negotiable features like Self-Service SSO and SCIM, removing friction so you can focus on building what matters.
Our new, flexible plans are designed to grow with you, ensuring you are ready for enterprises from your first user to your millionth. Build your production-ready app with a customized login experience and scale your customer base confidently.
PRODUCTS
Agentic Gemini feature debuts on its first phone
I've been an AI beat reporter for over three years, but lately, I've found myself at more device launch events than ever, because AI is now being infused into everything. After watching more features come and go than I can count, one of Samsung's latest actually intrigued me.
With the launch of the Samsung Galaxy S26, Samsung introduced a task automation feature in beta, powered by Gemini. The way it works is simple: it doesn't execute the final action on its own, but it does take all the steps leading up to it, so all you have to do is approve. To activate it, all you have to do is ask Gemini.
In my demo, all I said was "Call me an Uber to SFO in 15 minutes." It then got to work in the background, surfacing a blue pill-shaped "View Progress" button. Tapping it is optional, of course, since the aim is for the agent to run in the background while you do other tasks.
When you click the button, you can watch it work in the sandbox environment, such as entering an address and selecting the vehicle type. Then it requires your final confirmation to act. Both the sandbox environment and final confirmation are to protect users from the agent going rogue.
This release marks the first time a truly agentic feature has shipped in a consumer device, despite many attempts in the past, including the most notorious: the Rabbit R1 failure. At CES 2026, Motorola's 312 Labs showcased Project Maxwell, an AI-powered pin, as a mere proof of concept with no release timeline — and yet it worked the same way as Samsung's Gemini task automation feature.
During the Unpacked keynote, TM Roh, Samsung's President and CEO of the Device eXperience Division, called the Galaxy S26 lineup the first true "agentic AI phones." It's an ambitious label, but if this feature ships and works as promised, it may well be the clearest glimpse yet at what an agentic smartphone can do.

Also notable is that this is a Google feature, meaning its reach will extend far beyond the Galaxy S26 lineup. It's now also shipping to Google's own Pixel 10 and Pixel 10 Pro, and with Google's partnership with Apple to power the new Siri, this capability could soon make its way to iPhones as well. That said, my confidence in the feature isn't high just yet, since every attempt to use it outside of the demo on the Galaxy S26 Ultra has fallen short. Still, this is a beta on day one, and it deserves the benefit of the doubt as the feature rolls out. I'll be putting it through its paces as I switch into the Galaxy S26 Ultra as my daily driver throughout this review, and I'll keep you updated. You can follow me on X/Twitter at x.com/sabrinaa_ortiz for updates in real-time.
LINKS

Anthropic acquires Vercept to improve Claude’s computer use capabilities
Nvidia reports 75% jump in data center revenue to $62.3 billion
Big AI companies to sign pledge to supply their own power for data centers
AI music platform Suno hits 2 million paid subscribers, $300 million ARR
Quiver.AI emerges from stealth with $8.3 million in seed funding
Anthropic is giving Claude Opus 3 its own Substack
Alphabet-owned robotics firm Intrinsic joins Google

Claude Cowork: Anthropic’s latest addition to its Cowork tool includes scheduling tasks automatically, including recurring tasks such as a morning brief or weekly spreadsheet updates.
Opal 2.0: An upgraded version of Google Labs’ no-code visual builder for AI workflows, now featuring agentic capabilities.
Ask Fellow: Tracks your meetings to draft follow-up emails, generate video clips, create and export docs and manage your calendar, using only natural language.
DeepSource AI Code Review: DeepSource’s AI review agents catch quality and security issues accurately, weeding out vulnerabilities better than LLMs.

Databricks: Sr. Developer Advocate, Databricks AI Agentic Systems
AWS: Applied Scientist, LLM Code Agents, Kiro Science
SpaceX: Sr. GNC Engineer (Dragon)
Google DeepMind: Principal Research Engineer, Gemini Evals
POLL RESULTS
If you had to choose: Meta AI glasses or Google AI glasses?
Google (58%)
Meta (19%)
Other (23%)
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“The color of the river water in [this] image was [more] realistic.”
“The consistency of the patterns in the palm trees in [this image] is too repetitive. Nature isn't that repetitive.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
