New coalition demands pro-human AI pact

Welcome back. Apple may still look like it’s trailing in the AI race, but its latest research shows it’s quietly working on a critical problem: hallucinations. We break down Apple’s new method for pinpointing exactly where AI goes wrong. At MWC, Honor unveiled a playful “robot phone” with a tiny gimbal arm that hints at how strange AI hardware could soon get. And a new “Pro-Human AI Declaration,” backed by voices from across tech and politics, warns that the race to replace human labor with AI carries momentous societal risks. The pact demands that human thriving gets centered in the process. —Jason Hiner
1. New pact pushes back on AI replacement race
2. Robot phone hints at stranger AI devices ahead
3. Apple finds a new way to spot AI hallucinations
GOVERNANCE
New pact pushes back on AI replacement race
AI ethicists have put out another plea for the world to pay attention to the tech’s risks.
On Wednesday, a coalition of leaders across industries announced the “Pro-Human AI Declaration,” united by a broad, simple proclamation that AI “should serve humanity, not the reverse.”
“This race to replace poses risks to societal stability, national security, economic prosperity, civil liberties, privacy, and democratic governance,” the statement reads. “It also imperils the human experiences of childhood and family, faith, and community.”
The declaration, which counts names like Yoshua Bengio, Steve Bannon, Susan Rice, Sir Richard Branson and Joseph Gordon-Levitt among its endorsers, proposes five central tenets for creating trustworthy and controllable AI:
Keeping humans in charge: meaningful human controls and override capabilities, an AI “off-switch,” independent oversight and an end to the superintelligence race
Avoiding power concentration: preventing AI monopolies, democratic authority over major impacts on work, society and civic life, and shared prosperity from AI’s benefits
Protecting the human experience: AI should not be allowed to exploit or stunt children's growth, should not be addictive and should not “supplant” foundational relationships
Preserving human agency and liberty: AI should not be granted personhood, humans should retain rights to their data and privacy, and AI should not “enfeeble” users
Responsibility and accountability for AI companies: AI should not create a “liability shield” for companies, developers or users, and all failures by models should be made transparent
This is not the first time tech ethicists have implored the industry to pay attention to the dangers that lie ahead on our current AI trajectory. In October, the Future of Life Institute put out a petition calling for a moratorium on developing superintelligence, claiming that the tech harbors “extreme large-scale risks.” The petition garnered more than 135,000 signatures, and many of its signatories also endorsed the Pro-Human AI Declaration.

AI is moving so fast that it often breaks out of restraints more quickly than we can build them. Getting people to pay attention to the risks the tech presents is a huge challenge. The fact is that people won’t pay attention to responsible AI until AI actually creates a major crisis. So I ask: What will it take? How many wrongful death lawsuits against LLM providers will have to pile up? How many people need to lose their jobs? How many self-driving cars need to crash? Though the ethos of innovation has long been to move fast and break things, what will it have to break to get people to act?
TOGETHER WITH CRUSOE
Crusoe Managed Inference: Now Available for Custom Models
Generic cloud infrastructure wasn't designed for proprietary architectures and trillion-parameter models. Move past the prototype phase with confidence that long prompts won't slow you down. Crusoe's inference engine is powered by MemoryAlloy™ technology to maintain ultra-low latency and superior throughput, even as your context grows.
Infrastructure management won't slow you down either. Now you can deploy your fine-tuned models with best-in-class speed and reliability using Crusoe Managed Inference. Work with our experts to optimize Crusoe's inference engine for your unique weights so your team can focus on building instead of wrestling with infrastructure.
HARDWARE
Robot phone hints at stranger AI devices ahead
At MWC, smartphones fold, change colors, and even have robotic arms.
While every manufacturer is touting an AI smartphone, Chinese phone maker Honor is taking things a step further with a camera that physically extends from its Robot Phone via what the company calls the "industry's smallest 4DoF gimbal system" (four degrees of freedom).
Almost resembling a Pixar lamp, a small arm unfolds from the back of the phone and swings out to broaden the phone's view of its surroundings. Honor is pitching four key benefits:
AI assistance: A wider field of view provides added context for more useful AI responses.
Image stabilization: Like a traditional gimbal, it helps keep photos and videos steady.
AI object tracking: The camera can lock onto a moving subject, which is useful for shooting content or video calls.
Entertainment: The robot camera can bob its "head" to the beat of music, which is gimmicky but clever.
I demoed all four features, and the best way to describe the experience is simply fun. Object tracking worked as promised, the robot did bust some moves, and the AI assistant performed comparably to most standard chatbots, with the added charm of a Pixar lamp "head" swiveling toward you.
Beyond the fun factor, however, the practical case for the phone is harder to make — particularly at what will likely be a steep price. The exact cost has yet to be announced. What it does illustrate, though, is a broader truth about the smartphone industry: companies are going to extraordinary lengths to stand out in today's AI and robotics hype cycle.

Since AI exploded in popularity, companies have raced to incorporate it into their devices, with plenty of big promises and ambitious announcements, but relatively little change to the actual smartphone experience. So it's exciting to see a manufacturer think beyond the screen. I don't expect everyone to be carrying a robot phone anytime soon, but I do hope Honor's bold move inspires other companies to break from the standard slab form factor and seriously explore what AI-driven hardware could look like.
TOGETHER WITH PIGMENT
If you’re building with AI, listen to this
AI is reshaping the way tech leaders build products, make decisions, and operate at scale. But, today, the guidelines for exactly how that’s done are still being written.
The Perspectives series brings you inside the rooms where these playbooks are taking shape - whether you’re scaling a product, leading AI transformation, or building the systems that power modern enterprises.
Tune in on your favorite podcast platform and gain actionable insights from founders and leaders at Sierra, Profound, Intercom, Datadog, ElevenLabs and more.
RESEARCH
Apple finds new way to spot AI hallucinations
Apple may not have homegrown AI. But it wants to make sure the technology is done right.
On Tuesday, Apple published research detailing a new way to find and quash incidents of hallucination, the pesky mistakes an AI model makes when it lacks grounding and starts guessing. Apple’s research introduces “Reinforcement Learning for Hallucination Span Detection,” which pinpoints not just when an AI model hallucinates, but where exactly within a line of text the model goes wrong.
Apple’s model gives its AI framework small rewards each time it accurately identifies incorrect phrases or words, based on how closely its responses match those of human evaluators.
This turns hallucination detection from a “binary task” into a “multi-step decision-making process,” Apple said in its research.
To put it simply, it’s the difference between a teacher saying you failed a test with no explanation and a teacher telling you exactly which answers you got wrong and why.
“Most existing research works focus on a binary hallucination detection problem, where the goal is to determine if the model output contains hallucinations or not,” Apple said in the paper. “While useful, this formulation is limited: in many real-world applications, one often needs to know which specific spans in the model output are hallucinated in order to assess the reliability of the generated content.”
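The reward signal at the heart of this setup is a span-overlap score: how closely the model's flagged spans match the human-labeled ones. The exact reward Apple uses isn't spelled out here, but a common choice for span-matching tasks is token-level F1, sketched below (the `span_reward` function and its token-index representation are illustrative, not Apple's implementation):

```python
def span_reward(predicted: set[int], gold: set[int]) -> float:
    """Token-level F1 between predicted and human-labeled hallucinated spans.

    Spans are represented as sets of token indices. Returns 1.0 for a
    perfect match, 0.0 for no overlap; partial overlaps earn partial
    reward, which is what turns detection into a graded signal rather
    than a binary right/wrong judgment.
    """
    if not predicted and not gold:
        return 1.0  # both agree there is no hallucination
    overlap = len(predicted & gold)  # correctly flagged tokens
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)


# Example: the model flags tokens 3-5, but the human label is tokens 4-6.
# Two of three flagged tokens are right, so the reward is partial.
reward = span_reward({3, 4, 5}, {4, 5, 6})
```

Under this kind of scoring, a model that marks roughly the right region still collects some reward, so training can nudge it toward exact span boundaries step by step.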
And Apple’s system proved itself, outperforming conventional methods on the RAGTruth benchmark, an AI truth-checking test for tasks like summarization, question answering and data-to-text.

While Apple may be seen as miles behind in the AI race, this is a misconception. Apple has effectively removed itself from the competition entirely, instead hitching its wagon to Google through a multi-year agreement to use Gemini to power Siri. However, Apple still bears the burden of doing AI right. With almost 2.5 billion devices in the hands of users worldwide, it’s vital that an AI-powered Siri makes as few mistakes as possible, especially if many of those users aren’t AI-savvy. This research is a sign that Apple understands the consequences of getting it wrong.
LINKS

Lawsuit alleges that Gemini is responsible for a Florida man’s death
AI companies officially sign pledge to supply power for AI data centers
Meta is reportedly creating a new applied engineering organization
Anthropic’s Amodei called OpenAI’s Defense Deal “safety theater”
OpenAI reportedly preps GPT-5.4 with “extreme reasoning”
Perplexity, Coreweave sign multiyear data center deal

ChatGPT: OpenAI’s chatbot should be “more accurate and less cringe” due to the GPT-5.3 Instant update rolling out to everyone.
Claude: Anthropic rolled out an updated skill-creator across Claude Code (as a plugin), Claude and Cowork.
Glaze: Raycast launched a new tool meant to let users create their own desktop apps.
Anything: The platform launched Research Agents, and now it “sends parallel agents across your codebase before writing a single line of code.”

Meta: Partner Engineer, Generative AI
Databricks: Sr. Developer Advocate, Databricks AI Agentic Systems
ByteDance: Senior Research Scientist - Machine Learning System
Cloudflare: Models Engineer, Developer Relations
A QUICK POLL BEFORE YOU GO
Which AI coding tool do you prefer?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.


Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.











