
Apple’s Siri revamp slips as features get split

Welcome back. Apple’s long-promised Siri overhaul slips again, underscoring a familiar truth in AI: execution beats timelines. Anthropic, flush with a $30B funding round and a $380B valuation, is investing in infrastructure, energy, and regulation to reinforce its ethical brand, even as pressure mounts to deliver returns. Meanwhile, OpenAI and Google warn that other labs are cloning their frontier models through distillation, raising fresh questions about guardrails, geopolitics, and the durability of frontier labs’ advantage. A familiar theme is emerging: in 2026, trust, control, and real-world performance will define the next phase of the AI race more than the ability to build the best model and win benchmarks. Jason Hiner

IN TODAY’S NEWSLETTER

1. Apple pushes Siri upgrade further into 2026

2. $30B later, Anthropic doubles down on 'good AI'

3. OpenAI warns against Chinese model copycats

WHAT MATTERS THIS WEEK IN MARKETS IN UNDER 2 MINUTES

With inflation data, earnings momentum, and rate expectations colliding, this week sets the tone for risk assets heading into the next macro cycle. Here’s what investors should be watching closely.

Market Snapshot (Week Preview)

S&P 500 vs. 10Y Treasury Yield (Last 6 Weeks)

Key Themes to Watch

  • Rates vs. Equities: Bond yields remain elevated, testing equity valuations.

  • Earnings Sensitivity: Forward guidance will matter more than headline beats.

  • Macro Catalysts: CPI / Fed commentary could reset short-term expectations fast.

Across earnings calls and macro commentary, Perplexity Finance is detecting a shift from inflation anxiety to growth durability — particularly in sectors tied to AI, infrastructure, and consumer resilience. Explore here.

BIG TECH

Apple pushes Siri upgrade further into 2026

When will Siri's promised upgrade arrive? Don't hold your breath, at least not yet.

Since last June, Apple users have been waiting for the promised Siri overhaul, one that would make the assistant more conversational and capable of taking actions on your behalf using personal context. Bloomberg previously reported that Apple had set an internal March release target with the iOS 26.4 update, but a new report reveals the features will now be staggered across multiple future updates.

The delay stems from recent testing snags that revealed software problems, including Siri taking too long to handle requests, according to people familiar with the matter cited in the report. However, some new features may arrive as soon as iOS 26.5, as internal versions of that update already include notices about certain Siri upgrades.

Internal test versions of iOS 26.5 reveal it will include two unannounced features: a new web search feature that functions similarly to Perplexity, and custom image generation. Following Apple’s track record with iOS release schedules, the wait between iOS 26.4 and iOS 26.5 will likely be short.

Yet the most anticipated feature might not be included. An internal version of 26.5 lets users “preview” Siri’s ability to reference personal data for added context in prompts, with the “preview” designation likely signaling that the full feature is not ready to ship just yet.

The report cites a landslide of other challenges: Apple is running behind on advanced commands for voice-controlled in-app actions, early testers have reported accuracy issues and bugs, and Siri sometimes defaults to OpenAI’s ChatGPT instead of Apple’s own technology, which now incorporates AI from Google Gemini.

This isn’t stopping Apple’s ambitions as the company is reportedly also working on a revamped, chatbot-like Siri for iOS 27, iPadOS 27, and macOS 27, as we previously reported.

The latest Siri AI delay triggered a sharp one-day selloff in Apple, with the stock dropping about 5% on February 12, 2026, and leaving shares down roughly 4% year-to-date.

TOGETHER WITH PIGMENT

How AI Is Changing Forecasting, Planning, and GTM Execution

AI is transforming how modern Go-To-Market teams forecast, plan, and execute.

On February 26, 2026, join sales and revenue leaders from OpenAI, Spotify, and Pigment for a live conversation on how AI is changing forecasting, planning, and GTM execution in practice.

You’ll learn how high-performing sales teams use AI to improve alignment, make faster decisions, and drive better outcomes. Save your spot and future-proof your GTM strategy.

THE AI MODEL RACE

Who’s Pulling Ahead and Why It Matters

The race for the top AI model isn’t slowing down—it’s getting more complex.

A small group of frontier models has now clearly separated from the pack. But leadership in AI is no longer just about benchmark scores or flashy demos. According to Perplexity insights, the real signal is emerging at the intersection of capability, adoption, and economic viability.

This is where the story gets interesting.

Adoption Is Becoming the Defining Metric

Perplexity’s analysis suggests we’re entering a new phase of the AI cycle. Early on, innovation was driven by research breakthroughs. Now, leadership is increasingly defined by how deeply models are embedded into products, workflows, and organizations.

Enterprise adoption data points—API usage growth, integration into internal tools, and developer dependency—are becoming stronger indicators of long-term success than headline-grabbing releases.

As one Perplexity insight puts it: capability creates attention, but adoption creates gravity.

Today’s leading AI models—developed by companies like OpenAI, Anthropic, Google, and Meta—are optimizing for different outcomes.

A $16M prediction market shows an extremely tight race between Anthropic (50.45%) and Google (47.50%) for best AI model by the end of February. Recent news indicates Anthropic previously held a commanding 60–72% lead that has since evaporated, suggesting Google’s Gemini 3 Pro may be closing the capability gap and creating a value opportunity for traders betting on late-month benchmark updates.

TOGETHER WITH ASAPP

100 ways to use gen AI in the contact center

Most AI in the contact center talks. Very little of it works.

Real impact starts when AI agents can resolve issues, execute workflows, and operate across enterprise systems.

ASAPP’s guide, 100 use cases for generative AI agents in the contact center, shows how leading brands in travel, insurance, banking, retail, and healthcare are deploying AI that delivers outcomes: lower costs, higher CSAT, and measurable revenue lift.

GOVERNANCE

OpenAI warns against Chinese model copycats

Major AI firms are sounding the alarm on secondhand models. 

On Thursday, OpenAI sent a memo to US lawmakers warning them that Chinese AI firm DeepSeek is using distillation techniques to “free-ride” on the capabilities of OpenAI's models, as well as those of other frontier labs. The firm says DeepSeek is using “obfuscated methods” to undercut OpenAI’s defenses. 

OpenAI’s memo claims that Chinese LLM providers and university research labs are using its models in ways that would be “highly beneficial” in creating competitor models through distillation.

  • OpenAI has also observed accounts associated with DeepSeek employees using methods to “circumvent” access restrictions.

  • Although OpenAI has added safeguards to prevent this distillation, the company claims these evasion techniques are becoming more sophisticated in response.

  • Although distillation is a commonly used technique in AI training, OpenAI claims that doing it under the radar can produce models missing key guardrails, leading to “dangerous outputs in high-risk domains.”

“It’s important to note that there are legitimate use cases for distillation … However, we do not allow our outputs to be used to create imitation frontier AI models that replicate our capabilities,” OpenAI said in the memo. 

And OpenAI isn’t alone in calling out these risks. On Thursday, Google’s Threat Intelligence Group published a report detailing a flood of “commercially motivated” actors seeking to clone its flagship model, Gemini. The company said in the report that these actors are using “distillation attacks,” in which they prompt Gemini thousands of times as a means of learning how it works to bolster their own models. 

Though Google didn’t call out any specific group in its report, the company said it “observed and mitigated frequent model extraction attacks from private sector entities all over the world and researchers seeking to clone proprietary logic.”
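The mechanics behind these “distillation attacks” are the same as ordinary knowledge distillation: query a teacher model at scale, collect its soft outputs, and train a smaller student to imitate them. As a minimal sketch, with a toy linear “teacher” standing in for an API-accessible frontier model (the teacher’s weights, the transfer set, and the temperature are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical "teacher": a fixed linear classifier we can only query,
# standing in for a frontier model behind an API.
W_teacher = rng.normal(size=(4, 3))
T = 2.0  # temperature: softened outputs expose more of the teacher's behavior

def teacher_predict(X):
    return softmax(X @ W_teacher / T)

# Build a transfer set by querying the teacher many times
# (the "prompt it thousands of times" step from the report).
X = rng.normal(size=(2000, 4))
soft_labels = teacher_predict(X)

# Train the student to match the teacher's distributions:
# gradient descent on soft-label cross-entropy.
W_student = np.zeros((4, 3))
lr = 2.0
for _ in range(1000):
    P = softmax(X @ W_student / T)
    grad = X.T @ (P - soft_labels) / len(X)  # softmax cross-entropy gradient
    W_student -= lr * grad

# The student now approximates the teacher's decisions
# without ever seeing its weights.
agreement = np.mean(
    softmax(X @ W_student).argmax(1) == soft_labels.argmax(1)
)
```

The “attack” framing is simply this loop run against someone else’s model: the queries are the cost, and the resulting student replicates the teacher’s capabilities but not its safety guardrails, which were never part of the outputs being imitated.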

Despite the risks Chinese open-source models might present, their combination of low cost and strong performance is irresistible. DeepSeek and Alibaba’s Qwen models have raked in hundreds of millions of downloads globally and are even attracting the attention of Silicon Valley AI firms, which are using the Chinese tech to build their products. However, if Google and OpenAI are on the mark, these models might not be so different from the proprietary offerings of US firms — just without the essential safety guardrails that set them apart.

Nat Rubio-Licht

LINKS

  • Experian in ChatGPT: The Experian Insurance Marketplace app brings the company’s trusted insurance comparison platform directly into ChatGPT.

  • Google Gemini: Gemini 3 Deep Think got an upgrade that allows users to turn a sketch into a 3D-printable reality.

  • OpenAI’s GPT-5.3: The company launched a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex meant for real-time coding.

  • GLM-5: Z.ai released a new model this week that is proficient in agentic tasks, even beating leading models from Anthropic, Google and OpenAI across benchmarks.

(sponsored)

GAMES

Which image is real?


A QUICK POLL BEFORE YOU GO

If Apple dramatically improves Siri, would you switch to it as your primary chatbot?


The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“It looks more realistic and less enhanced. E.g., the eggs in [the other image] are too perfect with the right amount of yolk. The colours in [the other option] "pop" a little too much (e.g., the radishes). [This option] looks like how the dish would look if I prepared and plated it.”

“Out-of-focus foreground and background indicate that the food was the primary focus.”

“[This option] was a little messy and [the other option] was too perfect.”

“I chose [this option] as real because of the unrelated background image.”

“It was a guess based on the faded background item.”

“The AI did a good job of being messy, I like that.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.