AI leads people to work more, not less

Welcome back. AI isn’t just reshaping jobs; it’s quietly changing how much we work. A Harvard study shows AI adopters often end up with fuller plates, not lighter ones. Meanwhile, a former Anthropic employee explains why he left the AGI race to focus on applied AI and agents at Google, arguing that distribution and trust matter more than racing to superintelligence. And Runway’s $315M fundraising round signals where investors think AI goes next: world models that understand physics, video, and reality itself. As we've been covering on The Deep View, the post-LLM era is emerging, and it looks a lot more embodied. —Jason Hiner
1. Study: AI leads people to work more, not less
2. Google AI exec bets on getting AI to the masses
3. Runway takes $315M leap into the world model race
RESEARCH
Study: AI leads people to work more, not less
Though many workers worry that AI is going to take their jobs, evidence suggests that it’s actually giving AI adopters more work, not less.
In an eight-month study of roughly 200 workers at a US-based tech company, Harvard University researchers found that AI tools consistently intensified work rather than reducing it. Because the tools let workers complete tasks faster, employees took on a broader scope of work, which ultimately extended their hours.
Though the company offered its employees enterprise subscriptions to AI tools, the researchers noted that workers were not required to use them. Rather, they did so of their own accord.
The problem, however, is that once the excitement over these shiny new AI tools wore off, workers found that their workload had increased without them noticing. The researchers identified three main ways that these workloads intensified:
AI made tasks that were once out of reach feel achievable to new audiences; coding and engineering work, for example, now feels doable for non-technical employees.
Reduced friction in starting and completing tasks also blurred the boundaries between work and non-work.
Finally, the tools made multitasking easier, with the tech treated as a “partner” that could handle more tasks in the background. The consequence, however, was a heavier overall task load.
Harvard’s study joins a litany of conflicting research on how AI will change the way we work. While some say AI can already automate thousands of hours of work and make certain jobs obsolete, others argue it will create entirely new jobs. This study lands somewhere in the middle: Creating new work within the jobs we already have, while quietly piling on more right under our noses.

Since this study focuses on an American company, its findings may say as much about US work culture as about AI itself. However, it highlights the downside of unlocking more productivity: When AI enables people to do more, people often feel as though they have to do more, too. This comes as AI-powered displacement is also creating a constant undercurrent of anxiety among workers. Though all new tech comes with a learning curve, AI’s learning curve could involve learning to do less.
TOGETHER WITH AIRIA
Reinvent Your AI Journey with Airia
You want every employee—regardless of skill level—to confidently embrace AI, but that doesn’t mean sacrificing governance or innovation speed.
Airia is the enterprise AI platform built to unify innovation and security while optimizing your AI ecosystem.
Empower all employees with no-code, low-code, or pro-code tools for quicker AI adoption and productivity gains.
Test prompts, LLMs, and agent variants in safe, production-like environments to reduce development cycles.
Implement automated threat detection and governance tools to ensure compliance while eliminating risks.
Manage agents, data flows, and security protocols from a single hub for seamless control.
Future-proof your enterprise with AI built for complex and regulated environments.
BIG TECH
Google AI exec bets on getting AI to the masses

After working on the launch of Claude 2 through Claude 4, Michael Gerstenhaber left Anthropic to join Google five months ago, where he's now focused on bringing AI to more people and organizations.
In fact, that's why Gerstenhaber left Anthropic. While he believed in the value of the technology and the importance of sharing it with the world, he felt Anthropic's focus on reaching AGI was incongruent with that goal.
“So I left because I accidentally got AGI pilled along the way. Dario [Amodei, Anthropic’s CEO] has a very specific effect on people, and I believe that the technology is one of the biggest of our time, probably the biggest,” said Gerstenhaber. “Distributing the technology has become, if not a moral endeavor, a very exciting endeavor for me because of its importance.”
Like OpenAI, Anthropic is racing toward AGI, but the two companies frame their missions differently. Amodei has spoken out about the risks of AGI, including the displacement of entry-level white-collar jobs. At the same time, OpenAI explicitly centers AGI as its goal. We reached out to Anthropic for comment on Gerstenhaber's assessment, but the company did not have a response.
At Google, Gerstenhaber serves as Vice President of Product for Vertex AI and Agents, the company's platform for building and deploying AI in the enterprise. The role puts him at the center of Google's AI cloud infrastructure, everything from inference APIs to agentic capabilities, where he works directly with customers to find the right solutions.
“At Google, we do have that ability to distribute. We're the only Cloud that's vertically integrated among the power plants with the data centers, with the TPUs in the data centers, with access to the smartest models in the world, whether it's ours or my former colleagues, and the platform itself with customers on the cloud,” said Gerstenhaber.
He has already seen AI drive meaningful workflow transformations across companies, including through agentic solutions. For instance, he cited a large pharmaceutical company that delegated statistical analysis and coding of clinical data to agents. Another example was Thomson Reuters’s development of agentic products, such as CoCounsel and Westlaw, for legal research.
He acknowledged AI agents haven't reached their full expected value, not because the technology isn't ready, but because of trust issues. Organizations lack clear ways to define scopes, struggle with accountability when AI fails, and can't easily evaluate whether workflows are performing correctly. His advice for implementation? Take bite-sized steps.
“People should find the scope over which they don't need a human at all, and that might be a very narrow scope, not a very ambitious scope,” said Gerstenhaber, “and then you'll widen the aperture from there.”
TOGETHER WITH YOU.COM
Successful AI transformation starts with deeply understanding your organization’s most critical use cases. This practical guide from You.com walks through a proven framework to identify, prioritize, and document high-value AI opportunities.
In this AI Use Case Discovery Guide, you’ll learn how to:
Map internal workflows and customer journeys to pinpoint where AI can drive measurable ROI
Ask the right questions when it comes to AI use cases
Align cross-functional teams and stakeholders for a unified, scalable approach
STARTUPS
Runway takes $315M leap into the world model race
The AI industry just got another indicator that the future lies beyond words alone.
On Tuesday, video AI firm Runway announced a $315 million Series E funding round. The funding slingshots the startup to a $5.3 billion post-money valuation, a source familiar with the matter told The Deep View. The round was led by General Atlantic, and included participation from investors such as Nvidia, Fidelity, Adobe and AMD.
New York-based Runway specializes in generative video, with its core offering being its Gen series of video models. In December, the company released Gen-4.5, the most recent iteration of the model, which takes text and image inputs to produce realistic, cinematic videos with improved motion and prompt adherence over previous versions.
However, with this funding round, Runway has its eyes on a new prize: World models.
In the announcement, Runway said it intends to use the funding to “pre-train the next generation of world models and bring them to new products and industries.” The company called world models the “most transformative technology of our time.”
It follows a December blog post from the company entitled “Universal World Simulator,” detailing its vision to train video models at such a large scale that they become world models. “To predict the next frame, a video model must learn how the world works,” Runway wrote.
Runway’s interest mirrors a broader industry shift towards AI that can work with more than just text. World Labs and AMI Labs, founded by AI godparents Fei-Fei Li and Yann LeCun, are each in talks for funding at multibillion-dollar valuations to build their models. Meanwhile, Google’s Genie world model is already being put to use by Waymo to train for rare encounters.
The industry is betting on world models, which can perceive and act on the world, as physical applications of AI become more and more tangible. These models could help robotics systems understand physics, which is crucial to scaling physical AI safely, Anastasis Germanidis, co-founder and CTO of Runway, told The Deep View.
“If you take any self-driving company’s data set, the vast, vast majority is going to be non-accidents,” Germanidis said. “But the place where their models need to perform the best … is exactly in those moments that you don't have any data. Being able to generate data for those use cases … they become a lot better at reasoning through those scenarios.”

ChatGPT’s rapid rise in popularity caught many by surprise in late 2022, vaulting large language model providers from a tech industry fascination to household brands with Super Bowl airtime. But more than three years later, many are asking what life looks like after the LLM. Attention is turning to visual AI, whether that be video generation, computer vision tech or world models, the lofty technology that some are calling AI’s next frontier. Runway’s eye-popping funding round is the latest signal that investors don’t want to get caught unawares.
LINKS

OpenAI won’t use the name "io" for its hardware due to a trademark lawsuit
AI security firm Reco raises $30 million Series B
Five of xAI’s twelve founding members have left the company
Gary Shapiro steps down as CEO of Consumer Technology Association
Boston Dynamics CEO Robert Playter stepping down after more than 30 years
Small business AI voice agent Newo raises $25 million

Deep Research in ChatGPT: The research tool is now powered by GPT-5.2, OpenAI’s latest model. Other updates include support for connected apps and the option to restrict web searches to trusted sites only.
ElevenLabs Audiobooks: Available in ElevenSuite, this tool is meant to allow users to create, refine, and publish audiobooks using lifelike AI voices.
Similarweb AI Studio: A conversational AI for enterprise intelligence that, according to the company, fundamentally transforms how organizations access marketing data.
ByteDance Seedance 2.0: The company launched a pre-release version of its new AI video model, which is going viral for the quality of its cinematic video.

xAI: Member of Technical Staff, World Model
Peregrine: Lead of Applied AI
Tesla: Embedded Security Engineer, Vehicle Software
Samsung Research America: Principal Scientist, Language & Personal Intelligence
A QUICK POLL BEFORE YOU GO
What’s going to drive the most business value in 2026?
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“[This image] looked noisy, and thus like a real photo. [The other image]’s lighting seemed off.”
“[The other image] had detail, shadowing and clarity. [This image] is fuzzy with repetitive detail that is easy to replicate.”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
