- The Deep View
The hidden management cost of AI agents

Welcome back. Hollywood’s stance against AI is starting to splinter, as the Oscars and Golden Globes draw different lines around what counts as human creativity in filmmaking. Meanwhile, Adobe Enterprise CMO Rachel Thornton spoke with Sabrina Ortiz about how AI is reshaping marketing discovery and workflows, but not the core principles of good storytelling and customer connection. And inside enterprises, a new management burden is emerging as AI agents move from experiments into critical infrastructure. Managers now have to monitor agent behavior, token costs, security, and quality control alongside human teams. This is putting more pressure than ever on overworked middle managers. —Jason Hiner
1. The hidden management cost of AI agents
2. AI shifts the funnel, not the fundamentals
3. Hollywood's AI opposition splinters
ENTERPRISE
The hidden management cost of AI agents
With agents moving into the workforce, the stakes for managers have never been higher.
ServiceNow is going all-in on AI agents, but that doesn't mean they should be treated as digital staff members. Instead, Jacqui Canney, ServiceNow’s chief people and AI enablement officer, told The Deep View she believes managers will stop thinking about agents as coworkers and start seeing them as embedded parts of new workflows.
“I actually hope our managers aren’t thinking, ‘I’ve got five agents on my team,’” said Canney.
On Tuesday, the enterprise AI giant announced updates to its AI control tower, a command center for managing AI deployments, that now include observing agent behavior and tracking ROI. It also expanded its autonomous workforce offering, adding new “AI specialists” that complete workflows in IT, customer relationship management, and security and risk from start to finish. Still, that shift is creating responsibilities tied to oversight, governance, and cost control.
ServiceNow says its platform gives managers visibility into how agents operate. But Canney said that oversight also increases the pressure on managers to prevent failures as AI systems scale. For some managers, that pressure is already showing up in day-to-day operations.
“[Managers] had a hard job before," said Canney. "Now, they have a harder job.”
When the City of Raleigh, North Carolina, a ServiceNow customer, deployed an internal IT support desk agent about a month ago, the help desk supervisor saw their workload spike, Mark Wittenburg, the city’s CIO, told The Deep View. Alongside managing staff, the supervisor had to train the agent, monitor its behavior, and quality-check its responses.
“That's been a transition for the supervisor,” Wittenburg said.
The rise of AI agents is also creating new financial and security concerns for managers. Jayney Howson, ServiceNow’s Chief Learning Officer, told The Deep View that tracking token usage will become increasingly important for managers as employees and agents work closely together across organizations.
That combined usage could rack up AI bills and increase the risk of data leakage. If managers aren’t prepared, she says, they will be left cleaning up the mess.
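For teams starting to quantify that risk, the bookkeeping can be as simple as aggregating token counts per agent against per-model rates. A minimal sketch of that idea; the model names and prices below are hypothetical placeholders, not real provider rates:

```python
from collections import defaultdict

# Hypothetical per-1K-token prices; real rates vary by model and provider.
PRICE_PER_1K = {"gpt-class": 0.01, "small-model": 0.002}

def summarize_costs(usage_events):
    """Aggregate token spend per agent from (agent, model, tokens) events."""
    totals = defaultdict(float)
    for agent, model, tokens in usage_events:
        totals[agent] += tokens / 1000 * PRICE_PER_1K[model]
    return dict(totals)

# Example: a help desk agent and a CRM agent logging usage over a day.
events = [
    ("helpdesk", "gpt-class", 5000),
    ("helpdesk", "small-model", 10000),
    ("crm", "gpt-class", 2000),
]
print(summarize_costs(events))  # {'helpdesk': 0.07, 'crm': 0.02}
```

In practice this kind of rollup is what platforms like ServiceNow's control tower surface automatically, but the underlying arithmetic is this simple.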

Managers are facing increasing pressure in their jobs. Tech industry layoffs are squeezing middle-management roles, and those who remain are taking on more tasks and responsibilities. In addition to managing human employees, managers are now responsible for overseeing AI agents and ensuring they operate as intended. In other words, managers are being stretched thin. With AI adding to their workload, they will need to adapt to this new agentic reality without losing control. Defining agents as workflows and setting limits and oversight will be a key part of the equation.
TOGETHER WITH PLAID
Who’s afraid of AI in finance?
Not consumers. Our new consumer survey report breaks down what’s next for intelligent finance. And we’re hearing that the future is bright. In fact:
55% of people have used AI for money tasks in the last 12 months
86% of AI users say it helps them better understand money
50% of people say managing money without AI will soon feel outdated
Learn what your customers actually want out of AI and how to meet their expectations with insights from our new report, The state of intelligent finance.
WORKPLACE
AI shifts the funnel, not the fundamentals
Reaching the right audience for your product, business, or brand remains top of mind, even if the full impact of AI is still emerging.
Rachel Thornton, CMO of Adobe Enterprise and a marketer whose job is marketing to marketers, spends a lot of time thinking about this topic. I sat down with her at Adobe Summit to learn how she is approaching marketing at a time when there are deep concerns about LLMs reshaping the way people get information.
The first thing marketers need to think about: how to actually show up as AI changes brand discovery.
“If you go back a couple of years, you probably would have gone to Google when you would search for something, and that's still valid, but increasingly, people are going to places like ChatGPT or Perplexity,” Thornton told The Deep View.
Thornton’s advice on navigating this landscape: think of the agents.
“If you think about, for example, brand visibility and showing up in an LLM, that comes from not just having a great experience when customers come, but also [when] agents [come],” said Thornton. “So you almost have to think about it as, basically, you have, as a marketer, a new audience.”
The second bucket she encourages marketers to consider is how AI can help marketing teams move faster, including creating content, understanding customer data, and delivering more personalized experiences.
In particular, Thornton sees value in how AI can help marketers with their orchestration needs, ensuring campaigns work well across different channels, with agents acting as a supporting cast to help bring those customer journeys to life.
For more of Thornton’s advice and the full piece, click here.

Since the generative AI revolution took off, people's biggest concerns have centered on how it would impact creative work. There are still major problems and considerations worth examining, such as the use of artists' work in training data, the displacement of professionals by this technology, and the question of what creative work even means when a bot is producing it rather than human imagination. However, one concern that Thornton tackles particularly well is how AI can coexist with creativity in a marketing context, especially by speeding up workflows and allowing creatives to devote more time to the most important aspects of the creative work itself.
Disclosure: Sabrina Ortiz's travel to Adobe Summit was paid by Adobe. The Deep View's coverage is editorially independent from the companies we cover.
IN PARTNERSHIP WITH LAMBDA
How to push Model FLOPS Utilization past 60%
Most large-scale training runs operate at 35-45% Model FLOPS Utilization (MFU). You're paying for more than twice the compute you're using.
Lambda's engineers benchmarked Llama 3.1 models from 8B to 405B on NVIDIA Blackwell GPUs and traced efficiency loss to its root causes:
Memory overhead
Parallelism strategy
Serialized communication
The result: a reproducible framework that pushed MFU past 60% (25%+ improvement vs. industry benchmark) with no changes to model architecture.
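For context, MFU is simply achieved training throughput divided by the hardware's theoretical peak. A rough sketch using the common ~6 FLOPs-per-parameter-per-token approximation for transformer training; all of the numbers below are illustrative, not Lambda's benchmark figures:

```python
def mfu(params, tokens_per_step, step_time_s, peak_flops_per_s):
    """Model FLOPS Utilization: achieved FLOPs/s over hardware peak.

    Uses the standard ~6 * params * tokens approximation for the
    FLOPs in one training step (forward + backward pass).
    """
    achieved_flops_per_s = 6 * params * tokens_per_step / step_time_s
    return achieved_flops_per_s / peak_flops_per_s

# Illustrative: an 8B-parameter model, 4M tokens per 10-second step,
# on a cluster with a hypothetical 6.4e16 FLOPs/s aggregate peak.
print(mfu(8e9, 4e6, 10.0, 6.4e16))  # ≈ 0.30 → paying for ~3.3x the compute used
```

At 35-45% MFU, the inverse (1/0.35 to 1/0.45) is roughly 2.2-2.9x, which is where the "more than twice the compute" figure comes from.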
CULTURE
Hollywood's stance against AI is fracturing
AI is growing increasingly capable of making realistic images and videos. Hollywood is still deciding what to do about it.
Last week, two major entertainment awards institutions, the Academy of Motion Picture Arts and Sciences and the Golden Globes, made somewhat contrasting decisions about how to handle the onset of AI in the movie industry. It signals that a schism may be forming in an industry that had once largely rallied against the use of generative AI technologies.
Here’s what could be eligible come awards season:
The Academy of Motion Picture Arts and Sciences, which hosts the Oscars, last week updated its rules to ban AI-generated acting and writing from awards eligibility. Specifically, the Academy notes that acting must be "demonstrably performed by humans" and writing "must be human-authored" in order to be nominated. The Academy called this a “substantive” change from previous rules. However, outside of acting and writing, the use of AI tools will “neither help nor harm the chances of achieving a nomination.”
The Golden Globes took a more lenient approach. On Friday, the organization said that AI does not automatically disqualify a movie or TV show from awards contention. However, in order to be considered, “human creative direction, artistic judgment, and authorship remain primary throughout the production process.” AI may be used as part of the production process, but cannot replace the “core creative contributions” of human talent. For instance, AI use doesn’t render an actor’s performance ineligible, so long as the tools were used to enhance the performance rather than replace it, and were authorized by the performer.
With these decisions, both institutions are trying to draw lines before it’s too late. Much of the industry, however, has fought AI thus far: The biggest talent agencies, for instance, largely opted their clients out of having their likenesses used in OpenAI’s Sora (RIP).
And in January, a coalition of actors, artists and performers launched a petition called “Stealing isn’t Innovation” via the Human Artistry Campaign, protesting the “illegal mass harvesting of copyrighted works.” The hundreds of signatories include Scarlett Johansson, Cate Blanchett, REM, and Jodi Picoult.
However, other industry figures have warmed up to figuring out where AI fits in. Charles Rivkin, head of the Motion Picture Association, railed against OpenAI's Sora in October over copyright infringement concerns, but said in a speech at CinemaCon last month that AI can “bolster the art of storytelling” in the hands of creators.

AI adds a layer of complexity to an industry that’s based entirely on the act of creation. While a lot of the content that AI generates can be chalked up to slop, it’s getting harder and harder to tell authentic from synthetic. There are some measures in place to sift out real from fake content, such as the C2PA standard, but the proliferation of this content may be outpacing our ability to track it. It raises the question: Will consumers eventually be unable to tell the difference between something made by humans and something made by machines? And even more concerning: Will they care?
LINKS

Figure AI debuts new robot model that enables two robots to make a bed together
Google DeepMind hires Alex Imas as director of AGI economics
Akamai reportedly strikes seven-year, $1.8 billion cloud deal with Anthropic
Major hedge fund cuts $8 billion stake in Microsoft
OpenAI, Anthropic and other labs meet with religious leaders about AI ethics
Apple reaches deal with Intel to start manufacturing in-house chips

Codex for Chrome: OpenAI’s coding tool can now create and use its own tabs and tab groups while you simultaneously use your browser.
Perplexity Agent Skills: Perplexity published an internal manual for how it builds agents.
Gemini Nano: Google may have installed it locally on your device without you knowing it.
Hermes Agent: Passed OpenClaw to take the No. 1 spot on OpenRouter's token rankings.

Anthropic: STEM Fellow
Google DeepMind: Frontier AI Research Scientist
Baseten: Post-Training Research Scientist
Booz Allen: Defense Mission Expert, Senior
POLL RESULTS
Do you think that AI is evolving faster than our ability to contain the risks?
Yes (77%)
Somewhat (15%)
No (5%)
Other (3%)
The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“Scaffolding doesn't seem as complex as [the other image].”
“The angles on [This image] are unrealistic.”


If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.













