Why Anthropic defied the Pentagon

Welcome back. If you want a high-paying desk job in 2026, there’s little hope of avoiding AI. New data from Ladders shows that 50% of roles paying $100,000 and above now require AI skills, up from just 20% in 2021, signaling that AI literacy has shifted from nice-to-have to core competency. Google’s Nano Banana 2 tackles one of AI image generation’s biggest flaws, text rendering, potentially unlocking enterprise design use cases in the process. And in Washington, Anthropic is refusing Pentagon pressure to loosen its AI safeguards, betting that long-term trust with enterprises matters more than short-term revenue from government contracts.
Jason Hiner

IN TODAY’S NEWSLETTER

1. Why Anthropic defied the Pentagon

2. Google’s Nano Banana 2 solves a key AI flaw

3. For $100K jobs, 50% now require AI skills

POLICY

Anthropic defies Pentagon over AI guardrails

Amid Pentagon pressure to loosen its safeguards, Anthropic continues to stand firm.

In a statement on Thursday afternoon, Anthropic CEO Dario Amodei made it clear that the company cannot accede to the Department of War’s demand to roll back its safeguards that prevent its AI models from being used in two key areas: mass surveillance of U.S. citizens and fully autonomous weapons. 

Amodei noted that AI’s use in mass surveillance posed “serious, novel risks to our fundamental liberties.” And while the tech may someday be helpful in fully autonomous weaponry, the guardrails simply don’t exist today to deploy this safely.  

“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said in his statement. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

Amodei said that the company’s Claude models are widely deployed throughout the defense and intelligence community, including on the government’s classified networks, in national laboratories, and in mission-critical applications such as intelligence analysis, modeling and simulation, operational planning, and cybersecurity operations. Thus far, its safeguards haven’t presented an issue in these cases, he said.

Though Anthropic’s “strong preference” is to continue supporting the military, it will only do so with its safeguards in place. Otherwise, the company cannot “in good conscience” submit to the Pentagon’s requests and continue the relationship.

Amodei’s response is the latest move in the fight between the company and the Pentagon. Earlier this week, the department took its first steps toward blacklisting Anthropic by labeling it a “supply chain risk,” a designation generally reserved for companies from adversarial countries.

  • The unprecedented move would not only threaten Anthropic’s contract with the military but also force all defense vendors to cut ties with Anthropic. 

  • And after his meeting with Amodei, Secretary of War Pete Hegseth contradicted himself by threatening to invoke the Defense Production Act, which would force Anthropic to tailor its models to military demands regardless.

  • Additionally, the Pentagon struck a deal with xAI on Monday to use its Grok models in classified systems, including weapons development and battlefield operations.

Policymakers, however, have started to warn that the sparring match between Anthropic and the Pentagon will only sour future relationships between the government and Silicon Valley AI firms. Dean Ball, former AI adviser to the Trump administration, called Hegseth’s contradictory threats “incoherent.”

Standing firm against the Pentagon’s threats was Anthropic’s only option, given that the company has built its reputation on AI safety and on deploying AI only under guidelines that ensure it does no harm. Though recent changes to its Responsible Scaling Policy have tested those moral and ethical standards, backing down would have been a sharp about-face, betraying its core principles. And though the fallout could cost Anthropic a large chunk of its revenue from government agencies and vendors, there may be a silver lining: Gaining further trust with its primary audience of risk-averse but AI-hungry enterprises.

Nat Rubio-Licht

TOGETHER WITH PIGMENT

If you’re building with AI, listen to this

AI is reshaping the way tech leaders build products, make decisions, and operate at scale. But today, the guidelines for exactly how that’s done are still being written.

The Perspectives series brings you inside the rooms where these playbooks are taking shape, whether you’re scaling a product, leading AI transformation, or building the systems that power modern enterprises.

Tune in on your favorite podcast platform and gain actionable insights from founders and leaders at Sierra, Profound, Intercom, Datadog, ElevenLabs and more.

PRODUCTS

Google’s Nano Banana 2 solves a key AI flaw

Google has once again raised the bar on AI image generation. 

On Thursday, the company unveiled Nano Banana 2, the latest iteration of its image model, offering improved world knowledge, image quality, and reasoning at faster speeds than its predecessor. Arguably, the biggest upgrade is how it handles text.

Nano Banana 2 is powered by real-time information and images gathered from web search. In a post on X, Google noted that users can create images with “real-world accuracy,” including improved lighting, textures and details. 

“This deep understanding also helps you create infographics, turn notes into diagrams and generate data visualizations,” Google said in its announcement. 

Of all of the upgrades that Nano Banana 2 touts, two in particular stick out: Creative control and text rendering. 

  • Accurate text rendering is something past image generators have largely struggled with, often making garbled text one of the easiest ways to flag that an image was generated using AI. Nano Banana 2 renders text far more reliably, and it can also translate localized text within an image between languages.

  • The model also offers more creative control, including better instruction following, subject and character consistency and production-ready specs with resolutions from 512px to 4K. 

  • These capabilities open the door for Google’s image model to be far more valuable for enterprise use cases, such as graphic design or marketing, where it can now be used to create printable materials. 

Nano Banana 2 is currently available across the Google and Gemini suite, including the Gemini app, Search, AI Studio, Google Cloud and the Google Ads platform.

Though it’s easier to make the case for embedding language models or agents into enterprise processes, image generation models are a harder sell, with inconsistency and poor text rendering keeping marketing departments from using them. Nano Banana 2, however, might break the mold, allowing creatives and marketers to render billboards, printed programs, or entire campaigns with text that looks far more polished and professional. Given that Google powers the model with web data, however, copyright may remain a thorn in its side. As infringement cases against AI firms persist, enterprises might want to pause before taking on the legal risk, even if the capabilities of Google’s new model seem enticing.

Nat Rubio-Licht

TOGETHER WITH AIRIA

Reinvent Your AI Journey with Airia

You want every employee—regardless of skill level—to confidently embrace AI, but that doesn’t mean sacrificing governance or innovation speed.

Airia is the enterprise AI platform built to unify innovation and security while optimizing your AI ecosystem.

  • Empower all employees with no-code, low-code, or pro-code tools for quicker AI adoption and productivity gains.

  • Test prompts, LLMs, and agent variants in safe, production-like environments to reduce development cycles.

  • Implement automated threat detection and governance tools to ensure compliance while eliminating risks.

  • Manage agents, data flows, and security protocols from a single hub for seamless control.

  • Future-proof your enterprise with AI built for complex and regulated environments.

WORKFORCE

For $100K jobs, 50% now require AI skills

If you want a high-paying job, there aren't many places left to hide from AI.

According to internal data from the U.S.-based job site Ladders, as seen by The Deep View, the number of knowledge worker job listings requiring AI skills has now skyrocketed to nearly half of all roles. 

"We found in our data that about 50% of all high-paying jobs at the $100,000-plus level now include some type of requirement for AI literacy," said Marc Cenedella, CEO of Ladders, in an exclusive interview with The Deep View. 

That 50% with AI requirements is up from 20% in 2021, when most AI requirements focused on machine learning, deep learning, automation and big data. 

Ladders, formerly TheLadders.com, launched in 2003, specializing in white-collar jobs paying $100,000 and above. Today, the site has listings for 1.1 million jobs in the U.S. and Canada and reviews 72 million job listings a year.

Here are more details from the company's internal research on AI in job listings:

  • For executive roles, 45% now require AI skills

  • Across all roles and industries, at least 40% of job listings now contain AI requirements

  • Other specific roles where at least 45-50% of the jobs listed now require AI skills include Data, Finance, Design, Product, Software Engineering, and HR. 

AI is also impacting the process of finding and landing the best jobs, and not always in a good way. Generative AI has made it easy for job seekers to quickly create personalized cover letters and resumes. But it may be secretly torpedoing candidates' chances at the best jobs, and not for the reasons they might think.

"When it comes to writing your resume, AI will give you an exactly average, typical resume, which is not what you want," said Cenedella. "You want one that's going to stand out and help you get the job. So for job seekers, it's confusing, because they'll use [AI] and it'll produce something that is extremely [typical] and reads well, but it's not actually helping them."


The fact that 50% of jobs now require AI skills doesn't tell the whole story. "When you actually read through job postings, you see that it's moved from a familiarity prerequisite,” said Cenedella, “[to where] you're going to be expected to have this knowledge within your purview." In other words, you now need to show that you can put those AI skills to work. And the only way to do that is to actually use the technology, make mistakes with it, and figure out where it does and doesn’t make sense in your daily routines. But with the technology moving so quickly, keeping up with the latest developments is a job in and of itself. So if you've been waiting to get started, now might be the time.

If you want to follow my latest takes on the AI space in real-time, you can find me on X/Twitter at x.com/jasonhiner.

Jason Hiner, Editor-in-Chief

LINKS

  • GPT-realtime-1.5: OpenAI’s voice workflow model, now available in the Realtime API, offering better instruction following, tool calling and multilingual accuracy.

  • Navi: A virtual data analyst by Parawise for your most complex work that’s simple, reliable and scalable. 

  • Aemon: An AI-powered R&D engineer, discovering optimal solutions for problems that go “beyond what human experts can find.” 

  • ShortKit: A video infrastructure platform that lets you turn an app into a TikTok-quality feed.

  • AI/ML Engineer: End-to-end model development, training pipelines and production deployment

  • AI Workflow Designer: Agentic system architecture, LLM orchestration and automation pipelines

  • Data Scientist: Statistical modeling, predictive analytics and data-driven decision systems

  • Cybersecurity Engineer: Threat detection, vulnerability assessment and AI-augmented security systems

(sponsored)

GAMES

Which image is real?


A QUICK POLL BEFORE YOU GO

Should Anthropic have acquiesced to the Pentagon's request to remove safety restrictions?


The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“It would seem obvious [this image] was fake because of how linear the space looks with perfect shapes and lighting. However, there are many real spaces in the world that look mathematically perfect. And [the other image] demonstrates how AI can create asymmetrical and highly varied scenes that seem too real.”

“The shadow formations in [this image] are more realistic.”

“The empty billboards [in this image] seemed really out of place.”

“[This image] lacks texture in walls and looks too perfect.”

“I was hoping only AI would post signs without any writing on them, and I was going to be very mad if [this] image was real. Crisis averted!”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.