Anthropic doesn’t trust the Pentagon, and neither should you

Techdirt’s Mike Masnick on the history of the NSA and mass surveillance in America, and why Anthropic’s fight with the Pentagon should worry us.

Nilay Patel
Robert Hart
Anthropic upgrades Claude’s spreadsheet and slide deck skills.

Claude can now work across Excel and PowerPoint, so you don’t have to keep switching tabs or re-explaining datasets at every step. Anthropic describes it as Claude “carrying the conversation across apps without losing track of what’s happening in either.”

Anthropic is launching a new think tank amid Pentagon blacklist fight

Co-founder Jack Clark, who will lead the new Anthropic Institute, said he had “no concerns” about research funding.

Hayden Field
Robert Hart
Anthropic’s latest Claude Code update is designed to find bugs for you.

The multi-agent tool, called Code Review, should catch “bugs human reviewers often miss,” Anthropic said. Agents run in parallel and deliver a high-level overview, plus in-line comments for individual issues.

Code Review is available in research preview for Enterprise and Teams customers.

Jess Weatherbed
Microsoft is bringing Claude Cowork to Copilot.

The Cowork integration was built in close collaboration with Anthropic and aims to help Copilot perform “long-running, multi-step tasks,” according to Microsoft’s announcement. The feature is in testing and will be available to preview later this month through Microsoft’s Frontier program.

Thomas Ricker
Anthropic usage is booming despite “supply-chain risk.”

The designation from the US Department of War — which is busy disrupting actual supply chains and human life in several countries — is having the opposite effect: it’s driving up demand for Claude, which has been breaking daily signup records since early last week in every country where it’s available.

AppFigures data also shows it topping App Store charts for free and AI apps in dozens of countries, including the US, Canada, and much of Europe.

Tina Nguyen
Anthropic responds to the Pentagon.

In a blog post, CEO Dario Amodei confirmed reports that the Defense Department had sent the company a letter formally designating it a supply-chain risk, and said Anthropic planned to challenge the designation in court. He also clarified how it would currently affect Claude users:

The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation. With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.

Thomas Ricker
Anthropic CEO says refusal to pander to Trump caused Pentagon blowup.

In a scathing 1,600-word memo sent to employees on Friday, CEO Dario Amodei suggested Anthropic’s relationship with the government soured because, unlike OpenAI or its executives, “we haven’t donated to Trump” and “we haven’t given dictator-style praise to Trump.”

The leaked remarks could complicate Amodei’s last-ditch efforts to salvage the company’s relationship with the US military and prevent it from being iced out of defense work.

Stevie Bonifield
Defense contractors are already backing off on Claude.

Companies that do business with the US military are pivoting away from Anthropic’s AI after Defense Secretary Pete Hegseth announced he was designating it a “supply chain risk” last week, CNBC reports. While Anthropic can still challenge the designation in court, defense companies say they’re abandoning Claude preemptively “out of an abundance of caution.”

Hayden Field
Sam Altman said he planned to add two sentences to OpenAI’s agreement with the Pentagon.

The OpenAI CEO laid out some updated wording he hoped would address people’s concerns about mass domestic surveillance, though the new language still included the phrase “consistent with applicable laws.” Altman also repeated his stance from over the weekend that Anthropic should not be designated a supply chain risk.

Sam Altman’s post [X (formerly Twitter)]

How OpenAI caved to the Pentagon on AI surveillance

The law doesn’t say what Sam Altman claims it does.

Hayden Field
Terrence O'Brien
The US used Anthropic AI for strikes in Iran despite ban.

On Friday, Donald Trump announced a ban on the federal government’s use of Claude. He later walked back his demand that agencies “IMMEDIATELY CEASE” using it, saying instead there would be a six-month phaseout. Part of the reason may be that planning for Saturday’s strikes against Iran was already underway and relied on Claude for intelligence assessments and target identification. According to the Wall Street Journal:

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.

Terrence O'Brien
A former Trump advisor calls the fight with Anthropic “attempted corporate murder.”

Dean Ball, who worked as a senior AI policy advisor, said on X that designating Anthropic as a “supply chain risk” or threatening to invoke the Defense Production Act could have a chilling effect on the entire industry. Alan Rozenshtein, a former DOJ official specializing in technology law, told Politico this could be the first step toward partial nationalization of the AI industry.

Hayden Field
OpenAI reached a new agreement with the Pentagon.

CEO Sam Altman wrote on X that the agreement allowed the US military to “deploy our models in their classified network.” He said the agreement reflects OpenAI’s desire for prohibitions on domestic mass surveillance and “human responsibility for the use of force, including for autonomous weapon systems.” Altman also wrote that OpenAI is “asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.” This follows a rollercoaster week of negotiations between Anthropic and the Pentagon.

Sam Altman’s post [X (formerly Twitter)]

Trump orders federal agencies to drop Anthropic’s AI
Hayden Field and Richard Lawler
Hayden Field
Even Ilya Sutskever weighed in on the Anthropic-Pentagon situation.

The OpenAI co-founder, who left after CEO Sam Altman’s ouster and reinstatement and then started his own AI startup called Safe Superintelligence, posted on X:

It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance.

In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for fierce competitors to put their differences aside. Good to see that happen today.

We don’t have to have unsupervised killer robots

AI companies could stand together to draw red lines on military AI — why aren’t they?

Hayden Field
Tina Nguyen
The Pentagon is making moves.

In what appears to be preparation to fully blacklist Anthropic for not budging on its acceptable use policies, the Defense Department has begun reaching out to contractors to assess their exposure to the AI company’s products. Boeing and Lockheed Martin, two of the biggest companies in the defense space, have reportedly been contacted.

Does Anthropic think Claude is alive? Define ‘alive’

Anthropic calls its chatbot ‘a new kind of entity’ that might be conscious — and it’s opening a huge can of worms.

Hayden Field
Inside Anthropic’s existential negotiations with the Pentagon

It’s more than just a $200 million military contract at stake.

Tina Nguyen and Hayden Field
Money no longer matters to AI’s top talent

The AI industry is rife with defections, FOMO, and radical mission statements. It’s about to get supercharged.

Nilay Patel
Stevie Bonifield
Anthropic’s new Sonnet 4.6 model is better at using computers.

On Tuesday, Anthropic launched the latest version of Claude Sonnet, which it says “approaches Opus-level intelligence,” with improvements in coding and computer use on tasks like navigating spreadsheets and filling out web forms. Sonnet 4.6 is replacing Sonnet 4.5 as the default model for free and Pro Claude users.

Charles Pulliam-Moore
Is “apocaloptimist” the new word for AI hype man?

Focus Features is billing The AI Doc: Or How I Became An Apocaloptimist as an “eye-opening” exploration of “the most powerful technology humanity has ever created.” You’d think the doc might feature some critical voices, but its new trailer makes it feel like it might be one big commercial. The film premieres on March 27th.

Dominic Preston
A marketing opportunity.

As Axios reports that the Department of Defense and / or War is preparing to brand Anthropic a “supply chain risk,” one commenter wonders if the Claude company might revisit its Super Bowl ad to turn that to its advantage.

hodgdon:

“Extrajudicial killings are coming to AI. But not to Claude.”

Get the day’s best comment and more in my free newsletter, The Verge Daily.

Jay Peters
The Department of Defense may designate Anthropic as a “supply chain risk.”

Should Anthropic get the designation, “anyone who wants to do business with the U.S. military has to cut ties with the company,” Axios says. The two sides have apparently been negotiating for months over how the military can use Anthropic’s AI tools.

Jess Weatherbed
Claude gets more free features to capitalize on ChatGPT ads.

After already dunking on OpenAI’s plan to bring ads to ChatGPT, Anthropic is bolstering its own chatbot to attract anyone jumping ship. Free Claude users can now create and edit files (including spreadsheets, presentations, and PDFs), access Skills for specialized tasks, connect to third-party services, and more — features previously limited to paying subscribers.

Richard Lawler
Anthropic’s Super Bowl ad has a change that made it less directly about OpenAI and ChatGPT.

The round of Big Game ads Anthropic previewed earlier this week set Sam Altman off; he called them “clearly dishonest.”

Now, while the original ad says, “Ads are coming to AI. But not to Claude,” nodding to OpenAI’s plans, the one that aired replaced it with a new tagline: “There is a time and place for ads. Your conversations with AI should not be one of them.”

Screenshot from Anthropic ad saying “ads are coming to AI. But not to Claude.”
The closing message of Anthropic’s original Super Bowl ad, which differs from the version that aired on Sunday.
Image: Anthropic