Gizmodo reports that Anthropic left publicly accessible files revealing Claude Mythos, a rumored ‘most powerful’ model described as far ahead in cyber capabilities and potentially too dangerous to release. The disclosure reportedly prompted celebration at the DoD and fueled investor chatter about a future IPO, while critics question the company’s governance and security practices.
The U.S. Department of Justice told a federal court that Anthropic’s designation as a supply-chain risk and restrictions on Claude AI for Pentagon use are lawful, arguing the government did not violate Anthropic’s First Amendment rights and that security concerns justify limiting access to DoD systems. Anthropic contends the government overstepped its authority and seeks relief in a lawsuit that could cost the company billions in revenue; a hearing is scheduled as the Defense Department weighs replacing Anthropic’s tools with offerings from Google, OpenAI, and xAI while the case unfolds.
Defense Department CTO Emil Michael said Anthropic’s Claude AI models would pollute the defense supply chain due to baked-in policy preferences, justifying a supply-chain-risk designation that could jeopardize hundreds of millions in contracts. Anthropic has sued the Trump administration to overturn the designation, arguing it is unlawful and harms its business. The designation requires contractors to certify they don’t use Claude; Anthropic has published Claude’s constitution, describing how it shapes the model’s safe and ethical behavior. Despite the blacklist, Claude has been used to support U.S. military operations in Iran, and the DoD says a transition plan is in place to move away from Anthropic, noting the change cannot be done overnight.
An Iranian drone strike on a Kuwait port facility that killed six U.S. service members also produced injuries described as more severe than initially reported, including brain trauma, burns and shrapnel, with at least one case possibly leading to amputation. Survivors recalled smoke and chaos at the Shuaiba port, where the unit reportedly lacked hardened protections. Pentagon figures say about 140 U.S. service members were hurt in the first 10 days of the U.S.-Israel campaign against Iran, with 108 returning to duty and eight still severely injured, and more than 30 hospitalized in the U.S. and overseas. The Defense Department later disputed earlier characterizations of the strike, emphasizing protective measures for troops. The dead include Sgt. Nicole Amor and five others; a seventh service member, Sgt. Benjamin Pennington, was killed in a separate strike in Saudi Arabia.
Anthropic filed a lawsuit against the Department of Defense and other federal agencies after the Trump administration designated the company a “supply chain risk” and ordered agencies to stop using its Claude AI. The suit argues the designation and directive are unlawful, infringe First Amendment rights, and threaten hundreds of millions of dollars in contracts, seeking injunctive relief to protect current and future business. The case underscores a broader clash over AI use in government, with Anthropic asserting it can work with the Pentagon while upholding redlines against mass surveillance and autonomous weapons.
OpenAI’s robotics hardware lead Caitlin Kalinowski has resigned, criticizing the rushed announcement of a Department of Defense deal and the lack of clearly defined guardrails around issues like surveillance and autonomous weapons; OpenAI says there are no plans to replace her and emphasizes the agreement includes safety boundaries amid broader scrutiny of AI governance.
Caitlin Kalinowski, OpenAI's head of robotics, announced her resignation, saying the DoD partnership was rushed and guardrails weren’t defined. OpenAI emphasized that the Pentagon deal aims for responsible national-security AI with red lines—no domestic surveillance and no autonomous weapons—and said it will continue governance discussions as industry debate over safeguards continues.
The Pentagon has designated Anthropic a supply chain risk—the first such label for a US company—restricting its use by the DoD; Anthropic plans to sue, arguing the designation is legally flawed and narrowly scoped, as talks with defense officials stall and partners like Microsoft continue to use Anthropic technology for non-defense purposes.
OpenAI CEO Sam Altman told staff that operational decisions on how its AI is used by the DoD rest with the government, not OpenAI, after a Pentagon deal. The Pentagon will seek input and allow OpenAI to deploy its safety stack, while retaining ultimate decision authority with a DoD official, amid criticism and competitive dynamics with Anthropic and xAI.
OpenAI CEO Sam Altman said the Pentagon deal was rushed and sloppy and will be amended to include safeguards, including a ban on domestic surveillance of U.S. persons; the Defense Department has said OpenAI’s tools won’t be used by intelligence agencies. The admission follows ongoing tensions with Anthropic over safety rules; Altman noted the two companies share red lines, though it remains unclear why the DoD approved OpenAI’s terms over Anthropic’s.
The Wall Street Journal reports federal agencies continued to use Anthropic's Claude in a major Iran airstrike after President Trump ordered a phase‑out, with a six‑month window still in effect; the DoD is exploring alternatives (including xAI and OpenAI) but replacement would take months, and Claude has allegedly been used in prior operations such as Maduro's capture.
OpenAI reached a deal with the Pentagon to provide its AI technologies for classified U.S. government work, including safeguards to prevent domestic surveillance and autonomous weapons. The move follows a failed negotiation with rival Anthropic and comes as Trump pressures agencies to reassess AI use. OpenAI will embed some staff with government teams on classified projects and implement technical guardrails. Implementation may depend on AWS access, with OpenAI recently partnering with Amazon; Google and xAI also have Pentagon contracts in the broader landscape.
President Trump directed every federal agency to immediately stop using Anthropic's AI tools, ordering a six-month phase-out amid a dispute over safeguards and potential military or domestic surveillance uses; Anthropic has resisted unrestricted access to its tools, while the Pentagon pressed for broader use, and industry groups and tech workers warned against war-related deployments and the implications for AI governance.
The DoD used counter-UAS authorities to down a border-patrol drone operating in military airspace near Fort Hancock, prompting an expanded temporary flight restriction. Officials say the action was coordinated with CBP and the FAA to counter cartel drone threats, with no commercial flights affected and plans to improve interagency cooperation moving forward.
The Pentagon signed a $210 million, no-bid contract with Israel's state-owned Tomer to develop the XM1208 cluster munition, a record arms purchase from Israel. While the shells are designed to have a low dud rate, critics warn cluster weapons are inherently indiscriminate and dangerous long after conflict, and the award bypassed public competition under a public-interest exception.