Tag: Autonomous Weapons

All articles tagged with #autonomous weapons

OpenAI’s Pentagon Pact Faces Skepticism Over Surveillance Safeguards
technology · 1 month ago

The Intercept questions OpenAI’s claim that a new Pentagon contract bars domestic mass surveillance and the use of AI for autonomous weapons, noting that the contract language hasn’t been released and that experts doubt the safeguards will actually prevent NSA or other agency surveillance. Critics warn that vague terms like “intentionally” and “deliberate” could still enable dragnet data collection, and they call for full disclosure of the contract and greater scrutiny of OpenAI’s statements and affiliations in the deal.

Anthropic’s Pentagon Showdown Highlights AI’s Dual-Use Dilemma
technology · 1 month ago

Anthropic, once a quiet AI-safety upstart, finds itself at the center of a high-stakes clash with the DoD after refusing to lift safety restrictions barring Claude’s use for domestic surveillance and autonomous weapons. The Pentagon labeled Anthropic a supply-chain risk and pressed contractors to sever ties, a move that coincided with OpenAI striking its own DoD deal. The standoff has sparked debate over dual-use AI, accountability, and regulation as Anthropic weighs court challenges and continues negotiating.

Anthropic CEO pushes guardrails for military AI amid government-designation clash
politics · 1 month ago

Anthropic CEO Dario Amodei defends a limited, two-red-lines stance on government use of its AI: no domestic mass surveillance and no fully autonomous weapons. He denounces the Pentagon’s abrupt supply-chain designation as excessive and urges Congress to set guardrails as the technology races ahead. Amodei says Anthropic remains willing to support U.S. national security under strict terms but may walk away if a deal cannot be reached, arguing for a balance between security needs and democratic values.

OpenAI clinches Pentagon AI pact with safety guardrails as Anthropic falters
technology · 1 month ago

OpenAI announced a deal to supply AI to classified U.S. military networks with safeguards against mass surveillance and autonomous weapons, following Trump’s push to curb Anthropic’s access. Anthropic has resisted similar terms, citing its safety constraints, while reactions across the industry and among employees reveal a divide over government use of AI. The move comes as OpenAI also disclosed a $110 billion funding round that would value the company at roughly $840 billion.

Anthropic Upholds Guardrails in Pentagon AI Standoff
technology · 1 month ago

Anthropic CEO Dario Amodei reiterates two guardrails, blocking mass surveillance of Americans and prohibiting fully autonomous weapons, as a condition for any military collaboration, even as the Pentagon pushes for broader, unguarded use of Claude. The dispute followed Trump-era contract cuts and a Defense Department push to phase Anthropic out within six months. Amodei argues the guardrails reflect U.S. values and safety; the sides have not reconciled, and Congress may weigh in on AI safeguards.

Ukraine's Rapid Advancements in Drone Technology Reshape Battlefield Dynamics
military-technology · 2 years ago

Ukrainian fundraiser Serhii Sternenko has showcased a new attack drone with autonomous target-recognition technology, capable of locking onto and attacking targets without human intervention. The system is reportedly immune to radio-frequency jamming and is being developed for mass production at roughly $1,000 per unit. While the ethical implications and reliability of autonomous weapons remain contentious, the technology marks a significant advance in battlefield capability in the ongoing war with Russia.

Pentagon's Dilemma: Balancing AI's Lethal Potential
defense · 2 years ago

The Pentagon's "Replicator" program aims to accelerate the military's use of AI-run drones, with the goal of fielding thousands of these weapons platforms by 2026. While officials and scientists agree that fully autonomous weapons are imminent, the challenge lies in determining when, and whether, AI should be allowed to use lethal force. Governments are grappling with how to regulate AI in warfare, with proposals ranging from no regulation to extremely narrow limits. The US military has already made extensive use of robotic and AI-run weapons systems, but international legal guidance for their use remains unclear.

Pentagon's AI Push Forces Tough Choices on Autonomous Weapons
defense-technology · 2 years ago

The Pentagon is pushing forward with Replicator, an ambitious AI initiative aiming to deploy thousands of AI-enabled autonomous vehicles by 2026 to keep pace with China. While Replicator's funding and details remain uncertain, it is expected to accelerate decisions on deploying AI, including weaponized systems. Experts predict that fully autonomous lethal weapons will arrive within the next few years, with humans shifting to supervisory roles. Military uses of AI already extend to tracking service members' fitness, predicting maintenance needs, and monitoring rivals in space. Concerns persist, however, as several countries, including China and Russia, have declined to sign a pledge on the responsible military use of AI.

AI Drones: The Ethical Dilemma of Autonomous Killing in the US
technology · 2 years ago

The deployment of AI-controlled drones that can autonomously decide to kill humans is drawing closer to reality, with the US, China, and Israel among the countries developing lethal autonomous weapons. Critics argue the technology removes human judgment from life-and-death battlefield decisions. Several governments are pushing for a binding UN resolution restricting AI killer drones, but the US, along with Russia, Australia, and Israel, is resisting and prefers a non-binding one. The Pentagon is actively working to deploy swarms of AI-enabled drones to offset China's military advantage, and military officials see the ability of such drones to make lethal decisions under human supervision as crucial.

The Perils of A.I. Drones and Weapons: Unveiling the Risks
technology · 2 years ago

The development of AI drones and autonomous weapons has sparked debate over how to regulate their use and over the risks of handing life-or-death decisions to artificial intelligence programs. Autonomous weapons are not entirely new: from land mines to homing and loitering munitions, various forms of automated weaponry have long been used. Recent advances in AI, however, have intensified the discussion, as the Pentagon now works to build swarms of AI-enhanced, autonomous drones that could carry surveillance equipment or weapons. Concerns center on the risks these new systems pose and on how much autonomy military technology should be allowed.

'Godfather of AI' Leaves Google Over Ethical Concerns and Warns of Dangers Ahead
technology · 2 years ago

Geoffrey Hinton, a pioneer of artificial intelligence and former Google VP and engineering fellow, has resigned from the company to speak freely about the risks of AI technology. Hinton worries that Google is abandoning its previous restraint on public AI releases in a bid to compete with ChatGPT, Bing Chat, and similar models, opening the door to a range of ethical problems. He warns that generative AI could unleash a wave of misinformation and outright replace some jobs, and he is also troubled by the prospect of fully autonomous weapons and by AI models' tendency to learn unexpected behavior from their training data.