Pentagon Struggles with AI Limits as Iran War Intensifies
WASHINGTON, D.C. — The Iran war has become the most AI-intensive conflict in history, with the U.S. military piping massive streams of satellite and signals data into software systems to flag targets faster than any human ever could. However, this reliance on speed is pushing the military into a legal and ethical gray area that lacks explicit limits.
Defense Secretary Pete Hegseth has moved to reassure critics, telling the Senate Armed Services Committee that AI agents are not the ones pulling the trigger.
"We follow the law and humans make decisions," Hegseth stated. "AI is not making lethal decisions."
The Sprint with Scissors
Despite these assurances, the "kill chain"—the process from finding a target to striking it—is moving at a velocity that tests the boundaries of human oversight. The Pentagon is currently embroiled in a messy legal battle with Anthropic, a leading AI firm that has demanded limitations on how its technology is used in warfare. Hegseth dismissed the company's concerns, calling its leadership "ideological lunatics" for trying to slow down the process.
Legal experts suggest that the military's current approach is essentially a sprint. Gary Corn, a former deputy legal counsel for the Joint Chiefs of Staff, describes it as a choice of "how fast you choose to—or can afford not to—run with scissors."
Navigating the OODA Loop
The strategic logic behind AI integration centers on the "OODA loop" (Observe, Orient, Decide, Act). In modern warfare, the side that completes this cycle fastest typically wins.
Target Identification: AI software like Palantir’s Maven Smart System identifies military targets and moves them into a digital workflow for leaders to review.
Human-in-the-Loop: Current policy requires an "appropriate level of human judgment," but as the AI moves thousands of detections into workflows simultaneously, the definition of "appropriate" is being stretched thin.
Automated Data Fusion: U.S. officials admit that tasks previously requiring humans to move data between eight or nine different systems are now handled instantly by AI.
The Cost of Automation Bias
The dangers of this high-speed environment were brought into sharp focus in February 2026, when a U.S. strike hit an Iranian elementary school, killing at least 168 children. While the Pentagon is still investigating, congressional Democrats are questioning if AI targeting errors or "automation bias"—the tendency for humans to trust an AI's suggestion without sufficient skepticism—played a role.
Under the Law of Armed Conflict, commanders remain legally responsible for minimizing civilian casualties, regardless of what the software suggests. However, as AI exponentially increases the pace of operations, the capacity for humans to perform a meaningful moral calculus on every strike is becoming a central point of contention in Washington.
By: CNN Newsource
May 7, 2026