There are rumors that the Pentagon has been "building killer robots." The U.S. Department of Defense (DoD) plans to use autonomous systems, but some are concerned that things might get out of hand.
Pentagon Sparks Concern Amid Plans to Use AI-Powered Weapons
The deputy assistant defense secretary says there is "a lot of confusion about" how the Department of Defense (DoD) intends to use self-directed, decision-making computers on the battlefield, which is why the DoD revised its AI standards.
Michael Horowitz clarified at a monthly gathering that the directive does not prohibit the development of any systems. Instead, it spells out what is and is not authorized and maintains a "commitment to responsible behavior," even as the department develops lethal autonomous systems.
Some say they are not convinced, despite the Pentagon's assurance that the modifications should ease public concern. The release of the updated version of the Pentagon's 2012 directive "Autonomy in Weapon Systems" sparked an online controversy, with many individuals stating, "If the Pentagon says they're not doing it, they're doing it."
The Department of Defense (DoD) has been pushing hard to update its arsenal with self-sufficient drones, tanks, and other weapons that can identify and strike a target without human assistance. However, Mark Brakel, director of the advocacy organization Future of Life Institute (FLI), warned the public about the risk of such weapons going rogue.
"These weapons carry a massive risk of unintended escalation," Brakel said.
AI-powered weapons could strike foreign powers without justification if they misread something, such as a beam of sunlight, as a threat. According to Brakel, the outcome might be catastrophic: AI-powered weapons, absent meaningful human oversight, are like the Norwegian rocket incident (a near-nuclear apocalypse) on steroids, and could raise the possibility of mishaps in hotspots like the Taiwan Strait.
Pentagon's Stricter Rules Don't Ease Public's Concern
During a panel discussion on January 9, Horowitz stated that the Department of Defense (DoD) is committed to upholding international humanitarian law and is laser-focused on fostering public trust and confidence in technology.
"The real test of any new directive should be in how many weapons it rules out or significantly modifies," Brakel said. "We have yet to see any evidence that changes in the directive have a meaningful impact on how weapons are developed."
Anna Hehir, who leads autonomous weapons systems research for FLI, shared Brakel's concern. According to her, pushing through with AI-powered or autonomous weapons would open a Pandora's box from which there is no easy way back.
"I would argue for the Pentagon to view the use of AI in military use as on par with the start of the nuclear era," she said. "So this is a novel technology that we don't understand. And if we view this in an arms race way, which is what the Pentagon is doing, then we can head to global catastrophe."
Several countries are working on their defensive capabilities. For instance, Russia previously announced that it would build an uncrewed ground vehicle, a battle variant of its Marker reconnaissance robot, to support its military in Ukraine.