According to a report by The New York Times, the United States government is considering AI-controlled drones that could autonomously decide whether to kill human targets. China and Israel are also developing AI-powered lethal autonomous weapons. Critics argue that such "killer robots" are alarming because they would delegate life-and-death decisions to machines with little or no human oversight. The United States, Russia, Australia, and Israel oppose international calls for a legally binding resolution that would prohibit AI-powered lethal drones. Alexander Kmentt, Austria's chief negotiator, considers human involvement in the use of force a crucial matter of ethics, security, and law.
US Deputy Secretary of Defense Kathleen Hicks has said that deploying swarms of AI-enabled drones would help the Pentagon offset China's numerical military advantage. Air Force Secretary Frank Kendall argued that for the United States to maintain its military edge, AI drones must be able to make lethal decisions under human supervision. Ukraine has reportedly already used AI-controlled drones in its war with Russia, though it is unclear whether those drones have caused human casualties.
Stuart Russell, a senior AI scientist and one of the critics, contends that developing and deploying autonomous weapons would have catastrophic consequences for human freedom and security. The Campaign to Stop Killer Robots warns that AI-powered weapons could be mass-produced and acquired by rogue states or terrorist organizations. It advocates professional codes of ethics that bar machines from making life-or-death decisions, as well as a treaty banning autonomous weapons. The television series "Black Mirror" illustrates the potential dangers of such technology with robotic dogs that relentlessly hunt humans.
Despite its many practical applications, AI technology poses a risk to human liberty when misapplied. A balance of regulation and public awareness is needed to prevent politicians from using artificial intelligence to impose oppressive regimes, the kind of emerging threat these weapons represent.