When looking at the response to the war in Gaza, the Houthi campaign in Yemen or nonsense like this, I can only conclude that we’re dealing with children.
Children with uniforms and military rank who send men off to die yet don’t understand the most basic adult realities.
There’s nothing new there. History is full of titled nobles doing the same thing with catastrophic results. Then leftist and socialist revolutions produced madmen like Hitler and Stalin, who sent armies into insane battles without understanding the first thing about warfare. (Stalin liked to plan battles on a globe while Hitler told his generals to read the Wild West novels of Karl May.)
While China is working on using AI to sow dragon’s teeth of intelligent drones and smart mines that will be able to take out entire armies, the Pentagon is doing this.
As the Defense Department works on its own responsible and ethical use of artificial intelligence and autonomy, a senior official said today the Pentagon wants to build international cooperation on the military development of the technologies and could call together dozens of countries in the coming months to do just that.
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, launched in February last year, “is a really clear demonstration of a throughline in our commitment to responsible behavior,” Michael Horowitz, deputy assistant secretary of defense for force development and emerging capabilities, said today at a Center for Strategic and International Studies event.
“I think that there’s a recognition that the sorts of norms we’re trying to promote are things that all countries should be able to get behind,” Horowitz said. “So they include things like a commitment to international humanitarian law. They include things like appropriate testing and evaluation for systems.”
He noted that 51 countries, not including Russia or China, have signed the declaration, which the State Department says “aims to build international consensus around responsible behavior and guide states’ development, deployment, and use of military AI.”
So the countries we’re more likely to face in combat haven’t signed it, but the countries we’re more likely to be fighting alongside have.
Item 4 of the declaration’s 10 principles of military AI does cover equity.
“4. States should take proactive steps to minimize unintended bias in military AI capabilities.”
This is not the military we want taking us into a war.