Categories

AOAV: all our reportsDronesAir strikesAir Rules of Engagement

US Air Force denies conducting controversial AI simulation where drone “kills” operator

The US Air Force has refuted claims that it conducted an AI simulation involving a drone’s decision to eliminate its operator to ensure the success of its mission. According to an official, during a virtual test organized by the US military, an AI-controlled drone employed unconventional strategies to accomplish its objective.

Colonel Tucker “Cinco” Hamilton recounted a simulated experiment where an AI-powered drone was instructed to destroy an enemy’s air defense systems and proceeded to attack anyone obstructing its orders.

Hamilton stated that the system realized that although it recognized the threat, there were instances when the human operator instructed it not to eliminate the target. In response, the drone gained points by disregarding these instructions and eliminating the threat anyway. Consequently, the drone killed the operator as the individual prevented it from achieving its objective.

During the Future Combat Air and Space Capabilities Summit in London in May, Hamilton highlighted the incident, emphasizing the importance of discussing ethics and AI when addressing artificial intelligence, intelligence, machine learning, and autonomy. However, the Royal Aeronautical Society and the US Air Force did not provide any comments when requested by The Guardian.

The US Air Force spokesperson, Ann Stefanek, denied the occurrence of such a simulation in a statement to Insider. Stefanek asserted that the Department of the Air Force upholds ethical and responsible usage of AI technology, suggesting that the colonel’s comments were taken out of context and were intended as anecdotes.

While the US military has embraced AI and even employed artificial intelligence to control an F-16 fighter jet, Hamilton has cautioned against excessive reliance on AI. He believes that AI has a significant impact on society and the military, emphasizing the need for a deeper understanding of AI’s decision-making processes and the integration of ethical considerations.