Researchers list the risks associated with creating autonomous weaponry driven by AI.
The military has been using autonomous weapons—like mines, torpedoes, and heat-seeking missiles—that operate on simple reactive feedback without human intervention for decades. Increasingly, however, artificial intelligence (AI) is shaping the design of weaponry.
AI-powered autonomous weapons mark a new age in combat and constitute a real danger to fundamental research and scientific advancement, according to Kanaka Rajan, associate professor of neuroscience at Harvard Medical School’s Blavatnik Institute, and her colleagues.
Risks and Future of AI-Powered Military Technology
According to Rajan, AI-powered weapons—which often include robots or drones—are being actively researched and deployed. Given how readily such technology spreads, she anticipates that they will only become more capable, more sophisticated, and more widely used over time.
She is concerned about the potential for geopolitical instability caused by AI-powered weapons, as well as the impact their development may have on nonmilitary AI research in academia and industry.
In a position paper published and presented at the 2024 International Conference on Machine Learning, Rajan, MIT PhD student Shayne Longpre, and Harvard Medical School research fellows in neurobiology Riley Simmons-Edler and Ryan Badman detail their main concerns – and a way forward.
In an interview with Harvard Medicine News, Rajan, a founding faculty member of the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, discussed why she and her team chose to examine AI-powered military technology, what they see as the biggest risks, and what they think should happen next.
Harvard Medicine News: As a computational neuroscientist, you study AI in relation to the brains of humans and animals. How did you come to think about AI-powered autonomous weapons?
Ethical Dilemmas of AI-Driven Autonomous Weapons in Military Research
Rajan: We began thinking about this subject in response to a number of doomsday predictions about artificial general intelligence that were circulating in the spring of 2023. We asked ourselves: if those forecasts really are exaggerated, then what are the actual risks to human society? When we looked into how the military is using AI, we found that military R&D is aggressively pursuing AI-powered autonomous weapon systems, with potentially far-reaching consequences for the whole world.
We came to realize that the consequences of widespread development of these weapons would not stay isolated from the academic AI research community. Militaries often lack the expertise to develop and deploy AI technology without outside guidance, so they must draw on the knowledge of academic and industry AI experts.
As with any large corporation funding academic research, this raises significant ethical and practical questions for researchers and administrators at academic institutions.
Harvard Medicine News: In your opinion, what are the biggest risks of integrating AI and machine learning into weapons?
Risks of AI-Powered Weapons: Geopolitical, Scientific, and Ethical Implications
Rajan: There are many risks in developing AI-powered weapons, but the three biggest we see are these: first, such weapons may make it easier for countries to become involved in conflicts; second, nonmilitary scientific AI research may be censored or co-opted to support the development of these weapons; and third, militaries may use AI-powered autonomous technology to reduce or deflect human responsibility in decision-making.
First, a major deterrent that keeps nations from starting wars is soldiers dying, a human cost to their citizens that can create domestic consequences for leaders. Much of the current development of AI-powered weapons aims to keep human soldiers out of harm's way, which is humane in and of itself. However, if few soldiers die in offensive warfare, the link between acts of war and their human cost weakens, and it becomes politically easier to start wars, which in turn may lead to more death and destruction overall. Thus, major geopolitical problems could emerge quickly as AI-powered arms races intensify and such technology proliferates further.
On the second issue, we can look to the history of academic fields such as nuclear physics and rocketry. As these fields gained critical defense importance during the Cold War, researchers faced travel restrictions, publication censorship, and the need for security clearances to do basic work. As AI-powered autonomous technology becomes central to national defense planning worldwide, similar restrictions could be placed on nonmilitary AI research, which would greatly impede basic AI research, valuable civilian applications in health care and scientific research, and international collaboration. Given how quickly AI research is growing and how rapidly interest in AI-powered weapons is developing, we see this as an urgent concern.
Finally, if AI-powered weapons prove essential to national security, there may be major attempts to co-opt the work of AI researchers in academia and industry to build those weapons or to pursue other "dual-use" projects. If more and more AI knowledge ends up locked behind security clearances, our field will be intellectually stifled. Some computer scientists are already calling for such drastic restrictions, but their argument dismisses the fact that new weapons technologies always tend to proliferate once they are developed.
Harvard Medicine News: In your opinion, why has weapons design been largely overlooked by those thinking about the risks posed by artificial intelligence?
Challenges and Ethical Concerns in the Adoption of AI-Powered Weaponry
Rajan: One reason is that this is a new and fast-moving situation: many major nations have begun rapidly and openly embracing AI-powered weapons since 2023. Additionally, individual AI-powered weapons systems can seem less dangerous when considered one by one rather than as a broader collection of systems and capabilities, which makes it easy to overlook the problems.
Another challenge is that tech companies are often not transparent about the degree of autonomy and human oversight in their weapons systems. For some, human oversight may mean little more than a person pressing a "go kill" button after an AI weapons unit has made a long chain of black-box decisions, without that person understanding the system or being able to spot errors in its logic.
For others, it may mean that a person exercises more hands-on control and checks the machine's decision-making process.
Unfortunately, as these systems become more sophisticated and powerful, and as response times in war need to get faster, the black-box outcome is increasingly likely to become the norm. Moreover, the appearance of "human-in-the-loop" oversight in AI-powered autonomous weapons may lull researchers into thinking a system meets military ethical standards when, in fact, it does not meaningfully involve humans in decision-making.
Harvard Medicine News: Which scientific topics need the most immediate attention?
Ethical and Collaborative Challenges of AI in Military Applications
Rajan: Although developing AI-powered weapons still requires work, most of the underlying algorithms have already been created or are the focus of major academic and industry research efforts motivated by nonmilitary applications, such as self-driving cars. With that in mind, we as scientists and researchers have a responsibility to help guide the ethical use of these technologies and to manage the impact of military interest on our work.
If militaries around the world aim to replace a substantial portion of combat and support roles with AI-powered units, they will need the help of academic and industry experts. This raises questions about what role universities should play in the military AI revolution, what boundaries should be set, and what centralized oversight and watchdog bodies should be established to monitor the use of AI in weapons.
To safeguard nonmilitary research, we may need to think about which AI advances should be open source and which closed source, how to establish use agreements, and how the growing militarization of computer science will affect international collaboration.
Harvard Medicine News: How can we move forward in a way that allows for innovative research while protecting against the use of AI for weapons?