Teaming up with AI? The risks and global security implications of military AI

Robin Geiss, Mennatallah El-Assady, Marcel Baltzer, Joanna J. Bryson


Summary
What are key concerns and risks arising from the use of AI in military decision-making and operations? What safeguards can ensure AI-supported systems consistently remain aligned with the user’s intentions, ethical standards and international law? Can human-AI teaming mitigate these risks, and which specific risks arise from it in turn?
Panel discussion
English
Conference

Advances in the field of artificial intelligence are rapidly transforming all aspects of society. AI will also revolutionize how militaries operate and how future wars are fought, and competitive pressures in today’s fraught global security environment will only accelerate this trend. With AI technology advancing by leaps and bounds, the time to act is now.

AI will help to increase the speed and accuracy of military decision-making, planning and operations. In combination with robotic platforms and next-generation sensor technology, it will open up a vast spectrum of military applications, from AI-enabled logistical support, early warning systems, and intelligence gathering to AI-supported command structures, cyber operations and autonomous weapons systems.

But an enabling technology as powerful and transformative as AI comes with a long list of significant risks, not least inadvertent escalation, misperception and malfunction, as well as concerns about lack of transparency, discrimination and bias. Crucially, however, military AI also raises profound ethical and legal questions about human agency and human control, especially in matters of life and death. The UN Secretary-General has stated unequivocally: “Human agency must be preserved at all cost”.

Human-AI teaming seeks to leverage the strengths of humans and AI systems while overcoming their respective limitations, with human control retaining priority over AI control. The concept of teaming evokes the integration of humans and AI systems into coordinated units that work toward strategic and operational goals through an agile, dynamic distribution of tasks. But the use of AI systems as “teammates” still raises critical questions about the risks and challenges inherent in deploying AI in the military domain.

To explore these issues, the panel will discuss the following questions:

  • What are key concerns and risks arising from the use of AI in military decision-making and operations?
  • What safeguards can be used to ensure AI-supported systems consistently remain aligned with the user’s intentions, ethical standards and international law?
  • Can human-AI teaming mitigate these risks? And in turn, which specific risks arise from human-AI teaming itself?