Recent national debates and multilateral forums have focused on the responsible use of artificial intelligence (AI). According to the Organisation for Economic Co-operation and Development (OECD), 60 nations have AI-related programs, and more than 30 of them have governmental AI programs that address the safe use of AI. Nevertheless, the use of AI for national defense has received little attention.
Thus far, members of the AI Partnership for Defense have all pledged to develop AI systems that are secure, dependable, and lawful. However, the prospect of faster reactions and lower risk to their military personnel continues to entice them.
The European Union (EU) and the United States both want to be at the forefront of developing guidelines for the safe use of AI in military applications. AI Partners such as Canada, which are currently formulating national AI regulations, especially in the defense sector, will have to take US and EU norms into account.
With no agreement in place and discussions essentially deadlocked, the Convention on Certain Conventional Weapons (CCW), which has concentrated on lethal autonomous weapons systems since 2014, has been less successful in securing regulations than intended. Nevertheless, the CCW has enabled an ever larger number of countries to gain a greater understanding of AI breakthroughs and potential problems.
Perhaps an alternative forum for this global discourse is needed to ensure truly responsible development and use of the technology.