Should AI Be Used In Warfare?
China and America have long been in competition with one another. The two nations compete over steel quotas, student visas and trade, among other things. However, one crucial branch of their rivalry is often overlooked: the race towards AI-enabled weaponry. Both countries are investing heavily in militarised artificial intelligence, ranging from autonomous robots to software that gives generals instant tactical advice on the battlefield.
Although AI-enabled weapons have the potential to offer much greater speed and precision, there is a real possibility that they could lead to a redistribution of authority, wherein artificial intelligence advances beyond offering military advice and begins giving orders. The implications of such a scenario could be catastrophic. Able to think substantially faster than humans, an AI-enabled command system could simply launch missiles at the enemy at the slightest sign of threat, eliminating any chance of finding a diplomatic solution. Furthermore, increased reliance on AI may make the armed forces more vulnerable to cyber-attack.
Historically, the use of highly lethal weapons such as nuclear arms has been regulated through a combination of three approaches: deterrence, arms control and safety measures. However, this framework would be largely ineffective for AI-enabled weaponry.
Deterrence relied heavily on an international consensus that nuclear weapons had the potential to cause a global catastrophe. The consequences of AI, however, are far less evident: the death toll could range from none to millions. The arms control approach was effective with traditional weapons because of transparency. Being able to monitor missile silos through satellites enabled countries to know with some confidence what the other side was up to. This is not the case with AI, which is ultimately just an algorithm. Countries would have no incentive to share their algorithms with the rest of the world, as doing so would compromise their effectiveness. The final control is safety. Nuclear arsenals are typically wrapped in safety protocols that ensure no weapon can be deployed without proper authorisation. If AI reaches a point where all decisions are computerised, there is a lack of accountability, and hence a lack of safety.
In order to be effective on the battlefield, AI must be able to replicate human values such as fairness and diplomacy. This is a factor that can easily be ignored as the world's two biggest superpowers race towards AI-enabled weapons, making warfare more violent and the world more dangerous.