By: BA Hamzah
In the ongoing Israeli-Palestinian conflict, during the Muslim holy month of Ramadan this May, Israeli forces used unmanned drones to drop tear gas canisters on Palestinians protesting around the Al Aqsa Mosque, avoiding face-to-face confrontation. In Estonia and Georgia in the previous decade, cyber-attacks put both states out of action for weeks when their computer networks and the digital infrastructure holding critical national data were compromised by an external party. Exactly who was behind the attacks was never established. Both governments blamed Russia, but Moscow denied involvement.
AI can also be used by non-state actors, as was demonstrated last week when a hacking group calling itself DarkSide brought down Colonial Pipeline’s entire US east coast network of pipelines, causing fuel prices to skyrocket and forcing the company to pay what it acknowledged was a US$4.4 million ransom to get gasoline moving again.
Those are examples of the growing use of artificial intelligence to cripple potential antagonists. This potential for havoc has stirred concern in world capitals over cyber and lethal autonomous weapons, and it is described in a chilling report by the United States National Security Commission on Artificial Intelligence (NSCAI). Headed by former Google chief executive Eric Schmidt, the bipartisan commission comprises 14 other technologists, academics, business leaders and security professionals. They met over two years and formally acknowledged that China is ahead of the US in some critical AI applications.
Russia is not far behind.
Despite AI’s dual-use nature, the message from the Commission was clear: the US must win the AI competition to retain global leadership. It has unveiled a whole-of-nation strategy to win the competition “for the sake of security, prosperity and welfare.”
In short, the report is not only about how but also about what the US needs to do to stay ahead of a re-assertive China and Russia in AI technology. The Commission calls for heavy investment in AI, nonpartisan leadership and the development of human talent in all scientific fields.
The report examines at great length the use of cyber as well as autonomous weapons in future inter-state conflicts. Together, cyber weapons and lethal autonomous weapons (LAWS), or AI-enabled weapons, act as force multipliers and pose a security threat to humanity; they are among the most feared weapons in the internet-dependent world of cyberspace.
The commission notes that digital dependence has increased the vulnerability of every segment of US society, from corporations and government institutions to private agencies, to cyber-based intrusions. Several AI-enabled weapons are already present on conventional battlefields. They include older automatic defensive weapons such as land and sea mines, as well as the now-banned anti-personnel mines.
Close-in weapon systems (CIWS) mounted on naval ships and other platforms, for example, can “autonomously identify and attack oncoming missiles, rockets, artillery fire, aircraft and surface vessels according to criteria set by the human operator.” Once programmed, these killing machines act on their own, making it exceedingly difficult to prevent unintended accidents. One challenge, according to the Commission, is to ensure human oversight over the use of such weapons. Although it calls for ethical practices, the commission does not favor a global treaty on AI.
Analysts have predicted a strong likelihood of malign actors using AI to enhance cyber-attacks as well as to create digital disinformation campaigns. For example, it is now possible for AI-generated malware to self-replicate and create new threat phenomena like deepfakes and AI-enabled nano swarms.
Today’s offensive LAWS include killer robots and pre-programmed mini suicide or kamikaze drones that use artificial intelligence and facial recognition data to destroy targets and interfere with global navigation systems. These autonomous systems are referred to as slaughterbots, killer robots that can be used to slaughter innocent people. In the hands of malign actors, slaughterbots can wreak havoc.
During the most recent conflict (September-December 2020) between Armenia and Azerbaijan over the disputed Nagorno-Karabakh region, both sides employed kamikaze or suicide drones to great effect. Kamikaze drones are “loitering munitions.” Fitted with cameras, these UAVs loiter in the air for a specific amount of time, gathering and relaying intelligence to troops on the ground. They can also act like modern-day suicide bombers, self-detonating to destroy targets. Like many other LAWS, these suicidal UAVs tend to favor attackers over defenders.
Similarly, during the Russo-Georgian war of 2008, Russian bombing of villages and cities in South Ossetia was preceded by successful cyber-attacks. This was the first known case of a military invasion preceded by cyber-attacks to neutralize targets, making it safer (i.e., fewer casualties) to put boots on the ground.
The cyber-attacks in Estonia and Georgia, like the recent Nagorno-Karabakh crisis, are a harbinger of future wars. Often acting as force multipliers alongside conventional weapons, LAWS can have a chilling impact on the security of nation-states. In both cyberspace and conventional warfare, AI weapons become more effective when used together with missiles and rocket artillery; together they can deliver optimal target destruction.
We often hear of malware deployed by criminals, rogue states and their proxies to destroy or compromise critical databases in cyberspace. In future crises, states that have been reluctant to show off the AI and cyber weapons in their arsenals are now more likely to use cyber weapons and LAWS to attack military targets.
For example, the US has confirmed it deployed cyber weapons to disable Iranian maritime operations in the Strait of Hormuz in June 2019. The attack on Iran’s computer networks was an immediate response to the downing of a US Global Hawk surveillance drone by a surface-to-air missile over the strait.
The ever-changing threat landscape across battlefields means that armed forces must stand ready for anything in the information age. Arguably, the single most important proposition influencing contemporary military doctrine is the introduction of information technology, such as AI-enabled robots, into military operations.
War fighting and war prevention are tasks usually entrusted to the military, whose primary role is to protect the core national interests of the state. Defending territorial integrity and political independence against external enemies has traditionally been the military’s primary function. While this task remains the same, the digitalization of national power assets, whose data is often stored in computer systems in the cyber domain, has complicated the role of modern militaries.
While geography remains relevant to military operations in a conventional warfare setting, it matters less in the borderless, internet-connected operations of cyberspace, where the concept of territorial integrity becomes blurry. Foreign cyber forces, military or civilian, no longer need heavy tanks or fighter jets to invade a country, destroy national assets or undermine the elements of its national power. One small computer worm, like Stuxnet in 2010, caused extensive damage to Iranian nuclear facilities without a bullet being fired. Evidently, US and Israeli hackers have “invaded” the Natanz nuclear facilities several times, most recently with the sabotage of April 13. According to some sources, that attack set back operations at Natanz by nine months.
Military doctrines dealing with war on land, at sea and in the air are no longer adequate for operations in cyberspace. Using new technologies like malware and robots, a perpetrator can achieve the same result, compromising the security and usurping the political independence of the state.
After all, the purpose of war, according to Carl von Clausewitz, is the achievement of a political objective: war is “the continuation of politics by other means.” In the Clausewitzian dictum, war involves violence and the holding of ground. In other words, the military occupation of enemy territory, from the Napoleonic wars through WW1 and WW2, was critical to ensure political submission.
Today, however, technological developments in the scientific, engineering and biological fields and in the information-based domain are making it possible for perpetrators (states and their proxies) to achieve their political objectives without resorting to open violence. Old-fashioned Clausewitzian violent warfare continues in many low-tech corners of the world, where soldiers with conventional weapons “are still pretty effective tools for beating enemies into political submission.” But in today’s information-war age, states with good hackers, intelligent scientists and a computer can inflict the same damage.
In an asymmetrical, hybrid warfare setting, wars in cyberspace are likely to see states and non-state actors using AI technology. Robots and other types of autonomous weapons are likely to be used alongside malware to destroy elements of national power. Failure to retain primacy in information technology, and failure to protect critical national assets from cyber-attacks, can devastate national security. Hence the ongoing competition in cyberspace for superior information to stay ahead of rivals. According to one study, cyber-attacks in the US happen every 39 seconds on average.
This rivalry can be tense and destabilizing, made more challenging by the fluidity of events, the rapid spread of the Internet of Things (IoT) and the pace of computing that Gordon Moore predicted in 1965. Moore’s law holds that the speed and capability of computers tend to double every couple of years.
AI technology is key. LAWS can act as cyber weapons on their own or be combined with other cyber weapons as force multipliers. More significantly, AI can be used to control autonomous tools like drones for round-the-clock surveillance as well as for offensive purposes. Many scientists fear that the absence of regulations to curb the deployment of AI in the cyber domain could produce undesirable outcomes for human society.
Among those who feared the overdevelopment of artificial intelligence was the late Stephen Hawking. He worried that artificial intelligence could surpass human intelligence, and that thinking machines that “could one day take charge” would imperil humanity. The current debate revolves around ethical standards for the application of AI, in the hands of rogue states, for example.
Russian leader Vladimir Putin is concerned about the exponential speed at which AI is taking shape. He warned in 2017 that AI “comes with colossal opportunities, but also threats that are difficult to predict,” and that whoever becomes the leader in this sphere will become “the ruler of the world.”
Staying ahead in both outer space and cyberspace calls for continuous updating of systems’ capabilities, especially in AI. This requires agility and investments that favor those with deep pockets. While keeping up with the Joneses in the cyber world will be a real-time challenge, it also offers opportunities for regional cooperation, especially in AI. Regulating the use of AI and cyberspace weapons would go a long way toward advancing global peace. New international rules on how governments use autonomous weapons, including subjecting them to international human rights law, are urgently needed.
B A Hamzah is a professor and Head of the Center for Defense and International Studies, National Defense University of Malaysia, Kuala Lumpur.