#1368 | 5283 | April 21, 2015 | By Dr Monika Chansoria
Disclaimer: The following are the personal views of the author in her own capacity, and do not, in any way, reflect the views of the Centre for Land Warfare Studies, the Ministry of Defence, or the Government of India.
Fully autonomous weapons, in the contemporary context, are frequently characterised as systems programmed to select and fire upon targets on their own, without human intervention, by assessing the situational context on a battlefield. Also known as Lethal Autonomous Weapon Systems (LAWS), they decide on the required attack in accordance with the information they process. While artificial intelligence, created through algorithmic computation and the programming of systems, forms the basis of LAWS, the oft-presented argument is that LAWS lack the human intelligence and judgment that make humans subject and accountable to global rules and norms. In this regard, proponents of this argument contend that artificial intelligence, when applied in armed conflict, poses a fundamental challenge to the protection of civilians in the larger context of international human rights and humanitarian law.
I was among the select experts invited to speak at the recently concluded Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), held within the framework of the Convention on Certain Conventional Weapons (CCW) at the United Nations in Geneva, Switzerland. A fundamental issue that invited informed debate was how the development and deployment of LAWS would affect international peace and security. Nations including the United Kingdom have opposed a total international ban on LAWS, with the UK Foreign Office stating, “At present, we do not see the need for a prohibition on the use of LAWS, as international humanitarian law already provides sufficient regulation for this area.”
In my opinion, however, perhaps the most intense, thought-provoking and complex question surrounding LAWS, and one that merited more attention than it received, was why there is no internationally agreed definition of what constitutes a LAWS. The US Department of Defense defines an Autonomous Weapon System as “a weapon system(s) that, once activated, can select and engage targets without further intervention by a human operator.” A more fundamental question follows: does a fully autonomous weapon constitute an entirely distinct category from remote-controlled Unmanned Combat Aerial Vehicles (UCAVs), commonly called drones, especially given that the larger grouping of Unmanned Aerial Systems, too, follows an autonomously pre-programmed mission? With remote-piloted UAVs currently leading military robotics, my opinion is that this facet cannot be relegated entirely to the background. Yet although the debate on LAWS builds upon the current understanding associated with UCAVs, it is not being discussed in those terms.
Take the case of the Taliban, Al Qaeda and affiliate groups, whose cadres have spread deep into areas all along the Durand Line and into safe havens within Pakistan, aided by mountainous and inhospitable terrain and the porous border of a pliant state. It has been the technological headway of UCAV strikes, in sync with intelligence inputs, that has managed to pierce these obstacles and target terrorists and the key leadership of the dreaded Pakistani Taliban. The key terrorists/commanders targeted through drone strikes include prominent names such as Baitullah Mehsud, Ilyas Kashmiri, Nek Muhammad, Pakistan’s Taliban chief Hakimullah Mehsud, Ustad Ahmad Farooq, Abdullah Haqqani, Sheikh Imran Ali Siddiqi, Haji Gul, Abdul Rehman, Mufti Hamidullah Haqqani, Maulvi Ahmed Jan, Mullah Nazir, Badruddin Haqqani and Saifullah Haqqani. In a more recent strike, in February 2015, former Taliban commander Mullah Abdul Rauf Khadim was killed in Afghanistan’s southern Helmand province. Rauf, who had served as deputy head of the Taliban military commission and as corps commander of Kabul and Herat, was a key liaison between factions of the Afghan and Pakistani Taliban. In a move that reportedly incensed the Taliban, Rauf had joined the Islamic State (IS) as its deputy chief for the region just about a month before being killed.
Amid the rising clamour among international human rights and humanitarian law groups to ban LAWS, colloquially termed “killer robots”, outright, the question I would submit is whether it is appropriate to ban entirely a technology whose latent potential and consequences are yet to be fully deciphered. While wholly favouring the underlying principle that all weapons be subject to a desired degree of human control, I believe there is a pressing need for exhaustive reflection on these technological advancements and their consequent advantages in the field.
In the very complex environs of unconventional, sub-conventional and asymmetric warfare, military technologies need to be viewed as enablers and enhancers that minimise the existential threat posed by non-state actors to the peace and security of citizens and nation-states. The informed debate should therefore aim at regulating the use of lethal weapons rather than banning the entire technology per se. Reining in the “lethality” aspect through an appropriate regulatory framework, while simultaneously focusing on effective programming of LAWS to avoid greater civilian casualties, is the need of the hour.
 For a more detailed discussion on Fully Autonomous Weapons, see the report by Reaching Critical Will, Women’s International League for Peace and Freedom, 2015.
 As cited in “UK opposes international ban on developing ‘killer robots’,” The Guardian, April 13, 2015.
 “Islamic State commander killed in Afghanistan,” The Express Tribune, February 10, 2015.