RISE OF THE ROBOTS: WEAPONIZATION OF ARTIFICIAL INTELLIGENCE

The history of human civilisation is a complex tapestry of individual and collective human development in the humanities and sciences, interwoven with an apparently insatiable quest for power.  This article concerns itself with some contemporary facets of the latter.  Violence — and the ability to exert it more efficiently, more effectively, and to a greater degree than one’s adversary — has been the hallmark of this competition for power.  This is probably as true for individuals and small social units as it is for large nation-states.  To achieve dominance through power, humankind has been engaged in inventing ever more ingenious ways in which to cause and control violence.  The use of machines to kill or harm one’s adversary has a historical lineage that may well rival that of humankind itself.  However, recent developments in automation and artificial intelligence could, for the first time in recorded history, take the control of violence away from humans and place it in the hands of machines.  This possibility has led to an intense debate at various national and international forums on the ethics and morality of using such machines.

The main purpose of this article is to sensitise the lay reader (rather than the expert or the practitioner) to the legal risk posed to global societies by the ongoing effort of nation-states to combine Artificial Intelligence (AI) with lethal autonomous weapon systems (LAWS).  Armed inter-State conflict is sought to be regulated principally through International Humanitarian Law (IHL).  However, LAWS pose an exceptionally strong challenge to IHL, largely because of their ability to supplant the human being (as the administrator of violence) altogether, thereby bringing into question the validity of the foundational adjective “humanitarian” upon which the very edifice of IHL has been built.  That said, it is also important to bear in mind that LAWS are not the only manifestation of the danger that AI poses to human security.


Defining “LAWS”

The first legal challenge, as in many technology-driven issues, is that there is not yet an internationally accepted definition of LAWS.  At the inaugural meeting of the UN Group of Governmental Experts (GGE) on LAWS, held in 2017 under the overarching umbrella of the 1980 UN “Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects” (CCW), States parties considered legal, ethical, military and technological aspects of LAWS.  Although no common definition was agreed upon, at least some States offered proposals for a working definition and consequent regulation of LAWS.[1]

The legal challenge posed by the lack of a universal definition is not a new one.  For instance, despite its global spread to a level where it is nearly ubiquitous, terrorism, too, does not have a legal definition within international law.  On the other hand, the fact that there are similar infirmities in international law does not reduce the severity of the legal challenge of a lack of consensus in defining LAWS.  It is, nevertheless, a matter of intuitive (if not legal) agreement that all such weapon systems are characterised by varying degrees of autonomy in the critical functions of acquiring, tracking, selecting, and attacking targets,[2] and by the partial or complete removal of human involvement or ‘human central thinking activities’[3] from the decision-making process about the use of lethal force.

‘Autonomy’ reflects a degree of independent dynamic ability and activity.  LAWS may be placed in one of two general classifications based on the degree of autonomy.  The first category is what one might call ‘semi-autonomous’ (involving levels of mechanization and remotely controlled human input), while the second is ‘autonomous’ (involving higher levels of freedom with regard to acquiring, tracking, selecting and attacking targets, without the requirement of human input).[4]  Within each category, varying degrees of autonomy can be measured in terms of functionality[5] — the ability to observe, orient, decide, and act (OODA),[6] and the ability to replicate human situational awareness.
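
A minimal, purely illustrative Python sketch of this taxonomy follows.  The class names, the delegated-function sets, and the classification rule are assumptions made for exposition (no official standard encodes autonomy this way), but the sketch captures the idea that a system falls into the ‘autonomous’ category only when the ‘decide’ and ‘act’ functions of the OODA loop are performed by the machine:

```python
from dataclasses import dataclass
from enum import Enum, auto


class OODAFunction(Enum):
    """The four OODA-loop functions a weapon system may perform."""
    OBSERVE = auto()
    ORIENT = auto()
    DECIDE = auto()
    ACT = auto()


@dataclass
class WeaponSystemProfile:
    """Records which OODA functions are delegated to the machine."""
    name: str
    machine_functions: set

    def category(self) -> str:
        # Illustrative rule: if the machine both decides on and executes
        # targeting without human input, the system is 'autonomous';
        # otherwise it remains 'semi-autonomous'.
        if {OODAFunction.DECIDE, OODAFunction.ACT} <= self.machine_functions:
            return "autonomous"
        return "semi-autonomous"


# A remotely piloted UCAV: the machine observes and orients (sensors,
# GPS navigation), but a human operator decides and acts on targeting.
remotely_piloted = WeaponSystemProfile(
    "remotely piloted UCAV", {OODAFunction.OBSERVE, OODAFunction.ORIENT}
)
print(remotely_piloted.category())  # -> semi-autonomous
```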

Within the semi-autonomous category, General Atomics’ “MQ-9 Reaper” offers a typical example.  It is a remotely controlled “unmanned combat aerial vehicle” (UCAV) that carries a lethal payload and has sophisticated intelligence, surveillance, and reconnaissance capabilities.  All targeting actions, however, are performed manually by a human operator.  Its autonomy lies in its ability to get and remain airborne without an onboard pilot and to navigate in flight through an automated GPS-based system, including for take-off and landing.[7]

Fortunately, the second category — fully autonomous weapon systems that have lethal targeting capability but without any human input controlling what, when, and how the system goes about this targeting — is, thus far, unpopulated.  The portents, however, are grim.  For example, the US Navy’s X-47B has autonomous capability in relation to take-off and landing, and completed its first autonomous aerial refuelling in 2015.  As a combat system, it could, over time, be provided with even more autonomy in the execution of critical functions.[8]

Yet robotics engineers, military personnel, and ethicists tend to disagree on which devices are merely automatic and which are truly autonomous.  ‘Automatic’ robots, for example, might, in a structured situation, perform a group of activities that have been planned in advance.  ‘Autonomous’ robots, on the other hand, will still function under the control of a program, but will operate in open or unstructured environments, using information received from sensors to adjust speed and direction.[9]
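
The distinction can be made concrete in code.  In the toy sketch below (the function names and the simulated sensor reading are assumptions made purely for illustration), the ‘automatic’ routine replays a plan fixed in advance, while the ‘autonomous’ routine remains under the control of a program yet adjusts its speed and direction to sensor input on every step:

```python
import random


def automatic_sweep(waypoints):
    """'Automatic': replay a sequence planned in advance, blind to the
    environment; the same actions are performed on every run."""
    return list(waypoints)


def autonomous_sweep(start, goal, max_steps=50):
    """'Autonomous': still under the control of a program, but each step is
    adjusted using (here, simulated) sensor input rather than a fixed plan."""
    position, path = start, [start]
    for _ in range(max_steps):
        sensed_drift = random.uniform(-0.5, 0.5)  # stand-in for sensor data
        heading = 1.0 if goal > position else -1.0
        position += heading + sensed_drift        # adjust speed/direction
        path.append(position)
        if abs(position - goal) < 1.0:            # goal reached: stop
            break
    return path


print(automatic_sweep([(0, 0), (0, 5), (5, 5)]))  # identical on every run
print(autonomous_sweep(0.0, 10.0))                # differs with each 'sensing'
```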


Weaponization of AI

It is, perhaps, sufficient to describe an Autonomous Weapon System (AWS) as a weapon system with sensors, algorithms and effectors.[10]  Such a system could include stationary as well as mobile robotic components (e.g., unmanned air, ground, or naval vehicles) equipped with active or passive sensors to navigate and detect objects, motion or patterns.  These sensors could include electro-optical, infrared, radar or sonar detectors.[11]

There are two basic lines of argument advanced by those who support the proliferation of autonomous weapons systems:

The first is based upon the military advantages that such systems confer.  Here, emphasis is laid upon the fact that AWS act as a force multiplier for human beings.  That is, thanks to the presence and availability of AWS, fewer human beings are needed for the accomplishment of a successful mission, and the efficacy of each such human being is significantly enhanced.  Moreover, AWS can expand “the battlefield, allowing combat to reach into areas that were previously inaccessible”[12] and can generate a high offensive tempo even in the face of a breakdown of communications with and between human beings.  Further, AWS can be advantageously deployed for ‘dull’ (e.g., long-duration sorties), ‘dirty’ (e.g., missions to be undertaken in areas contaminated by biological, chemical or nuclear agents), or ‘dangerous’ missions (e.g., explosive ordnance disposal [EOD]).[13]

The second line of argument is centred upon the belief that it is ethically preferable (morally more justifiable) to use AWS than it would be to use human beings.  Proponents even hold that

“autonomous robots in the future will be able to act more ‘humanely’ on the battlefield for a number of reasons, including that they do not need to be programmed with a self-preservation instinct, potentially eliminating the need for a ‘shoot-first, ask questions later’ attitude.  The judgments of autonomous weapons systems will not be clouded by emotions such as fear or hysteria, and the systems will be able to process much more incoming sensory information than humans without discarding or distorting it to fit preconceived notions…. in teams comprising human and robot soldiers, the robots could be more relied upon to report ethical infractions they observed than would a team of humans who might close ranks”.[14]

As AI develops, it brings in its wake the hugely tempting option of removing altogether the need for the tenuous communication links that tie commanders to their troops.  Of course, killer robots have not yet become the norm; however, there are precursors that plainly show the pattern of expanding autonomy.  One example is the Israeli “Harpy” loitering munition, which can loiter in the air for long periods, searching for adversary radar signals.  When these are identified, it attacks and destroys the enemy radar through a process of controlled self-destruction.[15]  Another example is the SGR-1, an AI-enabled robot ‘infantry guard’ that was developed in the early years of the present century and subjected to successful trials over a decade ago (in 2006).  It has been deployed on the border between North and South Korea and is touted as an armed sentry that never sleeps and whose concentration never wavers.  It is armed with an automatic rifle and a grenade launcher and can detect human beings via infra-red sensors, although it does need a human operator to give it the go-ahead to fire.[16]
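
The SGR-1’s reported ‘go-ahead’ requirement is, in software terms, a human-in-the-loop gate between detection and engagement.  The minimal sketch below is entirely hypothetical (the class, threshold, and messages are illustrative assumptions, not the actual SGR-1 logic), but it shows where such a gate sits, and how little would need to change for supervised engagement to become fully autonomous:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    """A hypothetical sensor detection handed to the engagement logic."""
    target_id: str
    confidence: float


def engage(detection: Detection, human_approval: Optional[bool]) -> str:
    """Detection and tracking are automated, but the decision to fire is
    gated on an explicit, affirmative human 'go-ahead'."""
    if detection.confidence < 0.9:
        return "track only: detection confidence too low"
    if human_approval is None:
        return "hold fire: awaiting human operator"     # the human 'loop'
    if not human_approval:
        return "hold fire: operator declined engagement"
    return "engagement authorised by human operator"


print(engage(Detection("contact-7", 0.95), human_approval=None))
# -> hold fire: awaiting human operator
```

Removing the two ‘hold fire’ branches tied to the operator is, conceptually, all that separates this supervised design from a fully autonomous one, which is why the precursors above are read as portents.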

In situations where human operators can be inserted into the loop, the future may continue to see autonomy only in the technical domain rather than in the actual decision-making process.  This is more likely to happen in the aerospace and land domains, where establishing communications is easier.  In the maritime domain, however, communications pose a greater challenge and the temptation to deploy AI even for decision-making is correspondingly stronger.  Unlike the land, the vast oceans have no permanent infrastructure to receive and send messages.  Surface ships therefore carry extensive onboard communication suites, which enable them to talk to one another and with shore-based authorities.  Much as on land or in the air, these surface combatants use radio waves to communicate.  Radio waves travel well in the atmosphere but poorly in water, and underwater vessels, such as submarines, are consequently even more difficult to communicate with.  Subsurface vessels mostly communicate through underwater acoustic systems, whose signals, in turn, do not propagate across the air-water boundary.  This air-water communication barrier is a formidable one, and it makes the subsurface domain an ideal ground for the deployment of AI.

The term “Unmanned Underwater Vehicle” (UUV) has already become synonymous with the expression “Autonomous Underwater Vehicle” (AUV), and such vehicles are already being used by many countries both for scientific research and for military purposes such as intelligence-gathering, surveillance, reconnaissance, and mine countermeasures.  The secrecy associated with naval underwater systems makes it difficult to get a clear picture of how many States possess UUV capability and to what degree.  However, it is well known that the US, Russia, China, France, Germany, the UK, Israel, and India are among a rapidly growing list of countries with robust UUV programmes, into which AI is being increasingly integrated.  UUVs/AUVs differ widely in shape and form, from miniature vessels to very large ones, each with its own advantages and disadvantages.  The US, for instance, has ordered four “Extra Large Unmanned Undersea Vehicles” (XLUUVs) to be built by Boeing for its Navy.  These XLUUVs would operate independently for months underwater and cover up to 6,500 nautical miles on a single fuel cycle.[17]  Boeing’s “Echo Voyager”, on which the new XLUUV is based, has been declared to be a “fully autonomous UUV that can be used for a variety of missions that were previously impossible due to traditional UUV limitations.”[18]  For all practical purposes, these vessels are fully capable submarines, albeit without human operators.  The US Navy is increasing the number and capacity of its unmanned vessels, both on and under the surface: its long-term plans for the construction of naval vessels include at least 21 medium- and large-sized drone boats over the next five years (2021-2026).[19]  China, too, displayed a large unmanned submarine in its 2019 National Day military parade.[20]  Considering that the development of defence systems is, for the most part, far ahead of declared capabilities, one can assume that the development of LAWS programmes, including the use of artificial intelligence, is fairly advanced in these States.
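
To make the communications argument concrete, consider the hypothetical sketch below (the names and the two-branch policy are illustrative assumptions, not any navy’s doctrine): an underwater vehicle defers to a human operator only while its acoustic link holds, and the moment the link drops, whatever logic is coded on board becomes the de facto decision-maker.  This is precisely why the subsurface domain invites the delegation of decisions to AI:

```python
from enum import Enum


class LinkState(Enum):
    UP = "acoustic link up"
    DOWN = "acoustic link down"


def decide_action(contact: str, link: LinkState) -> str:
    """With a working link, the vehicle defers to a human operator; without
    one, the software on board is the only decision-maker left."""
    if link is LinkState.UP:
        return f"report '{contact}' and await operator instruction"
    # No human is reachable: whatever policy is coded here *is* the decision.
    return f"apply onboard rules of engagement to '{contact}'"


for link in (LinkState.UP, LinkState.DOWN):
    print(link.value, "->", decide_action("unidentified sonar contact", link))
```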

A variety of other groups, too, are progressively tapping into 21st-century technologies.  As AI-enabled LAWS proliferate beyond the relatively strict accountability norms of nation-states, malevolent non-State actors could develop the ability to automate killing on a massive scale.[21]  In 2018, Saudi Arabia destroyed two remote-controlled, explosives-filled vessels that were targeting the port of Jizan.[22]  More recently, in August 2020, the Saudi-led coalition fighting in Yemen intercepted and destroyed an explosive-laden drone over Saudi Arabia’s Abha International Airport, which had purportedly been launched by Houthi rebels politically aligned with Iran.[23]

Israel’s huge new offshore gas infrastructure presents an obvious and tempting target for its enemies, and leaves its Navy spread thin.  In 2012, the Lebanese Shi’ite militant group, Hezbollah, sent a drone deep into Israel, covering more than enough of the distance needed to reach some of these gas fields.[24]  Senior Israeli defence officers claim that Hezbollah, having acquired additional armament, now has the capability to attack these vital offshore installations.[25]

It is clear that the proliferation of AI provides terrorist groups with newer ways to threaten physical security, making the scope of protection and regulation even more challenging.[26]  Even though the threats are obvious, further weaponization of AI seems inevitable in this age of galloping technological and scientific development.  It is worrying to note the enthusiasm with which nations such as China, Russia, Israel, the United States and the United Kingdom are engaged in the development of such weapon systems.[27]


AWS and Human Rights

The development, deployment and utilization of AWS raise grave concerns for human rights: they compromise the right to life, the prohibition of torture and other cruel, inhuman or degrading treatment or punishment, and the right to security of person, and may undermine other human rights besides.

The war between Armenia and Azerbaijan, which ended in November 2020, is a telling example of the use of autonomous systems for warfare.  Drone attacks, striking Armenian and Nagorno-Karabakh soldiers and destroying tanks, artillery and air defence systems, provided Azerbaijan with a major victory in the 44-day war.[28]

However, this new feature of military conflict between the two countries turned the hostilities from a bloody, bare-knuckled ground fight into a deadly game of hide-and-seek against an all-too-patient – and often unseen – airborne, non-human enemy.  Hundreds died in less than two weeks, with extensive concomitant damage to more than 120 residential and administrative buildings.  The drone strikes forced the evacuation of around 6,000 residents, with most women and children seeking refuge outdoors.[29]

It is a core norm of international human rights law that no one may be arbitrarily deprived of life.[30]  This is a provision of international human rights law that may never be suspended or derogated from, even “in time of public emergency which threatens the life of the nation”.  The right to liberty and security of the person “protects individuals against intentional infliction of bodily or mental injury, regardless of whether the victim is detained or non-detained.  For example, officials of States parties violate the right to personal security when they unjustifiably inflict bodily injury.”  States “should also prevent and redress unjustifiable use of force in law enforcement, and protect their populations against abuses by private security forces, and against the risks posed by excessive availability of firearms.”[31]

The UN Code of Conduct for Law Enforcement Officials (UNCCLEO) establishes the overall principle that “law enforcement officials may use force only when strictly necessary and to the extent required for the performance of their duty”.[32]  It thereby lays down the basic principle that no greater force than is necessary to accomplish the legitimate objective ought to be used.

In order to carry out policing and law-enforcement tasks in a lawful manner, AWS would have to reliably evaluate the degree of threat of death or serious injury, correctly determine who is causing the danger, consider whether force is necessary to defuse the threat, be able to recognise and employ means other than force, have at their disposal various methods of communication as well as policing weapons and equipment that allow for a graduated response, and have back-up means and resources available.  To add to this complexity, every circumstance would require a different and unique response, making it extremely difficult for all of this to be reduced to a series of mathematically based algorithms and probabilistic calculations.
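
A deliberately naive sketch helps to show why.  The toy ‘graduated response’ ladder below (every threshold, name, and input is an assumption made for illustration) is trivial to write; the difficulty described above lies in reliably producing its inputs, since each probability and boolean stands in for a contextual human judgement that resists reduction to a fixed threshold:

```python
def graduated_response(threat_probability: float,
                       lesser_means_available: bool,
                       can_communicate: bool) -> str:
    """A deliberately naive 'graduated response' ladder.  Every branch below
    stands in for a judgement (threat assessment, necessity, availability of
    alternatives) that, in real policing, is contextual and open-ended."""
    if threat_probability < 0.3:
        return "observe; no force warranted"
    if can_communicate:
        return "attempt de-escalation through communication"
    if lesser_means_available:
        return "employ means other than force"
    if threat_probability > 0.9:
        return "use the minimum force strictly necessary to defuse the threat"
    return "escalate gradually while continuously reassessing"


# The ladder itself is trivial; the hard, unsolved part is producing these
# three inputs reliably in an unpredictable, ever-evolving environment.
print(graduated_response(0.7, lesser_means_available=True, can_communicate=False))
```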

Despite the impressive and often incredible technological advances that have been witnessed in recent years, it does not seem possible that AWS, without meaningful and effective human control and judgment, would be able to comply with these provisions, especially in unpredictable and ever-evolving environments.  It is important to note that the arguments for prohibiting AWS are even weightier when extended to ‘lethal’ AWS (LAWS).  This does not, in any way, diminish the need to voice concerns over the significant threats to peace and global stability that arise even from AWS which have no direct lethal or sub-lethal effect on human beings.  A critical and relevant example concerns the utilisation of ‘swarm intelligence’ technologies, which may enable an attacker to launch significant assaults upon potentially uninhabited enemy infrastructure.[33]

The Nagorno-Karabakh crisis has forced the international community to question the decision-making processes of AWS.  These weapon systems, owing to their autonomous nature, lack the human ability to differentiate between an armed soldier and a civilian.  One aerial drone strike killed five civilians and injured ten others in the town of Martuni.  Residents had been forced to tape up headlights or smear mud on their cars to obscure any markings that could make them a target.  Public gatherings were discouraged, with people being urged not to spend too much time in any one place.[34]  The high civilian toll of such attacks forced a 13-year-old to poignantly state, “I no longer love blue skies.  In fact, I now prefer grey skies.  The drones do not fly when the skies are grey.”[35]  The uncertainty and shock of drone strikes have resulted in an outpouring of immense hatred from civilians towards the deployment of armed drones.

This leads to another critical issue relevant to the deployment of autonomous weapons.  A fundamental feature of the application and implementation of IHL is its predication on the individual, with gradual progress towards a “homo-centred instead of state-centred”[36] approach, as demonstrated by concepts such as individual criminal responsibility and command responsibility.[37]  This extends to the specialised area of weapons law, including human involvement in the design, development, and employment of LAWS.  Several IHL provisions reflect the need for human involvement.  Among the legal issues identified was whether IHL, premised as it is upon individual responsibility, could continue to apply to autonomous machines.[38]  At the GGE, there was general consensus among States that, however it is defined, the human-agency aspect of IHL needs to be maintained in relation to LAWS.  States and NGOs alike have referred to and/or supported concepts such as “meaningful human control”,[39] “human judgement”,[40] “human involvement”,[41] and “human supervision”.[42]  These concepts are used interchangeably and generally without definition.

A 2017 report jointly produced by the “World Commission on the Ethics of Science and Technology” (COMEST) and the “UNESCO Ethics Committee” examined “armed military robotic systems (armed drones)” and “autonomous weapons” in relation to their mobility, interactivity, communication, and autonomy, that is, the capacity to take decisions without external intervention.  The report considered that legal norms and engineering codes of conduct may apply; that a cognitive robot, wherein decision-making is delegated to a machine, engages the responsibility of designers and manufacturers; and that the precautionary principle should be applied.  The report emphasised that, as a legal issue, the deployment of AWS “would violate IHL.  Ethically, they break the guiding principle that machines should not be making life or death decisions about humans”.  It went on to add, “With respect to their technical capability, autonomous robotic weapons lack the main components required to ensure compliance with the principles of distinction and proportionality. Though it might be argued that compliance may be possible in the future, such speculations are dangerous in the face of killing machines whose behaviour in a particular circumstance is stochastic and hence inherently unpredictable.”  Effectively demolishing the clever but nevertheless fallacious ‘moral’ arguments advanced by proponents of AWS that were alluded to earlier in this article, the report unequivocally stated that “The moral argument that the authority to use lethal force cannot be legitimately delegated to a machine – however efficient – is included in international law: killing must remain the responsibility of an accountable human with the duty to make a considered decision”.  The report strongly recommended that “for legal, ethical and military-operational reasons, human control over weapon systems and the use of force must be retained”.[43]


Conclusion

In the ultimate analysis, only human beings can be held responsible for taking life, and autonomous robots are not able to comply with ethical, legal, and military norms.

James Cameron’s cult film The Terminator depicted a dystopian future in which Skynet, a malevolent Artificial Intelligence (AI), initiates a nuclear war against humans to ensure its own survival.  The film was released in 1984, well before the advent of modern forms of AI, but was prescient in foreshadowing some of the concerns that have come to dominate debates about intelligent computer systems.  One of the most renowned of the world’s contemporary scientists, the late Stephen Hawking, described AI as the single greatest threat to human civilization.[44]  This is not a view limited to scientists alone.  Henry Kissinger, too, has warned that AI will change human thought and human values.[45]

The technology that The Terminator depicts is not yet with us, and a form of self-aware artificial intelligence described as ‘general AI’ is, according to most analysts, some decades away.  Yet, AI will probably continue to be integrated into weapons systems and used to enhance the precision, lethality and destructiveness of military force.  Concomitantly, constant attention will need to be given to the legal, ethical and strategic debates around human enhancement — including the physical and cognitive development and evolution of military forces, and how physical and cognitive processes might change and evolve as weaponized AI is increasingly integrated into war-fighting.

This leaves us with some really big questions.  Is weaponization desirable?  Should the international community be seeking to control and stop these processes, and what effect might that have on non-military uses of AI?  In this respect, this author believes that the hyperbole-filled debate about “killer robots” misses the point rather widely.  AI is already being weaponized, and the debate about banning fully autonomous weapons systems ignores much of the weaponization of AI that is already in full swing.  A final point for further reflection is the role that AI may play in multilateral fora such as NATO, and how the use of AI within multilateral security missions will be shared and harnessed among contributing nations.  Developing common operational standards, requirements and ethical guidelines for AI-enabled capabilities will be both necessary and challenging.[46]

We must also remember that autonomous systems per se are not necessarily bad.  In fact, autonomy and AI in machines have enabled us to reach and explore Mars and the depths of the oceans on our own planet.  Even in warfare, such machines could provide a way to avoid placing human beings in situations that endanger life or limb.  The debate then boils down to how humans use this capability.  Although the world may be late in recognising the dangers of LAWS, we may still have time to ensure that the development of AI moves in a constructive rather than a destructive direction.

********************


About the Author:

*Ms Shweta Nair is a fourth-year law student pursuing a BA LLB degree at the Army Institute of Law, Mohali.  She is currently undergoing an extended, four-month internship at the National Maritime Foundation, where her interest in various unexplored facets of Public International Maritime Law has led her on several voyages of discovery as a legal scholar.  Shweta can be contacted at nair.shweta339@gmail.com.


Endnotes:

[1] Finland noted that the lack of a definition of ‘terrorism’ had not prevented the international community from establishing an international legal framework; Egypt, similarly, noted that the lack of a definition of blinding laser weapons had not prevented Protocol IV from banning them.

See:

(a) “Exchange of views of Egypt”, UN Digital Recordings Portal, 15 November 2017

(b) “Exchange of Views of Finland”, UN Digital Recordings Portal, 15 November 2017, http://conf.unog.ch/digital-recordings/#

[2] “Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons”, Report of the ICRC Expert Meeting (2016), https://www.icrc.org/en/publication/4283-autonomous-weapons-systems#

[3] Ozlem Ulgen, “Human Dignity in an Age of Autonomous Weapons: Are We in Danger of Losing an ‘Elementary Consideration of Humanity’?”, ESIL Conference Paper Series, 8, No. 9 (2016): 7-8.

[4] Ibid

[5] See, for example, the ten levels of autonomy devised by the USA: US Department of Defense, Office of the Secretary of Defense, “Unmanned Aircraft Systems Roadmap 2005-2030” (2005) (‘US Roadmap’), para 4.0: (1) remotely guided; (2) real time health/diagnosis; (3) adapt to failures and flight conditions; (4) onboard route replan; (5) group coordination; (6) group tactical re-plan; (7) group tactical goals; (8) distributed control; (9) group strategic goals; (10) fully autonomous swarms.

[6] Colin Wills, “Unmanned Combat Air Systems in Future Warfare- Gaining Control of the Air”, Palgrave Macmillan, 2015, 43-46.

[7] US Roadmap, supra note 5.

[8] “X-47B UCAS Makes Aviation History…Again”, Northrop Grumman, http://www.northropgrumman.com/Capabilities/x47bucas/Pages/default.aspx

See also: US Roadmap, supra note 5.

[9] Noel Sharkey, “Automating Warfare: Lessons Learned from the Drones”, Journal of Law, Information and Science 21 (2011/2012): 141.

[10] “Summer Study on Autonomy”, Defense Science Board (DSB) Report, US Department of Defense, June 2016, http://www.acq.osd.mil/dsb/reports/2010s/DSBSS15.pdf

[11] J Fraden, “Handbook of Modern Sensors: Physics, Designs, and Applications”, Springer International Publishing, 2016, 271–333.

[12] Dr Amitai Etzioni and Dr Oren Etzioni, “Pros and Cons of Autonomous Weapons Systems”, Military Review, May-June 2017, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/

[13] Jeffrey S Thurnher, “Legal Implications of Fully Autonomous Targeting”, Joint Force Quarterly 67 (4th Quarter, October 2012): 83, http://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-67/JFQ-67_77-84_Thurnher.pdf

[14] Ronald C Arkin, “The Case for Ethical Autonomy in Unmanned Systems,” Journal of Military Ethics 9, Number 4 (2010), 332–41

[15] “Harpy”, Israel Aerospace Industries, https://www.iai.co.il/p/harpy

[16] Mark Prigg, “Who goes there? Samsung unveils robot sentry that can kill from two miles away”, Dailymail Online, 16 September 2014, https://www.dailymail.co.uk/sciencetech/article-2756847/Who-goes-Samsung-reveals-robot-sentry-set-eye-North-Korea.html

[17] John Keller, “Boeing to develop new payloads, capabilities, and missions for Orca large long-range unmanned submarines”, Military & Aerospace Electronics, 15 October, 2020, https://www.militaryaerospace.com/unmanned/article/14185349/unmanned-payloads-submarines

[18] “Echo Voyager”, Boeing, https://www.boeing.com/defense/autonomous-systems/echo-voyager/index.page

[19] “Report to Congress on the Annual Long-Range Plan for Construction of Naval Vessels”, US Department of Defense, 09 December 2020, https://media.defense.gov/2020/Dec/10/2002549918/-1/-1/1/SHIPBUILDING%20PLAN%20DEC%2020_NAVY_OSD_OMB_FINAL.PDF

[20] David R Strachan, “China Enters the UUV Fray”, The Diplomat, 22 November 2019, https://thediplomat.com/2019/11/china-enters-the-uuv-fray/

[21] Jacob Ware, “Terrorist Groups, Artificial Intelligence, and Killer Drones”, War on the Rocks, 24 September 2019, https://warontherocks.com/2019/09/terrorist-groups-artificial-intelligence-and-killer-drones/

[22] “Saudi Navy Intercepts Two Explosives-Filled Drone Boats”, The Maritime Executive, 01 October 2018, https://www.maritime-executive.com/article/saudi-navy-intercepts-two-explosives-filled-drone-boats

[23] “Saudi-led coalition destroys explosive-laden drones, boat launched by Yemen’s Houthis: SPA”, Reuters, 31 August 2020, https://www.reuters.com/article/us-saudi-security-yemen/saudi-led-coalition-destroys-explosive-laden-drone-boat-launched-by-yemens-houthis-spa-idUSKBN25Q0T7

[24] “Israel’s Navy gears up for new job of protecting gas fields”, Reuters, 01 April 2013, https://www.reuters.com/article/israel-navy-natgas/israels-navy-gears-up-for-new-job-of-protecting-gas-fields-idUKL5N0CN01P20130401?edition-redirect=in

[25] “Hezbollah issues fresh threat against Israel’s offshore gas rigs”, The Times of Israel, 18 February, 2018. https://www.timesofisrael.com/hezbollah-threatens-to-strike-israels-offshore-gas-platforms/

[26] Jacob Ware, “Terrorist Groups, Artificial Intelligence, and Killer Drones”, supra note 21.

[27] Daan Kayser, “Killer Robots”, Pax for Peace, https://www.paxforpeace.nl/our-work/programmes/killer-robots

[28] Robyn Dixon, “Azerbaijan’s drones owned the battlefield in Nagorno-Karabakh – and showed future of warfare”, The Washington Post, 12 November 2020, https://www.washingtonpost.com/world/europe/nagorno-karabkah-drones-azerbaijan-aremenia/2020/11/11/441bcbd2-193d-11eb-8bda-814ca56e138b_story.html

[29] Nabih Bulos and Marcus Yam, “A new weapon complicates an old war in Nagorno-Karabakh”, Los Angeles Times, 15 October 2020, https://www.latimes.com/world-nation/story/2020-10-15/drones-complicates-war-armenia-azerbaijan-nagorno-karabakh

[30] Universal Declaration of Human Rights, 1948 [UDHR], art. 3, upholds the right of everyone “to life, liberty and security of person”; International Covenant on Civil and Political Rights, 1966 [ICCPR], art. 6(1): “Every human being has the inherent right to life.  This right shall be protected by law.  No one shall be arbitrarily deprived of his life.”

[31] General Comment no. 35 on liberty and security of person, Human Rights Committee, UN Doc CCPR/C/GC/35 (2014), Para 9

[32] UN Code of Conduct for Law Enforcement Officials, 1979 [UNCCLEO], art. 3

[33] “Department of Defense Announces Successful Micro-Drone Demonstration”, US Department of Defense, https://www.defense.gov/News/News-Releases/News-Release-View/Article/1044811/department-of-defense-announces-successful-micro-drone-demonstration/

[34] Bulos and Yam, “A new weapon complicates an old war in Nagorno-Karabakh”, supra note 29.

[35] Alexander Abad-Santos, “This 13-year-old is scared when the sky is blue because of our drones”, The Atlantic, 29 October 2013, https://www.theatlantic.com/politics/archive/2013/10/saddest-words-congresss-briefing-drone-strikes/354548/

[36] Kjetil Mujezinović Larsen et al, “Searching for a ‘Principle of Humanity’ in International Humanitarian Law”, Cambridge University Press, 2012

[37] ICTY Statute, 1993, art. 7; ICTR Statute, 1994, art. 6; Rome Statute of the ICC, 1998, art. 25 and 28; Second Hague Protocol on the Protection of Cultural Property in the Event of Armed Conflict, 1999, art. 15.

[38] “Food-for-thought”, GGE Chairperson paper, CCW/GGE.1/2017/Wp.1 (2017)

[39] UN Digital Recordings Portal, “Exchange of views of Pakistan, Switzerland, New Zealand, Korea” (November 13, 2017); “Finland, Ireland, Russia, Turkey” (November 15, 2017); “the Netherlands, Sierra Leone” (November 16, 2017).

[40] UN Digital Recordings Portal, “Exchange of views of Zambia, Norway” (November 15, 2017); “the Netherlands, Estonia, Ireland” (November 16, 2017)

[41] UN Digital Recordings Portal, “Exchange of views of ICRC expert panel” (November 14, 2017); China (November 15, 2017); “Chair, Austria” (November 16, 2017)

[42] UN Digital Recordings Portal, “Exchange of views of the USA” (November 16, 2017); “Towards a definition of lethal autonomous weapons systems”, Belgian paper, CCW/GGE.1/2017/WP.3 (2017); ICRC Statement (2017)

[43] “Robotics Ethics”, World Commission on the Ethics of Science and Technology (COMEST) and UNESCO Ethics Committee Report, 2017: SHS/YES/COMEST-10/17/2/REV, Paris, 14 September 2017, https://unesdoc.unesco.org/ark:/48223/pf0000253952_eng

[44] Arjun Kharpal, “Stephen Hawking says AI could be ‘worst event in the history of our civilization’”, CNBC, 6 November 2017, https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html

[45] Henry A Kissinger, “How the Enlightenment Ends”, The Atlantic, June 2018, https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-histroy/559124/

[46] “Ethical Guidelines for Trustworthy AI”, Independent High-level Expert group on Artificial Intelligence, European Commission, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
