Alexa, Whose Fault Is It? Autonomous Weapon Systems Investigations and the Importance of a Deliberate Accountability Process
Volume 228, Issue 2 (2020)
For unto whomsoever much is given, of him shall be much required: and to whom men have committed much, of him they will ask for more.1
I. Introduction
A. Hypothetical
It is sometime in the future, and the United States (U.S.) military is engaged in a combat operation. During this operation, an Army brigade commander deems it prudent to employ an autonomous weapon system (AWS)—known as “Weapon X”—to target enemy troops. Weapon X is an aerial platform designed to loiter in a given location while searching for targets, and it is pre-loaded with data to identify and target enemy vehicles, to include armored personnel carriers.2 On the day in question, the commander authorizes Weapon X to deploy to an area where enemy troops may be operating. Although Weapon X operates in a “human on the loop” capacity, enemy electronic warfare has greatly restricted its ability to transmit video feed to the command center.3 As a result, the Soldiers monitoring Weapon X receive only written target analysis conclusions from the system.
At some point after deployment, Weapon X submits a message to the command center: it has identified an armored personnel carrier and is prepared to strike. The commander has reason to believe armored personnel carriers may be present in the area and, based on this information, allows Weapon X to continue its strike. The target is destroyed. The team later learns the target was a civilian van, and ten children were killed.
In the aftermath, the higher command initiates an administrative investigation into the incident in accordance with Army Regulation 15-6.4 This investigation examines the commander and those working with the AWS on the date of the incident. It finds that their actions were appropriate based on the information provided by the AWS. Having looked at their actions, the investigation next turns to the AWS itself.
It is at this point that the investigating officer (IO) encounters difficulty. Despite her best efforts, the IO has only limited experience in computer programming. No individuals within the combat division have the in-depth experience necessary to examine the AWS’s design. Moreover, the system was developed in a collaborative effort between the U.S. Defense Advanced Research Projects Agency (DARPA) and a private corporation and, although helpful, neither seems particularly motivated to provide assistance expeditiously, as the investigation comes from well outside their organizational chains of command.5 With nowhere to turn and the deadline approaching, the IO is forced to conclude that although the commander is not responsible for the deaths of the children, she is unable to determine who—or what—is.
B. Background
The idea of artificial intelligence (AI) has existed in popular culture since as early as 1920.6 While some fictional accounts place AI as a great boon to society, others explore its darker side.7 Today, what was once reserved for the realm of science fiction has entered our everyday lives. Autonomous robotic vacuums clean our houses,8 and “smart” thermostats control our living environments.9 Robotic personal assistants, such as Amazon’s “Alexa,” listen to our day-to-day lives in order to answer questions, play music, or place orders with online retailers,10 and AI is being tested to drive our cars and pilot commercial airlines.11 At the same time, the potential of AI has not escaped the watchful eye of militaries throughout the world.
According to Russian President Vladimir Putin, “The one who becomes the leader in [the AI] sphere will be the ruler of the world. When one party’s drones are destroyed by drones of another, [that party] will have no other choice but to surrender.”12 Other world powers have taken notice of the enormous potential of AI as a warfighting tool and are exploring the role autonomous systems will have in the future of combat. This exploration is not merely conceptual. The United States has developed and fielded the Phalanx series of active defense systems (to include the Counter-Rocket, Artillery, and Mortar, or C-RAM, system), which demonstrate autonomous capabilities.13 Israel has operationalized the Harpy autonomous drone, which hunts and destroys enemy radar stations.14 Likewise, Russia has publicized its development of autonomous tanks,15 and China has recently indicated its intent to explore autonomous drone swarms.16
While many have recognized the military advantages offered by AWS, a number of governmental and non-governmental organizations have taken a negative view of this emerging technology. This has led to a spirited debate on the morality and legality of AWS, with some organizations calling for outright bans.17 Although most of these concerns have not stood up to scrutiny, the potential inability to assign human blame for collateral damage remains a primary argument for banning AWS.18 As policies are developed at the national and international levels, this concern over a lack of human accountability could severely limit the United States’ ability to develop autonomous weapon systems and creates the potential to restrict our ability to compete in an ever-changing military environment.19
The ability to develop and employ AWS unencumbered requires addressing concerns about a lack of human accountability in AWS. To establish human accountability, we must create a system that allows for efficient and effective investigations into incidents involving AWS and permits the assignment of human responsibility for AWS actions when necessary. After providing a basic understanding of AWS, this article discusses the necessity of accountability within AWS and outlines a deliberate system of responsibility within AWS creation and utilization. It then identifies the requirement to conduct investigations into AWS incidents and concludes with recommendations for the design and implementation of an AWS investigative system capable of properly assigning accountability for AWS incidents.
II. Understanding Artificial Intelligence and Autonomous Weapon Systems
A. Artificial Intelligence and Deep Learning
In order to understand the issues within AWS investigations, one must first understand some key facets of programming AI. Generally, programming methodologies for AI fall somewhere along a spectrum of practices.20 On one side of the spectrum, human programmers manually enter code to create a system of logical “decision trees” that a machine must follow. These designers “thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code.”21 On the other side of the spectrum are programs that:
[take] inspiration from biology, and [learn] by observing and experiencing. This mean[s] turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the problem generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.22
These machine-learning techniques, known as “neural networks” and “deep learning,” present serious considerations in investigations of AWS, centering on the idea that “[n]o one really knows how the most advanced algorithms do what they do.”23 “The computers...have programmed themselves, and they do it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.”24 Thus, while “[a]lgorithmic transparency means you can see how the decision is reached...you can’t with [machine-learning] systems because it’s not rule-based software.”25 Indeed, this method of programming is unique enough that some experts take pains to distinguish these machine-learning techniques from other AI systems.26
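To make the distinction concrete, consider the following minimal sketch, written in Python with entirely hypothetical features and thresholds. It contrasts a hand-coded decision tree, in which an investigator can audit every rule, with a machine-learning model whose behavior resides in fitted numerical weights rather than inspectable rules.

    # Rule-based versus learned classification: a hypothetical illustration.
    from sklearn.neural_network import MLPClassifier

    def rule_based_classify(length_m: float, emits_radar: bool) -> str:
        """Hand-written decision tree: every branch is explicit and auditable."""
        if emits_radar:
            return "radar station"
        if length_m > 6.0:  # hypothetical threshold chosen by a human programmer
            return "armored personnel carrier"
        return "unknown"

    # The learned alternative: behavior is encoded in thousands of fitted weights.
    model = MLPClassifier(hidden_layer_sizes=(64, 64))
    # model.fit(example_features, example_labels)  # the machine "programs itself" from data
    # After training, no line of code states *why* an input was labeled a target;
    # the "reasoning" is distributed across model.coefs_, which resists human inspection.

In the first case, an IO can point to the exact rule that produced a classification; in the second, even the system’s designers can describe only how the model was trained, not why it reached a particular conclusion.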
B. Autonomous Weapon Systems
In addition to a fundamental understanding of AI, it is important to have a basic definition and understanding of AWS. While the Department of Defense (DoD) defines an AWS as “[a] weapon system that, once activated, can select and engage targets without further intervention by a human operator,”27 this definition is overly simplistic, as it fails to adequately distinguish AWS from automated weapons.28 For example, anti-tank land mines or naval mines that identify appropriate targets based on weight or infrared, magnetic, or acoustic signature would be included in this definition of AWS, despite the fact that they have existed for decades.29 In fact, the DoD recognizes the weakness in its classification by excluding certain items—including mines—from the definition.30 This is proper because “[i]n contrast to these purely reactive systems, autonomous weapon systems gather and process data from their environment to reach independent conclusions about how to act.”31 As a result, instead of the DoD definition, a better definition of AWS is “a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets.”32
Many authorities further the discussion of autonomous systems by considering three sub-categories of weapons with varying levels of autonomous characteristics.33 First, “semiautonomous weapon systems” utilize automation for many tasks but still require human involvement in the targeting decision process. Thus, while the weapon system itself may identify and classify targets, a human operator remains in the “kill chain,” and human authorization is required before the weapon fires. For this reason, semiautonomous weapon systems are often referred to as “human in the loop” systems.34 Importantly, many experts on AWS, including the DoD, do not include semiautonomous weapon systems in their definition of AWS.35
The next category refers to systems that involve human supervision of the weapon but do not require human permission to act. Known as “human on the loop” systems, or “supervised autonomous weapon systems,” these systems act largely of their own accord, but in a supervised manner. Although humans monitor these systems and remain available to react in real time should a mishap be identified, their permission is not needed for the AWS to act.36
Finally, “fully autonomous weapon systems,” or “human off the loop” systems, operate in a manner entirely without human intervention.37 These systems would be deployed and have the ability to search for, identify, categorize, and carry out an attack without further human involvement.38
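For illustration only, the following Python sketch (with entirely hypothetical names) encodes the three categories as a simple engagement gate, showing where the human sits relative to the kill chain in each.

    from enum import Enum

    class LoopRole(Enum):
        HUMAN_IN_THE_LOOP = "semiautonomous"         # human must authorize each engagement
        HUMAN_ON_THE_LOOP = "supervised autonomous"  # human may veto in real time
        HUMAN_OFF_THE_LOOP = "fully autonomous"      # no human intervention after deployment

    def may_engage(role: LoopRole, authorized: bool, vetoed: bool) -> bool:
        """Illustrative gate: who, if anyone, stands between the AWS and the trigger."""
        if role is LoopRole.HUMAN_IN_THE_LOOP:
            return authorized    # affirmative human permission required
        if role is LoopRole.HUMAN_ON_THE_LOOP:
            return not vetoed    # acts unless a supervisor intervenes
        return True              # off the loop: engages entirely on its own

Under this framing, the hypothetical Weapon X operated on the middle tier: its human supervisors could veto a strike in principle, but degraded communications made meaningful supervision difficult.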
III. Accountable Artificial Intelligence
A. Accountability Concerns
When fused with deep-learning AI, AWS raise many concerns regarding a lack of accountability. As the Campaign to Stop Killer Robots contends:
The use of fully autonomous weapons would create an accountability gap as there is no clarity on who would be legally responsible for a robot’s actions: the commander, programmer, manufacturer, or robot itself? Without accountability, these parties would have less incentive to ensure robots did not endanger civilians, and victims would be left unsatisfied that someone was punished for the harm they experienced.39
While this potential lack of transparency breeds distrust in some, those concerns are misplaced. To understand why, one must briefly dissect how the concepts of explainability and responsibility relate to the accountability of AWS.
Explainability in AI seeks to solve the problem that “[c]ertain algorithms act as a ‘black box,’ where it is impossible to determine how the output was produced...”40 The argument holds that “[b]y exposing the logic behind a decision, explanation can be used to prevent errors and increase trust.”41 Nevertheless, while explainability in AI is an important feature (and one that considerable resources are being devoted to achieving),42 it is not required to establish accountability. An illustration of this is provided by the military’s widespread use of animals, such as working dogs.43
B. (Un)Explainable AI
In many ways, military working dogs act in a semiautonomous or fully autonomous manner.44 Like AWS, military working dogs possess a significant amount of autonomy but “[t]heir independence is tempered through extensive training; [and] their propensity for unpredictable action is addressed through limited use.”45 Despite their autonomous characteristics, the legal analysis of animals in armed conflict is limited to Protocol II of the Convention on Certain Conventional Weapons, which prohibits the use of animal-borne booby-traps or other devices.46 This should lead one to consider “[w]hat then, would happen if an animal combatant were to take an action that resulted in what seemed to be a serious violation of international humanitarian law?”47
To answer this question, some remove explainability from the equation and suggest an analysis based on the responsibility of the human handlers.48 Indeed, as international law imposes no requirement to explain the actions of animals in warfare, examining the responsibility of associated humans is a logical method of ensuring accountability.
C. Human Responsibility
Likewise, accountability in AWS should focus less on explainability and more on human responsibility. The assignment of human responsibility can be premised on the fact that just as military working dogs are not truly autonomous since they rely on a handler to operate, AI will never be completely autonomous. Indeed, “[n]o entity—and for that matter, no person—is capable enough to be able to perform competently in every task and situation. On the other hand, even the simplest machine can seem to function ‘autonomously’ if the task and context are sufficiently constrained.”49 Put differently, “there exist no fully autonomous systems, just as there are no fully autonomous soldiers, sailors, airmen or Marines.”50 Given this understanding, one can begin to envision how AWS responsibility can be established. Much like military parachute riggers annotate responsibility for each phase of the parachute packing and inspection process,51 the AWS design and implementation process should annotate and designate human responsibility for the phases of AWS creation and use.52 In other words, human responsibility for AWS must be traceable.53
To determine when and where traceable human responsibility may be introduced into AWS, it is helpful to consider the defense acquisition framework, which is utilized for the procurement of defense materiel.54 Under this framework, acquisition of an item follows one of six acquisition pathways, based on the particular item to be procured and the urgency of the need.55 Although the terminology used for the phases of the various acquisition pathways differs, two of the phases discussed in the Major Capability Acquisition pathway provide an outline for discussing traceable human responsibility in AWS that can be translated to other acquisition strategies.
To begin, the Engineering and Manufacturing Development phase of the Major Capability Acquisition pathway offers three opportunities for the establishment of responsibility. The first opportunity is when program requirements are set, evaluated, and approved. While establishing formal responsibility during this phase of an acquisition may be unnecessary for traditional weapon systems,56 AWS program requirements will demand much greater detail because they encroach on decisions that have traditionally been made on the battlefield. Specifically, requirements must include the ability for an AWS to comply with law of war principles, such as distinction,57 proportionality,58 and military necessity,59 during operations.60 Because this ability to comply with law of war principles is an essential task, forming the backbone of lawful AWS use, it is critical that responsibility be established for this portion of the AWS procurement process.
The second opportunity for responsibility within the Engineering and Manufacturing Development phase is found in the design and production of the item.61 At this point in the acquisition process, a designated individual should attest to the accuracy of the computer programming utilized to achieve each specific AWS requirement. As these requirements will include compliance with law of war principles, this person must be able to attest to the accuracy with which the AWS complies with them.
Third, responsibility should be designated in the testing and validation portion of the Engineering and Manufacturing Development phase of the Major Capability Acquisition pathway.62 While methods of testing weapons systems are generally well established, designating responsibility at this stage will ensure testing and validation utilize the best available efforts to examine the unique characteristics of an AWS prior to its validation as a weapons system.63
Lastly, a system of responsibility must include the final stage of the procurement process: deployment of the AWS.64 As with conventional weapons, this phase must assign responsibility for the utilization of an AWS to commanders and to the individual end-users of the item. Although establishing a chain of responsibility along these lines is arduous, these deliberate actions are necessary to provide the structure that ultimately allows accountability of AWS through investigations.
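One way to picture such a traceable chain is as a signed record for each phase of the acquisition and deployment process. The Python sketch below uses hypothetical names, fields, and dates to illustrate the four points of responsibility described above; it is a conceptual aid, not a proposed DoD data standard.

    from dataclasses import dataclass
    from datetime import date

    @dataclass(frozen=True)
    class ResponsibilityRecord:
        """One signed entry in a hypothetical AWS responsibility chain."""
        phase: str        # requirements, design/production, testing/validation, deployment
        responsible: str  # the named individual attesting to this phase
        attestation: str  # what the signer certifies
        signed_on: date

    chain = [
        ResponsibilityRecord("requirements", "Program Manager A",
                             "Requirements include compliance with distinction, "
                             "proportionality, and military necessity", date(2030, 1, 10)),
        ResponsibilityRecord("design/production", "Lead Engineer B",
                             "Programming accurately implements the approved requirements",
                             date(2031, 3, 4)),
        ResponsibilityRecord("testing/validation", "Test Director C",
                             "Best available methods examined the system's unique "
                             "autonomous characteristics", date(2032, 7, 22)),
        ResponsibilityRecord("deployment", "Brigade Commander D",
                             "System employed within tested and authorized parameters",
                             date(2033, 6, 2)),
    ]

Much like a parachute rigger’s log, such a record would give a later investigation a named, accountable human for every phase of the system’s life cycle.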
IV. Investigative Considerations
A. Requirement to Investigate
It can be expected that accountability for AWS will be established through investigations, as inquiries into the use of force by the U.S. military take place, formally and informally, on a regular basis. By policy, U.S. military forces must evaluate “the overall effectiveness of employing joint force targeting capabilities during military operations.”65 Known as “Combat Assessments,” these inquiries into the effects of a targeting operation include conducting a Battle Damage Assessment (BDA), which determines, among other things, whether a strike resulted in “unintentional or incidental injury or damage to persons or objects that would not be lawful military targets in the circumstances ruling at the time.”66 Unwarranted or unexpected collateral damage identified in the BDA (or identified by other sources, such as media reports) often becomes the driver of follow-on investigations.67
Although “[u]nder the current state of IHL (International Humanitarian Law), there is no express requirement placing states under a duty to investigate all strikes resulting in civilian losses,”68 it is widely accepted that states are required to prevent and prosecute grave breaches of IHL.69 “In order to discharge the obligation to prosecute those who commit grave breaches, a state must ipso facto conduct credible investigations that could, if warranted, lead to prosecutions.”70 Further, some argue that investigations into breaches that amount to less than grave breaches of IHL can “be deduced from articles 1 and 146 of [the Fourth Geneva Convention] as well as from articles 1 and 87(3) of [Additional Protocol] I.”71 This theory is based on the assertion that “IHL creates an obligation to penalize all kinds of breaches and not only those which qualify as grave.”72 The obligation to penalize, when combined with the requirement that “[i]n all circumstances the accused person shall benefit by safeguards of proper trial and defence,”73 suggests some form of proper and credible investigation must be carried out to account for other than grave breaches of IHL.
In this regard, U.S. policy is clear. The DoD requires that all “possible, suspected, or alleged violation[s] of the law of war, for which there is credible information...[be] reported promptly, investigated thoroughly, and, where appropriate, remedied by corrective action.”74 Analysis must also determine whether incidents are classified as war crimes.75 Indications of war crimes typically “[require] that higher authorities receiving an initial report request a formal investigation by the cognizant military criminal investigative organization.”76 These organizations consist of trained professional investigators, such as the U.S. Army Criminal Investigation Command (CID) or the Naval Criminal Investigative Service (NCIS), who operate under unique authorities and regulations.77 In situations that may not rise to the level of war crimes, investigation of reportable incidents is commonly accomplished through the military departments’ and services’ administrative investigative processes.78 Both administrative and criminal investigations face unique issues when investigating AWS incidents.
B. Centrally Managed Investigations
To account for the unique considerations in AWS investigations, information sharing must be improved. Under current methods of conducting administrative investigations, IOs are appointed and conduct their investigations, and their findings and recommendations are then approved by an appointing authority.79 The investigation is maintained on file for a period of years.80 While this technique of categorizing and storing information is useful for the less complex situations that might give rise to an administrative investigation, it does not allow units to readily share problems experienced across military formations—let alone among military branches.81 Similarly, military criminal investigations are managed at localized levels, and while their information sharing is much more efficient than in administrative investigations,82 it too can be improved upon for purposes of managing information related to AWS investigations.
With AWS platforms likely to become ubiquitous across military formations,83 central management of AWS investigations is key to identifying common issues that may manifest within individual AWS platforms. In turn, this will assist in AWS accountability and traceability by allowing the compilation of data from AWS across the military.84 For example, analysis of multiple false identifications of weather radar stations as anti-aircraft batteries may help AWS designers explain, and solve, the problem of an AWS returning false identifications. While this input- and output-based analysis of AWS is not a complete solution, allowing this form of examination is a step toward ensuring accountability of AWS.85
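As a toy illustration of the kind of cross-force analysis a central database would enable (all records hypothetical), a few lines of Python can surface misidentification patterns that no single unit-level investigation would reveal.

    from collections import Counter

    # Hypothetical incident records drawn from a centralized AWS investigation database.
    incidents = [
        {"platform": "Weapon X", "declared": "anti-aircraft battery", "actual": "weather radar station"},
        {"platform": "Weapon X", "declared": "anti-aircraft battery", "actual": "weather radar station"},
        {"platform": "Weapon Y", "declared": "armored personnel carrier", "actual": "civilian van"},
    ]

    # Count recurring (declared, actual) pairs across every reporting unit and service.
    patterns = Counter((i["declared"], i["actual"]) for i in incidents)
    for (declared, actual), count in patterns.most_common():
        if count > 1:  # a pattern invisible to any single unit-level investigation
            print(f"Recurring error ({count}x): '{actual}' misidentified as '{declared}'")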
Fortunately, the concept of centrally managed investigations is not foreign to the U.S. military. While not as technologically complex as AWS, airdrop operations routinely involve coordination between multiple branches of the military, utilizing aircraft and complex parachute delivery systems.86 To ensure “proper analysis to improve existing procedures and technology as rapidly as possible,”87 the services maintain a joint regulation laying out combined duties and responsibilities. Under this joint regulation, the individual services are required to conduct an internal malfunction investigation in the event of a malfunction during an airborne operation.88 Once complete, these investigations are forwarded to a centralized directorate, which publishes “all reported malfunction/incident activity data for review and analysis during the triannual airdrop malfunction and safety analysis review board meeting.”89
Investigations into AWS incidents should follow a format similar to that of joint airdrop malfunction investigations. While there is no need to micromanage individual service or command investigations, it is important that data on AWS incidents be compiled in a centralized location where it can be appropriately analyzed to allow improvements in AWS design. In addition to improving AWS and increasing their explainability, centrally managed investigations will solve another issue present in AWS investigations by allowing subsequent investigations and the incorporation of experts into the AWS investigative process.
C. Incorporating Experts
As demonstrated by the hypothetical at the beginning of this article, traditional investigative methods are not well positioned to examine the complex technology and the multiple levels of government and private organizations that will have interplay in AWS incidents. Although current administrative investigative regulations require the appointment of IOs “best qualified by reason of their education, training [and] experience...[and allow for appointing authorities to designate] assistant IOs...to provide special technical knowledge...,”90 the sheer complexity of AWS will likely mean that no one short of a true expert can understand the technological questions these systems pose. For this reason, AWS investigations must allow for the incorporation of technological experts into the investigative process to ensure results are credible and can support accountability by providing a reliable basis for any necessary criminal or adverse administrative actions.91
While criminal investigations have successfully integrated experts into the investigative process for some time,92 incorporation of experts into administrative investigations is less common.93 Fortunately, best practices can be derived from time-tested methods that allow for integration of technically complex concerns into investigative processes such as aircraft accident investigations.
With the invention of powered flight in 1903, complex mechanical and engineering issues quickly became apparent to the public.94 By 1926, the need for aeronautic accident investigations was recognized, and Congress passed the Air Commerce Act, giving the U.S. Department of Commerce the mandate to investigate the causes of aircraft accidents.95 That mandate is carried out today by the National Transportation Safety Board (NTSB),96 which employs approximately 400 full-time employees between its headquarters in Washington, D.C., and four regional field offices.97 Through combined efforts with the Federal Aviation Administration, the NTSB has successfully conducted more than 132,000 investigations into the complex issues presented by aircraft accidents.98
To effectively conduct investigations of aviation incidents (and other public transportation incidents), the NTSB utilizes investigators in “Go Teams” who remain “[o]n call 24 hours a day, 365 days a year...[and are prepared to] travel through the country and to every corner of the world to investigate significant accidents.”99 Importantly, because “[a]viation accidents are...usually the culmination of a sequence of events, mistakes, and failures,”100 the NTSB supplements its own internal experts with a “party system” of investigations.
Under this methodology, the NTSB designates federal, state, or local government agencies, as well as organizations or corporations with relevant expertise, to actively participate in the investigation.101 As a result, the NTSB investigative process includes smaller working groups composed of true subject matter experts in the various fields relevant to a given investigation.102 Through the use of internal and external experts, the NTSB is able to effectively investigate complex accident scenarios and arrive at scientifically accurate results.
In order to ensure scientifically sound investigations into complex situations, AWS investigations should incorporate experts into the investigative process in a manner similar to the NTSB. While expert integration may be feasible at the local level in certain situations,103 the ability to employ and contract with experts in the AI field is best handled at a central location. By establishing central management of AWS investigations, the DoD can build the structure necessary to employ internal experts and coordinate for outside expertise when needed. This, in turn, will inform investigations that comply with international and DoD requirements and provide human accountability for AWS actions.
V. Bringing It Together: An AWS Investigative Model
While there is no need to reinvent the time-tested methods utilized by the military services to conduct administrative investigations, the unique factors present in AWS investigations require a modified process to ensure accountability for AWS is properly established. Adapting the joint airdrop malfunction/incident investigation methodology, the DoD should allow individual services to conduct initial AWS investigations utilizing their respective investigative methods.104 However, as with joint airdrop investigations, the DoD should direct that specific questions be answered at this phase.105
First, initial unit-level investigations should address responsibility at the command and end-user level to determine whether the utilization of the AWS complied with law of war requirements. Because a key driver of this analysis is the command’s understanding of what the AWS should have done, documenting that expectation is essential. Having established the command’s expectation of the AWS, initial unit-level investigations should next document the actual actions of the AWS, highlighting any deviation from the expected action. Finally, the initial unit-level investigation should document the outcome of the AWS’s actions.
Utilizing the hypothetical scenario presented at the beginning of this article as an example, a unit-level investigation would determine that the commander appropriately used the AWS, as the commander reasonably believed the AWS had properly identified an enemy vehicle. The investigation would also determine that the AWS misidentified a civilian van as an enemy armored personnel carrier, resulting in the deaths of ten civilians. Having reached these conclusions, the unit would forward the AWS investigation to the centrally managed AWS investigation database.
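A minimal sketch of what such a unit-level report might capture, using the facts of the hypothetical (the schema and field names are illustrative, not a proposed DoD form):

    from dataclasses import dataclass

    @dataclass
    class UnitLevelAwsReport:
        """Hypothetical schema for the findings a unit-level investigation must document."""
        expected_action: str     # the command's understanding of what the AWS should have done
        actual_action: str       # what the AWS in fact did, noting any deviation
        outcome: str             # the result of the AWS's actions
        lawful_employment: bool  # command/end-user compliance with law of war requirements

    report = UnitLevelAwsReport(
        expected_action="Engage only vehicles matching enemy armored personnel carrier signatures",
        actual_action="Classified a civilian van as an armored personnel carrier and struck it",
        outcome="Civilian van destroyed; ten children killed",
        lawful_employment=True,  # the commander reasonably relied on the AWS's reporting
    )
    # forward_to_central_database(report)  # hypothetical handoff to the central investigative body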
With the unit’s end-user analysis complete, experts at the centrally managed location would then analyze the other stages of responsibility in the AWS creation process. By adopting the NTSB model for the utilization and incorporation of experts, AWS investigators would have access to experts from other government agencies and private industry to assist with the investigation as needed. Utilizing the facts provided in the unit-level investigation and conducting their own analysis of the AWS in question, the experts would attempt to identify the point of failure within the AWS and, if it is identified, examine why testing and evaluation did not predict and prevent the failure.
With a scientifically accurate investigation complete, investigators would then examine the actions of individuals in designated positions of responsibility during the creation of the AWS. Finally, investigators and commanders would be able to examine the accountability of individual persons and, if necessary, take appropriate punitive or administrative actions utilizing existing methods and command structures.
VI. Conclusion
By allowing assignment of human responsibility for AWS actions through efficient and effective investigations, the U.S. military can ensure its ability to use and develop AWS without unnecessary restrictions. Designing actionable solutions to AWS accountability issues will allow the United States to remain competitive in an ever-changing military environment, while simultaneously ensuring that the moral and legal concerns surrounding AWS use are addressed. Although it remains to be seen whether “[t]he one who becomes the leader in this sphere will be the ruler of the world,”106 one can be certain that AI and AWS offer great power. And “[i]n this world, with great power there must also come—great responsibility.”107