The body cannot live without the mind.
Militaries have long recognized the importance of influencing human perception and decision making in warfare. These activities, categorized as information warfare under current United States (U.S.) military doctrine, aim in part at affecting the cognitive processes within the human mind. Yet activities in information warfare are limited in their ability to affect the human brain directly; instead, information warfare aims to influence or manipulate the information environment or cyberspace with the goal of having an impact on the human end user.
But consider a situation where the intermediary technology between the influencer and the human consumer allows for direct access to the consumer’s brain and cognitive process. Here, information warfare could be conducted directly on the human target. Going a step further, if there was a direct interface between man and machine, would it be possible to do more than simply manipulate information or perception? What if it were possible to cause physical harm, or even kill, through the information environment? One piece of technology in existence today that feels like science fiction and could make this possible is the brain-computer interface (BCI), which enables the human brain to directly interact with a computer or information system.
In his 2014 article on applying international humanitarian law (IHL), otherwise known as the law of armed conflict, to future technology, Eric Jensen notes the importance of anticipating legal stress created by new technology. Jensen then highlights the vital role IHL plays in signaling acceptable state practice in relation to new capabilities and technology. That signaling occurs through review of a new weapon’s compliance with IHL, both as the weapon is developed and as it is employed during warfare. While such signaling certainly addresses the use of the new technology, it also raises a separate, bedeviling question pertinent to BCI: how do we apply IHL in the other direction to target this technology once it is militarized and attached to a soldier’s brain? On its face, the question appears straightforward—a BCI used by an adversary to further military operations during hostilities should be targetable under IHL. But looking deeper, the incredible vulnerability of the human brain demands a more nuanced discussion.
Highlighting part of the targeting challenge presented by BCI, consider the direct connection and interaction it creates between the brain and a computer. The BCI’s connection to a networked computer, what we understand to be part of cyberspace, has been identified as the technology’s greatest vulnerability. Certainly, such vulnerability would be exploited in a military context. Thus, it is natural to consider how the BCI, and by extension the human brain, fits into our understanding of the man-made domain of cyberspace. Further, after peeling back how a BCI is designed, its different variations, and its battlefield functions, we are presented with several variables affecting application of IHL targeting principles in countering the technology.
Therefore, in the spirit of the forward thinking advocated by Jensen, this article anticipates and assesses the challenges of targeting BCI. Several factors, both external to the IHL regime and within IHL itself, apply to this assessment. These include our conception of cyberspace, consideration of whether a BCI-enhanced brain remains a person or becomes an object for purposes of IHL targeting, and arguments for the expansion of weapons treaties or international human rights law (IHRL) to address BCI. The article concludes that despite BCI furthering the convergence of man and machine and philosophical discomfort over the brain’s place in cyberspace, current application of IHL to the cyber domain offers the most effective model for handling the challenge of BCI.
To accomplish the analysis, this article first provides a general discussion and overview of some existing BCI technology, potential military applications, and BCI vulnerabilities. Next, it describes concerns raised in the newer academic field of neuroethics over the development of BCI, including suggestions that international law be modified in response to this technology. Addressing these concerns, the article then argues that our current understanding of IHL’s application to targeting through cyberspace applies effectively to BCI. This argument is buttressed by an exploration of BCI’s place in the current conception of the warfighting domain of cyberspace, focusing on whether the brain remains a biological system or whether its function in a cyber system changes the brain’s status to an object for the purpose of applying IHL targeting principles. Concluding that the best approach is to treat the brain as what it is, a biological portion of the human body, allows IHL to apply to targeting BCI without the additional developments in international law advocated by some neuroethicists.
II. Brain-Computer Interface (BCI)
While BCI technology is very real, science fiction, as with many other new technological breakthroughs, offers insight into the technology’s potential, and peril, as its capability increases and it becomes more ubiquitous. For example, consider a world where everyone is equipped with a BCI implanted into their brains that enables access to a pervasive cloud database. This database would be capable of storing recordings of everything that a person sees or hears. In addition, the implant could access and provide unlimited data directly to the brain and be utilized to have a conversation or transact business simply by thinking it. This type of technology forms the background of a recent movie called Anon.
While many would see this capability as wonderful, Anon provides a glimpse of the dangers this type of technology creates in granting direct access to a person’s brain and—by extension—their conscious experience. In the movie, a hacker learns how to manipulate the database and, more importantly, the minds of those who are connected to it. The hacker is able to change what individuals see and hear, at one point causing the protagonist in the film to pull his car into busy traffic after making him perceive the road to be clear. The hacker is also able to manipulate memory—not just in the database, but also what is replayed in people’s consciousness. Again, in an effort to harm the protagonist, the hacker accesses the database, erases the good memories of the protagonist’s dead son, and then replays, over and over in the protagonist’s mind, the memory of the day his son was hit by a car in front of him, causing severe mental anguish. The human mind is manipulated through the BCI to alter temporal and spatial perception, to cause mental suffering, and ultimately to commit murder. Thus the movie raises disturbing questions about privacy, the sanctity of the human mind, and malicious use of this technology.
While Anon takes place in a distant, cyberpunk future, BCI technology exists today. The technology is nowhere near the point of the seamless, on-demand, bi-directional interface seen in Anon, but that has not stopped the Defense Advanced Research Projects Agency (DARPA), academia, and private industry from pursuing this goal. While some of these pursuits simply seek to create the ability for the brain to interface with the internet, many projects have the potential for military application, including remotely controlling military aircraft or robots, mental communication between individuals, and enhanced situational awareness through direct access to data. As this technology is perfected and becomes commonplace, there is little doubt it will be exploited for military advantage.
Against the backdrop of rapidly advancing BCI technology, several moral and ethical questions have been raised in the nascent academic field of neuroethics. Some concerns address the ethical and moral dilemmas faced by researchers and neuroscientists as they develop technology that may have dual-use military application. Other neuroethicists have gone further, offering commentary on the adequacy of international law to address their concerns over BCI and other neuroweapons. Neuroethicists taking this approach have raised two specific concerns: whether the existing IHRL regime is adequate in an age where a brain may be directly accessed through the internet or computer, with some advocating for new rights under IHRL, and whether existing weapons treaties are adequate to limit or prevent states from weaponizing this technology.
If adopted as state practice or formalized in international law, this second line of neuroethical advocacy—which directly relates to the application of international law to this technology—has the potential to limit military use of BCI, thus inviting commentary and response from international legal practitioners. To date, the discussion of how militarized BCI—whether utilized for data access and communication or incorporated into weapon systems—will comply with IHL has been limited. Brain-computer interfaces offer their own, stand-alone advantages to militaries and, from unmanned systems to artificial intelligence, may have complementary functions once incorporated into other future weapons. As BCIs’ march towards the battlefield appears inevitable, the time is ripe to begin addressing BCI under the lens of IHL.
A. Brain-Computer Interface Technology Generally
As with any new battlefield innovation, we must first have a basic understanding of the underlying technology prior to considering how IHL applies. First emerging in 1964 when Dr. Grey Walter connected wires to a human brain during surgery, the BCI has made steady advances in conjunction with breakthroughs in neuroscience. The technology has found its primary application within the medical field, but it also harbors great potential in robotics, prosthetics, and interfacing with information systems. A fully capable brain interface with an information system is a goal being pursued by the U.S. Government, other countries, and private industry, and there are those who believe such technology is inevitable.
Simplistically, a BCI is a device that enables the brain to directly interact with an external information system or computer through technology implanted into a person’s brain or worn externally on a person’s skull. A BCI reads the electrical signals in a person’s brain associated with different functions, which are then communicated to a computer where the signals are decoded and utilized by that computer to accomplish a task or produce a specific output. The output could be the transfer of information or communication, or it could be utilized to control a mechanism—such as a prosthetic or robotic system. It is important to note that a BCI should not be confused with voice- or muscle-activated devices; a BCI is a mechanism allowing for direct communication between the human brain and a computer.
A BCI utilizes a cycle allowing for the brain to input information to the system and later receive feedback. The generation phase of the cycle refers to the brain’s creation of electrical signals associated with different tasks or actions. These signals are then read in the second, measurement phase of the cycle, which is facilitated either by an implanted intracranial device or sensors worn externally on the skull. Next is the decoding phase, where the measured input from the brain is decoded and classified by a connected computer. Finally, once decoded, the BCI completes the output phase of the cycle. In this phase, the computer executes the brain’s intent, whether it be to communicate information or to cause a machine to move. This final phase also provides feedback to the brain on the action.
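For readers who want a concrete picture of the cycle, the four phases can be sketched in a few lines of illustrative code. This is a purely hypothetical sketch: every function name, signal value, and command here is invented for illustration and does not correspond to any real BCI system.

```python
# Illustrative sketch of the four-phase BCI cycle described above.
# All names, signal encodings, and commands are hypothetical.

# Phase 1 -- generation: the brain produces electrical signals
# associated with an intended action; modeled as voltage samples.
def generate_signal(intent):
    signals = {"move_left": [0.1, 0.4, 0.9], "move_right": [0.9, 0.4, 0.1]}
    return signals[intent]

# Phase 2 -- measurement: an implanted device or external EEG cap
# reads and digitizes the raw signal.
def measure(raw_signal):
    return [round(sample, 1) for sample in raw_signal]

# Phase 3 -- decoding: the connected computer classifies the
# measured input into an actionable command.
def decode(measured):
    return "move_left" if measured[0] < measured[-1] else "move_right"

# Phase 4 -- output and feedback: the computer executes the decoded
# intent (e.g., moving a prosthetic) and reports back to the user.
def execute(command):
    return f"prosthetic executed: {command}"

feedback = execute(decode(measure(generate_signal("move_left"))))
print(feedback)  # prosthetic executed: move_left
```

The sketch is useful only to show that each phase is a distinct point in the chain between brain and machine, each of which, as discussed later in this Part, is also a distinct point of potential compromise.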
Neuroscientists are researching both externally worn and implanted devices to facilitate the measurement and output phases of the BCI cycle. Externally worn devices include electroencephalography (EEG) caps which measure the brain’s electrical activity through the skull. Internally implanted devices include wired nodes attached directly to the brain and experimental technology like “neural lace.” While each allows the BCI cycle to function, internally implanted devices currently have greater capability.
Brain-computer interfaces first saw application in treatment of various medical conditions. Initial iterations were aimed at helping patients suffering from locked-in syndrome, then moved to treating patients suffering from epilepsy and Parkinson’s disease. These earliest BCI worked in one direction, from the patient’s brain to translation by the computer, but the table was set for future innovation.
Brain-computer interfaces have seen application and rapid development in the field of prosthetics. Doctors and neuroscientists have been successful for years in isolating brain patterns associated with movement, enabling the creation of BCI used to control a prosthetic limb. As the technology has been refined, bi-directional communication between a brain and BCI has enabled users to feel sensations, such as heat and texture, on the objects the prosthetic limb touches.
Beyond allowing for interaction between man and machine, BCI has also begun to enable direct communication between human brains as well as cooperative problem solving. The technology has likewise shown success in enabling physical control over the movement of laboratory animals, and it has recently demonstrated the ability of one human to physically control the movement of another through thought.
The above are but a few highlights of the progress neuroscientists have made in developing BCI technology. Researchers have demonstrated success in electrical interaction with the brain, brain-to-brain communication, collaborative problem solving, and physical control over external systems, animals, and people. Such developments have clear application in military contexts. But, along with military application, BCI carry inherent vulnerabilities in their systems, exposing the human brains to which they are attached.
B. Military Applications
Neuroscience’s potential to impact the future of warfighting and national security has been recognized and invested in for years in the United States. While other government entities—such as the intelligence community—have invested in this research, DARPA has led the charge in defense research into BCI. Through the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, heavily invested in during the Obama Administration, DARPA seeks to expand our understanding of technology utilized to interact directly with the brain. Several DARPA sub-projects under the umbrella of the BRAIN Initiative aim to further the military integration of this technology by leveraging partnerships with academia. These projects include seeking to expand the capability and data rate for implantable BCI devices, utilizing BCI to control vehicles such as drone swarms, restoring and enhancing memory, and cooperative intelligence analysis and target selection.
Beyond its stand-alone capabilities, BCI offers complementary capability to developments in artificial intelligence (AI), allowing humans to directly interact with AI systems instead of simply being in or on the loop. Such convergence blurs the line between man and computer, potentially leading to weapons or weapon systems incorporating the unconscious abilities of the brain to maximize the effectiveness and reactiveness of a military system. Such systems could leverage the human brain’s superior ability to unconsciously recognize threats, melding them with an AI computer’s superior ability to calculate a response. In these weapon systems, the BCI would function by picking up the brain’s unconscious recognition of a threat, passing on that information for an automated response from the AI. A conscious human decision would be left out of the equation.
Significant issues still exist in pursuit of this technology; neuroscience has yet to fully understand the way the brain communicates—in essence, its code. Until neuroscientists are able to fully understand this code, the type of BCI that will allow for full integration with AI, computers, and information systems will not be possible. Despite this limitation, the quest for ever more capable BCI drives ahead, opening the door to dangerous vulnerabilities to the BCI and human brain alike.
C. Human Danger Created Through BCI
The direct risk to the human brain created by BCI is caused by BCI’s vulnerability to manipulation via cyber means. In essence, once integrated with an information system, a BCI becomes just another node in that system. As P.W. Singer warns, new networked technology rarely incorporates security into its design, and BCI is no different in this regard. Evidence already exists that BCI can be subjected to a cyber-effect or manipulation.
The ability to manipulate implantable medical technology through cyberspace has already been identified as a significant vulnerability. For instance, the Tallinn Manual discusses manipulation of a networked pacemaker using cyber means, causing an effect on an individual’s heart. As troubling as it is to be able to manipulate an individual’s heart, it is equally—if not more—troubling to be able to manipulate a human brain. This risk is real and has already been demonstrated. A recent Kaspersky Labs report on BCI details vulnerabilities in the systems that interact with and control them. The report highlights the ability to interfere with the software used to control the BCI hardware, to steal or manipulate memory, and to directly harm the individual equipped with the BCI by manipulating the electrical signals sent to the brain.
Additional concerns over this type of manipulation have been growing, leading to speculation on the dire risks possible through manipulation of BCI through cyberspace. For instance, “brain-hacking” encompasses BCI vulnerabilities at several points in the cycle. Such activity could allow third parties to access the private information in an individual’s brain and to wrest from the user control of the system or machine the BCI is interacting with. This activity could potentially lead to physical and psychological harm, as well as the user losing their sense of agency or self-determination of their own life.
Similarly, the concept of “brainjacking,” raised in 2016, concerns itself with malicious cyber actors gaining access to implanted BCI and causing effects within the brain. The risks are associated with implanted medical devices, and the authors who coined the term are quick to note that it does not refer to any form of mind-control. What brainjacking does conceptualize, however, is a change in the implant’s settings, throwing off the electrical signals sent to the brain. This, in turn, could lead to several adverse effects to the individual, including tissue damage, impairment of motor function, modification of impulse control, emotions, or affect, and induction of pain.
Additional threats to this technology include cyber manipulation of BCI code or hardware at any point in the BCI cycle. For example, should a hacker or other cyber actor gain access to the input portion of the cycle, they may be able to extract sensitive or personal information about that individual. If the other phases of the cycle (measurement, decoding, and output) are compromised, more than data is at risk. The intended output or action can be disrupted or terminated, potentially leaving the individual helpless. In the extreme, the BCI cycle can be hijacked, resulting in physical harm to the individual.
These risks highlight several nightmarish, but entirely plausible, scenarios if BCI reaches its full potential. Imagine the ability to manipulate the motor functions of an individual driving a car, causing them to drive off the road. Further, what if the individual is utilizing a BCI to control a weapon system? Could the physical system be hijacked and turned against the individual or their allies? What if there was potential to disrupt the decision making or personalities of individuals in power? Is it possible to send a signal through the internet to a BCI that causes it to damage an individual’s brain to the point of permanently disabling or killing them? These are just a few of the possibilities in a future filled with BCI, and they have spawned a nascent ethical discussion concerning the use of this technology and the role the law will have in its regulation.
III. Neuroethics and Proposals for Regulation
The “mind is surely the most salient feature of Homo sapiens.” It is not surprising then that neuroethicists are alarmed by the prospect of linking man and machine. Most of the neuroethical discussion centers on the moral and ethical dilemmas presented by BCI; but some neuroethicists push further, advocating for modification or expansion of international law protections in response to advances in neurotechnology. The theme across this discussion is the need to protect the brain and—by extension—mind, consciousness, and human agency.
As a relatively new field in academia, neuroethics aims to advance the discussion of the consequences of new neuroscientific breakthroughs. Identifying the issues presented by BCI, some neuroethicists have focused their attention on government funded dual-use neuroscientific research that furthers BCI and other brain technology, intending to inform scientists that their latest breakthroughs could have military applications. This portion of neuroethics—relating to research and development—bears directly on moral and ethical questions, with the tangential effect of informing development and review of neuroweapons for IHL compliance. While development of IHL compliant neuroweapons will be essential, this branch of neuroethics does not directly address targeting these weapons once they are deemed compliant and make their way to the battlefield.
Others in the field, viewing the incorporation of this technology into everyday life as inevitable, explore the need for additional laws or expansion of our understanding of human rights protections against abuses of this technology. Some have argued for expansion of IHRL in order to address the threats to the brain created by BCI. Others have highlighted the inapplicability of existing treaties, laws, and regulations to neuroweapons.
The primary driver of neuroethicists’ concerns regarding BCI is the potential for the technology to be abused—for example, to physically damage people’s brains or to manipulate individual personality, self-determination, and free will. In response, neuroethicists have identified numerous areas that challenge the ethical use of neurotechnology. First and foremost is the concept of informed consent, which deals with whether an individual has adequately been made aware of the risks associated with the technology.
Informed consent takes on a different dimension when discussing the implantation of BCI or other enhancement technology within service members. The question becomes whether a service member actually has a choice. Given the advances in BCI technology, and the risks to the mental well-being of the individual highlighted earlier in this article, individuals equipped with BCI may assume significant risk. Consider further that some of the technology highlighted previously allows for the manipulation of the mental state of individuals, or even physical control over them. It is not unreasonable to consider BCI being utilized to manipulate service members’ personalities or instincts to make them more efficient at carrying out their duties. Informed consent, while not a protection from the potential manipulation of this technology, still preserves some human agency, leaving to individuals the decision whether to allow this technology to be connected to their bodies.
Once connected, neuroethicists warn, abuses of BCI can lead to degradation of a person’s privacy, the ability to be secure in their thoughts, and their mental and physical safety. Neuroethicists have discussed protection of “[a]utonomy, agency, and personhood.” Autonomy and agency are essential aspects of being human. Brain-computer interfaces or other technology that can be utilized to restrict or even overcome human autonomy or agency strike at this core. Compromise of autonomy and agency can lead to three major ethical issues: removal of the “intention-action” link resulting in psychological distress, generation of “uncertainty about voluntary character” of the individual equipped with the BCI, and risk to Western jurisprudence, which is premised on an individual’s voluntary control over his or her own actions. The first two issues are risks to the individual, while the third has societal consequences that may challenge our ability to assign accountability for illegal acts perpetrated by individuals not in control of their own minds or bodies.
It is against this backdrop that neuroethicists have begun suggesting approaches to mitigate the risks posed by neurotechnology and, specifically, BCI. These approaches include moral and ethical discussions as well as suggested expansion of international law and regulatory regimes that would govern the development and use of the technology.
A. Ethical and Legal Proposals to Address BCI’s Risks
Neuroethicists have begun expanding their discussions into areas of international law, to include IHRL and other regulatory regimes such as weapons treaties. Neurotechnology’s impact on IHRL “largely remains a terra incognita.” Yet, as new neurotechnology—including perfected BCI—becomes more ubiquitous, adaptive developments in IHRL are possible. Failure to recognize the concerns presented by BCI, and the possible expansion of IHRL in response, could create a gap in the law in which arguments can be made for greater application of IHRL to BCI regardless of context. Further, adaptation of or additional weapons treaties may restrict otherwise IHL-compliant operations against BCI.
1. Neuroethical Approaches
In concluding his book Mind Wars, Dr. Jonathan Moreno advocates a role for advisory boards made up of scientists and ethicists to provide input on the development of new neurological dual-use technology. The goal of these boards would not be to stifle development of this technology, but rather to highlight the human risks the technology will create—including potential military applications. The aim of this approach is for neuroscientists and other researchers to be completely aware that their latest breakthrough could also be used for purposes they never thought of or intended.
This approach is one shared by many other neuroethicists. Highlighting the reality that government funded research into neurotechnology will lead to dual-use applications, ethicists aim to ensure scientists and researchers operating in this field have been fully informed of the consequences of their work. Going further, others have suggested an even more expansive “neurosecurity framework.” This framework would consist of three levels: “regulatory intervention, codes of ethical conduct, and awareness-raising activities.” The first is a legal consideration and will be discussed later, but the latter two fall into the realm of ethical consideration. The ethical code of conduct would aim to maximize benefit of government- or military-sponsored neuroscientific development while minimizing the risks to individuals and communities. This would include protections like informed consent and the ability to refuse the implantation of neurotechnology without legal repercussions. It would also aim to ensure security measures were incorporated into the technology to provide protection for individuals. The last prong of the neurosecurity framework would take the educational component advocated by Moreno further, to include scientists, researchers, and the public.
2. Advocacy for Legal and Regulatory Expansion
Neuroethicists have also begun openly speaking about expansion of international law and regulatory regimes to protect individuals from the misuse of BCI. These arguments fall under the first prong of the proposed neurosecurity framework discussed above. In spirit, as they highlight many of the horrible possibilities of BCI while noting that the law is inadequate to address these dangers, neuroethical positions reflect the appeal to the “public conscience” found in the Martens Clause. Although these proposals include both international and domestic regulation, the discussion here will be limited to two areas of neuroethical advocacy in international law: the application of IHRL and existing international weapons treaties to neurotechnology. In advocating their positions, neuroethicists’ focus is on the threat to the brain, not the use of neurotechnology such as BCI. Thus, as their positions are reviewed, it is pertinent to ask whether neuroethicists seek to ban the technology or to simply outlaw actions or operations that may affect the BCI and—by extension—the brain.
First, in the area of IHRL, many of the neuroethical concerns align with the motivations and protections found in existing customary and IHRL treaty law. However, according to Marcello Ienca and Roberto Andorno, the fit under existing IHRL is not exact. In 2017, they proposed a human rights “normative upgrade” in which they describe, in light of developments in neuroscience, why a series of human rights should be added to existing IHRL. First, and fundamental to Ienca and Andorno, is the right to cognitive liberty. Cognitive liberty is viewed as fundamental and underlying all other mental rights. It includes the right to utilize, or choose not to utilize, neurotechnologies. Cognitive liberty allows for individuals to be free to make “choices about one’s own cognitive domain in absence of governmental or non-governmental obstacles, barriers, or prohibitions,” to exercise “one’s own right to mental integrity,” and to have “the possibility of acting in such a way as to take control of one’s mental life.”
Serving as the foundation for other proposed rights, cognitive liberty supports other additions to IHRL proposed by Ienca and Andorno. These include the rights to mental privacy, mental integrity, and psychological continuity. Mental privacy aims to protect information gleaned from the brain through a BCI. This may include data on an individual ranging from their brain activity to thoughts and memory. Mental integrity addresses the mental and physical damage that can result when the brain is compromised through a BCI. Psychological continuity describes behavioral or psychological changes or issues that may result from misuse of BCI. In closing, Ienca and Andorno argue that these rights should be incorporated into the current IHRL regime or become new IHRL rights.
Beyond IHRL, neuroethicists have also been quick to point out that neuroscience and neurotechnology are not contemplated by existing weapons treaties, specifically the Biological Weapons Convention (BWC) or Chemical Weapons Convention (CWC). Since BCI and other neuroweapons use technology and electronic signaling rather than biologic or chemical means, neuroethicists have noted the BWC and CWC are inapplicable to BCI.
Brain-computer interfaces are also not contemplated under the Convention on Certain Conventional Weapons (CCW). In consideration of the CCW, it is important to note a potential link between BCI and the ongoing discussions regarding a possible sixth additional protocol to the convention relating to Lethal Autonomous Weapon Systems (LAWS). A stated goal of some BCI development projects is to enable direct interaction between a human brain and AI, the centerpiece technology of LAWS. While beyond the scope of this article, if BCI technology continues on this trajectory, future consideration of its relationship with LAWS may warrant further exploration.
Regardless, in viewing neuroweapons, to include BCI systems, as items requiring international regulation, some have advocated for expansion of the above treaties to include neuroweapons. Others have noted a new treaty may be necessary. Neuroethicists are clearly not confining their discussion to the moral and ethical issues raised by the technology; they are openly advocating for expansion of international law to regulate it. Such expansion, if it occurs and depending on how it develops, could significantly impact the ability to utilize BCI systems or target them during hostilities. Obviously, if weapons treaties were expanded or a new treaty agreed upon to ban or limit the use of BCI or weapons used against them, the restriction would be apparent to all signatories. More delicate, however, is the interaction between IHRL and IHL during warfare and how expansion of IHRL could also limit options in targeting BCI.
B. Expanded IHRL for BCI and Its Interaction with IHL
Traditionally, IHRL is the body of law addressing how humans are protected from deprivation of their rights by their state and “how the individual might encounter other private actors within the State.” Therefore, IHRL allows for the “notion that the individual has rights on the international stage” and that international law can regulate how a state and an individual interact. Differing from most international law, “IHRL recognizes rights based on an individual’s personhood rather than on one’s status as a citizen or subject of a State party to a treaty.” IHRL covers a multitude of subject areas, including education, parenting, labor, politics, and religion. IHRL’s influence on the relationship between the individual and the state is limited only by the scope of its development.
IHRL and IHL have traditionally been understood to apply separately from each other. IHRL applies territorially during peacetime, governing the conduct of a state toward its own citizens and individuals under the state’s control. IHL applies during wartime, governing the responsibilities states have toward each other in the conduct of hostilities. This position, known as displacement, reflects the long-held international law doctrine of lex specialis, which dictates that the more specific area of law governs a given situation. Under displacement, IHL is the lex specialis governing armed conflict.
However, recent international jurisprudence, the opinions of numerous commentators, and burgeoning state practice have shifted the understanding of how IHRL and IHL interact. The current consensus favors a position of convergence, under which IHRL and IHL apply contemporaneously, even during armed conflict. IHL would retain its role as the lex specialis governing hostilities, but in areas where IHL is not specific to the situation, or is inadequate to address the question presented, IHRL could apply during armed conflict.
Convergence’s mainstream role in the current understanding of how IHRL and IHL interact has raised questions of how to determine when IHRL’s application would be triggered during armed conflict. Several authors have noted the impracticality of asking commanders or service members to make a case-by-case determination of which legal regime applies during a given activity. A more practical suggestion is to divide functions or “broad handfuls” of activities associated with warfare—such as combat operations, logistics, and detention operations—and then make a determination as to which body of law applies to each function. These determinations would apply both in international armed conflict and non-international armed conflict.
Since this article aims to address targeting and engaging BCI, we would appear to be safely in the category of military activities governed by IHL under the legal frameworks outlined above. Targeting individuals and military equipment is governed by long-established principles for armed conflict under IHL. But BCI offers several other possibilities, such as information operations and intelligence activities, that may not directly implicate IHL’s application. Further, some potential capabilities of BCI-enabled weapons, such as a state weaponizing its own citizens or soldiers, have raised questions regarding the applicability of IHL to a state’s use of these systems vis-à-vis IHRL. That discussion centers on the law applicable to the creation or use of BCI-enabled weapons by a state, not on an adversary’s targeting of these weapons or the individuals wielding them.
Care should be taken when considering the arguments of proponents of IHRL or of other restrictions on the use of neuroweapons, such as neuroethicists, as to the extent of IHRL’s applicability to the problem. A clear articulation of IHL’s applicability to targeting BCI, one that addresses the inherent risks to the human brain highlighted by neuroethicists, is imperative to maintaining the distinction between where IHRL’s applicability ends and IHL’s begins.
IV. BCI, the Brain, and Cyberspace
Before addressing the applicability of IHL to BCI, we must first consider its place on the battlefield. While discussing BCI and other neuroweapons, neuroethicists focus on the dangers to the human brain; however, another consistent thread is present in their discussions: the threat is mainly resident in cyberspace. A BCI is part of a networked computer system that happens to incorporate the brain. Further, the brain can function similarly to a computer in a BCI system, raising the question of whether it maintains its status as part of a person or whether it has become an incorporated object due to its function in the man-made cyber domain. While some recent work has explored this question, this approach may serve to complicate the application of IHL to targeting technology such as BCI. This section explores these questions in the context of the brain’s place and status during armed conflict.
A. Cyberspace and the Brain, Briefly
Cyberspace consists of the collection of information nodes (computers, servers, routers, etc.) that allow information systems to communicate with each other. First established as a way for academics to communicate and share research data via computer, the internet has exploded into an indispensable part of human life. Improvements in telecommunications and processing technology have allowed the cyber domain to extend beyond traditional computers into many other everyday devices. Our phones, cars, watches, televisions, and even our refrigerators can be connected to the internet, becoming part of the ever-expanding cyber domain. This ubiquity of internet-connected objects makes up what has been referred to as the “Internet of Things” (IoT).
Data flows through the internet in accordance with Transmission Control Protocol/Internet Protocol (“TCP/IP”), the common language of cyberspace. As nodes are added, data is able to flow utilizing TCP/IP to an astoundingly diverse group of devices across the entire globe. A BCI attached to a network is designed to utilize this same language, incorporating the technology into the IoT.
Traditionally, when discussing cyberspace, a distinction has been made between the natural world and the man-made realm. For example, the Tallinn Manual discusses cyberspace as consisting of three man-made layers: physical (network components and infrastructure), logical (applications, data, and protocols allowing for connections between devices), and social (individuals and groups engaged in activities within cyberspace). Department of Defense Joint Doctrine contains a similar description of cyberspace, declaring that it exists wholly within the information realm and consists of three layers: physical network, logical network, and cyber-persona.
These descriptions confine cyberspace to a man-made construct, and, therefore, a gap exists between humanity and cyberspace. This gap is currently bridged by the typing of our fingers on a keyboard, the information displayed on a screen that is taken in and processed by our brains, or other current technology allowing humans to interact with cyberspace. In each, human agency and conscious decision making result in the use of an input device or consumption of information produced by cyberspace. There is a clear separation between man and machine.
Humanity’s desire for greater access to the internet, and the data it contains, will make BCI an attractive option to many. Individuals are looking for ways to do away with external devices; many are already implanting chips into their bodies. Humans already wear cyber nodes and hold them in the palms of their hands in the form of smart phones, watches, and other devices. The next logical step is to remove the intermediate technology and link the human body directly to the cyber domain. It is likely that individuals will be willing to allow their brains to become accessible to cyberspace in exchange for the convenience and access to the internet made possible by BCI. This is how individuals could suddenly find themselves part of the IoT.
This future will consist of single actions to interact with an information system—brain to computer. There will be no need to move muscles, type, move a mouse, or give a voice command because the BCI will interpret your intent directly from your brain and input it into the information system. The information system could also send data directly back to the individual’s brain without even having to bother with a computer display or other output device. Additionally, the brain itself can be incorporated into the information system to enhance its performance or computing power. In each instance, the interface is direct and, based on the definitions of cyberspace above, could arguably incorporate the brain into the physical and logical layers of cyberspace.
Such incorporation of the brain as a cyber-node immediately creates difficulty. As cyberspace is currently understood to be entirely man-made, any addition of a biological system would be a dramatic shift. Brain-computer interfaces allow the brain to act both as a cyber-node and as a human user; these functions can occur separately or simultaneously. At a minimum, the brain unconsciously provides signals through the BCI to the computer it interacts with in order to facilitate the function of the interface. From a purely functional analysis, many aspects of the brain’s purpose in a BCI system are associated with data collection and processing, functions traditionally considered part of a computer. This line of thinking has led to some speculation on whether BCI, as a human enhancement, objectifies the brain to which it is attached. In a military context, such a transformation could cause the brain to become a means of warfare or a weapon—in other words, affecting the application of IHL.
1. Means, Weapon, or Human?
Consideration that the brain could somehow become objectified due to its function in a BCI system certainly marks a dramatic shift. This line of thinking is contrary to the humanitarian spirit of IHL, as it would base the application of IHL targeting principles on the premise that the brain in a BCI system had become an object. Such a modification would reduce the protections for persons under IHL, in turn supporting the positions of neuroethicists concerned with the human costs surrounding this technology. From a moral and ethical standpoint, this position makes little sense. But from a purely functional standpoint under IHL, analyzing the brain’s purpose and function in a BCI system does illuminate instances where it may act as little more than an object. Therefore, consideration of whether a brain could ever become objectified through its function in a BCI system is warranted.
This question revolves around the brain’s function in a given BCI system. For purposes of this analysis, BCI can be broken into two categories: those designed to enable information flow to and from the human equipped with BCI, and those designed to be integrated into a physical system. As to the first category, the discussion is fairly straightforward. A BCI designed to simply provide information or data to its host, or to store data from its host for later use, is analogous to our understanding of current computer or information systems.
The brain in this first category of systems retains its human agency and intention. The human’s intention to access or provide inputs to the information system is the same as with current technology; the utility and direct interaction between the brain and information system offered by the BCI is the only distinguishing factor. Similarly, communication with other individuals through a BCI also requires conscious decisions, which would be undertaken non-verbally and facilitated by the BCI technology. Therefore, a brain connected to BCI in this first category, utilized simply for informational and communication purposes, would clearly retain human qualities.
The second category of BCI presents a more significant challenge, as these BCI are designed to control physical systems from a distance. The likely incorporation of BCI into future weapon systems will enable direct control and quicker reaction to potential threats. The military advantages of weapon systems that can move and react more quickly and take decisive action are obvious. The pertinent question under IHL becomes how the brain is designed to interact with such a system.
In a recent paper, Gregor Noll analyzes the role of consciousness and human agency in future weapon systems. He highlights the clear advantages of such weapon systems, including human superiority to machine in unconsciously recognizing a threat and machine superiority in speed of response. The potentially decisive nature of a BCI weapon system is thus laid bare in Noll’s discussion: it incorporates the best capabilities of man and machine into an automated response. But this decisiveness is only achieved through utilization of the brain’s unconscious recognition of the threat. Noll argues that such weapon systems present a pressing issue for IHL, namely that IHL is built on the conscious human judgment of commanders and those employing weapon systems. He highlights that the advantage of BCI weapon systems is lost if a conscious human decision is built into the loop, as it adds time to the decision-making chain. Thus, he concludes that excluding a conscious human decision from the loop of these systems is incompatible with IHL, as it removes human agency and judgment.
Noll highlights several challenges that will occur when evaluating future BCI weapon systems for compliance under IHL. He also, indirectly, raises the question of what becomes of the brain’s status in a weapon system like the one he describes. If the brain is simply there to unconsciously enable the weapon system in execution of its automated or pre-programmed function, how is the brain any different from a computer?
Two other recent articles have broached the question of whether the brain in such a BCI could be considered an object. In the first, Heather Dinniss and Jann Kleffner articulate that certain systems, such as prosthetics, could be weapons if they were designed to cause physical harm or damage. The key feature of this argument is the prosthetic weapon being incorporated into the body of an individual, rather than simply being held or being machinery that is operated through physical manipulation by that individual. By extension, this reasoning could apply to the man-made portions of a BCI, especially if the BCI is designed to control a weapon or weapon system. But this analysis stops short of allowing the brain to be considered part of the weapon, instead focusing on the hardware of the prosthetic as the potential weapon.
A complementary article by Rain Liivoja and Luke Chircop expands on this analysis, evaluating whether human enhancement technology could cause a warfighter to become a means, method, or weapon. The article concludes that the BCI-equipped individual is not a method of warfare. It does allow for the man-made portions of the BCI system to be considered a means of warfare, but again does not include the brain. Interestingly, however, when discussing weapons, the authors distinguish between weapons and weapon systems. Weapons are defined as objects designed to cause physical harm or damage, while weapon systems comprise all portions of the system allowing for the function of the weapon. The authors conclude with the possibility that a BCI as a whole can be considered a weapon system, leaving the door open for the brain’s inclusion as part of the system. This in turn raises the specter that a brain integrated into a weapon system can be treated as an object instead of part of a person.
Although the door is open to considering the brain as part of a weapon system, this line of thought still requires analysis of the brain’s role in the weapon system itself. As Noll articulates, the role of the brain can include either unconscious incorporation or allow for conscious human intervention and decision making. A BCI weapon system that incorporates conscious human agency would be similar to a human pulling a trigger or pushing a firing button in a different weapon system. It is not logical to consider the brain in such a system to be part of that weapon system or an object.
But consider systems that utilize the brain unconsciously with no human agency involved. There appears to be some tenuous analysis allowing for consideration of the brain as an object in such weapon systems, since the brain would act like a computer or processor. Taking such a position would be a dramatic shift as part of the human body would become objectified due to its function.
Such an analysis is understandable in an era of human enhancement and convergence between man and machine, but is also radical under the traditional place of a person when applying IHL. Even when BCI technology reaches the point of allowing for such capability, taking the approach of assessing the brain’s function in a system to determine its status as a person or object for IHL targeting purposes departs from existing norms of simply treating all humans, and their associated parts, as persons. The very basis of IHL is to mitigate human suffering caused by warfare, so any analysis removing an individual’s personhood runs contrary to the spirit of IHL.
Persons, whether they are non-combatant civilians or members of an armed force, are clearly different from buildings, vehicles, weapons, and equipment. This difference between people and objects affects the application of the IHL principles of distinction, proportionality, and humanity. Undergoing a functional analysis of a brain in a BCI system to determine whether it is a person or object serves to overcomplicate the matter and is akin to trying to fit a square peg into a round hole. The role of the brain, and conscious human decision making and agency, is a consideration in whether a BCI-enabled weapon system would comply with IHL during a weapons review process. But, for purposes of targeting, the better approach is to treat the brain—conscious or unconscious—as a part of a person, allowing for consistent application of IHL and its targeting principles.
V. Applying IHL to Targeting BCI in the Cyber Domain
Beginning from a position that always treats the brain as part of a person for targeting purposes allows for a clearer step-by-step analysis of targeting BCI through cyberspace. IHL is understood to apply in cyberspace. Adversaries utilizing BCI to interact with cyberspace, as they would use a computer or other device, may be legally targeted under IHL through cyberspace, but significant analysis is required before undertaking such an operation. The analysis begins with the threshold question of whether the contemplated operation against a BCI meets the definition of an attack. IHL and its targeting principles apply to attacks against BCI, but operations that fall below this threshold will require separate consideration. Once an operation is deemed to meet the definition of attack, the next portion of the analysis considers what the target actually is in the BCI system. Is it the BCI hardware, the computers or servers the BCI interacts with, the brain of the individual, or any or all of the above? Once the scale of expected effects on a BCI is understood, IHL targeting principles can be applied to determine the legality of the operation. Thus, this framework allows for effects on adversary BCI while also offering protections to the brains of individuals incorporated into the BCI.
A. Cyber Attacks and BCI
The Tallinn Manual offers substantial guidance in determining whether an operation against a BCI could be considered an attack for purposes of applying IHL. The Manual defines an attack as “a cyber operation, whether offensive or defensive, that is reasonably expected to cause injury or death to persons or damage or destruction to objects.” The distinguishing factor in whether a cyber operation is deemed an attack is violence, which need not be kinetic, expected to cause the effects listed in the definition. The Manual specifically notes non-violent operations, such as psychological operations or espionage, do not qualify as attacks.
By excepting non-violent operations from its definition of attack, the Tallinn Manual creates a category of potential operations against BCI that do not have associated protections under IHL. Espionage, whether through cyberspace or other means, is an area of concern created by BCI’s access to the brain. Neuroethicists highlight these concerns in their discussions of mental privacy. Such concerns are valid, but they are beyond the scope of this paper. Subsequent consideration of legal and regulatory regimes to address espionage activities against BCI is certainly warranted.
The second non-violent category cited by the Tallinn Manual also requires further consideration. The Manual refers to psychological operations as not rising to the level of an attack for the purposes of applying IHL. Psychological operations, also known as Military Information Support Operations in U.S. doctrine, are “operations to convey selected information and indicators to foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behavior of foreign governments, organizations, groups, and individuals.” These operations focus on target audiences, including adversaries as well as friendly and neutral populations. Thus, psychological operations allow for actions to influence the thoughts of large groups of individuals who may not be participants in hostilities.
Brain-computer interfaces offer a direct avenue to individual minds while conducting psychological operations. Again, this exposure is reflected in the concerns of neuroethicists, who discuss IHRL freedoms of thought, expression, and political independence. While psychological operations contemplated by the Tallinn Manual are not regulated under IHL principles, they are not specifically prohibited under international law and are viewed as a permissible means of warfare. It is also important to note that psychological operations aim to influence a population, not control it. Target audiences of psychological operations retain the ability to digest the information provided to them and to reach their own conclusions, thereby maintaining self-determination and agency over their decisions. Again, as BCI will offer a direct path into the thoughts and minds of individuals, revisiting psychological operations enabled by ever more capable BCI may be warranted.
It is important to note, however, that psychological operations discussed in the Tallinn Manual do not include operations that would result in “mental suffering.” The Tallinn Manual specifically includes such operations as attacks, requiring the application of IHL targeting principles. Individualized effects manipulating BCI, such as manipulating memory to create mental anguish or affecting the psychology of the individual, could be counted as an attack for purposes of the Tallinn Manual due to the resultant mental suffering.
So, too, would many of the other conceivable operations against BCI, including actions aimed at killing or injuring the individual connected to the BCI, damage to the BCI hardware, or disabling or hijacking the function of the physical system connected to the BCI. These categories focus on effects of destruction, injury, or damage that manifest themselves outside of cyberspace in the natural world. For operations intended to create such effects, IHL would clearly apply.
But one final category of operations, those solely against data, provides an additional layer of difficulty when considering BCI. Per the Tallinn Manual, operations against data are not per se attacks unless such operations also affect the functionality of a system or cause other effects tantamount to an attack. State practice has yet to establish positions on the status of data, so a potential gap exists in our understanding of IHL’s application to cyber operations against data. Brain-computer interface technology may widen this gap. Humans equipped with BCI will likely become assimilators of information, as the BCI grants a person immediate access to data. Making this data inaccessible, or corrupting it in some way, may not rise to the level of impairing the function of a BCI, but it will certainly impact a human accustomed to the data being readily available. As humans become more reliant on this data access, depriving individuals of access or corrupting the data could result in the negative mental and psychological effects detailed by neuroethicists. Some have suggested solutions to the status of data in cyber operations, including Peter Pascucci’s suggestion of allowing data that offers a “definitive military advantage or demonstrable military purpose to qualify as a military objective.” Such an approach would resolve the matter for operations against data accessed and utilized by BCI during armed conflict, but this matter has yet to be settled.
Despite certain cyber operations or activities not fitting the definition of attack, the vast majority of potential operations against BCI through cyberspace would be considered attacks for purposes of applying IHL. The brain’s incorporation into a BCI system brings it into closer proximity to the cyber effects created by a given operation, increasing the likelihood that such effects could harm the brain or affect the function of the system the brain is interacting with. Therefore, cyber operations against BCI are more likely to be deemed attacks, triggering the application of IHL and the protections found in its targeting principles.
B. A Framework for Cyber Operations Against BCI
As highlighted by neuroethicists, neuroscientists, and computer security professionals, BCI contain cyber vulnerabilities that can be exploited in several ways. William Boothby, addressing how cyber weapons can be employed, notes that any given cyber weapon will have “numerous orders or levels of effect and these must all be considered when weapons law advice is being prepared.” Boothby goes on to describe four layers of effects that build on each other: effects on the data contained in the node, network, or computer; the impact the affected or manipulated data has on the computer system; how the performance of the computer system affects the object or facility the computer is attached to; and any injury, damage, or destruction suffered by persons or objects that rely on the facility. The key to Boothby’s framework is that the initial effect a cyber weapon actually creates is on data. The subsequent effects flow from this initial effect and can be tailored to create the desired end state, whether that is simply data manipulation or physical damage. Finally, Boothby states each cyber weapon must be evaluated separately from the framework to determine whether it will be indiscriminate.
Boothby’s framework applies well to cyber operations against current information nodes in the cyber domain. Applied to BCI, however, the framework—while still very usable—may require combining the analysis of the third and fourth layers of effects. This is due to the incorporation—or convergence—of the brain into the information node created by the BCI, making the third and fourth layers indistinguishable from each other. Therefore, in considering effects on BCI, it may be more useful to simply consider the effects the cyber weapon would have on the data and hardware in a BCI system, and then any effects on the brain.
Such a framework allows for consideration of both the function and employment of the cyber weapon for compliance under IHL. This, in turn, will allow for specific application of the principle of distinction as the weapon is employed—as the effects will either be targeted at the machine portion of the BCI or at the human brain. It will also allow for easier application of the principle of humanity and, in limited cases, the principle of proportionality.
C. Answering Neuroethical Concerns Through the BCI Targeting Framework
The above BCI targeting framework complements our understanding of the BCI cycle and its components, both machine and human. From our earlier discussion of the BCI cycle, we know that the measurement, decoding, and output phases are associated with machine or computer systems, while the generation and feedback portions of the cycle are associated with the brain. Starting here, we can apply the framework adapted from Boothby to the BCI targeting problem by examining the intended effects of a given operation and how achievement of these effects will impact each portion of a BCI system.
The threshold question will be what effect a commander is hoping to achieve. Once understood, the cyber weapon can be designed and narrowly tailored to create an effect in specific BCI, as well as in specific portions of that BCI’s cycle. New cyber weapons designed and employed against BCI will require analysis of whether the weapon is designed to cause undue suffering or superfluous injury, and whether the weapon is indiscriminate. Once deemed compliant, the weapon can be fielded and utilized by the military forces of a state. When the weapon is utilized, it will also require separate analysis under IHL for adherence to all IHL targeting principles to ensure it is being employed lawfully. Additionally, due to the fleeting nature of code and vulnerabilities in the cyber domain, cyber weapons, including those that could eventually be employed against BCI, may require ad hoc or just-in-time development prior to employment. To provide these capabilities in such circumstances, cyber weapons may well be employed against BCI while simultaneously being evaluated for compliance with IHL and lawful employment.
The requirement to assess a weapon’s compliance with IHL provides initial protections to persons under IHL. This would include the brain in the BCI system, as this analysis would prohibit weapons designed to cause undue suffering or superfluous injury from being fielded. Here, the concerns of neuroethicists regarding the physical and psychological effects of attacks on BCI can be incorporated into the analysis of the weapon’s design, highlighting the potential dangers of weapons aimed at creating effects in BCI and aiding in the development of more refined, legally compliant weapons.
Further, a particular attribute of cyber weapons is the ability to scale and tailor effects to individual systems. As cyber weapons will be utilized to target BCI, this same ability to tailor weapons and effects will also be possible, satisfying requirements that these weapons not be indiscriminate. Tailoring a cyber weapon for use against a BCI in this way also provides additional distinction from biological and chemical weapons, which neuroethicists point to as comparable to future neuroweapons. Some methods used to employ biological and chemical weapons, such as simply releasing biological or chemical agents into the atmosphere, are unlawful due to their indiscriminate nature. A tailored cyber weapon directed against a lawfully targetable BCI system does not share this indiscriminate quality.
Turning to employment of a cyber weapon against BCI, recall the discussion of the components of the BCI cycle and how each can be associated with a person or object. Effects aimed at the measurement, decoding, and output phases can be assessed as targeting objects for purposes of the IHL principles, while effects aimed at the generation or feedback phases can be considered operations against a person. The BCI targeting framework could then be applied, evaluating each layer of effects—including those on the human brain. The difference in assessing the human effect, whether intended as a direct effect or a collateral result of the operation, would be dictated by what part of the BCI cycle was targeted. This approach has several advantages. First, it does not require a commander to conduct an analysis of the brain’s function in a BCI system, which would add an additional layer of complication. Second, it allows for clear application of IHL targeting principles to cyber operations against BCI, reinforcing IHL as the lex specialis for military operations during armed conflict and utilizing legal concepts commanders are familiar with. Finally, as the IHL targeting principles incorporate protections for both combatants and non-combatants, application of these principles provides additional mitigation of the concerns raised by neuroethicists in the context of targeting BCI during hostilities.
This final advantage is reinforced by the framework's emphasis on the principle of humanity in operations against BCI. The concerns of neuroethicists all center on the physical and psychological damage that manipulation of BCI can inflict on the human brain. Clearly, the long-term effects of a damaged brain or the loss of psychological well-being are horrific. To arbitrarily inflict such injuries would be cruel and would meet the standard of undue suffering or superfluous injury. While the principle of humanity does not guarantee these injuries would never occur, it aims to ensure that they would occur only in conjunction with a legitimate military operation and the use of a weapon in compliance with IHL. This advantage, and the application of the corresponding protections offered by the IHL targeting principles, is discussed below.
1. Military Necessity
First formally articulated in the Lieber Code, military necessity has long been recognized as a principle of IHL. Military necessity justifies the use of all measures necessary to bring about the defeat of an enemy that are not otherwise prohibited by IHL. This would include the use of cyber operations or attacks against adversaries equipped with BCI. Such operations would have to be linked to a military requirement, benefit, or objective in order to comply with this principle. This requirement applies to any planned operation against BCI, encompassing both attacks and non-attack activities such as psychological operations. During armed conflict, linking operations to military requirements, benefits, or objectives provides additional mitigation of the concerns raised by neuroethicists. Many of these concerns pertain to hackers violating mental privacy by stealing information from BCI-equipped individuals, cyber actors hijacking the function of BCI, or effects resulting in harm to individuals. Military necessity would allow these types of effects to take place during armed conflict, but not in an arbitrary manner. A commander intending to conduct such an operation would have to define its purpose or objective, adding a layer of consideration and protection for individuals equipped with BCI. While not an absolute prohibition, military necessity would require an IHL-compliant justification for every contemplated cyber operation against BCI.
2. Distinction

Distinction is a bedrock principle of IHL, protecting civilians during hostilities by requiring that attacks be directed only at combatants or military objects. Distinguishing between a combatant and a non-combatant is a different exercise from distinguishing between military and civilian objects, facilities, or equipment. Generally, when applying the IHL principle of distinction to people, the individual's affiliation with an armed service or group is the primary consideration; conduct is considered only in determining whether a civilian is directly participating in hostilities. Objects, however, are examined under a separate test, evaluating whether they make an effective contribution to military action by their nature, use, location, or purpose, and then considering the military advantage of destroying, capturing, or neutralizing the object. Additionally, dual-use objects, utilized for both military and civilian purposes, are also targetable.
Recall the earlier discussion of the brain's status in a BCI, and the conclusion that the brain should always be treated as a person. This conclusion allows for a clearer analysis of the distinction principle. Effects directed at the measurement, decoding, and output phases of the BCI cycle, which comprise the computer or machine portions of the cycle, would be analyzed under the object test for distinction detailed above, while effects directed at the generation and feedback portions of the cycle, which involve the brain, would be analyzed under the person test. Brain-computer interfaces incorporated into adversary military means or weapon systems would be distinguishable as military objects, and the brains connected to, interacting with, and operating these BCI would be distinguishable as combatants, making both targetable. But consider a situation where a civilian BCI is being utilized to carry out an operation, with the civilian unaware of the activity or not in control of it. This scenario is similar in nature to one outlined by Eric Jensen involving the potential future ability to tailor biological weapons. In that scenario, an unwitting carrier, known to have access to the eventual target of the pathogen, is infected with a biological weapon genetically engineered to affect only the target of the attack. The pathogen in the carrier's system is clearly a weapon and is being utilized to carry out an attack, but the individual has no idea the weapon is in their system and, due to the weapon's narrow tailoring, it has no effect on them. This scenario creates significant issues under IHL, including how to treat the unwitting carrier of the weapon. A similar type of latent attack is envisioned in a cyber context in the novel Ghost Fleet.
Here, a Chinese government hacker gains access to multiple digital devices owned by civilians in the United States, including government contractors, to move portions of malicious code into the Defense Intelligence Agency for the purpose of collecting intelligence. While this is not an example of an attack as defined by the Tallinn Manual, since it is a cyber espionage activity, it does highlight the possibility of utilizing devices carried by human beings to transport malicious code. Ubiquitous BCI utilized by the public would be the ultimate human-portable technological device. With a pervasive BCI technology, such as neural lace, it would be impossible to discount that adversaries would take advantage of its vulnerabilities. Adversaries could embed malicious code on these devices without the individual's awareness, using these individuals to carry the malicious code or cyber attack payload to its target in a combination of the scenarios outlined above. In such a scenario, care would be required to distinguish among the status of the malicious code riding on the hijacked BCI, the BCI hardware, and the connected brain when undertaking an operation to counter the attack. Distinguishing the human whose BCI has been hijacked as a civilian invokes the protections of the separate IHL principle of proportionality.
3. Proportionality

The principle of proportionality prohibits "an attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated." Thus, proportionality requires an attacker to first consider two specific factors related to incidental harm to civilians: causation and foreseeability. Causation relates to whether the expected incidental harm would be caused by the attack. Unlike the requirement that the anticipated military advantage be directly related to the attack, there is no corresponding directness requirement for incidental harm to civilians or civilian objects. Incidental harm can be caused either as a direct result of an attack or "as a result of a series of steps." Foreseeability considers whether incidental harm to civilians or civilian objects could have been expected when the attack was planned or launched. When applying foreseeability in assessing a potential attack, the legal standard is one of reasonableness: in other words, "what should have been foreseen" based on the information on hand, or information that could reasonably be expected to be on hand. Once an attacker determines incidental harm is foreseeable, they must then consider the likelihood that such harm would occur. That likelihood informs the weight to place on the incidental harm in the larger proportionality analysis. After causation and foreseeability have been fully considered, these considerations must be weighed against the anticipated military advantage to be gained by the attack: "[P]roportionality prohibits attacks expected to cause incidental harm that would be 'excessive' in relation to the anticipated concrete and direct military advantage."
Proportionality would initially appear to present little difficulty in application to BCI. Operations against BCI distinguished as military objects connected to adversary brains could be tailored to limit effects solely to those military targets, essentially rendering proportionality moot. Further, operations exclusively against military BCI hardware would also seem to leave civilian or collateral effects out of the calculus. But, as detailed in the discussion of distinction, scenarios such as brain-hacking, brainjacking, or involuntary manipulation of civilian BCI could lead to otherwise-civilian BCI hardware being utilized for military purposes. Defending against this threat may require disabling the BCI implanted within the individual or interfering with its functionality, either temporarily or permanently. These effects could create in these civilians the detrimental psychological effects envisioned by neuroethicists.
Such a scenario adds to the difficulty of applying the principle of proportionality in cyberspace. While the Tallinn Manual allows that effects resulting in mental suffering can be considered attacks, the suffering in this scenario would be a collateral effect on a civilian brain caused by taking action against malicious code within their BCI. But what manipulation or effect within the hijacked BCI would be required to counter the malicious code? Following the above framework for operations against BCI, the hijacked BCI could be considered a military target as a dual-use object. But, as Peter Pascucci highlights, under the Tallinn Manual operations that affect the functionality of the BCI would be considered attacks, while open questions remain as to whether merely manipulating data would rise to this standard. This creates a potential scenario where data is manipulated in a civilian's BCI hardware to a level not clearly meeting the standard of an attack, yet still causing the collateral effect of mental suffering in the civilian brain connected to the BCI.
Mental suffering has traditionally not received the same level of consideration as loss of civilian life or physical injury to civilians in the proportionality analysis. Certainly, if an attack on a BCI would cause an incidental civilian death or injury, it would require consideration under proportionality. Yet, due to the challenge of applying causation and foreseeability, as well as the difficulty of assessing and quantifying mental harm, mental suffering has not enjoyed the same attention. The arrival of BCI on the battlefield may accelerate concern about mental suffering as an incidental harm under proportionality, particularly as there is no severity requirement attached to injuries when considering incidental harm. Recognizing that an attack on a BCI could cause the harm and mental injury described by neuroethicists could give mental suffering greater prominence in the proportionality analysis, highlighting the clear applicability of the principle during armed conflict when non-combatant effects are anticipated. Further, it shows that, as circumstances warrant, great care must be given to analyzing the effects a given operation may have on civilians prior to its execution.
4. Humanity

Finally, the principle of humanity serves as the bedrock underlying several other IHL principles. Humanity is also the complement to military necessity, tempering the extent to which military necessity can be invoked to justify military operations. The modern articulation of humanity is found in Article 35 of Protocol I, which notes that a state's ability to employ methods and means of warfare is not unlimited and prohibits the use of "weapons, projectiles, and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering."
Adherence to the principle of humanity occurs through two main lines of effort. First, consistent with Article 36 of Protocol I, new weapon systems may be subject to review for compliance with IHL, specifically their compliance with humanity. Second, when a weapon is employed, there is an obligation not to cause undue suffering or superfluous injury.
Whether the target is a person or an object also affects the application of the principle of humanity. This is due to the nature of the principle and its interaction with the principle of military necessity. If it is necessary to engage a target, humanity would prevent doing so only if the engagement were specifically designed to bring about undue suffering or superfluous injury. Simply engaging a legitimate military target out of military necessity, which may result in the injury or death of combatants, does not violate the principle of humanity.
If a person is a lawful target, it is not a violation of IHL or the principle of humanity to engage and kill them. To illustrate this point in a cyber context, consider the pacemaker scenario described in the Tallinn Manual. Cyber manipulation of a pacemaker to induce cardiac arrest in a lawful target would not be a violation of humanity, but causing a series of heart attacks in order to induce pain and suffering in the target prior to killing them would be a violation of humanity.
Targeting an object creates different considerations under humanity. As an illustration, should a commander determine it necessary to engage a tank, a larger munition would be required than would be necessary to engage personnel. It is possible, or even likely, that personnel will be inside and operating the tank at the time it is struck. The larger munition could cause the adversaries inside the tank to suffer; but because it was militarily necessary to engage the tank, and the weapon utilized was designed to destroy the tank rather than to cause undue suffering or superfluous injury to the people inside, the engagement would not violate the principle of humanity.
Brain-computer interface hardware presents unique issues in the application of humanity. While applying humanity to implanted technology was contemplated by the Tallinn Manual, the example of the pacemaker did not encompass the type of technology that allows for a biological system to directly interact with the cyber domain, transmit and receive data, or control military objects. Further, the potential for enduring physical, neurological, and psychological effects caused by operations against BCI presents a different dimension to the application of humanity. William Boothby indicates, as time and technological advances move forward, “[c]ultural appreciations as to which injuring mechanisms are respectively acceptable, undesirable, or abhorrent may change, affected in part by medical advance.” Boothby’s observation is currently manifesting itself through the neuroethical discussions and advocacy surrounding BCI that highlight several of the dangers and damage to individuals’ mental well-being that can be caused by attacks on BCI.
It is here that the principle of humanity will both garner outsized consideration in operations against BCI and serve an enabling function for operations against this technology. Humanity's animation of other IHL principles has already been noted in the requirements and protections of military necessity, distinction, and proportionality, all of which shield the brains of individuals connected to BCI. Beyond these requirements directly rooted in humanity, the principle adds one last layer of protection for the brains of adversaries connected to BCI: attacks against BCI, and by extension brains, must not be conducted in a manner designed to cause undue suffering or superfluous injury.
The prevention of undue suffering or superfluous injury in the conduct of operations serves to eliminate actions unnecessary to achieving a legitimate military objective. Humanity thus serves as a mechanism to enhance military efficiency and effectiveness. Applying this concept to operations against BCI, a series of examples illustrates the interplay between military necessity and humanity.
First, consider effects on an adversary's BCI designed to gather data, share information, communicate, or exercise command and control. During armed conflict, denial or disruption of the system would serve a military purpose and would likely have the same effect as denying information to adversaries does today. Even if effects on this BCI resulted in a psychological effect in an adversary, such as loss of confidence, these effects arguably would not rise to the level of undue suffering or superfluous injury. Even if they did, the valid military purpose of the operation directed at the BCI would still make the operation compliant with humanity.
But consider scenarios where an operation against BCI erases or manipulates data. Setting aside the debate on the status of data as an object of attack, if the data were associated with a military function during hostilities, an operation against it would likely not violate humanity, for reasons similar to those above. If the operation targeted personal data, however, the analysis could shift. Targeting personal data, such as medical records, could manifest in unnecessarily painful physical harm if the wrong treatment were administered. Consider also the Anon scenario of manipulating the data of painful memories, causing them to be ever present in a person's mind. The result could be significant personal anguish in the targeted individual, which in turn could render the operation one conducted simply to cause undue suffering.
Now, consider the ability to manipulate the feedback portion of the BCI cycle and how it could affect the electrical signal returning to the brain. As highlighted, this could potentially be utilized to cause physical damage to the brain or create changes in mood and personality. The military necessity of disrupting a commander’s ability to make decisions or to exercise control over the battle space is certainly legitimate, but are the lasting effects of such an operation in conformity with humanity if the damage to the commander’s mental well-being is permanent?
Moving to BCI designed to exert control over physical systems or individuals, the potential for actions out of conformance with the principle of humanity grows due to the physical dangers an individual may experience. Consider the example of manipulation of a person's bodily movements highlighted earlier. Imagine intelligence exists that an adversary is driving a vehicle and is equipped with a functioning BCI. Would manipulating that adversary to jerk the wheel and drive off a cliff violate humanity? Certainly, the individual would face the dual terror of losing self-control and of impending death due to the manipulation; but they are a legitimate target.
Similar scenarios endangering individuals can also be envisioned through manipulation of weapon systems incorporating BCI. An individual utilizing a prosthetic or exoskeleton could have control seized from them, leaving them helpless and along for the ride as the new masters of the machine carry out their will. Targeting these weapon systems would serve some level of military necessity, but an operation designed to carry out the envisaged effects would certainly leave lasting harm on the individuals within these systems.
The point of exploring these scenarios is to highlight the balance between military necessity and humanity. Cruelty and wanton violence are not permissible on the battlefield; only operations based on military necessity that adhere to the other protections of IHL are permissible. Operations against BCI, including those that could result in damage to the brain, can be legally permissible, but, tempered by the ever-vital principle of humanity and its protection of the brain, they will require careful application of all IHL principles.
There is no luxury of waiting for new technology to come into being before thinking about the challenges it will present. This article addresses one of the myriad challenges presented by BCI, fully recognizing that other open questions exist, including the potential for intelligence collection and activity through BCI, as well as activities outside of armed conflict. While these challenges will require answers, targeting BCI during armed conflict in a manner consistent with existing IHL appears possible through a systematic evaluation of a given operation.
Brain-computer interfaces present the possibility for human beings to become more integrated with machines and computers. While this article approached this integration—or convergence—from the perspective of finding the brain’s place in the cyber world, perhaps the better approach would have been to acknowledge that—as some authors contend—cyberspace is not a real place. Focusing simply on operations, effects, and how they manifest in the physical world allows for clearer analysis of the application of IHL and consideration of the concerns of neuroscientists and neuroethicists.
The concerns of neuroethicists reflect in many ways how convergence with technology, and the envisioning of a separate cyber or technical world, seems to be slowly stripping our humanness away. Our brains are the last great step in this integration, and neuroethicists have rightly sounded the alarm on the possible repercussions of the path ahead. The alarm is all about the human, not the machine, a point that should be central in any discussion of such technology. We should therefore be sensitive in our legal analysis to preserving the humanness of persons connected to machines, which will naturally allow IHL principles, specifically the principle of humanity, to provide protection from the dangers created by man-machine convergence technologies such as BCI.