Brain-computer interfaces (BCIs) could allow troops to control weapon systems with their minds.
Imagine that a scientist has developed a tiny computer device that can be guided by a magnetic field to specific regions of the brain. With training, soldiers could then use their thoughts to control weapon systems hundreds of miles away.
Embedding a similar kind of computing device in soldiers’ brains could help them perform better during combat missions.
And a device programmed with an artificial intelligence system could predict what choices a soldier is likely to make in a given situation.
These examples may sound like science fiction, but the science needed to develop technologies like these already exists.
A BCI decodes neural activity and transmits it from the brain to an external device, which carries out the desired action. In principle, a user would only need to think about what they wanted to do, and a connected machine, such as a robot, would do it for them automatically.
Currently, BCIs are being used to help people with severe neuromuscular diseases regain some of their lost abilities. For example, patients can use a BCI to turn lights or appliances on and off. Similarly, by learning to modulate their own brain activity, patients can use a BCI to move a cursor on a computer monitor.
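To make the decoding step concrete, here is a minimal, purely illustrative Python sketch of how a motor-imagery BCI might turn a brain signal into a cursor command. Everything in it, the sampling rate, the two-channel setup, the thresholds, and the function names, is a hypothetical simplification; real systems use multichannel recordings and trained classifiers rather than a hand-set rule.

```python
import numpy as np

# Illustrative only: decode a simulated "motor imagery" EEG window into
# a cursor command. The feature used, power in the 8-12 Hz mu rhythm,
# is a common choice in research BCIs, but all numbers here are made up.

FS = 250           # sampling rate in Hz (hypothetical)
WINDOW_S = 1.0     # decode one command per 1-second window

def band_power(signal: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Average spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

def decode_command(left_ch: np.ndarray, right_ch: np.ndarray) -> str:
    """Compare mu-band power over the left vs. right motor cortex.

    Imagining right-hand movement suppresses the mu rhythm over the
    left hemisphere (and vice versa), so lower power on one side is
    read as intent to move the other way. Thresholds are invented.
    """
    left_mu = band_power(left_ch, FS, 8.0, 12.0)
    right_mu = band_power(right_ch, FS, 8.0, 12.0)
    ratio = left_mu / (right_mu + 1e-12)
    if ratio < 0.8:
        return "MOVE_CURSOR_RIGHT"   # left-hemisphere mu suppressed
    if ratio > 1.25:
        return "MOVE_CURSOR_LEFT"    # right-hemisphere mu suppressed
    return "HOLD"

# Simulated 1-second window: noise plus a 10 Hz rhythm that is much
# stronger on the right channel, mimicking imagined right-hand movement.
rng = np.random.default_rng(0)
t = np.arange(int(FS * WINDOW_S)) / FS
left = rng.normal(0, 1, t.size) + 0.3 * np.sin(2 * np.pi * 10 * t)
right = rng.normal(0, 1, t.size) + 1.5 * np.sin(2 * np.pi * 10 * t)

print(decode_command(left, right))  # expected: MOVE_CURSOR_RIGHT
```

The point of the sketch is that a BCI never reads a thought directly; it infers intent from statistical features of a noisy signal, which is part of why the questions about accuracy and mental privacy discussed below matter.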
Ethical concerns have not kept up with the science. While ethicists have pushed for more ethical scrutiny of neural modification in general, the specific issues raised by brain-computer interfaces have received less attention.
For example, do the benefits of BCI technology outweigh its potential risks? Could BCIs be used to control certain emotions? And would using a BCI affect the moral autonomy, personal identity, and mental well-being of its user?
We’re interested in these questions because we’re philosophers and neurosurgeons who study the ethics and science of current and future BCIs.
Before implementing BCIs, we must consider their ethical implications. We argue that responsible implementation of BCIs requires safeguarding people’s abilities to function in a wide variety of ways that are considered essential to being human.
Expanding BCI Beyond the Clinic
BCIs, sometimes called brain-machine interfaces, are being explored by researchers for many purposes beyond medicine, including gaming, virtual reality, artistic performance, warfare, and controlling aircraft.
For example, Neuralink, a company co-founded by Elon Musk, is working on a brain implant that could eventually let healthy people connect wirelessly with anyone who has a similar implant and computer setup.
The U.S. Defense Advanced Research Projects Agency (DARPA) has developed a device that can read from and write to multiple parts of a soldier’s brain simultaneously, and it aims to field noninvasive BCIs for military personnel by 2050. For instance, soldiers in a special forces unit could use BCIs to communicate directly with one another and with their commanders, sharing real-time information and reacting faster to threats.
We know of no project that has publicly discussed the potential for negative public perceptions of BCIs. Yet we believe that an open discussion of these public and social concerns needs to happen before the technology is deployed.
Utilitarianism
One way to tackle the ethical questions raised by brain-computer interfaces is through utilitarian ethics. Utilitarianism is a moral philosophy that strives to maximize the overall good for society.
From a utilitarian standpoint, enhancing soldiers may improve a country’s ability to fight wars, protect its military resources, and maintain its military readiness. On this view, neuroenhancers like BCIs are morally equivalent to other commonly used forms of brain enhancement, such as stimulant drugs.
However, some worry that utilitarian approaches to BCIs have moral blind spots. Whereas medical BCIs are intended to help patients, military BCIs are intended to enable soldiers to kill their enemies. In the process, a utilitarian calculus might disregard individual rights, such as the right to be mentally and physically healthy.
For example, soldiers who operate drone weapons remotely report higher rates of PTSD and marital breakdown than soldiers on the front lines. Of course, soldiers routinely choose to risk their lives and well-being for the greater good. But if neuroenhancement becomes a job requirement, it could create new ethical concerns about coercion.
Neurorights
A different approach to the ethics of BCIs, known as neurorights, prioritizes certain ethical values even if doing so does not maximize overall well-being.
Proponents of neurorights argue for people’s rights to cognitive freedom, mental integrity, and psychological continuity. They believe these rights protect against unreasonable interference with a person’s mental states.
A human rights framework for protecting people from harmful practices requires three things: (1) a legal obligation to respect certain basic freedoms; (2) an understanding of what these freedoms entail; and (3) a mechanism by which they may be enforced.
BCIs could disrupt neurorights by tampering with how the world appears to a user. If a device alters a person’s perceptions or feelings, they might no longer be able to tell which thoughts and feelings are their own. This could violate neurorights like mental privacy or mental integrity.
Soldiers already forfeit similar rights, however. For example, the military is allowed to restrict soldiers’ free expression and free exercise of religion in ways that aren’t usually applied to civilians. Would infringing on neurorights be any different?
Human Capabilities
A human rights perspective focuses on protecting specific rights, whereas a human capability approach emphasizes protecting a broad set of abilities considered essential to being human. We find the capability-based perspective compelling because it provides a richer understanding of human nature than traditional rights-based approaches.
From this perspective, we argue that any application that enhances abilities above average human levels needs to be developed in ways that allow the user to achieve their own goals, not simply others’ goals.
For example, a BCI that both extracts and analyzes neural activity and delivers sensory feedback, such as sensations of touch, back to the user could be dangerous if it disrupted the user’s sense of self. Similarly, a device that controls a user’s movements would infringe on their dignity if it gave them no control over it.
A limitation of the capability approach is that it can be hard to determine what counts as a “threshold” capability, and the approach doesn’t specify which new capabilities are worth pursuing.
Yet neuroenhancement could alter what is considered a standard threshold, and could eventually introduce entirely new human capabilities. Addressing this requires supplementing the capability approach with a fuller ethical analysis designed to answer these questions.