DoD Needs New Policies, Ethics For Brain-Computer Links (Jacked-In Troops?)

WASHINGTON: In cyberpunk fiction, people routinely “jack in” to cyberspace — linking their electrode-enhanced brains, or “wetware”, directly to computers. That future is near, the RAND Corp. says, with  “brain-computer interface (BCI)” technology poised to begin moving from labs to operational military applications.

“In general, with emerging technology there is always a struggle to predict the future — and that’s just not possible. BCI is not just science fiction; it has viable practical applications, but there is much more work that needs to be completed before it becomes mainstream and commercial,” Tim Marler, one of the co-authors of the study, Brain-Computer Interfaces: U.S. Military Applications and Implications, An Initial Assessment, told Breaking D in an interview.

Marler and study co-authors Anika Binnendijk and Elizabeth Bartels say that while the emergence of BCI comes with a whole host of potential benefits for military operations, it also opens up huge technical, ethical and legal risks.

“It’s good to put attention on this emerging subject, since what’s been science fiction is now becoming reality,” Patrick Lin, a philosophy professor at California Polytechnic State University, and a member of the  Center for a New American Security’s Task Force on AI and National Security, said in an email.

“The ethics of brain-computer interfaces is a huge subject, given the nature of the technology and its potential,” he added. “It’s not just a merger of hardware and wetware, but also the collision of technology ethics and traditional bioethics that hasn’t been handled well so far by industry or regulators.”

The RAND study is based on the “premise that human-machine teaming will play a major role in future combat and that BCI may provide a competitive advantage in future warfare.” BCI could enable seamless human-machine teaming; increased speed of decision-making for command and control of a hyperconnected battlefield; enhanced human endurance during combat; and improved medical care for wounded vets, says the study, released today.

In particular, it could advance human-machine teaming by helping humans:

  • “Digest and synthesize large amounts of data from an extensive network of humans and machines;
  • Make decisions more rapidly due to advances in AI, enhanced connectivity, and autonomous weaponry;
  • Oversee a greater number and types of robotics, including swarms.”

And just as the seminal cyberpunk novel Neuromancer by William Gibson predicted in 1984, BCI could even allow humans to directly connect to the Internet — in effect becoming another node in the Internet of Things (IoT).

“DoD believes IoT can contribute to improved readiness by allowing one to monitor the status of materiel and weapons systems in real time, and it is thus becoming pervasive. IoT has tactical applications, including giving warfighters access to sensors and data, and BCI could enhance this ability,” the study says.

Risks from the use of BCI are monumental. Technical risks range from adversaries jamming human-machine networks to hacking into the “jacked-in” brains of US soldiers via cyber attack, causing confusion or even physical harm.

“As with many new technological developments, BCI may create new military operational vulnerabilities, new areas of ethical and legal risk, and potentially profound implications for existing military organizational structures,” the study says.

Potential vulnerabilities could include “the potential for new points of failure, adversary access to new information, and new areas of exposure to harm or avenues of influence of service members,” the study explains. Institutional risks include “challenges surrounding a deficit of trust in BCI technologies, as well as the potential erosion of unit cohesion, unit leadership, and other critical interpersonal military relationships.”

Ethical questions are myriad. In extremis, to use another example from science fiction, militaries or intelligence agencies could create ‘mind-controlled’ suicide bombers. But more routine ethical challenges are also serious, the study says: how can DoD guard against long-term mental or physical side effects from such technology? Who is accountable if something goes wrong?

Another issue is what happens when a BCI-enhanced warfighter retires. Does the technology, asks Jonathan Moreno, a professor of bioethics at the University of Pennsylvania, have to be removed after a soldier retires or leaves the service? How might that affect veterans?

“To what extent should the military be worried about removing something, taking something away, when you are no longer in the force?” he asked. “Do Super Soldiers ever die, or just fade away?”

The RAND study sums up:

“BCI can likely be useful for future military operations, even in the most difficult test case: infantry ground force combat. This utility may become particularly pronounced once technology for military applications of AI and robotics develops further, and once adversaries have access to these capabilities. Nonetheless, the application of BCI would support ongoing DoD technological initiatives, including human-machine collaboration for improved decision-making, assisted-human operations, and advanced manned and unmanned combat teaming.

Of course, as with most significant technological advances, there are potential risks. BCI falls subject to the capability-vulnerability paradox, with counterweighted benefits and risks, and, as development efforts and eventual acquisition efforts progress, requirements will need to account for such risks.”

RAND recommends that the Pentagon move forward now to develop a research and development strategy that integrates ethical considerations up-front. The study recommends:

  • “Assess the operational risks and benefits of BCI technology in combat.
  • “Address a potential lack of trust in BCI technologies prior to adoption by the armed services.
  • “Work with academic and private-sector laboratories to leverage private-sector advances in BCI and improve trust gaps within the military.
  • “Plan for institutional implications and address new ethical and policy issues at each stage of the process, from research and development, to operational application, to veteran care.”

As Breaking D readers know, the Air Force in particular is enthusiastic about human-machine teaming whereby piloted aircraft and AI-driven drones are seamlessly linked for air combat operations. Development projects include Air Force acquisition czar Will Roper’s high-priority Next-Generation Air Dominance (NGAD) effort to rethink a sixth-generation fighter, and the Skyborg program, managed by the Air Force Research Laboratory, to develop an artificial intelligence-based “brain” for “loyal wingman” drones.

DARPA and the Army also have funded research at the University of Delaware’s Human-Oriented Robotics and Control laboratory to enable a user to control a swarm of drones. According to RAND, the lab’s “researchers suggest the technology could be used practically in the military within five to ten years. Applications also include delivery of medical help, search and rescue, and exploration, all in remote or inaccessible environments.”

Up to now, work by the military services has largely focused on the machine aspect rather than the human one, including on the question of ethics. As Breaking D readers know, DoD’s Defense Innovation Advisory Board in October 2019 put out a report listing five ethical principles applying to military AI research. And just last month, as Kelsey reported, the Intelligence Community issued a similar set of principles for ethical AI development.

DARPA, on the other hand, has been increasingly focused on the human side of manned-machine teaming, as well as other potential uses for BCI. As Sydney wrote way back in 2017, the Biological Technologies Office is exploring whether a weapon system could respond to a human’s brain cells forming the intention to control it. In probably the most famous DARPA experiment so far, in 2015 a quadriplegic woman flew an F-35 in a simulator using only her brain.

As the RAND study notes, DARPA is also a key player in the 2013 National Institutes of Health (NIH) initiative started by the Obama Administration called Brain Research through Advancing Innovative Neurotechnologies (BRAIN). BRAIN, designed to improve understanding of the human brain, also includes the National Science Foundation, the Food and Drug Administration (FDA), and the Intelligence Advanced Research Projects Agency (IARPA), as well as foundations, institutes, universities, and biotech industries.

Related to BRAIN is Neural Engineering System Design (NESD), which “aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the brain and the digital world.” This DARPA program has two sub-projects: “Towards a High-Resolution, Implantable Neural Interface” and “Bridging the Bio-Electronic Divide.” The planned budget for NESD is $88.469 million, DARPA spokesperson Jared Adams told Breaking D today.

“DARPA is committed to exploring the ethical, legal, and societal implications of potential, future human applications of Brain-Computer Interface (BCI) technologies,” he added.

The RAND study was based in large part on a July 2018 tabletop exercise involving military warfighters and neuroscientists.

“We found that our initial discussions on BCI technologies were overly abstract. So we projected forward to consider what these breakthroughs in the laboratory setting might tangibly mean for the future warfighter. With this notional “toolbox,” we could then have a more grounded analysis of the potential risks and benefits of the future technology,” Binnendijk told Breaking D.

The exercises and research led RAND to focus on several areas where BCI tools could be useful for military applications:
  • Human-machine decision-making “involves transferring data to the human brain from sensor input and from the brain to machines. … This kind of tool allows a warfighter to digest more information faster, to be used, for example, with theater assessment or risk and threat assessment. Warfighters ultimately can increase overall reaction time, thus collapsing the OODA loop.”
  • Human-machine direct system control “involves allowing warfighters to control systems with their thoughts wirelessly, as well as to supervise semiautonomous and AI systems, including robots, drones, drone swarms, or jets. … This, in turn, provides the warfighter increased situational awareness and again helps collapse the OODA loop.”
  • Human-to-human communication and management “entails wirelessly transmitting commands or basic ideas among warfighters and commanders, lightening the load of communications systems. It could facilitate immediate and silent communication of plans or tactics on the battlefield, or improve communication with headquarters to enhance commanders’ awareness of in-theater conditions.”
  • Monitoring performance “would enable awareness of group or individual emotional, cognitive, and physical states. … thus detecting when a person is fatigued, paying attention, has high or low cognitive workload, or is significantly stressed.”
  • Enhancement of cognitive and physical performance “includes improving a warfighter’s cognitive and physical states on the battlefield.” Cognitively, it could “yield enhanced focus, alertness for rapid and improved situational awareness and decisionmaking.”  Physically, it could enhance the senses, such as hearing; enable pain mitigation; or improve strength “through more efficient integration with mechanical exoskeletons, which are natural extensions of the work on prosthetics.”
  • Training via BCI “could improve operator learning and memory processing, allowing warfighters to retain more information.”
