Elon Musk's Neuralink dilemma: Decoding minds, challenging ethics
The tech billionaire’s brain-implant start-up is leaping into human trials, sparking ethical concerns and casting a spotlight on the uncharted territory of this groundbreaking technology.
Elon Musk’s vision for his brain-implant start-up Neuralink can be summed up in his own words: “A backup drive for your non-physical being, your digital soul.”
While this concept may appear straight out of a dystopian science fiction streaming hit like Netflix’s Black Mirror, the reality is that Elon Musk is resolutely driving forward with this brain-computer interface (BCI) project.
The company announced its readiness to start human trials after receiving the green light from the US Food and Drug Administration (FDA) in late May. The trial’s primary objective is to assess the safety and functionality of Neuralink's revolutionary tool, designed to empower individuals to control external devices using their thoughts.
The company has also secured approval from a hospital institutional review board, an independent body responsible for overseeing biomedical research involving human subjects.
Musk’s ultimate goal is to create a “general population device” that can directly link human minds to supercomputers, potentially bridging the gap with artificial intelligence.
This momentous step brings us closer to the realisation of science fiction-inspired technology that facilitates direct communication between the human brain and technological devices.
However, as the company embarks on this audacious journey, ethical dilemmas loom large, casting shadows of uncertainty over the future of brain-computer interfaces.
What sets Musk's Neuralink apart?
Neuralink is not treading this path alone. While experiments with brain interface devices date back to the 1960s, no commercial product has emerged thus far.
Other research endeavours have allowed paralysed individuals to interact with computers and control prosthetic limbs through their thoughts, primarily within controlled laboratory settings. Researchers worldwide have explored the potential of implants and devices to treat conditions such as paralysis and depression.
Elon Musk’s approach to BCIs distinguishes Neuralink from other companies working in the same field, which have primarily focused on using their devices to address specific medical conditions such as seizures, Parkinson’s tremors, or paralysis.
Neuralink entered the industry in 2016 and introduced a brain-computer interface known as the Link - a computer chip studded with electrodes that can be surgically implanted onto the brain’s surface, connecting it to external electronic devices. Additionally, Neuralink has developed a robotic device to perform the implantation of this chip.
Musk envisions a multitude of therapeutic applications for Neuralink’s device, including the treatment of conditions like blindness, paralysis, and depression.
However, his ultimate goal goes beyond medical purposes, aiming to create a device that can link human minds to external technological devices.
He argues that the Neuralink device would empower humans to compete with emerging sentient AI, stating, “I established Neuralink specifically to address the AI symbiosis problem, which I believe poses an existential threat.”
The dark side of human-AI symbiosis, according to Musk, is that a hypothetical AI dictator could emerge from it. He argues that the most effective way to mitigate this threat is to democratise AI technology, a goal he aims to achieve through Neuralink.
By developing this interface connecting the human brain and AI, Neuralink aspires to augment human capabilities, democratising access to this technology and addressing Musk’s concerns about the possibility of an immortal, evil and dictatorial AI.
Exploring ethical questions surrounding Neuralink and other BCIs
The emergence of Neuralink’s human trials raises profound ethical concerns, encompassing various aspects of the company’s endeavours and the broader implications of brain-computer interfaces.
Firstly, the timing of Neuralink’s FDA approval in May coincided with growing scrutiny of its testing practices and disturbing allegations of animal cruelty. Reuters reported that over 1,500 animals have died in Neuralink’s experiments since testing began in 2018.
The mortality rate, employees alleged, has exceeded what could be considered normal. They attribute this alarming rate to the demands for rapid research imposed by Elon Musk, which have resulted in more errors and failed procedures.
Some former employees have even described certain experiments as “hack jobs,” citing instances like the incorrect sizing of devices in numerous test pigs and the accidental implantation of Neuralink’s device into the wrong vertebra, leading to the euthanasia of the affected animals due to severe pain and suffering.
Long-term impact of BCIs
While informed consent is currently obtained from clinical participants, the long-term impact of brain-computer interface technology on the human body remains uncertain.
For academics studying this field, establishing ethical guidelines is therefore imperative to ensure a commitment to responsible practices. As advancements in brain technology accelerate, ethical standards must keep pace with this progress.
Dr. Nancy Jecker, a professor at the University of Washington School of Medicine, Department of Bioethics and Humanities, emphasised the significance of such guidelines in an interview, stating, "Without ethics guidelines, the ethics is going to be hit and miss and we're going to be reacting to problems rather than preventing them. And in the process, we might end up irreversibly damaging people in ways we could have avoided."
The concerns do not stem from the technology itself but from the potential long-term consequences it may hold, says Dr. Andrew Ko, a neurosurgeon and professor at the University of Washington School of Medicine.
The departure of key scientists from the founding team, along with Elon Musk's insistence on an accelerated timeline for new product development, heightens the ethical concerns surrounding Neuralink's human trials.
These departures, the pressure for swift progress and the uncertainty regarding the long-term consequences of this technology on the human body raise valid questions about whether the company can maintain the necessary rigour, safety, and ethical standards during this critical phase of its research and development.
Safeguarding sensitive medical data
Data privacy presents yet another ethical dilemma. Twitter’s past mishandling of user data and breaches of its commitments to protect it give rise to legitimate concerns about whether Elon Musk’s Neuralink can effectively safeguard the sensitive data acquired from clinical trial participants.
Because BCI technologies could provide direct access to the human brain, the implications for the privacy and personal freedom of clinical trial participants must be carefully considered. This concern extends to cognitive liberty, which, as one bioethicist has defined it, is intricately linked to the organ that shapes human identity.
In response to these complex challenges, ethicists have put forward a set of emerging rights: the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity. Amidst this backdrop, the protection of this exceptionally sensitive medical information takes centre stage.
Furthermore, there is uncertainty about how Musk's company, with objectives extending beyond mere medical applications, will navigate these critical issues concerning the safeguarding of patients' sensitive data. Ensuring that the company prioritises the well-being of trial participants and refrains from exploiting their data for profit remains a paramount concern in this evolving landscape.
US military’s so-called “constructive role”
Beyond these specific concerns, BCIs like Neuralink have the potential for dual-use applications, including military and surveillance purposes.
The US Department of Defense, known for its extensive history of military interventions and covert surveillance, has invested significantly in the development of these technologies over an extended period. It even published a report in 2020 discussing potential risks while simultaneously crafting a methodology for assessing their applications.
The report boldly suggests, “The U.S. government thus has an opportunity to play a constructive role in the coming decades in supporting elements of BCI technology that benefit U.S. national security and seeking to mitigate risks.”
However, given the undeniable track record of the US military, riddled with human rights violations and the indiscriminate use of advanced weaponry on civilian populations in numerous conflicts, the notion of a “constructive role” in deploying emerging technologies for military purposes appears to be quite hypocritical.
This underscores the necessity for the international community to thoroughly evaluate comprehensive policies, safety measures, legal frameworks, and ethical considerations before widespread deployment of this technology occurs.
Such evaluations are crucial to ensuring responsible and equitable use of brain-computer interfaces on a global scale.
A prudent stance on BCIs
Elon Musk's bold vision for brain-computer interfaces prompts us to consider not just the technical possibilities but also the ethical dimensions of this groundbreaking technology.
To address these concerns, it is imperative that Neuralink and other organisations in this field uphold a steadfast commitment to transparency, safety, and ethical standards.
While the path ahead may be riddled with uncertainties, it is paramount to take a deliberate and thoughtful approach. Guided by an ethical compass, we can aspire to a future where brain-computer interfaces elevate human potential while steadfastly upholding the fundamental values and principles that underpin humanity.