Progress in mind-controlled devices raises questions of privacy, free will

Advancements in brain-computer interface technologies are raising profound ethical concerns about human agency and moral responsibility.

A recent report estimates the global brain-computer interface (BCI) market was worth $1.74 billion in 2022 and is expected to surge to $6.2 billion by 2030. / Photo: Reuters

Imagine a world where our thoughts control everyday devices – from typing a message on a phone to operating a wheelchair.

This is not the stuff of science fiction anymore but an emerging reality as several companies, including state-funded groups, push the boundaries of brain-computer interface (BCI) technology.

These inventions promise breakthroughs in treating paralysis and enhancing cognitive capabilities, yet they also raise some thorny questions.

What happens to our privacy when our minds are directly linked to machines, and how do these technologies reshape the fundamental ideas of free will and moral responsibility?

And while the idea is for the mind to control the devices, can the devices themselves one day come to take control of the mind?

Brain-Computer Interfaces (BCIs)

In early 2024, Elon Musk's brain technology startup Neuralink implanted its first chip into a 29-year-old user, Noland Arbaugh.

"Progress is good, and the patient seems to have made a full recovery, with neural effects that we are aware of. Patient is able to move a mouse around the screen by just thinking," Musk said in a Spaces event on social media platform X.

The study involved using a robot to surgically place a brain-computer interface implant in a region of the brain that controls the intention to move, Neuralink has said, adding that the initial goal is to enable people to control a computer cursor or keyboard using their thoughts.
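For a sense of how such a system turns thought into action, consider the simplified sketch below, written in Python. It fits a plain linear, least-squares decoder that maps simulated electrode firing rates onto a two-dimensional cursor velocity. The electrode count, noise model and decoding method are illustrative assumptions made for this article, not Neuralink's actual algorithm; real BCI decoders work on recorded spike data and use far more sophisticated models.

# Illustrative only: a generic linear decoder mapping simulated neural
# firing rates to a 2D cursor velocity. Not Neuralink's algorithm.
import numpy as np

rng = np.random.default_rng(seed=0)

n_channels = 64      # hypothetical number of recording electrodes
n_bins = 5000        # number of training time bins

# Toy "tuning" model: each channel's firing rate depends linearly on the
# intended cursor velocity (vx, vy), plus noise.
true_tuning = rng.normal(size=(n_channels, 2))
intended_velocity = rng.normal(size=(n_bins, 2))
firing_rates = intended_velocity @ true_tuning.T + 0.5 * rng.normal(size=(n_bins, n_channels))

# Fit a least-squares decoder that maps firing rates back to velocity.
decoder, _, _, _ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# Decode one new time bin of activity into a cursor velocity command.
new_intent = np.array([[1.0, -0.5]])   # the user "wants" to move right and slightly down
new_rates = new_intent @ true_tuning.T + 0.5 * rng.normal(size=(1, n_channels))
print("Decoded velocity (vx, vy):", (new_rates @ decoder).round(2))

In practice, the decoded velocity would be passed to the computer as a cursor command, and such decoders need frequent recalibration because neural signals drift over time.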

Musk has emphasised that Neuralink’s ultimate goal is to “help humanity keep pace with artificial intelligence” while addressing neurological disorders such as paralysis and epilepsy.

Following Neuralink’s breakthrough, Beijing Xinzhida Neurotechnology, a Chinese state-backed company, developed its own brain-computer interface (BCI) implant called Neucyber.

According to the state-run Xinhua News Agency, the device was tested on a monkey and enabled it to control a robotic arm with only its thoughts. The agency also highlighted that Neucyber was "independently developed" and represents China's first "high-performance invasive BCI".

Most recently, Shanghai-based NeuroXess announced that it had successfully implanted patients with its self-developed invasive devices, allowing them to carry out "dialogue" with smart devices and control them using their minds.

In recognition of its significance, the country's Ministry of Industry and Information Technology classified BCI technology as an important "cutting-edge emerging technology".

This growing recognition is mirrored in a recent report by American multinational investment bank Morgan Stanley, which puts the potential BCI market in the US alone at $400 billion, underscoring the technology's immense commercial promise.

Since these announcements, the debate around BCIs has grown, raising questions about their feasibility and long-term implications.

While these technologies offer life-changing possibilities, particularly for individuals with disabilities, they are also accompanied by regulatory issues and moral dilemmas concerning their broader societal impact.

Read More

Elon Musk's Neuralink dilemma: Decoding minds, challenging ethics

What about “free will”?

Prof. Ahmet Dag, a scholar from Bursa Uludag University specialising in the philosophy of religion and its intersection with technology, provides a critical perspective on the implications of these technologies.

"Both (AI scientist) Ray Kurzweil and Elon Musk claim that their envisioned work on the fusion of the human mind and machines, referred to as 'technological singularity,' is not intended to control the human mind but to enhance it,” he tells TRT World.

“Their rationale is based on the possibility that artificial intelligence could surpass human intelligence. However, by its very nature, such a technology could also lead to controlling the human mind,” he adds.

Technological singularity, as Dag describes, refers to a theoretical point where machine intelligence surpasses human intelligence, leading to irreversible changes in society and human identity.

“The fusion of humans with machines will primarily raise issues of agency, particularly concerning the human domains of 'free will' and 'responsibility',” says Prof. Dag.

This integration prompts further questions about where the will resides – in the machine or the human – and which entity bears responsibility for thoughts and actions.

Throughout history, Dag notes, the understanding of free will has shifted from the spiritual to the biological, as posited by thinkers like Charles Darwin and Francis Galton, pioneers of evolutionary and hereditary studies.

"Later, neurologists argued that free will is a chemical and electrical function of the brain. With advancements in artificial intelligence, free will has been reduced to an algorithmic level."

He explains further, "We are currently in a process where biological and computational/algorithmic domains intersect. Consequently, we are evolving toward a phase where free will and responsibility are transitioning from human-centric paradigms to mechanical ones."

This change, he warns, "weakens the essence of being 'human.' Rather than beings characterised by free will, humans are increasingly becoming entities governed by algorithms or living by choosing from options presented by algorithms. Humans tethered to machines could be even more dramatically isolated from free will and responsibility with singularity technologies."

To provide deeper context, Dag draws parallels across various historical eras.

"During the classical era, humans lived in a world determined by divine will,” he says, referring to the strong influence of religions on human actions. “In the modern era, humanity established itself as the central power of determination.”

However, he feels that algorithms are taking a central role in shaping decisions during today’s cybernetic era.

“Technological advancements could lead to a merging of humans and machines, creating a reality where both divine will and human autonomy are diminished," he adds.

Privacy, equity, and redefining humanity

Brain-computer interfaces also raise serious privacy concerns. Devices capable of accessing neural data, if misused, could lead to manipulation or exploitation of deeply personal information.

These risks are further heightened by regulatory gaps in protecting brain data, leaving vulnerabilities unaddressed.

American scientist William A. Haseltine explains that the ability of these devices to access a person's neural data opens the door to privacy violations and manipulation.

A review in the Cureus Journal of Medical Science underscores the need for stronger frameworks to address these risks, including preventing unauthorised exploitation of neural data and ensuring its integrity.

Read More

The AI conundrum: From ‘should we regulate’ to ‘how should we regulate’

The high costs associated with BCIs also raise problems of accessibility and equity.

Haseltine cautions that these devices, despite their promise, risk becoming tools accessible only to the wealthy, thereby widening the gap between socioeconomic classes.

The science journal Nature examines how this technology could deepen societal divides by offering cognitive and physical enhancements to those with resources while excluding marginalised populations.

Moreover, these technologies challenge long-held views of human identity.

"Technologies aimed at altering human nature introduce humanity to a new phase of ontological, cosmological, and theological discussions. It is a reality that radical interventions in the God-Human-Nature equation often bring about destructive processes.," says Prof. Dag.

Read More

What's AGI, the AI frontier that promises super-intelligence in machines?

Protection of ‘neurorights’

To address these challenges, neuroethics experts emphasise the importance of ethical oversight. They point to countries like Chile, which has amended its constitution, and Spain, which has taken the lead by introducing measures to protect neurorights – a vital step in ensuring mental autonomy.

In Spain, the "Digital Rights Charter" establishes clear guidelines requiring neural technologies to preserve individual control over identity, sovereignty, and self-determination.

The charter emphasises the importance of protecting the security and confidentiality of neural data, promoting ethical practices in its development, and regulating technologies that may impact physical or mental integrity.

Collaborative approaches among policymakers, ethicists, and technologists are deemed essential to address challenges such as data misuse, equitable access, and preserving human dignity.

“We must manage the trajectory of new technologies in a healthy and ethical manner. A framework must be established to preserve essential human qualities such as free will, responsibility, and privacy. New technologies should not be developed in isolation from ethical considerations,” Prof. Dag says.
