Imagine a future in which a wearable device tells advertisers when you’re in the mood for chocolate, or lets your employer know when you’re not paying attention at work. Or where a medical implant that’s supposed to save your life ends up being used against you in court.
These are some of the scenarios that people in the emerging field of neural privacy are worried about — and they say some are already closer to reality than you might think.
As tech companies and scientists invest in technology that interacts with our brains, some experts say these concerns are overblown, and that we’re far from being able to map moods and thoughts in a meaningful way.
Others, however, say brain data is the next frontier of privacy, and we need to pass laws to protect our brain data now.
“There are obviously a lot of bad actors out in the world that are going to try to use these devices for really worrying purposes,” Jared Genser, a human rights lawyer and co-founder of the Neurorights Foundation, told As It Happens host Nil Köksal.
What is neural privacy?
Neurotechnology is tech that interacts with our brains or nervous systems. It can largely be broken down into two categories — invasive, like implants, and non-invasive, like wearables.
The consumer sphere is dominated by wearables. Think headbands that monitor your state of relaxation to help you meditate, and hats and headsets that measure fatigue to reduce workplace accidents.
Big companies like Snapchat, Meta and Apple are also exploring the neurotech space, with the latter having patented earbuds that measure the brain’s electrical activity.
Invasive neurotech, meanwhile, is mostly limited to the medical sphere. There’s deep brain stimulation, which uses wires to send signals to the brain to help manage the symptoms of neurological disorders like Parkinson’s disease. Brain implants, placed surgically, can send electrical pulses to the brain to block seizures for patients with drug-resistant epilepsy. And brain-computer interfaces allow people with limited mobility to control robotic limbs.
Some companies are already working to bring invasive neurotech into the consumer sphere. In January, the first human patient received an implant from Elon Musk’s brain-computer interface company, Neuralink, which he later used to play Mario Kart with his mind.
“What’s coming is both incredibly exciting as well as daunting,” Genser said.
With these advances in neurotech comes a rise in “neurorights” advocacy.
The Neurorights Foundation, born from a three-day academic workshop at Columbia University in 2017, advocates for legislation to protect the information inside our brains.
They’ve had some success. Last week, California amended its existing consumer privacy legislation to include neural data.
Colorado enacted similar legislation in April, and Minnesota is currently considering a bill to enshrine the right to mental privacy.
Chile became the first country to amend its constitution to protect “mental integrity” and neural data in 2021, and several other Latin American countries are considering similar moves.
What’s happening in Canada?
Neurorights are on Canada’s radar, too.
The federal Office of the Privacy Commissioner says it considers neural data to be a type of biometric information, which means it’s protected under the Personal Information Protection and Electronic Documents Act.
Last fall, the office launched a public consultation on new draft guidance on biometric technologies, which it expects to release in the coming months.
“My office will continue to work with our global counterparts to identify ways to promote and protect the fundamental privacy rights of our citizens, while also allowing innovation to flourish in support of the public interest,” privacy commissioner Philippe Dufresne said in an emailed statement.
Health Canada is also working with experts to draft guidelines on the use of neurotech. Dr. Judy Illes, a professor of neurology at the University of British Columbia and director of Neuroethics Canada, is part of the team putting those together.
She says her team’s recommendations, which will be published soon, focus less on imposing laws and regulations, and more on developing a framework of shared values to guide work in this field.
“It’s good to put in practice good frameworks for guiding good innovation. What we don’t want to do is stop it or prevent it from occurring because now smart, well-intentioned researchers and engineers and neuroscientists are getting nervous about what might happen if they overstep.”
Not everyone buys the hype over neurotech
Graeme Moffat, a senior fellow at the University of Toronto’s Munk School of Global Affairs and Public Policy, agrees. He’s been working with Illes on the Health Canada draft guidelines.
He’s also worked for decades in the field of neurotech, most recently as chief scientist at the Canadian company Interaxon, and before that, the medical technology company Oticon.
His experience in the field, he says, has led him to conclude that fears about consumer technology are “way, way overblown.”
“Ethicists are really, you know, dining out on the worry, and the neurotechnology start-ups are benefiting from the hype,” he said.
The non-invasive neurotech devices on the market right now monitor brain waves or electrical signals — information that, he says, can only reveal the user’s “gross mental state,” like whether someone is relaxed or alert, “and not even very reliably.”
It’s the kind of information, he says, that can be gleaned more reliably from more commonplace technology, like surveillance cameras and smartphones, which he says we should be “far more concerned about.”
“The strongest predictor of future behaviour is past behaviour. So if someone is recording your behaviour all of the time, they don’t need to get inside your head to know what you’re going to do or what you’re thinking,” he said.
But Genser says private companies cannot be trusted to self-regulate.
In April, the Neurorights Foundation released a report analyzing the privacy policies and user agreements of 30 companies that sell consumer neurotechnology products.
It found all but one could access the data their devices collect and transfer it to third parties, fewer than half allow users to request that their data be deleted, and only three anonymize and encrypt the data they collect.
If data is created, someone will use it: expert
Jennifer Chandler, a law professor at the University of Ottawa who studies biomedical science and technology, says she understands why some people in the tech industry think this issue is exaggerated.
“But I also think they dismiss the potential uses of these technologies,” she said.
Just because something doesn’t work well doesn’t mean people won’t use it, she said. And whenever data is created, she says, someone will inevitably use it — or misuse it — for unintended purposes.
Law enforcement in India has already used brain-based lie detector tests during interrogation of suspects. The idea is to see if a suspect’s brain lights up in recognition when told details of a crime.
“You could happen to know something about that stimulus for a totally different reason, which would then lead to a false positive,” she said.
There was also a case in Ohio, in 2017, in which pacemaker data was deemed admissible as evidence against a suspect in an arson case. It’s not unreasonable, Chandler says, to presume data from an implanted brain device could be used in a similar way.
“I think it’s worthwhile getting ahead of issues,” Chandler said.
“I don’t think there’s much harm, and I think [there’s] a lot of good, in trying to dig in and anticipate where the field might go and what you do with that information.”