On the Mind: What We Know About the Brain-Computer Interface


This column, On the Mind, is a series about the latest in cognitive science and neuroscience-related research that applies to our everyday lives. This biweekly series is for those interested in cutting-edge findings about the practical side of habits, memories, multitasking and the human-brain interface. What are the recent studies, and what is the context? See what science says and how you can apply it to your life.


Terms such as “artificial intelligence,” “Internet of Things” and “brain-computer interface” can unsettle even the most tech-savvy consumers. Silicon Valley pros, including celebrity-status entrepreneurs such as Elon Musk, see the prospect of our minds connecting directly with machines, and even with one another, as a coming reality. Researchers have made progress during the last 50 years, but how close are we, really?

Studies Say

In May alone, more than a dozen publications explored new ways the brain-computer interface (BCI) could help patients navigate spaces or recover motor skills. Some groups are looking for simple ways to build BCI setups for at-home experimentation. Others are conducting cutting-edge university research to integrate BCI with prosthetic limbs and wheelchairs.

Interestingly, the first BCI test came in 1969, when researcher Eb Fetz at the Center for Sensorimotor Neural Engineering at the University of Washington showed that monkeys could use brain signals to control the needle of a dial and earn a food pellet as a reward. Researchers at the center are still delving deeply into the possibilities of controlling objects, limbs and computers with the brain.

“Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories,” said James Wu, a bioengineering researcher at the University of Washington.

“Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.”

Key Takeaways

In the meantime, we can learn a few facts about BCI and become more comfortable with the state of progress.

1. The technology is still far from everyday use.

Most BCI prototypes are slow and far less capable than what humans can do easily. Scientists are trying to get there, but it’ll take some time. Researchers in Toronto, for example, are studying the differences between fine movements on the left-hand and right-hand sides of the body.

Professors in Germany, Austria, Spain and the United Kingdom are working on electrode-based BCI software that would be friendlier to at-home users than the expensive setups tailored to the research needs of scientists and engineers. They believe the system could include apps for spelling, web access, entertainment, artistic expression and environmental control. In addition, Duke University researchers working with a similar device are developing ways to make it more accurate and less error-prone.

2. At the same time, BCI is beginning to help people who are paralyzed.

At the University of Pittsburgh, a paralyzed man learned how to use a mind-controlled robotic arm. Similarly, Stanford University researchers are helping paralyzed patients use a tablet wirelessly to type at high speeds and with high accuracy.

To help disabled people in a mobile world, North Dakota State University computer scientists are studying phone interfaces that could ease the transition once BCI reaches mobile devices. Around the world, Australian researchers note, teams are developing ways to use BCI to help people with blindness, seizures and memory problems.

3. In the meantime, researchers are still working out major kinks.

Professors in China and Germany are finding that some controls — like those managed by a flicker stimulus — can cause fatigue and even seizures and are trying to overcome those barriers by placing a stimulus in the peripheral field of vision. Researchers in Philadelphia are trying to overcome this same drawback by using functional near-infrared spectroscopy (rather than the typical EEG pulses) to signal upper and lower limb movement. So far, it seems fNIRS works best with tasks done by hands, such as tapping fingers. Scientists at the University of Essex in the UK are studying ways for users to target particular objects more accurately. They’re testing three prototypes now.

4. Some impulses can be sent back to the body.

On the other side of the technology, researchers are exploring ways electrodes can deliver sensory feedback to patients, which could help with rehabilitation and prosthetics. Wu, mentioned above, was part of a seven-person U.S. team that in 2016 studied responses to brain stimulation that guided patients’ hands to move in certain directions. The study was one of the first demonstrations of using brain stimulation to direct motor tasks and record feedback.

5. The biggest current studies focus on sight and hearing.

Cochlear implants, which provide hearing to more than 300,000 people worldwide, are the most widespread bionic implants. Now human clinical trials of bionic eyes are underway for those with severe vision impairments. Several patients have had surgery at the University of Oxford, for example, where they received a tiny electronic chip in the back of the eye. The device is connected to a computer under the skin by the ear, which is powered by a magnetic coil that looks similar to a hearing aid.

These devices are still limited, though. Cochlear implants, for example, help with speech but often distort music. Similarly, bionic eyes restore vision, but often at low resolution. Improvements are on the way.

Overall, BCI research is a vast and growing field, and scientists themselves are still working out its next steps and applications. A research team in North Carolina and South Korea, for example, studied a new type of brain-computer interface, a hybrid BCI, which addresses the limitations of conventional BCI systems. They reviewed more than 30 journal articles to gauge the satisfaction, effectiveness and efficiency needed for future research. The next steps? Better ergonomics and design, they say.

Image: Fondazione Santa Lucia, Flickr, CC-BY

Carolyn Crist is a freelance health and science journalist for regional and national publications. She writes the Escape Artist column for Paste Travel, On the Mind column for Paste Science and Stress Test column for Paste Health.