Dawn M. Taylor

The researcher talks about decoding brain signals to bring sight, movement and sensation to the disabled.

Dawn M. Taylor is a research scientist at the Cleveland Functional Electrical Stimulation Center, a consortium whose partners are Case Western Reserve University, the Cleveland VA Medical Center and MetroHealth Medical Center. She is a co-author of Brain-Computer Interfaces: An International Assessment of Research and Development Trends (Springer, 2008), which discusses how years of research are beginning to produce practical tools for the disabled.

Why is research on brain-computer interfaces taking off now? Back in the early 1970s, University of Washington professor Eberhard Fetz connected individual neurons from a monkey's brain to a voltmeter. He used a simple analog circuit to convert the firing rate of the recorded neuron into a voltage. If the monkey changed the neuron's firing rate and moved the voltmeter needle to the right, for example, it got a reward.

Now that computer capacity has greatly expanded, we have the ability to digitize and process many individual neural signals simultaneously and apply more sophisticated mathematical decoding algorithms to those signals.
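As a rough illustration of what "decoding algorithms" means here, a common starting point in the literature is a linear decoder that maps a vector of simultaneously recorded firing rates to an intended movement, with weights fit from a calibration session. The sketch below is hypothetical and simplified (synthetic data, ordinary least squares), not the FES Center's actual method:

```python
import numpy as np

# Hypothetical linear decoder: map firing rates of N neurons to a
# 2-D intended velocity. In practice the weights are fit from a
# calibration session; here we use synthetic data for illustration.

def fit_decoder(rates, velocities):
    """Least-squares weights W such that velocities ~ rates @ W."""
    W, *_ = np.linalg.lstsq(rates, velocities, rcond=None)
    return W

def decode(rates, W):
    """Predict intended velocity from a sample of firing rates."""
    return rates @ W

# Synthetic calibration data: 100 samples of 16 neurons' firing rates,
# with intended velocities generated by an assumed true linear mapping.
rng = np.random.default_rng(0)
true_W = rng.normal(size=(16, 2))
rates = rng.poisson(10.0, size=(100, 16)).astype(float)
velocities = rates @ true_W

W = fit_decoder(rates, velocities)
predicted = decode(rates, W)
```

Real systems layer considerably more on top of this (spike sorting, temporal filtering such as Kalman decoding, and adaptation as the brain learns), but the core idea of fitting a mapping from many digitized neural signals to intent is the same.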

What are you doing at the FES Center? The goal at the center is to restore movement and function to people with paralysis from spinal cord injury, stroke or other neurological disorders. Movement is restored by applying low levels of electrical current to the peripheral nerves to activate paralyzed muscles. My particular role is to decode a person's intended arm and hand movements from the brain. Basically, we are reconnecting the brain to the muscles so people can control their paralyzed limbs just by thinking about doing so. Intended movements can also be used to control other technologies, such as prosthetic limbs, assistive robots or a computer mouse.

What needs to be done to really advance the state of the art? Everyone is making small, incremental improvements in algorithms. But we need to be able to acquire the brain signals in a way that doesn't take a whole laboratory of technicians, so that this technology can be used in the real world.

How do you acquire these signals, and what's difficult about it? The least invasive method is with external surface EEGs, where sensors are stuck to the scalp with conductive paste. EEGs provide the lowest-resolution picture of what is going on in the brain because the brain signals get blurred as they pass through the skull and scalp. However, the video game industry is pushing the development of "dry" sensors embedded in headbands that anyone can easily slip on. Their reliability has not been proven yet, but at least that technology is moving forward.

Sensors can also be implanted under the scalp, embedded in the skull or placed on top of the brain. To detect the activity of individual neurons, you need to use very tiny electrodes inserted a few millimeters into the brain.

We'd like to be able to record the same neurons for decades, so the brain can learn how to fire those neurons to control a device. But if these tiny sensors move even a little bit, they pick up different cells. Then the brain has to relearn how to make these new neurons fire in order to control the same device.

What's an example of something that's been commercialized? At the FES Center, we developed a hand-grasp system for people with spinal cord injuries who can't move their hands. That was commercialized and is now in use by several hundred people worldwide.

Where will your research lead over five to 10 years? With all the advancements in technology miniaturization and wireless communication, we should be able to eventually shrink the racks of equipment we use in the lab and make all the processors small enough to carry around on a wheelchair or even small enough to implant in the body. That's a little less glamorous than what draws people into the field, which is, "Oh, I get to read brain signals. That's a cool, fun challenge." But the practical aspects of taking the technology out of the lab and into the real world -- that's where the needs are.

You've talked about the movement of signals from the brain to a computer. How about signals that move the other way, into the brain? With a spinal cord injury, you lose your ability to control your muscles, but you also may lose the sensation of touch and of where your limbs are in space. Other injuries or diseases can cause blindness, deafness or other sensory losses. Instead of simply recording from the brain, we can also use electrode technologies to stimulate the brain and bypass the breaks in the neural pathways that bring sensory information in.

For example, there are a number of places where you can tap into the visual pathway. Where you stimulate depends on where the damage is.

One stimulation system uses 96 little prongs in a 4-by-4 millimeter space in the cortex, and these prongs send out electrical currents that activate the nearby neurons. Even if the eyes are damaged, the artificially stimulated neurons can still fire in the same way that they would have fired if a photon of light had hit a healthy retina. This type of system would work by taking an image from a camera you'd wear on your glasses and processing that image into the electrical stimulation instructions needed to produce that image in the brain.
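The image-to-stimulation step described above can be sketched in a few lines. This is purely illustrative and not the actual pipeline of any real device: it assumes a hypothetical 10-by-10 electrode grid and an arbitrary current range, and simply averages camera-frame brightness over a coarse grid to get one stimulation amplitude per electrode:

```python
import numpy as np

# Illustrative only: reduce a grayscale camera frame to one stimulation
# amplitude per electrode. Grid size and current range are assumptions,
# not parameters of any real implant.

GRID = 10                 # assumed 10x10 electrode grid
I_MIN, I_MAX = 0.0, 96.0  # hypothetical stimulation amplitudes

def frame_to_stimulation(frame):
    """frame: 2-D grayscale array (0-255) -> GRID x GRID currents."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Average brightness over coarse blocks, one block per electrode.
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    brightness = blocks.mean(axis=(1, 3)) / 255.0  # in [0, 1]
    # Brighter regions of the visual field -> stronger stimulation.
    return I_MIN + brightness * (I_MAX - I_MIN)

# Example: a frame whose right half is bright.
frame = np.zeros((120, 160))
frame[:, 80:] = 255
currents = frame_to_stimulation(frame)
```

A real system would also have to account for electrode-to-phosphene mapping, safe charge limits and the nonlinear way stimulation intensity relates to perceived brightness, which is why the processing step the interview mentions is a research problem in its own right.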

What are some nonmedical applications of brain-computer interfaces? These are the questions we try to avoid. Our colleagues can give an hourlong interview, and the little bit they say about the lie detector is the quote that gets published. But if you are looking to download the dictionary into your brain, or read your e-mail directly into your head, that's not going to happen anytime soon.

-- Interview by Gary Anthes, a former Computerworld national correspondent

Copyright © 2009 IDG Communications, Inc.
