
The Attention Machine

Written by Nick Matthews

A new brain-scanning technique could change the way scientists think about human focus.

Human attention is never stable, and the lapses cost us: lives lost when drivers space out, billions of dollars wasted on inefficient work, and mental disorders that hijack focus. Much of the time, people don’t realize they’ve stopped paying attention until it’s too late. This “flight of the mind,” as Virginia Woolf called it, is often beyond conscious control.

So researchers at Princeton set out to build a tool that could show people what their brains are doing in real time, and signal the moments when their minds begin to wander. And they’ve largely succeeded, a paper published today in the journal Nature Neuroscience reports. The scientists who invented this attention machine, led by professor Nick Turk-Browne, are calling it a “mind booster.” It could, they say, change the way we think about paying attention—and even introduce new ways of treating illnesses like depression.

Here’s how the brain decoder works: You lie down in a functional magnetic resonance imaging (fMRI) machine, similar to the MRI scanners used to diagnose disease, which lets scientists track your brain activity. Once you’re in the scanner, you watch a series of pictures and press a button when you see certain targets. The task is like a video game, the dullest video game in the world, really, which is the point. You see a face overlaid on an image of a landscape. Your job is to press a button if the face is female, as it is 90 percent of the time, but not if it’s male. And ignore the landscape. (There’s also a reverse task, in which you judge whether the scene is outdoors or indoors, and ignore the faces.)
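
For the curious, here is what that trial structure might look like in code. It is a sketch under assumptions, not the researchers’ software; the helper names and response rates are invented.

```python
import random

# Sketch of the task structure under assumed names and rates; this is
# illustrative, not the Princeton team's software.

P_FREQUENT = 0.90  # frequent "go" category (female faces): 90% of trials

def simulated_button_press(face):
    # Stand-in for a real subject: near-perfect on frequent trials, but
    # habit makes it hard to withhold on rare "male" trials (rates assumed).
    return random.random() < (0.98 if face == "female" else 0.30)

def simulate_block(n_trials=50):
    """Simulate one block: press for female faces, withhold for male."""
    errors = 0
    for _ in range(n_trials):
        # Each trial shows a face composited over a scene; in the face
        # task only the face's category matters.
        face = "female" if random.random() < P_FREQUENT else "male"
        pressed = simulated_button_press(face)
        # An error is a miss on a "female" trial or a press on a "male" one.
        if pressed != (face == "female"):
            errors += 1
    return errors / n_trials

print(f"error rate: {simulate_block():.0%}")
```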

[Image credit: Megan deBettencourt/Princeton]

To gauge attention from the brain, the researchers used a learning algorithm like the ones Facebook uses to recognize friends in photos. The algorithm can tell “Your Brain On Faces” from “Your Brain On Scenes.” Whenever you start spacing out, it detects more “scene” than “face” in your brain signal, and the program makes the faces you are watching grow dimmer. In turn, you have to focus harder to figure out what you’re seeing, and to succeed at the “game.” In the Princeton face-scene game, college students made errors 30 percent of the time.

If this were a test, they would have gotten a D.
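
In outline, the loop is simple: decode, compare, adjust, repeat. The Python sketch below illustrates the idea with stand-in functions for the scanner stream, the decoder, and the display; it is an approximation, not the team’s published implementation.

```python
import numpy as np

# Sketch of the closed feedback loop; the decoder, scanner stream, and
# display calls are stand-ins, not the published implementation.
# Decoder evidence is signed: +1 means "face," -1 means "scene."

def decode_face_vs_scene(volume):
    # Stand-in for the trained classifier (see the decoding sketch below);
    # here it just squashes a fake signal into the range [-1, 1].
    return float(np.tanh(volume.mean()))

def render_composite(face_opacity):
    # Stand-in for the display: re-blend the face/scene composite image.
    print(f"face opacity now {face_opacity:.2f}")

def run_feedback(volumes, alpha=0.6, gain=0.15):
    for volume in volumes:  # one fMRI volume arrives every couple of seconds
        evidence = decode_face_vs_scene(volume)
        # Scene evidence (negative) means attention is drifting, so the
        # face dims; sustained face evidence lets it brighten again.
        alpha = float(np.clip(alpha + gain * evidence, 0.2, 1.0))
        render_composite(face_opacity=alpha)

# Example: fake volumes whose signal drifts from "face" toward "scene."
run_feedback([np.full((4, 4, 4), v) for v in np.linspace(1.0, -1.0, 8)])
```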

[Image credit: Ken Norman/Princeton]

“Internal states are kind of ineffable,” says Turk-Browne, an associate professor of psychology at the Princeton Neuroscience Institute. “You may not know when you’re in a good or bad state. We wanted to see: If we give people feedback before they make mistakes, can they learn to be more sensitive to their own internal states?”

It turns out they can, Turk-Browne says. The key was a placebo condition: for some subjects, the pictures were controlled not by their own brains but by someone else’s, which, for them, amounted to meaningless jitter. Of the 16 subjects who got their own brain feedback, 11 said they felt they were making the pictures clearer by focusing, compared with four of the 16 who watched the placebo feedback. And only the people whose own brains drove the images’ dimming improved their ability to focus. Paying attention, in other words, is like learning basketball or French: good old-fashioned practice matters.

“I think what’s exciting about this finding,” explains Turk-Browne, “is the idea that certain aspects of cognition like attention are only partly consciously accessible. So, if we can directly access people’s mental states with real time fMRI, we can give them more information than they could get from their own mind.”

Neuroscientists have been reading brain patterns with computer programs like this for just over a decade. Machine-learning algorithms, like the ones Google and Facebook use to recognize images online, can crack the brain’s code, too, functioning essentially as software for reading brain scans. Given samples of neural patterns (your brain imagining faces, say, versus your brain picturing places), a decoder is trained to tell whether you are remembering a face (Jennifer Aniston, President Obama) or a location (the Hollywood sign, the White House). An earlier study from the memory lab of Ken Norman, a professor and co-developer of the attention tool, read these categories out of people’s brains as they freely recalled pictures they had studied earlier. Similar work has “decoded” what people see, attend to, learn, remember falsely, and dream. What’s new and remarkable is the speed: machines can now harness brain activity to drive what a person sees in real time.
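
To make that recipe concrete, a decoder of this kind can be trained with off-the-shelf machine-learning tools. The sketch below fits a linear classifier to simulated voxel patterns; the data, dimensions, and planted signal are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated stand-in for training data: one row per labeled brain volume,
# one column per voxel. Dimensions and planted signal are invented.
rng = np.random.default_rng(0)
n_volumes, n_voxels = 200, 1000  # real whole-brain data has ~40,000 voxels
X = rng.standard_normal((n_volumes, n_voxels))
y = rng.integers(0, 2, size=n_volumes)  # 0 = "scene" block, 1 = "face" block
X[y == 1, :50] += 0.8  # plant a category signal in 50 voxels

# A linear classifier over voxel patterns is the workhorse of MVPA.
clf = LogisticRegression(max_iter=1000)
print("decoding accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Once trained, the model can score each incoming volume in real time; the
# signed score is the face-vs-scene evidence a feedback loop would use.
clf.fit(X, y)
print("evidence for newest volume:", clf.decision_function(X[-1:]))
```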

“The idea that we could tell anything about a person’s thoughts from a single brain snapshot was such a rush,” Norman recalls of the early days, over a decade ago. “Certainly the kinds of decoding we are doing now can be done much faster.”


Here is how Princeton’s current scanner sees a human brain: it divides each brain image into around 40,000 little cubes called voxels, or 3-D pixels. This basic unit of fMRI is a cube of brain roughly 3 millimeters on a side. The neural pattern representing any mental state, from how you feel when you smell your wife’s perfume to suicidal despair, is spread across this matrix of voxels (a bookkeeping step sketched in code below). The same neural code for, say, Scarlett Johansson will represent her in your memory, in a phone call with her, or in your dreams.

The decoding approach, pioneered in 2001 by the neuroscientist James Haxby and colleagues at Princeton, is known technically as “multi-voxel pattern analysis,” or MVPA. This “decoding” is distinct from the more common, less sophisticated form of fMRI analysis that gets a lot of attention in the media, the kind that shows which parts of the brain “light up” when a person does a task, relative to a control. “Though fMRI is not very cheap to use, there may be a certain advantage of neurofeedback training, compared to pure behavioral training,” suggests Kazuhisa Shibata, an assistant professor at Brown University, “if this work is shown to generalize to other tasks or domains.”
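
For readers keeping count, here is that bookkeeping step in brief, with made-up shapes: the scanner’s 3-D grid is masked down to roughly 40,000 in-brain voxels, and each moment of brain activity becomes one row of numbers for a decoder to read.

```python
import numpy as np

# Illustrative shapes only: one 3-D grid of voxel intensities per time
# point, masked down to in-brain voxels and flattened for the decoder.
n_timepoints = 100
grid = (64, 64, 36)  # an assumed acquisition grid of 3 mm voxels
volumes = np.random.rand(n_timepoints, *grid)

# Fake brain mask keeping roughly 40,000 of the 147,456 grid cubes.
mask = np.random.rand(*grid) < 0.27

X = volumes[:, mask]  # shape: (100, ~40000); one row per moment in time
print(X.shape)
```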

That generalization is a big if. One caveat to the neurofeedback trend is that many “brain-training” tasks, including popular commercial games like Lumosity, which promise to improve brain function, are roundly criticized by neuroscientists: people trained on them often improve only at the games themselves. They don’t actually get better at paying attention, remembering things, or controlling mood more generally. As Johns Hopkins neuroscientist and memory expert David Linden points out in his recent book, Touch, physical exercise is one of the few interventions reliably shown to improve general cognition, far better than most “brain games.” So neurofeedback has a high bar to clear. That said, Shibata’s own work on vision, one of the few other successful examples of real-time fMRI, showed that visual learning can be driven by brain feedback.

Other experts note the Princeton team’s technical advance, but with some skepticism. “The setup for monitoring attentional states is impressive,” says Yukiyasu Kamitani, a pioneer of neural decoding at ATR Computational Neuroscience Labs and professor at Nara Institute of Science and Technology, “although the behavioral effects of neurofeedback they found are marginal.”

Let’s not get carried away just yet, in other words. But as the neurofeedback technique improves, it is likely to see wider use, and when it works, the ability to link brain patterns directly to behavior is unprecedented for human neuroscience.
