Do deep neural networks ‘see’ like you and I do?
Elements of computerized neural networks seem to respond in the same way as neurons in the brain’s visual system, researchers find.
Brian Donohue - 206.543.7856, firstname.lastname@example.org
Computer components designed to recognize images behave surprisingly like neurons in the brain, researchers at the University of Washington School of Medicine report.
“We found that deep neural networks used for recognizing images actually have single units within them that respond in remarkably similar ways as do neurons within the visual system of the brain,” said lead author Dean A. Pospisil, a neuroscience doctoral candidate in the laboratory of Wyeth Bair.
Bair and Anitha Pasupathy, both UW associate professors of biological structure, co-authored the paper with Pospisil. It appeared in the journal eLife in December.
Artificial (computer-based) neural networks are made up of layers of “neurons” that receive input signals from neurons in the preceding layer, process those inputs using mathematical algorithms, and send their outputs on to the neurons in the next layer, which repeat the process.
If the network’s final output is correct, or close to correct, feedback is sent back through the system to reinforce the computations that gave the correct response and inhibit those that were incorrect.
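The layered processing and feedback described above can be sketched in a few lines of code. This is a minimal illustration, not the network the researchers studied: each “neuron” sums its weighted inputs and applies a threshold, and the feedback step is a simple perceptron-style weight update that reinforces connections when the output is wrong.

```python
# Minimal sketch of layered "neurons" with feedback (illustrative only;
# not the model used in the study).

def layer_forward(inputs, weights):
    """Each output neuron sums its weighted inputs and thresholds the result."""
    return [1.0 if sum(w * x for w, x in zip(ws, inputs)) > 0 else 0.0
            for ws in weights]

def train_step(inputs, target, weights, lr=0.1):
    """Feedback: nudge the weights toward whatever produces the correct answer."""
    output = layer_forward(inputs, weights)[0]
    error = target - output
    weights[0] = [w + lr * error * x for w, x in zip(weights[0], inputs)]

# Toy task: learn logical OR (first input is a constant bias term).
weights = [[0.0, 0.0, 0.0]]
data = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
for _ in range(20):
    for x, t in data:
        train_step(x, t, weights)

print([layer_forward(x, weights)[0] for x, _ in data])  # → [0.0, 1.0, 1.0, 1.0]
```

Real image-recognition networks use many such layers and a more sophisticated feedback rule (gradient descent), but the principle — adjust the internal computations until the output matches the right answer — is the same.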
Although these networks are far less complex than the brain, they can be trained to solve a variety of problems, such as how to win at chess. But just how these networks “learn” and how closely they mimic the problem-solving processes of the brain is unclear.
In their study, the UW researchers studied an artificial neural network that is modeled on a structure in the brain called the ventral visual stream. In the brain, neurons within this structure process signals from the eye. As these signals move from neuron to neuron through the ventral visual stream, individual neurons respond to progressively more complex elements of an image. First they respond to patches of dark and light, and then later to elements such as edges and shapes, until finally an object in an image is categorized, such as being that of a cat or a car. That information is sent on for processing in other brain areas.
In their study, the researchers focused on a particular spot in the ventral visual stream called V4. Neurons in this area specialize in recognizing the boundaries of objects. Many are specifically tuned to respond to boundaries that have a curve and are oriented in a particular direction.
To understand whether nodes in the computerized neural network behaved similarly to neurons previously recorded in area V4 in a macaque’s brain, the researchers presented exactly the same visual stimuli from the macaque experiments to single nodes in the network.
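The comparison logic can be sketched as follows. The response values below are invented for illustration (the study used actual macaque V4 recordings and real network activations), but the idea is the same: present the same ordered stimulus set to a network unit and to a recorded neuron, then correlate their response profiles.

```python
# Hedged sketch of the comparison: correlate a network unit's responses
# with a neuron's responses to the same stimuli. Values are hypothetical.
import math

def pearson(a, b):
    """Pearson correlation between two equal-length response profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical responses to the same six stimuli, in the same order.
v4_neuron_responses = [0.1, 0.9, 0.8, 0.2, 0.7, 0.1]
network_unit_responses = [0.2, 0.8, 0.9, 0.1, 0.6, 0.2]

similarity = pearson(v4_neuron_responses, network_unit_responses)
print(round(similarity, 2))  # → 0.96
```

A correlation near 1 means the unit prefers the same shapes the neuron prefers; a correlation near 0 means its preferences are unrelated.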
They found that individual nodes indeed behaved like individual V4 neurons, responding to the same specific shapes while ignoring others.
“It seemed like the behavior of single units of the neural network was converging on the behavior we saw in single neurons in V4,” Pospisil said.
For neuroscientists who study the brain, the findings suggest that artificial neural networks may be useful models for understanding how the brain works, he added; for computer scientists, they offer insight into how neural networks solve problems.
Funding for the research was provided by the National Science Foundation (CRCNS Grant IIS-1309725), National Institutes of Health (R01 EY-018839), NIH Office of Research Infrastructure Programs (RR-00166 to the Washington National Primate Research Center) and a Google Faculty Research Award.