

Comparing deep networks to the brain: Can they “see” as well as humans? | India News


BENGALURU: A new study from IISc's Centre for Neuroscience (CNS) has explored how well deep neural networks (machine learning systems inspired by the network of neurons in the human brain) compare with the human brain when it comes to visual perception.
The researchers note that because deep neural networks can be trained to perform specific tasks, they have played a critical role in helping scientists understand how our brains perceive what we see.
“Although deep networks have evolved significantly over the past decade, they are still nowhere near performing as well as the human brain in perceiving visual signals. In a recent study, SP Arun, associate professor at CNS, and his team compared several qualitative properties of these deep networks with those of the human brain,” IISc said in a statement.
Deep networks, while a good model for understanding how the human brain visualizes objects, work differently from it, IISc said, adding that while complex computation is trivial for these networks, certain tasks that are relatively easy for humans can be difficult for them to perform.
“In the current study, published in Nature Communications, Arun and his team tried to understand which visual tasks these networks can perform naturally by virtue of their architecture and which require further training. The team studied 13 different perceptual effects and discovered previously unknown qualitative differences between deep networks and the human brain,” the statement read.
One example, IISc said, is the Thatcher effect, a phenomenon in which humans find it easier to recognize changes to local features in an upright image, but find this difficult when the image is inverted.
Deep networks trained to recognize upright faces showed the Thatcher effect, unlike networks trained to recognize objects. Another visual property of the human brain, called mirror confusion, was also tested on these networks. For humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis. The researchers found that deep networks likewise show greater mirror confusion for vertically reflected images than for horizontally reflected ones.
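The mirror-confusion comparison can be sketched in a few lines: pass an image and its two mirror reflections through a feature extractor and compare distances in feature space. The sketch below is purely illustrative and not from the study; it uses a random image and a toy random-projection "feature extractor" where a real test would use activations from a trained network layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(img):
    # Toy stand-in for a deep network's feature extractor:
    # a fixed random linear projection of the flattened image.
    # (A real test would use activations from a trained CNN layer.)
    proj = np.random.default_rng(42).standard_normal((64, img.size))
    return proj @ img.ravel()

img = rng.standard_normal((32, 32))   # stand-in for a natural image
v_mirror = np.fliplr(img)             # reflection about the vertical axis
h_mirror = np.flipud(img)             # reflection about the horizontal axis

d_vertical = np.linalg.norm(features(img) - features(v_mirror))
d_horizontal = np.linalg.norm(features(img) - features(h_mirror))

# "Mirror confusion" would show up as d_vertical < d_horizontal:
# the image and its vertical reflection look more alike to the network.
print(d_vertical, d_horizontal)
```

With a trained network in place of the toy projection, averaging these two distances over many images is one simple way to quantify the asymmetry the study describes.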
“Another peculiar phenomenon of the human brain is that it focuses first on coarser details. This is known as the global advantage effect. For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of its leaves,” explains Georgin Jacob, first author and PhD student at CNS.
Surprisingly, he said, the neural networks showed a local advantage: unlike the brain, they focus first on the finest details of an image. So although these networks and the human brain perform the same object-recognition tasks, the steps they follow are very different.
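A standard probe for global versus local processing is a Navon figure: a large shape built from many copies of a smaller, different shape, so that "global" and "local" answers disagree. A minimal sketch of constructing one as a bitmap (a hypothetical layout, not the stimuli used in the paper):

```python
import numpy as np

# 5x5 bitmap of the letter "H" (the local element)
H = np.array([
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
])

# 5x5 layout of the letter "L" (the global shape)
L = np.array([
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
])

# Kronecker product places a small "H" wherever the big "L" layout
# has a 1: globally the figure reads as an L, locally as many Hs.
navon = np.kron(L, H)
print(navon.shape)  # (25, 25)
```

A global-first system (like the brain, per the study) would report "L" for this figure, while a local-first system would report "H"; comparing a network's answers on such stimuli is one way to expose the difference.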
Arun, the study’s lead author, says that identifying these differences can bring researchers closer to making these networks more brain-like. Such analyses can help researchers build more robust neural networks that not only perform better but are also immune to “adversarial attacks” designed to derail them.
