AI scientists are producing new theories about how the brain learns

Five decades of research on artificial neural networks have earned Geoffrey Hinton the nickname of “godfather” of headline-grabbing AI models, including ChatGPT and LaMDA. These models can write coherent (if uninspiring) prose, diagnose illnesses from medical scans, and drive autonomous vehicles. But for Dr. Hinton, creating better models was never the end goal. His hope was that, by developing artificial neural networks that could learn to solve complex problems, he could shed light on how the brain’s neural networks do the same thing.

Brains learn by being subtly rewired: some connections between neurons, known as synapses, are strengthened, while others are weakened. But because the brain has billions of neurons, millions of which could be involved in any single task, scientists have puzzled over how it knows which synapses to tweak and by how much. Dr. Hinton popularized a clever mathematical algorithm known as backpropagation to solve this problem in artificial neural networks. But it was long thought to be too unwieldy to have evolved in the human brain. Now, as AI models begin to look increasingly human-like in their abilities, scientists are asking whether the brain might do something similar after all.

Figuring out how the brain does what it does is no easy task. Much of what neuroscientists understand about human learning comes from experiments on small pieces of brain tissue or handfuls of neurons in a Petri dish. It is often unclear whether living, learning brains operate by scaled-up versions of these same rules, or whether something more sophisticated is going on. Even with modern experimental techniques, in which neuroscientists track hundreds of neurons at a time in living animals, it is hard to reverse-engineer what is actually happening.

One of the oldest and most prominent theories of how the brain learns is Hebbian learning. The idea is that neurons that fire at roughly the same time become more strongly connected; it is often summarized as “cells that fire together, wire together.” Hebbian learning can explain how the brain learns simple associations (think of Pavlov’s dogs, which salivated at the sound of a bell). But for more complicated tasks, such as learning a language, Hebbian learning seems too inefficient. Even with enormous amounts of training, artificial neural networks trained in this way fall far short of human performance.
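For readers who like to see the mechanics, a toy sketch of a Hebbian update appears below. Everything in it is illustrative: the rate-coded neurons, the learning rate and the Pavlovian toy data are this sketch’s assumptions, not taken from any particular study.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """One Hebbian step: the synapse w[i, j] is strengthened in
    proportion to the joint activity of presynaptic unit j and
    postsynaptic unit i ("fire together, wire together")."""
    return w + lr * np.outer(post, pre)

# Toy Pavlov example: a bell (input 0) repeatedly co-occurs with food
# (input 1) while the "salivate" output unit fires, so both synapses
# onto that unit are strengthened together.
w = np.zeros((1, 2))               # 1 output neuron, 2 input neurons
for _ in range(100):
    pre = np.array([1.0, 1.0])     # bell and food active at the same time
    post = np.array([1.0])         # salivation response fires
    w = hebbian_update(w, pre, post)

print(w @ np.array([1.0, 0.0]))    # the bell alone now drives the output
```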

Today’s major AI models are designed differently. To understand how they work, imagine an artificial neural network trained to detect birds in images. Such a model would be made up of thousands of synthetic neurons arranged in layers. Images are fed into the first layer of the network, which sends information about the content of each pixel to the next layer through the AI equivalent of synaptic connections. There, neurons may use this information to pick out lines or edges before sending signals on to the next layer, which might detect eyes or feet. This process continues until the signals reach the final layer responsible for making the big call: “bird” or “not bird.”
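A stripped-down sketch of that layered forward pass is below; the layer sizes are made up, and random weights stand in for a trained bird detector.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(image, weights):
    """Pass pixel values through successive layers; each layer's output
    becomes the next layer's input (pixels -> edges -> parts -> verdict)."""
    activity = image.ravel()           # first layer: raw pixel intensities
    for w in weights[:-1]:
        activity = relu(w @ activity)  # hidden layers: lines/edges, then eyes/feet
    return weights[-1] @ activity      # final layer: "bird" vs "not bird" scores

rng = np.random.default_rng(0)
# Illustrative sizes: 64 pixels -> 32 edge units -> 16 part units -> 2 outputs
weights = [rng.normal(0, 0.1, (32, 64)),
           rng.normal(0, 0.1, (16, 32)),
           rng.normal(0, 0.1, (2, 16))]

scores = forward(rng.random((8, 8)), weights)
print(scores)  # untrained scores for "bird" and "not bird"
```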

An integral part of this learning process is the error backpropagation algorithm, often known simply as backprop. If the network is shown an image of a bird but mistakenly concludes that it is not one, then once it realizes the mistake it generates an error signal. This error signal works its way backwards through the network, layer by layer, strengthening or weakening each connection so as to minimize future errors. If the model is shown a similar image again, the adjusted connections will lead it to correctly declare: “bird”.
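In code, that backward sweep might look something like the sketch below: a tiny two-layer network in which the error at the output is propagated back to adjust both sets of connections. The architecture, sizes and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
w1 = rng.normal(0, 0.1, (16, 64))   # pixels -> hidden features
w2 = rng.normal(0, 0.1, (1, 16))    # hidden features -> "bird" score
lr = 0.1

def train_step(image, label):
    global w1, w2
    x = image.ravel()
    h = np.tanh(w1 @ x)                  # forward pass
    y = 1 / (1 + np.exp(-(w2 @ h)))      # predicted probability of "bird"
    err = y - label                      # error signal at the output...
    err_h = (w2.T @ err) * (1 - h**2)    # ...propagated back, layer by layer
    w2 -= lr * np.outer(err, h)          # each connection is strengthened or
    w1 -= lr * np.outer(err_h, x)        # weakened to shrink future errors
    return float(y[0])

bird = rng.random((8, 8))                # stand-in for a photo of a bird
for _ in range(50):
    p = train_step(bird, 1.0)
print(p)  # now much closer to 1.0: the adjusted connections favour "bird"
```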

Neuroscientists have long been skeptical that backpropagation could work in the brain. In 1989, shortly after Dr. Hinton and his colleagues showed that the algorithm could be used to train layered neural networks, Francis Crick, the Nobel laureate who co-discovered the structure of DNA, published a takedown of the theory in the journal Nature. Neural networks using the backpropagation algorithm were biologically “unrealistic in almost every way,” he wrote.

For one thing, biological neurons send information mostly in one direction. For backpropagation to work in the brain, a perfect mirror image of each network of neurons would have to exist to carry the error signal backwards. In addition, artificial neurons communicate using signals of varying strengths. Biological neurons, by contrast, fire all-or-nothing spikes of fixed intensity, which the backprop algorithm is not designed to handle.

Still, the success of artificial neural networks has renewed interest in whether some kind of backprop happens in the brain. There have been promising experimental hints that it might. A preprint study posted in November 2023, for example, found that individual neurons in the brains of mice appear to respond to distinct error signals, one of the crucial ingredients of backprop-like algorithms long thought to be missing in living brains.

Scientists working at the boundary between neuroscience and AI have also shown that small adjustments to backprop can make it more brain-friendly. One influential study showed that the mirror-image network once thought necessary need not be an exact replica of the original for learning to occur (albeit more slowly for large networks), making the idea less implausible. Others have found ways to do away with a mirror network entirely. If artificial neural networks are given biologically realistic features, such as specialized neurons that can integrate activity and error signals in different parts of the cell, then backprop can be achieved with a single set of neurons. Some researchers have also modified the backprop algorithm so that it can process spikes rather than continuous signals.
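One such brain-friendlier variant, known in the literature as feedback alignment, sends the error backwards through a fixed random matrix rather than through a mirror copy of the forward weights. A minimal sketch of the idea (with sizes and rates assumed, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
w1 = rng.normal(0, 0.1, (16, 64))
w2 = rng.normal(0, 0.1, (1, 16))
B = rng.normal(0, 0.1, (16, 1))   # fixed random feedback matrix: not a mirror of w2
lr = 0.1

def fa_step(x, label):
    global w1, w2
    h = np.tanh(w1 @ x)
    y = 1 / (1 + np.exp(-(w2 @ h)))
    err = y - label
    # Exact backprop would route the error back through w2.T; feedback
    # alignment routes it through the unrelated fixed matrix B instead,
    # and the forward weights gradually come to "align" with it.
    err_h = (B @ err) * (1 - h**2)
    w2 -= lr * np.outer(err, h)
    w1 -= lr * np.outer(err_h, x)
    return float(y[0])

x = rng.random(64)
for _ in range(100):
    p = fa_step(x, 1.0)
print(p)  # learning still works despite the imperfect backward pathway
```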

Other researchers are exploring quite different theories. In a paper published in Nature Neuroscience earlier this year, Yuhang Song and colleagues at the University of Oxford presented a method that turns backprop on its head. In conventional backprop, error signals lead to adjustments in the synapses, which in turn cause changes in neuronal activity. The Oxford researchers proposed that the network could first change the activity of the neurons, and only then adjust the synapses to fit. They called this prospective configuration.
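The paper’s actual algorithm is more involved, but a heavily simplified sketch of the two-step idea, relax the activity first, update the synapses second, might look like this. The energy function, network sizes and rates here are this sketch’s assumptions, not the authors’:

```python
import numpy as np

rng = np.random.default_rng(3)
w1 = rng.normal(0, 0.1, (16, 64))
w2 = rng.normal(0, 0.1, (1, 16))
lr, relax = 0.1, 0.2

def prospective_step(x, target):
    global w1, w2
    drive = np.tanh(w1 @ x)       # feedforward drive from the input
    h = drive.copy()              # hidden activity, free to change
    # Step 1: with the output clamped to the target, let the hidden
    # activity settle into a pattern consistent with that outcome.
    for _ in range(20):
        e_out = target - w2 @ h   # mismatch at the clamped output
        e_h = h - drive           # mismatch with the input drive
        h += relax * (w2.T @ e_out - e_h)
    # Step 2: only now adjust the synapses to fit the settled activity.
    w2 += lr * np.outer(target - w2 @ h, h)
    w1 += lr * np.outer((h - drive) * (1 - drive**2), x)

x, target = rng.random(64), np.array([1.0])
for _ in range(50):
    prospective_step(x, target)
print(w2 @ np.tanh(w1 @ x))  # the output has moved towards the target
```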

When the authors tested prospective configuration on artificial neural networks, they found that the networks learned in a much more human-like way (more robustly and with less training) than models trained with backprop. They also found that the approach offered a much closer match to human behavior on other, very different tasks, such as one that involved learning to move a joystick in response to various visual cues.

Learning the hard way

However, for now, all of these theories are just that: theories. Designing experiments to show whether backprop, or any other algorithm, is at work in the brain is surprisingly hard. To Aran Nayebi and his colleagues at Stanford University, this seemed like a problem that AI could solve.

The scientists used one of four different learning algorithms to train more than a thousand neural networks on a variety of tasks. They then monitored each network during training, recording neuronal activity and the strength of synaptic connections. Dr. Nayebi and his colleagues then trained a supervisory metamodel to infer the learning algorithm from the recordings. They found that the metamodel could determine which of the four algorithms had been used from recordings of just a few hundred virtual neurons, sampled at intervals during learning. The researchers hope that such a metamodel could do something similar with equivalent recordings from a real brain.
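A cartoon of that metamodel approach is sketched below, with fabricated “recordings” standing in for real training trajectories and an off-the-shelf classifier standing in for the study’s metamodel; none of the numbers come from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
ALGOS = ["backprop", "feedback alignment", "hebbian", "prospective"]

def fake_recording(algo_id):
    """Stand-in for the activity and synaptic-strength statistics recorded
    from a few hundred units at several points during training; here each
    algorithm simply leaves a slightly different statistical signature."""
    return rng.normal(0, 1, 200) + 0.5 * algo_id

# A labelled dataset of recordings from many separate training runs...
X = np.array([fake_recording(i % 4) for i in range(1000)])
y = np.array([ALGOS[i % 4] for i in range(1000)])

# ...on which the metamodel learns to infer the algorithm from the recording.
meta = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print(meta.score(X[800:], y[800:]))  # held-out accuracy of the metamodel
```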

Identifying the algorithm or algorithms that the brain uses to learn would be a great step forward for neuroscience. Not only would it shed light on how the body’s most mysterious organ works, it could also help scientists build new AI-powered tools to try to understand specific neural processes. It is unclear whether this could lead to better AI algorithms. For Dr. Hinton, at least, backprop is probably superior to anything happening in the brain.

© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com
