Five decades of research into artificial neural networks have earned Geoffrey Hinton the moniker of the Godfather of artificial intelligence (AI). Work by his group at the University of Toronto laid the foundations for today’s headline-grabbing AI models, including ChatGPT and LaMDA. These can write coherent (if uninspiring) prose, diagnose illnesses from medical scans and navigate self-driving cars. But for Dr Hinton, creating better models was never the end goal. His hope was that by developing artificial neural networks that could learn to solve complex problems, light might be shed on how the brain’s neural networks do the same.
Brains learn by being subtly rewired: some connections between neurons, known as synapses, are strengthened, while others are weakened. But because the brain has billions of neurons, millions of which could be involved in any single task, scientists have puzzled over how it knows which synapses to tweak and by how much. Dr Hinton popularised a clever mathematical algorithm known as backpropagation to solve this problem in artificial neural networks. But it was long thought to be too unwieldy to have evolved in the human brain. Now, as AI models are beginning to look increasingly human-like in their abilities, scientists are questioning whether the brain might do something similar after all.
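The credit-assignment problem described above can be made concrete with a toy example. The sketch below (not Dr Hinton's code; all names and values are illustrative) trains a tiny two-layer network with backpropagation: the chain rule traces the output error back through the network, telling every individual weight which way to move and by how much.

```python
import numpy as np

# Illustrative data: four inputs with three features each, and made-up targets.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Random starting weights for a network with one hidden layer of five units.
W1 = rng.normal(size=(3, 5)) * 0.5
W2 = rng.normal(size=(5, 1)) * 0.5

# Error before any learning, for comparison.
loss_before = float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2))

for step in range(500):
    # Forward pass: compute the network's prediction.
    h = np.tanh(x @ W1)
    pred = h @ W2
    err = pred - y
    # Backward pass: propagate the error to assign blame to each weight.
    grad_W2 = h.T @ err
    grad_h = err @ W2.T
    grad_W1 = x.T @ (grad_h * (1 - h ** 2))  # tanh derivative
    # Nudge every weight against its share of the error.
    W2 -= 0.05 * grad_W2
    W1 -= 0.05 * grad_W1

loss = float(np.mean(err ** 2))  # should be far smaller than loss_before
```

The key point is in the backward pass: no weight is updated blindly; each receives a gradient quantifying exactly how much it contributed to the mistake, which is the bookkeeping long thought too unwieldy for biological neurons.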
Working out how the brain does what it does is no easy feat. Much of what neuroscientists understand about human learning comes from experiments on small slices of brain tissue, or handfuls of neurons in a Petri dish. It is often not clear whether living, learning brains work by scaled-up versions of these same rules, or whether something more sophisticated is taking place. Even with modern experimental techniques, in which neuroscientists track hundreds of neurons at a time in live animals, it is hard to reverse-engineer what is really going on.
One of the most prominent and longstanding theories of how the brain learns is Hebbian learning, often summarised as "cells that fire together, wire together": a synapse strengthens when the neurons on either side of it are active at the same time.
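For contrast with backpropagation, Hebb's rule needs no global error signal at all. The sketch below (names and values are illustrative) updates each synapse using only the activity of the two neurons it connects:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """Strengthen each weight in proportion to the product of
    presynaptic and postsynaptic activity (Hebb's rule)."""
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # activity of three presynaptic neurons
post = np.array([1.0, 0.5])       # activity of two postsynaptic neurons
w = np.zeros((2, 3))              # synaptic weights, shape (post, pre)

w = hebbian_update(w, pre, post)
# Only synapses whose two neurons were active together have grown;
# the column for the silent presynaptic neuron stays at zero.
```

The rule is purely local, which is why it is biologically plausible; the open question the article raises is whether locality alone can account for the kind of coordinated, error-driven tuning that backpropagation provides.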
Source: The Economist