The human brain contains around 86 billion neurons, each joining with other cells to create trillions of connections called synapses.
The numbers are mind-boggling, but the way each individual nerve cell contributes to the brain’s functions is still an area of contention. A new study has overturned a hundred-year-old assumption about what exactly makes a neuron ‘fire’, suggesting new mechanisms behind certain neurological disorders.
A team of physicists from Bar-Ilan University in Israel conducted experiments on rat neurons grown in a culture to determine exactly how a neuron responds to the signals it receives from other cells.
To understand why this is important, we need to go back to 1907 when a French neuroscientist named Louis Lapicque proposed a model to describe how the voltage of a nerve cell’s membrane increases as a current is applied.
Once the voltage reaches a certain threshold, the neuron reacts with a spike of activity, after which the membrane’s voltage resets.
What this means is a neuron won’t send a message unless it collects a strong enough signal.
Lapicque’s equations weren’t the last word on the matter, not by far. But the basic principle of his integrate-and-fire model has remained relatively unchallenged in subsequent descriptions, today forming the foundation of most neuronal computational schemes.
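The integrate-and-fire idea is simple enough to sketch in a few lines of code. Below is a minimal simulation of the common “leaky” variant of the model; all parameter values and function names here are illustrative choices, not figures from Lapicque’s work or the new study.

```python
# A minimal sketch of an integrate-and-fire neuron (leaky variant).
# Parameter values are arbitrary, for illustration only.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate an input current; record a spike whenever the
    membrane voltage crosses the threshold, then reset."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Voltage leaks back toward rest while integrating the input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step)  # the neuron "fires"
            v = v_reset               # and its voltage resets
    return spike_times

# A weak current never reaches threshold; a stronger one fires repeatedly.
weak_spikes = simulate_lif([0.5] * 1000)
strong_spikes = simulate_lif([1.5] * 1000)
```

This captures the core claim of the classic model: a neuron stays silent unless the accumulated signal is strong enough to cross its threshold.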
According to the researchers, the lengthy history of the idea has meant few have bothered to question whether it’s accurate.
“We reached this conclusion using a new experimental setup, but in principle these results could have been discovered using technology that has existed since the 1980s,” says lead researcher Ido Kanter.
“The belief that has been rooted in the scientific world for 100 years resulted in this delay of several decades.”
The experiments approached the question from two angles – one exploring the nature of the activity spike based on exactly where the current was applied to a neuron, the other looking at the effect multiple inputs had on a nerve’s firing.
Their results suggest the direction of a received signal can make all the difference in how a neuron responds.
A weak signal from the left arriving with a weak signal from the right won’t combine to build a voltage that kicks off a spike of activity. But a single strong signal from a particular direction can result in a message.
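The contrast with classic summation can be made concrete with a toy comparison. This is an illustrative sketch of the two decision rules described above, not the researchers’ actual model: the classic rule sums all inputs, while the directional rule evaluates each direction on its own.

```python
# Toy comparison (illustrative only, not the study's model):
# classic spatial summation vs. a direction-sensitive rule.

def classic_fires(inputs, threshold=1.0):
    # Classic model: all incoming signals add together.
    return sum(inputs.values()) >= threshold

def directional_fires(inputs, threshold=1.0):
    # Direction-sensitive rule: each direction is judged alone.
    return any(signal >= threshold for signal in inputs.values())

weak_both = {"left": 0.6, "right": 0.6}
strong_left = {"left": 1.2, "right": 0.0}

classic_fires(weak_both)        # the old model predicts a spike
directional_fires(weak_both)    # weak inputs don't combine here
directional_fires(strong_left)  # one strong input suffices
```

Under the directional rule, the two weak signals that the classic model would add into a spike no longer do so, while a single strong signal from one side still fires the cell.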
This potentially new way of describing what’s known as spatial summation could lead to a novel method of categorising neurons, one that sorts them by how they compute incoming signals, or by how finely they resolve signals arriving from a particular direction.
Better yet, it could even lead to discoveries that explain certain neurological disorders.
It’s important not to throw out a century of wisdom on the topic on the back of a single study. The researchers also admit they’ve only looked at a type of nerve cell called pyramidal neurons, leaving plenty of room for future experiments.
But fine-tuning our understanding of how individual units combine to produce complex behaviours could spread into other areas of research. With neural networks inspiring future computational technology, identifying any new talents in brain cells could have some rather interesting applications.