Hidden Computational Power Found in the Arms of Neurons

The dendritic arms of some human neurons can perform logic operations that once seemed to require whole neural networks.

The information-processing capabilities of the brain are often reported to reside in the trillions of connections that wire its neurons together. But over the past few decades, mounting research has quietly shifted some of the attention to individual neurons, which seem to shoulder much more computational responsibility than was once thought possible.

The latest in a long line of evidence comes from scientists’ discovery of a new type of electrical signal in the upper layers of the human cortex. Laboratory and modeling studies have already shown that tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. But now it seems that individual dendritic compartments can also perform a particular computation — “exclusive OR” — that mathematical theorists had previously categorized as unsolvable by single-neuron systems.

“I believe that we’re just scratching the surface of what these neurons are really doing,” said Albert Gidon, a postdoctoral fellow at Humboldt University of Berlin and the first author of the paper that presented these findings in Science earlier this month.

The discovery underscores a growing need for studies of the nervous system to consider the implications of individual neurons as extensive information processors. “Brains may be far more complicated than we think,” said Konrad Kording, a computational neuroscientist at the University of Pennsylvania, who did not participate in the recent work. It may also prompt some computer scientists to reappraise strategies for artificial neural networks, which have traditionally been built on a view of neurons as simple, unintelligent switches.

The Limitations of Dumb Neurons

In the 1940s and ’50s, a picture began to dominate neuroscience: that of the “dumb” neuron, a simple integrator, a point in a network that merely summed up its inputs. Branched extensions of the cell, called dendrites, would receive thousands of signals from neighboring neurons — some excitatory, some inhibitory. In the body of the neuron, all those signals would be weighted and tallied, and if the total exceeded some threshold, the neuron fired a series of electrical pulses (action potentials) that directed the stimulation of adjacent neurons.
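The “dumb” neuron described above can be sketched in a few lines of code. This is a minimal illustration of the point-neuron idea, not a model from any of the studies discussed; the weights and threshold are made-up numbers.

```python
# A minimal sketch of the classic "point neuron": excitatory (positive)
# and inhibitory (negative) inputs are weighted, summed, and compared
# against a firing threshold. All values here are illustrative.

def point_neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs crosses threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With an inhibitory input active, the tally stays subthreshold:
print(point_neuron([1, 1, 1], [0.6, 0.5, -0.4], threshold=1.0))  # 0
# Without it, the neuron fires:
print(point_neuron([1, 1, 0], [0.6, 0.5, -0.4], threshold=1.0))  # 1
```

Note that in this model only the total matters: where on the cell each input arrives plays no role at all, which is exactly the simplification later work called into question.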

At around the same time, researchers realized that a single neuron could also function as a logic gate, akin to those in digital circuits (although it still isn’t clear how much the brain really computes this way when processing information). A neuron was effectively an AND gate, for instance, if it fired only after receiving some sufficient number of inputs.

Networks of neurons could therefore theoretically perform any computation. Still, this model of the neuron was limited. Not only were its guiding computational metaphors simplistic, but for decades, scientists lacked the experimental tools to record from the various components of a single nerve cell. “That’s essentially the neuron being collapsed into a point in space,” said Bartlett Mel, a computational neuroscientist at the University of Southern California. “It didn’t have any internal articulation of activity.” The model ignored the fact that the thousands of inputs flowing into a given neuron landed in different locations along its various dendrites. It ignored the idea (eventually confirmed) that individual dendrites might function differently from one another. And it ignored the possibility that computations might be performed by other internal structures.

But that started to change in the 1980s. Modeling work by the neuroscientist Christof Koch and others, later supported by benchtop experiments, showed that single neurons didn’t express a single or uniform voltage signal. Instead, voltage signals decreased as they moved along the dendrites into the body of the neuron, and often contributed nothing to the cell’s ultimate output.

This compartmentalization of signals meant that separate dendrites could be processing information independently of one another. “This was at odds with the point-neuron hypothesis, in which a neuron simply added everything up regardless of location,” Mel said.

That prompted Koch and other neuroscientists, including Gordon Shepherd at the Yale School of Medicine, to model how the structure of dendrites could in principle allow neurons to act not as simple logic gates, but as complex, multi-unit processing systems. They simulated how dendritic trees could host numerous logic operations, through a series of complex hypothetical mechanisms.

Later, Mel and several colleagues looked more closely at how the cell might be managing multiple inputs within its individual dendrites. What they found surprised them: The dendrites generated local spikes, had their own nonlinear input-output curves and had their own activation thresholds, distinct from those of the neuron as a whole. The dendrites themselves could act as AND gates, or as a host of other computing devices.

Mel, along with his former graduate student Yiota Poirazi (now a computational neuroscientist at the Institute of Molecular Biology and Biotechnology in Greece), realized that this meant that they could conceive of a single neuron as a two-layer network. The dendrites would serve as nonlinear computing subunits, collecting inputs and spitting out intermediate outputs. Those signals would then get combined in the cell body, which would determine how the neuron as a whole would respond.
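The two-layer picture can be made concrete with a toy example. The sketch below, with invented thresholds and an invented target function, treats each dendrite as a local coincidence detector whose output the soma then combines; it is an illustration of the idea, not the researchers’ model.

```python
# A minimal sketch of the "neuron as a two-layer network" idea: each
# dendrite thresholds its own local inputs (a nonlinear subunit), and
# the soma combines the dendritic outputs. Values are illustrative.

def subunit(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

def two_layer_neuron(a, b, c, d):
    # Layer 1: each dendrite fires only on coincident local input.
    dendrite1 = subunit([a, b], threshold=2)  # detects A AND B
    dendrite2 = subunit([c, d], threshold=2)  # detects C AND D
    # Layer 2: the soma fires if either dendrite spiked.
    return subunit([dendrite1, dendrite2], threshold=1)

# Computes (A AND B) OR (C AND D) -- sensitive to *where* inputs land:
print(two_layer_neuron(1, 1, 0, 0))  # 1
print(two_layer_neuron(1, 0, 1, 0))  # 0 (same total input, different locations)
```

The two calls at the end receive the same total amount of input, yet only the one whose inputs cluster on a single dendrite makes the cell fire. A point neuron, which only sums, could not tell these two cases apart.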

Whether the activity at the dendritic level actually influenced the neuron’s firing and the activity of neighboring neurons was still unclear. But regardless, that local processing might prepare or condition the system to respond differently to future inputs or help wire it in new ways, according to Shepherd.

Whatever the case, “the trend then was, ‘OK, be careful, the neuron might be more powerful than you thought,’” Mel said.

Shepherd agreed. “Much of the power of the processing that takes place in the cortex is actually subthreshold,” he said. “A single-neuron system can be more than just one integrative system. It can be two layers, or even more.” In theory, almost any imaginable computation might be performed by one neuron with enough dendrites, each capable of performing its own nonlinear operation.

In the recent Science paper, the researchers took this idea one step further: They suggested that a single dendritic compartment might be able to perform these complex computations all on its own.

Unexpected Spikes and Old Obstacles

Matthew Larkum, a neuroscientist at Humboldt, and his team started looking at dendrites with a different question in mind. Because dendritic activity had been studied primarily in rodents, the researchers wanted to investigate how electrical signaling might be different in human neurons, which have much longer dendrites. They obtained slices of brain tissue from layers 2 and 3 of the human cortex, which contain particularly large neurons with many dendrites. When they stimulated those dendrites with an electrical current, they noticed something strange.

They saw unexpected, repeated spiking — and those spikes seemed completely unlike other known kinds of neural signaling. They were particularly rapid and brief, like action potentials, and arose from fluxes of calcium ions. This was noteworthy because conventional action potentials are usually caused by sodium and potassium ions. And while calcium-induced signaling had been previously observed in rodent dendrites, those spikes tended to last much longer.

Stranger still, feeding more electrical stimulation into the dendrites lowered the intensity of the neuron’s firing instead of increasing it. “Suddenly, we stimulate more and we get less,” Gidon said. “That caught our eye.”

To figure out what the new kind of spiking might be doing, the scientists teamed up with Poirazi and a researcher in her lab in Greece, Athanasia Papoutsi, who jointly created a model to reflect the neurons’ behavior.

The model found that the dendrite spiked in response to two separate inputs — but failed to do so when those inputs were combined. This was equivalent to a nonlinear computation known as exclusive OR (or XOR), which yields a binary output of 1 if one (but only one) of the inputs is 1.
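The behavior described above, spiking to either input alone but not to both together, can be captured by a response that is non-monotonic in the total drive. The toy function below is an illustration of that logic with invented thresholds, not the team’s biophysical model.

```python
# A minimal sketch of XOR via a non-monotonic dendritic response:
# enough drive produces a spike, but still more drive suppresses it
# ("we stimulate more and we get less"). Thresholds are illustrative.

def dendritic_xor(a, b):
    drive = a + b
    # Spike only within a window: above the lower threshold,
    # below the suppression level reached when both inputs arrive.
    return 1 if 1 <= drive < 2 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", dendritic_xor(a, b))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0: exclusive OR
```

A unit whose output only grows with its input can never produce this truth table, which is why XOR was long thought to require at least two layers of simple units.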

This finding immediately struck a chord with the computer science community. XOR functions were for many years deemed impossible in single neurons: In their 1969 book Perceptrons, the computer scientists Marvin Minsky and Seymour Papert offered a proof that single-layer artificial networks could not perform XOR. That conclusion was so devastating that many computer scientists blamed it for the doldrums that neural network research fell into until the 1980s.

Neural network researchers did eventually find ways of dodging the obstacle that Minsky and Papert identified, and neuroscientists found examples of those solutions in nature. For example, Poirazi already knew XOR was possible in a single neuron: Just two dendrites together could achieve it. But in these new experiments, she and her colleagues were offering a plausible biophysical mechanism to facilitate it — in a single dendrite.

“For me, it’s another degree of flexibility that the system has,” Poirazi said. “It just shows you that this system has many different ways of computing.” Still, she points out that if a single neuron could already solve this kind of problem, “why would the system go to all the trouble to come up with more complicated units inside the neuron?”

Processors Within Processors

Certainly not all neurons are like that. According to Gidon, there are plenty of smaller, point-like neurons in other parts of the brain. Presumably, then, this neural complexity exists for a reason. So why do single compartments within a neuron need the capacity to do what the entire neuron, or a small network of neurons, can do just fine? The obvious possibility is that a neuron behaving like a multilayered network has much more processing power and can therefore learn or store more. “Maybe you have a deep network within a single neuron,” Poirazi said. “And that’s much more powerful in terms of learning difficult problems, in terms of cognition.”

Perhaps, Kording added, “a single neuron may be able to compute truly complex functions. For example, it might, by itself, be able to recognize an object.” Having such powerful individual neurons, according to Poirazi, might also help the brain conserve energy.

Larkum’s group plans to search for similar signals in the dendrites of rodents and other animals, to determine whether this computational ability is unique to humans. They also want to move beyond the scope of their model to associate the neural activity they observed with actual behavior. Meanwhile, Poirazi now hopes to compare the computations in these dendrites to what happens in a network of neurons, to suss out any advantages the former might have. This will include testing for other types of logic operations and exploring how those operations might contribute to learning or memory. “Until we map this out, we can’t really tell how powerful this discovery is,” Poirazi said.

Though there’s still much work to be done, the researchers believe these findings point to a need to rethink how they model the brain and its broader functions. Focusing on the connectivity of different neurons and brain regions won’t be enough.

The new results also seem poised to influence questions in the machine learning and artificial intelligence fields. Artificial neural networks rely on the point model, treating neurons as nodes that tally inputs and pass the sum through an activation function. “Very few people have taken seriously the notion that a single neuron could be a complex computational device,” said Gary Marcus, a cognitive scientist at New York University and an outspoken skeptic of some claims made for deep learning.

Although the Science paper is but one finding in an extensive history of work that demonstrates this idea, he added, computer scientists might be more responsive to it because it frames the issue in terms of the XOR problem that dogged neural network research for so long. “It’s saying, we really need to think about this,” Marcus said. “The whole game — to come up with how you get smart cognition out of dumb neurons — might be wrong.”

“This is a super clean demonstration of that,” he added. “It’s going to speak above the noise.”

Link original: https://www.quantamagazine.org/neural-dendrites-reveal-their-computational-power-20200114/?fbclid=IwAR1dAkgDLGmaVYkGz1dkmElS3ZvtC89ZGYn9EyDr9hpqU1ocZwasvN_pRC0



Brain changed by caffeine in utero


New research finds caffeine consumed during pregnancy can change important brain pathways in baby

Date: February 8, 2021
Source: University of Rochester Medical Center

New research finds caffeine consumed during pregnancy can change important brain pathways that could lead to behavioral problems later in life. Researchers in the Del Monte Institute for Neuroscience at the University of Rochester Medical Center (URMC) analyzed thousands of brain scans of nine and ten-year-olds, and revealed changes in the brain structure in children who were exposed to caffeine in utero.


3 Nobel-Prize Discoveries For Better Memory

How is it possible to remember our wedding day, but not where we left our glasses? Here’s the deal: the hippocampus, the part of the brain that transforms memories from short-term to long-term, often becomes inflamed with age. This can lead to short-term memory loss, or frustrating “senior moments.” So what can be done? The answer comes down to three Nobel Prize-winning breakthroughs for better memory. One of them is the discovery of a protein called nerve growth factor (NGF), which has been shown to protect the brain from inflammation and help with age-related memory loss. It’s what Dr. Rita Levi-Montalcini (103) used to maintain a greater mental capacity than when she was 20 years old! Here’s how to easily boost it: 👉https://advbio.co/fb/amf-mb



Frequent cannabis use by young people linked to decline in IQ


A study has found that adolescents who frequently use cannabis may experience a decline in Intelligence Quotient (IQ) over time. The findings of the research provide further insight into the harmful neurological and cognitive effects of frequent cannabis use on young people.

The paper, led by researchers at RCSI University of Medicine and Health Sciences, is published in Psychological Medicine.

The results revealed that there were declines of approximately 2 IQ points over time in those who use cannabis frequently compared to those who didn’t use cannabis. Further analysis suggested that this decline in IQ points was primarily related to reduction in verbal IQ.

The research involved a systematic review and statistical analysis of seven longitudinal studies involving 808 young people who used cannabis at least weekly for a minimum of six months and 5,308 young people who did not use cannabis. To be included in the analysis, each study had to have a baseline IQ score taken prior to the start of cannabis use and another IQ score at follow-up. The young people were followed up until age 18 on average, although one study followed them until age 38.
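Pooling results across longitudinal studies like this is typically done by weighting each study’s effect by its precision. The sketch below shows a generic inverse-variance weighted mean difference; the study numbers are hypothetical placeholders, not the seven studies analyzed in the paper.

```python
# A sketch of the kind of pooling a meta-analysis performs: an
# inverse-variance weighted average of each study's mean IQ-change
# difference (cannabis users minus non-users). The data are
# hypothetical, for illustration only.

def pooled_mean_difference(studies):
    """studies: list of (mean_difference, standard_error) per study."""
    weights = [1 / se ** 2 for _, se in studies]  # precise studies weigh more
    weighted_sum = sum(w * md for (md, _), w in zip(studies, weights))
    return weighted_sum / sum(weights)

hypothetical = [(-1.5, 0.8), (-2.5, 1.0), (-2.0, 0.6)]
print(round(pooled_mean_difference(hypothetical), 2))  # -1.95
```

With made-up inputs like these, the pooled estimate lands near the roughly two-point decline the paper reports; the real analysis would also assess heterogeneity between studies before trusting a single pooled number.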

“Previous research tells us that young people who use cannabis frequently have worse outcomes in life than their peers and are at increased risk for serious mental illnesses like schizophrenia. Loss of IQ points early in life could have significant effects on performance in school and college and later employment prospects,” commented senior author on the paper Professor Mary Cannon, Professor of Psychiatric Epidemiology and Youth Mental Health, RCSI.

“Cannabis use during youth is of great concern as the developing brain may be particularly susceptible to harm during this period. The findings of this study help us to further understand this important public health issue,” said Dr Emmet Power, Clinical Research Fellow at RCSI and first author on the study.

The study was carried out by researchers from the Department of Psychiatry, RCSI and Beaumont Hospital, Dublin (Prof Mary Cannon, Dr Emmet Power, Sophie Sabherwal, Dr Colm Healy, Dr Aisling O’Neill and Professor David Cotter).

The research was funded by a YouLead Collaborative Doctoral Award from the Health Research Board (Ireland) and a European Research Council Consolidator Award.

Link Original: https://www.sciencedaily.com/releases/2021/01/210128134755.htm?fbclid=IwAR1Qrhejc9x-9uGRofHtmX8YX4E6qukoS7LIVMK8iwYvcaitU_RVCH4G_xo


Discovery of New Immune Cell Type May Unlock Strategies against Neurological Disorders and CNS Damage

Investigators at the Ohio State University Wexner Medical Center and the University of Michigan have identified in mice a new type of immune cell, which their in vivo studies showed can rescue damaged nerve cells from death and partially reverse nerve fiber damage. The scientists also identified a human immune cell line that exhibits similar characteristics, and which promotes nervous system repair.

They suggest that the findings may point to new strategies for enabling recovery from degenerative neurological diseases, such as amyotrophic lateral sclerosis (ALS) and multiple sclerosis (MS), as well as from damage caused by traumatic brain and spine injuries and stroke. “This immune cell subset secretes growth factors that enhance the survival of nerve cells following traumatic injury to the central nervous system,” said Benjamin Segal, MD, professor and chair of the department of neurology at the Ohio State College of Medicine and co-director of the Ohio State Wexner Medical Center’s Neurological Institute. “It stimulates severed nerve fibers to regrow in the central nervous system, which is really unprecedented. In the future, this line of research might ultimately lead to the development of novel cell-based therapies that restore lost neurological functions across a range of conditions.”


Did you know that at each stage of development, the brain 🧠 is more or less receptive to learning certain things 🤔? During early childhood education, a child learns predominantly through so-called “concrete” experiences, using the body to move, the hands to touch, and all the senses to experience the world 🌍. The teaching materials used at this stage should offer the child real, robust, and consistent experiences ✅. These concrete experiences form the foundation for the next stage, in which children use the more abstract faculties of the brain to theorize.



Can Quantum Physics Explain Consciousness? One Scientist Thinks It Might

Fellow scientists labeled him a crackpot. Now Stuart Hameroff’s quantum consciousness theories are getting support from unlikely places.

By Steve Volk | March 1, 2018 12:00 PM


Anesthesiologist Stuart Hameroff believes tiny structures in our cells called microtubules could explain consciousness. (Credit: Steve Craft)

Stuart Hameroff is an impish figure — short, round, with gray hair and a broad, gnomic face. His voice is smoke — deep and granular, rumbling with the weight of his 70 years. For more than two decades, he’s run a scientific conference on consciousness research. He turns up each day in rumpled jeans and short-sleeved shirts. The effect is casual bordering on slovenly. But up close, he is in charge, and to his critics, he comes off as pugnacious.
