Could photons, light particles, really condense? And how would this “liquid light” behave? Condensed light is an example of a Bose-Einstein condensate: the theory has existed for 100 years, but University of Twente researchers have now demonstrated the effect at room temperature. To do so, they created a micro-sized mirror with channels in which photons actually flow like a liquid. In these channels, the photons try to stay together as a group by choosing the path that leads to the lowest losses, and thus, in a way, demonstrate “social behavior.” The results are published in Nature Communications.
A Bose-Einstein condensate (BEC) is typically a sort of wave in which the separate particles can no longer be distinguished: there is only a wave of matter, a superfluid, which usually forms at temperatures close to absolute zero. Helium, for example, becomes a superfluid at those temperatures, with remarkable properties. The phenomenon was predicted by Albert Einstein almost 100 years ago, based on the work of Satyendra Nath Bose, and this state of matter was named after the two researchers. One particle that can form a Bose-Einstein condensate is the photon, the particle of light. UT researcher Jan Klärs and his team developed a mirror structure with channels. Light traveling through the channels behaves like a superfluid and moves in a preferred direction. Extremely low temperatures are not required in this case: it works at room temperature.
The structure is the well-known Mach-Zehnder interferometer, in which a channel splits into two channels and then rejoins. In such interferometers, the wave nature of photons manifests itself: a photon can be in both channels at the same time. At the point where the channels rejoin, there are two options: the light can take either a channel with a closed end or a channel with an open end. Jan Klärs and his team found that the liquid decides for itself which path to take by adjusting its frequency of oscillation. The photons try to stay together by choosing the path that leads to the lowest losses, in this case the channel with the closed end. You could call it “social behavior,” according to researcher Klärs. Particles of the other fundamental type, fermions, prefer staying separate.
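The frequency-selection idea can be illustrated with a toy two-path interference model. This is an illustrative sketch of my own, not the authors’ device model; the arm lengths and units are made up. The fraction of light retained after the two arms recombine depends on frequency, so a condensate free to adjust its frequency would settle where losses are lowest:

```python
import numpy as np

# Toy model (not the actual microcavity): a wave splits into two arms
# of hypothetical lengths L1 and L2 and recombines; the output
# intensity depends on the accumulated phase difference k * (L2 - L1).
c = 1.0                      # propagation speed (arbitrary units)
L1, L2 = 10.0, 10.5          # hypothetical arm lengths

def recombined_intensity(freq):
    k = 2 * np.pi * freq / c          # wavenumber
    a1 = 0.5 * np.exp(1j * k * L1)    # amplitude via arm 1
    a2 = 0.5 * np.exp(1j * k * L2)    # amplitude via arm 2
    return abs(a1 + a2) ** 2          # constructive vs. destructive

# Scan frequencies: loss is lowest where recombination is constructive.
freqs = np.linspace(0.5, 3.0, 1001)
best = freqs[np.argmax([recombined_intensity(f) for f in freqs])]
print(f"frequency minimizing loss in this toy model: {best:.3f}")
```

In this toy, a frequency where the two arms recombine in phase keeps all the light; a frequency where they cancel loses it all.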
The mirror structure somewhat resembles that of a laser, in which light is reflected back and forth between two mirrors. The major difference is the extremely high reflectivity of the mirrors: 99.9985 percent. This value is so high that photons hardly get the chance to escape; instead, they are absorbed again. It is at this stage that the photon gas takes on room temperature via thermalization. Technically speaking, it then resembles the radiation of a black body: radiation in equilibrium with matter. This thermalization is the crucial difference between a normal laser and a Bose-Einstein condensate of photons. In superconducting devices, in which the electrical resistance becomes zero, Bose-Einstein condensates play a major role. The photonic microstructures presented here could be used as basic units in a system that solves mathematical problems such as the traveling salesman problem. But primarily, the paper offers insight into yet another remarkable property of light.
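The thermal-equilibrium picture can be made concrete with the standard Bose-Einstein occupation formula, n(E) = 1/(exp((E − μ)/kT) − 1), which a thermalized photon gas obeys. In the sketch below the cavity ground-mode energy is an assumed, illustrative value; the point is that the ground mode’s occupation blows up as the chemical potential μ approaches that energy, which is the signature of condensation:

```python
import numpy as np

kB_T = 0.025  # thermal energy at room temperature, roughly, in eV

def bose_einstein(E, mu):
    """Mean occupation of a mode with energy E (eV) in a thermalized
    photon gas at chemical potential mu (requires mu < E)."""
    return 1.0 / np.expm1((E - mu) / kB_T)

E0 = 2.1  # assumed ground-mode energy of the microcavity, in eV
# As mu approaches E0 from below, the ground mode's occupation diverges,
# i.e. photons pile into a single mode: the condensate.
for mu in (2.0, 2.09, 2.099):
    print(f"mu = {mu:.3f} eV -> ground-mode occupation ~ {bose_einstein(E0, mu):.1f}")
```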
New research by a City College of New York team has uncovered a novel way to combine two different states of matter. In one of the first demonstrations of its kind, topological photons—light—have been combined with lattice vibrations, also known as phonons, to manipulate their propagation in a robust and controllable way.
The study utilized topological photonics, an emergent direction in photonics that leverages fundamental ideas from the mathematical field of topology about conserved quantities—topological invariants—that remain constant under continuous deformations of a geometric object. One of the simplest examples of such an invariant is the number of holes, which, for instance, makes a donut and a mug equivalent from the topological point of view. Topological properties endow photons with helicity: the photons spin as they propagate. This leads to unique and unexpected characteristics, such as robustness to defects and unidirectional propagation along interfaces between topologically distinct materials. Thanks to interactions with vibrations in crystals, these helical photons can then be used to channel infrared light along with vibrations.
The implications of this work are broad, in particular allowing researchers to advance Raman spectroscopy, which is used to determine vibrational modes of molecules. The research also holds promise for vibrational spectroscopy—also known as infrared spectroscopy—which measures the interaction of infrared radiation with matter through absorption, emission, or reflection. This can then be used to identify and characterize chemical substances.
“We coupled helical photons with lattice vibrations in hexagonal boron nitride, creating a new hybrid matter referred to as phonon-polaritons,” said Alexander Khanikaev, lead author and physicist affiliated with CCNY’s Grove School of Engineering. “It is half light and half vibrations. Since infrared light and lattice vibrations are associated with heat, we created new channels for propagation of light and heat together. Typically, lattice vibrations are very hard to control, and guiding them around defects and sharp corners was impossible before.”
The new methodology can also implement directional radiative heat transfer, a form of energy transfer during which heat is dissipated through electromagnetic waves.
“We can create channels of arbitrary shape for this form of hybrid light and matter excitations to be guided along within a two-dimensional material we created,” added Dr. Sriram Guddala, postdoctoral researcher in Prof. Khanikaev’s group and the first author of the manuscript. “This method also allows us to switch the direction of propagation of vibrations along these channels, forward or backward, simply by switching the polarization handedness of the incident laser beam. Interestingly, as the phonon-polaritons propagate, the vibrations also rotate along with the electric field. This is an entirely novel way of guiding and rotating lattice vibrations, which also makes them helical.”
Entitled “Topological phonon-polariton funneling in midinfrared metasurfaces,” the study appears in the journal Science.
Scientists have long known that light can behave as both a particle and a wave—Einstein first predicted it in 1909. But no experiment has been able to show light in both states simultaneously. Now, researchers at the École Polytechnique Fédérale de Lausanne in Switzerland have taken the first ever photograph of light as both a wave and a particle. The key was a new experimental technique that uses electrons to capture the light’s movement. The work was published today in the journal Nature Communications.
To get this snapshot, the researchers shot laser pulses at a nanowire. The light waves traveled in two opposite directions along the metal. When the waves ran into each other, they formed what looked like a wave standing still, which is effectively a particle.
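The standing-wave picture follows from simple superposition: two identical counter-propagating waves sum to a profile whose nodes never move. A minimal sketch, with made-up wavenumber and frequency in arbitrary units:

```python
import numpy as np

# Two counter-propagating waves of equal amplitude and frequency on a
# wire: their sum factorizes into a spatial profile times a time
# oscillation, i.e. a standing wave with fixed nodes and antinodes.
k, omega = 2 * np.pi, 2 * np.pi      # wavenumber, angular frequency
x = np.linspace(0, 2, 401)           # positions along the wire

def total_field(t):
    right = np.sin(k * x - omega * t)  # wave moving right
    left  = np.sin(k * x + omega * t)  # wave moving left
    return right + left                # = 2 sin(kx) cos(wt)

# Nodes (where sin(kx) = 0) stay at zero at every instant:
node = 100                             # x = 0.5, where sin(2*pi*0.5) = 0
for t in (0.0, 0.1, 0.3):
    assert abs(total_field(t)[node]) < 1e-9
print("nodes stay fixed: the superposition does not travel")
```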
In order to see how the waves were moving, the researchers shot a beam of electrons at the nanowire, like dropping dye in a river to see the currents. The particles in the light wave changed the speed at which the electrons moved. That enabled the researchers to capture an image just as the waves met.
“This experiment demonstrates that, for the first time ever, we can film quantum mechanics – and its paradoxical nature – directly,” said Fabrizio Carbone, one of the authors of the study, in a press release. Carbone hopes that a better understanding of how light functions can jumpstart the field of quantum computing.
Theory and experiments have shown that future quantum computers will harness the peculiar properties of quantum mechanics to go above and beyond what is currently possible with even the most powerful supercomputers.
These quantum computers will communicate through the quantum internet, which is not as easy as plugging them into the phone line. One crucial requirement in quantum computing is that the particles that perform the calculations are entangled, a quantum mechanical phenomenon where they become part of a single state. A change to one of the particles creates instantaneous changes to the others no matter how far apart they are.
These entangled states are easily disrupted, unfortunately. So how can they be sent between computers to communicate? That’s where quantum teleportation comes in: the entangled state is transferred between two particles. The technique is not perfectly efficient, and scientists are working hard to make the whole process more reliable.
A team of researchers from multiple organizations has reported a record-breaking achievement in PRX Quantum. They were able to deliver sustained, long-distance teleportation of qubits (quantum bits) with a fidelity greater than 90% over a fiber-optic network distance of 44 kilometers (27 miles).
“We’re thrilled by these results,” co-author Panagiotis Spentzouris, head of the Fermilab quantum science program, said in a statement. “This is a key achievement on the way to building a technology that will redefine how we conduct global communication.”
Quantum teleportation doesn’t work like the science-fiction popularization of teleportation. What you are teleporting is the state of a particle, via a quantum channel and a classical channel. The sender has the original qubit and makes it interact with one particle of an entangled pair, producing a classical signal that encodes the outcome of that measurement. The receiver, who holds the other half of the entangled pair, uses the classical signal to apply a correction to it and thereby recreate the original qubit.
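The protocol just described can be checked with a short state-vector simulation. This is a sketch in NumPy, not the experiment’s actual software; the qubit amplitudes and the small gate set are illustrative choices:

```python
import numpy as np

# q0 = sender's unknown qubit; q1, q2 = entangled pair shared between
# sender and receiver. After the sender's measurement outcome (m0, m1)
# is sent classically, the receiver applies X^m1 then Z^m0 to q2.
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
CNOT = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

psi = np.array([0.6, 0.8j])                 # arbitrary qubit to teleport
zero = np.array([1., 0.])

state = np.kron(psi, np.kron(zero, zero))   # |psi>|0>|0>
state = kron(I, H, I) @ state               # make the Bell pair on q1, q2:
state = kron(I, CNOT) @ state               #   (|00> + |11>) / sqrt(2)
state = kron(CNOT, I) @ state               # sender entangles q0 with q1
state = kron(H, I, I) @ state               # and rotates q0

# Check all four possible measurement outcomes: after the correction,
# the receiver's qubit always matches the original (fidelity 1).
amps = state.reshape(2, 2, 2)               # amps[q0, q1, q2]
for m0 in (0, 1):
    for m1 in (0, 1):
        bob = amps[m0, m1, :].copy()
        bob /= np.linalg.norm(bob)          # collapse onto the outcome
        bob = np.linalg.matrix_power(Z, m0) @ (np.linalg.matrix_power(X, m1) @ bob)
        fidelity = abs(np.vdot(psi, bob)) ** 2
        print(f"outcome ({m0},{m1}): fidelity = {fidelity:.6f}")
```

Note that nothing reaches the receiver faster than light: without the two classical bits (m0, m1), the receiver’s qubit is useless.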
This success is the result of a collaboration between Fermilab, AT&T, Caltech, Harvard University, NASA Jet Propulsion Laboratory, and the University of Calgary. The systems on which this quantum teleportation was achieved were created by Caltech’s public-private research program on Intelligent Quantum Networks and Technologies, or IN-Q-NET.
“We are very proud to have achieved this milestone on sustainable, high-performing and scalable quantum teleportation systems,” explained Maria Spiropulu, the Shang-Yi Ch’en professor of physics at Caltech and director of the IN-Q-NET research program. “The results will be further improved with system upgrades we are expecting to complete by the second quarter of 2021.”
Quantum computers are not here yet, but having the infrastructure to make them work is crucial. The U.S. Department of Energy published its roadmap for a national quantum internet last July.
Everybody wants to be happy, right? Who doesn’t? Sure, you may not want to sacrifice everything for pleasure, but you certainly want to enjoy yourself. There is a slew of drugs on the market for treating depression, and happiness is often sold and advertised as something you can simply acquire, and as the thing you desire above all else.
The pursuit of happiness is so integral to our idea of the good life that it was declared to be an inalienable right by Thomas Jefferson. It summarizes the American Dream like no other idea. For many people it is the meaning of life itself. It is difficult for some to fathom that there is a way of thinking that suggests you don’t want to at least try to be as happy as you can be.
Well, there is one philosopher who doesn’t think you want happiness in itself. Friedrich Nietzsche.
Nietzsche saw the mere pursuit of happiness, defined here as that which gives pleasure, as a dull waste of human life. He declared that “Mankind does not strive for happiness; only the Englishman does,” a jab at the English philosophy of Utilitarianism and its focus on total happiness. He rejected that philosophy with his parable of the “Last Man,” a pathetic being who lives in a time when mankind has “invented happiness.”
The Last Men? In Nietzsche’s mind they were happy, but dull.
Nietzsche was instead dedicated to the idea of finding meaning in life. He offered the Ubermensch, who creates his own meaning in life, as an alternative to the Last Man, and pointed as examples to people willing to undertake great suffering in the name of a goal they had set themselves. Can we imagine that Michelangelo found painting the ceiling of the Sistine Chapel pleasant? Nikola Tesla declared that his celibacy was necessary to his work, but complained of his loneliness his entire life.
Is that happiness? If these great minds wanted happiness in itself, would they have done what they did?
No, says Nietzsche. They would not. Instead, they chose to pursue meaning, and found it. This is what people really want.
Psychology often agrees. Psychologist Viktor Frankl suggested that the key to good living is to find meaning, going so far as to suggest positive meanings for the suffering of his patients to help them carry on. His ideas, published in the best-selling Man’s Search for Meaning, were inspired by his time in the concentration camps and his observations of how people enduring unimaginable horrors were able to carry on through meaning, rather than happiness.
There is also a question of Utilitarian math here for Nietzsche. In his mind, those who do great things suffer greatly; those who do small things suffer trivially. If one were to try to do the Utilitarian calculations, it would be difficult, if not impossible, to find a scenario in which the net happiness of a great undertaking comes out large. This is why the Last Man is so dull: the only things that grant him a large net payoff in happiness are rather dull affairs, not the suffering-inducing activities that we would find interesting.
This problem is called “the paradox of happiness.” Activities which are done to directly increase pleasure are unlikely to have a high payoff. Nietzsche grasped this problem and gave it voice when he said that “Joy accompanies, joy does not move.” A person who enjoys collecting stamps does not do it because it makes them happy, but because they find it interesting. The happiness is a side effect. A person who suffers for years making a masterpiece is not made happy by it, but rather finds joy in the beauty they create after the fact.
Of course, there is opposition to Nietzsche’s idea. The great English thinker Bertrand Russell condemned Nietzsche in his masterpiece A History of Western Philosophy. Chief among his criticisms of Nietzsche was what he saw as a brutality and openness to suffering, and he compared Nietzschean ideas against those of the compassionate Buddha, envisioning Nietzsche shouting:
Why go about sniveling because trivial people suffer? Or, for that matter, because great men suffer? Trivial people suffer trivially, great men suffer greatly, and great sufferings are not to be regretted, because they are noble. Your ideal is a purely negative one, absence of suffering, which can be completely secured by non-existence. I, on the other hand, have positive ideals: I admire Alcibiades, and the Emperor Frederick II, and Napoleon. For the sake of such men, any misery is worthwhile.
Against this Russell contrasts the ideas of the Buddha, and suggests an impartial observer would always side with the Buddha. Russell, whose interpretations of Nietzsche were less than accurate and who suffered from having poor translations to work with, saw Nietzsche’s philosophy as a stepping stone to fascism, and as being focused on pain.
So, while you may value something above happiness, how much are you willing to suffer to get it? Nietzsche argues that you will give it all up for a higher value. Others still disagree. Are you even able to pursue happiness and receive it? Or is Nietzsche correct that you must focus elsewhere, on meaning, in order to even hope for satisfaction later?
In fascinating new research, cosmologists explain the history of the universe as one of self-teaching, autodidactic algorithms.
The scientists, including physicists from Brown University and the Flatiron Institute, say the universe has probed all the possible physical laws before landing on the ones we observe around us today. Could this wild idea help inform scientific research to come?
In their novella-length paper, published to the pre-print server arXiv, the researchers—who received “computational, logistical, and other general support” from Microsoft—offer ideas “at the intersection of theoretical physics, computer science, and philosophy of science with a discussion from all three perspectives,” they write, teasing the bigness and multidisciplinary nature of the research.
Here’s how it works: our universe obeys a whole bunch of laws of physics, but the researchers say other possible laws of physics seem equally likely, given the way mathematics works in the universe. So if a group of candidate laws were equally likely, how did we end up with the laws we really have?
The scientists explain:
“The notion of ‘learning’ as we use it is more than moment-to-moment, brute adaptation. It is a cumulative process that can be thought of as theorizing, modeling, and predicting. For instance, the DNA/RNA/protein system on Earth must have arisen from an adaptive process, and yet it foresees a space of organisms much larger than could be called upon in any given moment of adaptation.”
We can analogize to the research of Charles Darwin, who studied all the different ways animals specialized in order to thrive in different environments. For example, why do we have one monolithic body of laws of physics, rather than, say, a bunch of specialized kinds of finches? This is an old question that dates back to at least 1893, when a philosopher first posited “natural selection,” but for the laws of the universe.
In the paper, the scientists define a slew of terms, including what “learning” means in the context of the universe around us. The universe, they say, is made of systems that each have processes to carry out.
Each system is surrounded by an environment made of different other systems. Imagine standing in a crowd of people (remember that?), where your immediate environment is just made of other people. Each of their environments is made of, well, you and other stuff.
Evolution is already a kind of learning, so when we suggest the universe has used natural selection as part of the realization of physics, we’re invoking that specific kind of learning. (Does something have to have consciousness in order to learn? You need to carefully define learning in order to make that the case. Organisms and systems constantly show learning outcomes, like more success or a higher rate of reproduction.)
The researchers explain this distinction well:
“In one sense, learning is nothing special; it is a causal process, conveyed by physical interactions. And yet we need to consider learning as special to explain events that transpire because of learning.”
Consider the expression “You never learn,” which suggests that outcomes for a specific person and activity are still bad. We’re using that outcome to say learning hasn’t happened. What if the person is trying to change their outcomes and just isn’t succeeding? We’re gauging learning based on visible outcomes only.
If you’re interested in the nitty gritty, the full, 79-page study defines a ton of fascinating terms and introduces some wild and wonderful arguments using them. The scientists’ goal is to kick off a whole new arm of cosmological research into the idea of a learning universe.
In upcoming research, scientists will attempt to show the universe has consciousness. Yes, really. No matter the outcome, we’ll soon learn more about what it means to be conscious—and which objects around us might have a mind of their own.
What will that mean for how we treat objects and the world around us? Buckle in, because things are about to get weird.
What Is Consciousness?
The basic definition of consciousness intentionally leaves a lot of questions unanswered. It’s “the normal mental condition of the waking state of humans, characterized by the experience of perceptions, thoughts, feelings, awareness of the external world, and often in humans (but not necessarily in other animals) self-awareness,” according to the Oxford Dictionary of Psychology.
Scientists simply don’t have one unified theory of what consciousness is. We also don’t know where it comes from, or what it’s made of.
However, one consequence of this knowledge gap is that we can’t definitively say that other organisms, or even inanimate objects, don’t have consciousness. Humans relate to animals and can imagine that, say, dogs and cats have some amount of consciousness because we see their facial expressions and how they appear to make decisions. But just because we don’t “relate to” rocks, the ocean, or the night sky, that isn’t the same as proving those things don’t have consciousness.
This is where a philosophical stance called panpsychism comes into play, writes All About Space’s David Crookes:
“This claims consciousness is inherent in even the tiniest pieces of matter — an idea that suggests the fundamental building blocks of reality have conscious experience. Crucially, it implies consciousness could be found throughout the universe.”
It’s also where physics enters the picture. Some scientists have posited that the thing we think of as consciousness is made of micro-scale quantum physics events and other “spooky actions at a distance,” somehow fluttering inside our brains and generating conscious thoughts.
The Free Will Conundrum
One of the leading minds in physics, 2020 Nobel laureate and black hole pioneer Roger Penrose, has written extensively about quantum mechanics as a suspected vehicle of consciousness. In 1989, he wrote a book called The Emperor’s New Mind, in which he claimed “that human consciousness is non-algorithmic and a product of quantum effects.”
Let’s quickly break down that statement. What does it mean for human consciousness to be “algorithmic”? Well, an algorithm is simply a series of predictable steps to reach an outcome, and in the study of philosophy, this idea plays a big part in questions about free will versus determinism.
Are our brains simply cranking out math-like processes whose outcomes could be predicted in advance? Or is something wilder happening that allows us true free will, meaning the ability to make meaningfully different decisions that affect our lives?
Within philosophy itself, the study of free will dates back at least centuries. But the overlap with physics is much newer. And what Penrose claimed in The Emperor’s New Mind is that consciousness isn’t strictly causal because, on the tiniest level, it’s a product of unpredictable quantum phenomena that don’t conform to classical physics.
So, where does all that background information leave us? If you’re scratching your head or having some uncomfortable thoughts, you’re not alone. But these questions are essential to people who study philosophy and science, because the answers could change how we understand the entire universe around us. Whether or not humans do or don’t have free will has huge moral implications, for example. How do you punish criminals who could never have done differently?
Consciousness Is Everywhere
In physics, scientists could learn key things from a study of consciousness as a quantum effect. This is where we rejoin today’s researchers: Johannes Kleiner, mathematician and theoretical physicist at the Munich Center For Mathematical Philosophy, and Sean Tull, mathematician at the University of Oxford.
Kleiner and Tull are following Penrose’s example, in both his 1989 book and a 2014 paper where he detailed his belief that our brains’ microprocesses can be used to model things about the whole universe. They work with integrated information theory (IIT), a theory of consciousness originally proposed by neuroscientist Giulio Tononi, which is an abstract, “highly mathematical” form of the philosophy we’ve been reviewing.
In IIT, consciousness is everywhere, but it accumulates in places where it’s needed to help glue together different related systems. This means the human body is jam-packed with a ton of systems that must interrelate, so there’s a lot of consciousness (or phi, as the quantity is known in IIT) that can be calculated. Think about all the parts of the brain that work together to, for example, form a picture and sense memory of an apple in your mind’s eye.
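The actual phi calculus is far more involved than anything shown here, but a loose intuition for “integration” can be sketched with mutual information between two halves of a system: zero when the halves are independent, positive when their states are interrelated. This is a toy proxy of my own, not IIT’s algorithm:

```python
import numpy as np

# Toy "integration" measure (NOT the real IIT phi): mutual information
# between two halves of a system, given their joint distribution.
def mutual_information(joint):
    """joint: 2D array of probabilities p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal of half A
    pb = joint.sum(axis=0, keepdims=True)   # marginal of half B
    nz = joint > 0                          # skip zero-probability cells
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

independent = np.outer([0.5, 0.5], [0.5, 0.5])   # halves ignore each other
coupled = np.array([[0.5, 0.0],                  # halves always agree
                    [0.0, 0.5]])
print(mutual_information(independent))  # no integration: 0 bits
print(mutual_information(coupled))      # fully interrelated: 1 bit
```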
The revolutionary thing in IIT isn’t related to the human brain—it’s that consciousness isn’t biological at all, but rather is simply this value, phi, that can be calculated if you know a lot about the complexity of what you’re studying.
If your brain has almost countless interrelated systems, then the entire universe must have virtually infinite ones. And if that’s where consciousness accumulates, then the universe must have a lot of phi.
Hey, we told you this was going to get weird.
“The theory consists of a very complicated algorithm that, when applied to a detailed mathematical description of a physical system, provides information about whether the system is conscious or not, and what it is conscious of,” Kleiner told All About Space. “If there is an isolated pair of particles floating around somewhere in space, they will have some rudimentary form of consciousness if they interact in the correct way.”
Kleiner and Tull are working on turning IIT into this complex mathematical algorithm—setting down the standard that can then be used to examine how conscious things operate.
Think about the classic philosophical comment, “I think, therefore I am,” then imagine two geniuses turning that into a workable formula where you substitute in a hundred different number values and end up with your specific “I am” answer.
The next step is to actually crunch the numbers, and then to grapple with the moral implications of a hypothetically conscious universe. It’s an exciting time to be a philosopher—or a philosopher’s calculator.
What can you tell by looking into someone’s eyes? You can spot a glint of humor, signs of tiredness, or maybe that they don’t like something or someone.
But outside of assessing an emotional state, a person’s eyes may also provide clues about their intelligence, suggests new research. A study carried out at the Georgia Institute of Technology shows that pupil size is “closely related” to differences in intelligence between individuals.
The scientists found that larger pupils may be connected to higher intelligence, as demonstrated by tests that gauged reasoning skills, memory, and attention. In fact, the researchers claim the relationship between pupil size and intelligence is so pronounced that it appeared in their two previous studies as well and can be spotted with the naked eye, without any scientific instruments: you should be able to tell who scored highest or lowest on the cognitive tests just by looking at their eyes, the researchers say.
The pupil-IQ link
The connection was first noticed across memory tasks, looking at pupil dilations as signs of mental effort. The studies involved more than 500 people aged 18 to 35 from the Atlanta area. The subjects’ pupil sizes were measured by eye trackers, which use a camera and a computer to capture light reflecting off the pupil and cornea. As the scientists explained in Scientific American, pupil diameters range from two to eight millimeters. To determine average pupil size, they took measurements of the pupils at rest when the participants were staring at a blank screen for a few minutes.
Another part of the experiment involved having the subjects take a series of cognitive tests that evaluated “fluid intelligence” (the ability to reason when confronted with new problems), “working memory capacity” (how well people can retain information over time), and “attention control” (the ability to keep focusing attention even while being distracted). An example of the latter is a test that tries to divert a person’s focus from a disappearing letter by showing a flickering asterisk on another part of the screen. If a person pays too much attention to the asterisk, they might miss the letter.
The researchers concluded that a larger baseline pupil size was related to greater fluid intelligence, more attention control, and, to a smaller extent, greater working memory capacity. In an email exchange with Big Think, author Jason Tsukahara pointed out, “It is important to consider that what we find is a correlation — which should not be confused with causation.”
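A correlation like the one reported is typically quantified with a Pearson coefficient. The sketch below runs on synthetic, made-up data (the slope, noise level, and sample values are assumptions, not the study’s dataset) just to show what “correlated, not causal” looks like numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                  # similar sample size to the study
pupil_mm = rng.uniform(2.0, 8.0, n)      # resting pupil diameters, 2-8 mm
# Hypothetical relationship: score rises weakly with pupil size, plus noise.
score = 50 + 3.0 * pupil_mm + rng.normal(0, 10, n)

# Pearson r quantifies the linear association; it says nothing about
# which variable (if either) causes the other.
r = np.corrcoef(pupil_mm, score)[0, 1]
print(f"Pearson r = {r:.2f}")
```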
The researchers also found that pupil size seemed to decrease with age: older people had more constricted pupils. But when the scientists standardized for age, the pupil-size-to-intelligence connection remained.
Why are pupils linked to intelligence?
The connection between pupil size and IQ likely resides within the brain. Pupil size has been previously connected to the locus coeruleus, a part of the brain that’s responsible for synthesizing the hormone and neurotransmitter norepinephrine (noradrenaline), which mobilizes the brain and body for action. Activity in the locus coeruleus affects our perception, attention, memory, and learning processes.
As the authors explain, this region of the brain “also helps maintain a healthy organization of brain activity so that distant brain regions can work together to accomplish challenging tasks and goals.” Because it is so important, loss of function in the locus coeruleus has been linked to conditions like Alzheimer’s disease, Parkinson’s, clinical depression, and attention deficit hyperactivity disorder (ADHD).
The researchers hypothesize that people who have larger pupils while in a restful state, like staring at a blank computer screen, have “greater regulation of activity by the locus coeruleus.” This leads to better cognitive performance. More research is necessary, however, to truly understand why having larger pupils is related to higher intelligence.
In an email to Big Think, Tsukahara shared, “If I had to speculate, I would say that it is people with greater fluid intelligence that develop larger pupils, but again at this point we only have correlational data.”
Do other scientists believe this?
As the scientists point out in the beginning of their paper, their conclusions are controversial and, so far, other researchers haven’t been able to duplicate their results. The research team addresses this criticism by explaining that other studies had methodological issues and examined only memory capacity but not fluid intelligence, which is what they measured.
All of the main characters in the interactive stories are customizable! This means children can choose the pieces and colors of the clothing, the skin color, hair types, eye color, and other details! We did this not only for interactivity and fun, but also because we know how important representation is for children. In testing, we saw that practically every child built the character with features resembling their own. This brings the reader closer to the story and to the character itself, making the child feel they belong to that universe. This interactivity even helps develop empathy, since children put themselves in the characters’ place more easily. We love the customization! It’s a feature of Truth and Tales that holds our hearts.
Though progress is being made, our brains remain organs of many mysteries. Among these are the exact workings of neurons, with some 86 billion of them in the human brain. Neurons are interconnected in complicated, labyrinthine networks across which they exchange information in the form of electrical signals. We know that signals exit an individual neuron through a fiber called an axon, and also that signals are received by each neuron through input fibers called dendrites.
Understanding the electrical capabilities of dendrites in particular — which, after all, may be receiving signals from countless other neurons at any given moment — is fundamental to deciphering how neurons communicate. It may surprise you to learn, though, that much of what we assume about human neurons is based on observations of rodent dendrites; there’s just not a lot of fresh, still-functional human brain tissue available for thorough examination.
For a new study published January 3 in the journal Science, however, scientists got a rare chance to explore some neurons from the outer layer of human brains, and they discovered startling dendrite behaviors that may be unique to humans, and may even help explain how our billions of neurons process the massive amount of information they exchange.
Electrical signals weaken with distance, and that poses a riddle to those seeking to understand the human brain: Human dendrites are known to be about twice as long as rodent dendrites, which means that a signal traversing a human dendrite could arrive at its destination much weaker than one traveling a rodent’s much shorter dendrite. Says paper co-author biologist Matthew Larkum of Humboldt University in Berlin, speaking to LiveScience, “If there was no change in the electrical properties between rodents and people, then that would mean that, in the humans, the same synaptic inputs would be quite a bit less powerful.” Chalk up another strike against the value of animal-based human research. The only way this would not be true is if the signals being exchanged in our brains are not the same as those in a rodent’s. This is exactly what the study’s authors found.
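To see why length matters so much, consider the textbook cable-theory picture, in which a passive voltage signal decays roughly exponentially along a dendrite. A toy calculation under that assumption (the length constant and the dendrite lengths below are illustrative placeholders, not values from the study):

```python
import math

def surviving_fraction(length_um: float, lambda_um: float = 500.0) -> float:
    """Fraction of a passive signal that survives a dendrite of the given
    length, using the steady-state cable-theory decay exp(-L/lambda).
    The length constant lambda_um is a hypothetical illustrative value."""
    return math.exp(-length_um / lambda_um)

# A dendrite twice as long loses far more than twice the signal:
rodent = surviving_fraction(500.0)    # hypothetical rodent dendrite length
human = surviving_fraction(1000.0)    # roughly twice as long, per the article
print(f"rodent survival: {rodent:.2f}")   # about 0.37
print(f"human survival:  {human:.2f}")    # about 0.14
```

The point of the sketch is only the qualitative behavior: without some change in electrical properties, doubling the cable length cuts the arriving signal disproportionately.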
The researchers worked with brain tissue removed for therapeutic reasons from tumor and epilepsy patients. The neurons came from layers 2 and 3 of the cerebral cortex, which are disproportionately thick in humans and house incredibly dense neuronal networks.
Without blood-borne oxygen, though, such cells last only about two days, so Larkum’s lab had no choice but to work around the clock during that period to get the most information from the samples. “You get the tissue very infrequently, so you’ve just got to work with what’s in front of you,” says Larkum. The team made holes in dendrites into which they could insert glass pipettes. Through these, they sent ions to stimulate the dendrites, allowing the scientists to observe their electrical behavior.
In rodents, two types of electrical spikes have been observed in dendrites: a short, one-millisecond spike triggered by sodium, and spikes lasting 50 to 100 times longer in response to calcium.
In the human dendrites, one type of behavior was observed: super-short spikes occurring in rapid succession, one after the other. This suggests to the researchers that human neurons are “distinctly more excitable” than rodent neurons, allowing signals to successfully traverse our longer dendrites.
In addition, the human neuronal spikes — though they behaved somewhat like rodent spikes prompted by the introduction of sodium — were found to be generated by calcium, essentially the opposite of what is seen in rodents.
The study also reports a second major finding. Looking to better understand how the brain utilizes these spikes, the team programmed computer models based on their findings. (The brain slices they’d examined could not, of course, be put back together and switched on somehow.)
The scientists constructed virtual neuronal networks, each of whose neurons could be stimulated at thousands of points along its dendrites, to see how each handled so many input signals. Previous, non-human research has suggested that neurons add these inputs together, holding onto them until the number of excitatory input signals exceeds the number of inhibitory signals, at which point the neuron fires the sum of them from its axon out into the network.
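That classic picture can be sketched as a simple threshold rule (a schematic caricature, not the study’s actual model; the threshold value is arbitrary):

```python
def classic_neuron(excitatory: int, inhibitory: int, threshold: int = 10) -> bool:
    """Textbook summation model: the neuron fires once net excitation
    crosses a threshold. More excitatory input never reduces firing."""
    return (excitatory - inhibitory) >= threshold

# Output rises monotonically with excitation (inhibition fixed at 5):
print([classic_neuron(e, 5) for e in (5, 10, 15, 20)])
# prints [False, False, True, True]
```

The key feature is monotonicity: in this model, piling on more excitatory input can only make the neuron more likely to fire.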
However, this isn’t what Larkum’s team observed in their model. Neurons’ output was inverse to their inputs: The more excitatory signals they received, the less likely they were to fire off. Each had a seeming “sweet spot” when it came to input strength.
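The inverted behavior the team reports can be caricatured as a response curve that peaks at an intermediate input strength and falls off as stimulation grows stronger. A minimal sketch, with a Gaussian-shaped response whose peak location and width are purely illustrative:

```python
import math

def sweet_spot_response(strength: float, best: float = 10.0, width: float = 3.0) -> float:
    """Response that is maximal near `best` and decreases for both weaker
    and stronger input -- unlike the monotonic classic summation model.
    The Gaussian shape and its parameters are illustrative assumptions."""
    return math.exp(-((strength - best) / width) ** 2)

# Strongest response near the sweet spot, weaker response above it:
for s in (5.0, 10.0, 15.0):
    print(f"input {s:>4}: response {sweet_spot_response(s):.2f}")
```

Under this kind of curve, increasing excitation past the sweet spot suppresses rather than boosts the output, which is the qualitative signature the model neurons showed.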
What the researchers believe is going on is that dendrites and neurons may be smarter than previously suspected, processing input information as it arrives. Mayank Mehta of UC Los Angeles, who’s not involved in the research, tells LiveScience, “It doesn’t look like the cell is just adding things up — it’s also throwing things away.” This could mean each neuron is assessing the value of each signal to the network and discarding “noise.” It may also be that different neurons are optimized for different signals and thus tasks.
Much in the way that octopuses distribute decision-making across a decentralized nervous system, the implication of the new research is that, at least in humans, it’s not just the neuronal network that’s smart, it’s all of the individual neurons it contains. This would constitute exactly the kind of computational super-charging one would hope to find somewhere in the amazing human brain.