Could photons, the particles of light, really condense? And how would this “liquid light” behave? Condensed light is an example of a Bose-Einstein condensate: the theory has existed for 100 years, but University of Twente researchers have now demonstrated the effect even at room temperature. To do so, they created a micro-scale mirror with channels in which photons actually flow like a liquid. In these channels, the photons try to stay together as a group by choosing the path that leads to the lowest losses, and thus, in a way, demonstrate “social behavior.” The results are published in Nature Communications.
A Bose-Einstein condensate (BEC) is typically a sort of wave in which the separate particles cannot be seen anymore: there is a wave of matter, a superfluid, usually formed at temperatures close to absolute zero. Helium, for example, becomes a superfluid at those temperatures, with remarkable properties. The phenomenon was predicted by Albert Einstein almost 100 years ago, based on the work of Satyendra Nath Bose; this state of matter was named after both researchers. One type of elementary particle that can form a Bose-Einstein condensate is the photon, the light particle. UT researcher Jan Klärs and his team developed a mirror structure with channels. Light traveling through the channels behaves like a superfluid and moves in a preferred direction. Extremely low temperatures are not required in this case; the effect occurs at room temperature.
The structure is the well-known Mach-Zehnder interferometer, in which a channel splits into two channels that later rejoin. In such interferometers, the wave nature of photons manifests itself: a photon can be in both channels at the same time. At the reunification point, there are two options: the light can either take a channel with a closed end or a channel with an open end. Jan Klärs and his team found that the liquid decides for itself which path to take by adjusting its frequency of oscillation. In this case, the photons try to stay together by choosing the path that leads to the lowest losses: the channel with the closed end. You could call it “social behavior,” according to researcher Klärs. The other family of particles, fermions, prefers staying separate.
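The interference at the heart of a Mach-Zehnder geometry can be sketched in a few lines. The toy NumPy calculation below is a generic illustration with ideal, lossless 50/50 splitters (the function name and numbers are ours, not a model of the Twente structure); it shows how the relative phase between the two arms decides how much light exits a given port:

```python
import numpy as np

def mach_zehnder_output(phase_difference):
    """Intensity at one output port of an ideal, lossless
    Mach-Zehnder interferometer, for unit input intensity."""
    # A 50/50 splitter sends amplitude 1/sqrt(2) down each arm;
    # one arm picks up a relative phase before the arms recombine.
    a_upper = 1 / np.sqrt(2)
    a_lower = np.exp(1j * phase_difference) / np.sqrt(2)
    # At the recombining splitter, this port sees the (rescaled)
    # sum of the two arm amplitudes.
    amplitude = (a_upper + a_lower) / np.sqrt(2)
    return np.abs(amplitude) ** 2

print(mach_zehnder_output(0.0))    # constructive: essentially all the light exits here
print(mach_zehnder_output(np.pi))  # destructive: essentially none does
```

Sweeping the phase between 0 and π moves the light smoothly from one output to the other, which is why a small frequency shift (and hence a phase shift) lets the condensate favor one path.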
The mirror structure somewhat resembles that of a laser, in which light is reflected back and forth between two mirrors. The major difference is in the extremely high reflectivity of the mirrors: 99.9985 percent. This value is so high that photons hardly get the chance to escape; they are absorbed again instead. It is at this stage that the photon gas takes on room temperature via thermalization. Technically speaking, it then resembles the radiation of a black body: radiation in equilibrium with matter. This thermalization is the crucial difference between a normal laser and a Bose-Einstein condensate of photons. Bose-Einstein condensates also play a major role in superconducting devices, in which the electrical resistance drops to zero. The photonic microstructures now presented could be used as basic units in a system that solves mathematical problems such as the traveling salesman problem. But above all, the paper offers insight into yet another remarkable property of light.
New research by a City College of New York team has uncovered a novel way to combine two different states of matter. In one of the first demonstrations of its kind, topological photons (light) have been combined with lattice vibrations, also known as phonons, to manipulate their propagation in a robust and controllable way.
The study utilized topological photonics, an emergent direction in photonics that leverages fundamental ideas from the mathematical field of topology about conserved quantities, called topological invariants, which remain constant when a geometric object is continuously deformed. One of the simplest examples of such an invariant is the number of holes, which, for instance, makes a donut and a mug equivalent from the topological point of view. The topological properties endow photons with helicity, whereby photons spin as they propagate, leading to unique and unexpected characteristics, such as robustness to defects and unidirectional propagation along interfaces between topologically distinct materials. Thanks to interactions with vibrations in crystals, these helical photons can then be used to channel infrared light along with vibrations.
The implications of this work are broad, in particular allowing researchers to advance Raman spectroscopy, which is used to determine the vibrational modes of molecules. The research also holds promise for vibrational spectroscopy—also known as infrared spectroscopy—which measures the interaction of infrared radiation with matter through absorption, emission, or reflection. This can then be utilized to identify and characterize chemical substances.
“We coupled helical photons with lattice vibrations in hexagonal boron nitride, creating a new hybrid matter referred to as phonon-polaritons,” said Alexander Khanikaev, lead author and a physicist affiliated with CCNY’s Grove School of Engineering. “It is half light and half vibrations. Since infrared light and lattice vibrations are associated with heat, we created new channels for propagation of light and heat together. Typically, lattice vibrations are very hard to control, and guiding them around defects and sharp corners was impossible before.”
The new methodology can also implement directional radiative heat transfer, a form of energy transfer during which heat is dissipated through electromagnetic waves.
“We can create channels of arbitrary shape for this form of hybrid light and matter excitations to be guided along within a two-dimensional material we created,” added Dr. Sriram Guddala, postdoctoral researcher in Prof. Khanikaev’s group and the first author of the manuscript. “This method also allows us to switch the direction of propagation of vibrations along these channels, forward or backward, simply by switching the polarization handedness of the incident laser beam. Interestingly, as the phonon-polaritons propagate, the vibrations also rotate along with the electric field. This is an entirely novel way of guiding and rotating lattice vibrations, which also makes them helical.”
Entitled “Topological phonon-polariton funneling in midinfrared metasurfaces,” the study appears in the journal Science.
Scientists have long known that light can behave as both a particle and a wave—Einstein first predicted it in 1909. But no experiment has been able to show light in both states simultaneously. Now, researchers at the École Polytechnique Fédérale de Lausanne in Switzerland have taken the first ever photograph of light as both a wave and a particle. The key was a new experimental technique that uses electrons to capture the light’s movement. The work was published today in the journal Nature Communications.
To get this snapshot, the researchers shot laser pulses at a nanowire. The light waves traveled in two different directions along the metal wire. When the waves ran into each other, they formed a standing wave, a wave standing still, which serves as the particle-like aspect of the light.
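The “wave standing still” is standard wave arithmetic: two equal waves traveling in opposite directions add up to a pattern whose crests oscillate in place and whose nodes never move. A minimal NumPy check (the wavenumber and frequency are arbitrary illustrative values, not the experiment’s parameters):

```python
import numpy as np

k, omega = 2 * np.pi, 3.0        # arbitrary wavenumber and angular frequency
x = np.linspace(0.0, 1.0, 201)   # positions along the wire

def total_field(t):
    rightward = np.sin(k * x - omega * t)
    leftward = np.sin(k * x + omega * t)
    return rightward + leftward

# The sum factorizes as 2*sin(k*x)*cos(omega*t): the spatial profile
# sin(k*x) is frozen in place, and only its amplitude oscillates in time.
for t in (0.0, 0.4, 0.8):
    assert np.allclose(total_field(t), 2 * np.sin(k * x) * np.cos(omega * t))
```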
In order to see how the waves were moving, the researchers shot a beam of electrons at the nanowire, like dropping dye in a river to see the currents. The particles in the light wave changed the speed at which the electrons moved. That enabled the researchers to capture an image just as the waves met.
“This experiment demonstrates that, for the first time ever, we can film quantum mechanics – and its paradoxical nature – directly,” said Fabrizio Carbone, one of the authors of the study, in a press release. Carbone hopes that a better understanding of how light functions can jumpstart the field of quantum computing.
Theory and experiments have shown that future quantum computers will harness the peculiar properties of quantum mechanics to go above and beyond what is currently possible with even the most powerful supercomputers.
These quantum computers will communicate through the quantum internet, which is not as easy as plugging them into the phone line. One crucial requirement in quantum computing is that the particles that perform the calculations are entangled, a quantum mechanical phenomenon where they become part of a single state. A change to one of the particles creates instantaneous changes to the others no matter how far apart they are.
These entangled states are easily disrupted, unfortunately. So how can they be sent between computers to communicate? That’s where quantum teleportation comes in: the entangled state is transferred between two particles. This technique is not perfectly efficient, and scientists are working hard to make the whole process more reliable.
A team of researchers from multiple organizations has reported a record-breaking achievement in PRX Quantum. They were able to deliver sustained, long-distance teleportation of qubits (quantum bits) with a fidelity greater than 90% over a fiber-optic network distance of 44 kilometers (27 miles).
“We’re thrilled by these results,” co-author Panagiotis Spentzouris, head of the Fermilab quantum science program, said in a statement. “This is a key achievement on the way to building a technology that will redefine how we conduct global communication.”
Quantum teleportation doesn’t work like the science-fiction popularization of teleportation. What you are teleporting is the state of a particle, via a quantum channel and a classical channel. The sender has the original qubit. This is made to interact with one particle of an entangled pair, producing a “classical signal”: information about the state of the original qubit. This signal and the other half of the entangled pair are sent to the receiver, who can put the two together to recreate the original qubit.
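The bookkeeping behind that protocol fits in a short NumPy sketch. This is the textbook single-qubit teleportation circuit, not the fiber-optic system the team built, and all names and values are illustrative; the point is that each of the four possible measurement outcomes comes with a known correction that recovers the original qubit exactly:

```python
import numpy as np

# Single-qubit gates for a toy simulation of textbook teleportation.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Qubit 0: the state to teleport, alpha|0> + beta|1>.
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta], dtype=complex)

# Qubits 1 and 2: the shared entangled pair (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                       # 8-dimensional joint state

# Sender: CNOT with qubit 0 controlling qubit 1, then Hadamard on qubit 0.
P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
CNOT01 = kron(P0, I, I) + kron(P1, X, I)
state = kron(H, I, I) @ CNOT01 @ state

# Sender measures qubits 0 and 1.  For every classical outcome (m0, m1),
# the receiver applies X^m1 then Z^m0 and gets the original state back.
for m0 in (0, 1):
    for m1 in (0, 1):
        projector = kron([P0, P1][m0], [P0, P1][m1], I)
        branch = projector @ state
        branch = branch / np.linalg.norm(branch)       # post-measurement state
        qubit2 = branch.reshape(2, 2, 2)[m0, m1, :]    # receiver's qubit
        correction = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
        recovered = correction @ qubit2
        assert np.allclose(recovered, psi)
```

Note that the two measured bits (the “classical signal”) must reach the receiver before the correction can be applied, which is why teleportation cannot transmit information faster than light.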
This success is the result of a collaboration between Fermilab, AT&T, Caltech, Harvard University, NASA Jet Propulsion Laboratory, and the University of Calgary. The systems on which this quantum teleportation was achieved were created by Caltech’s public-private research program on Intelligent Quantum Networks and Technologies, or IN-Q-NET.
“We are very proud to have achieved this milestone on sustainable, high-performing and scalable quantum teleportation systems,” explained Maria Spiropulu, the Shang-Yi Ch’en professor of physics at Caltech and director of the IN-Q-NET research program. “The results will be further improved with system upgrades we are expecting to complete by the second quarter of 2021.”
Quantum computers are not here yet, but having the infrastructure to make them work is crucial. The U.S. Department of Energy published its roadmap for a national quantum internet last July.
In fascinating new research, cosmologists explain the history of the universe as the work of self-teaching, or autodidactic, algorithms.
The scientists, including physicists from Brown University and the Flatiron Institute, say the universe has probed all the possible physical laws before landing on the ones we observe around us today. Could this wild idea help inform scientific research to come?
In their novella-length paper, published on the preprint server arXiv, the researchers—who received “computational, logistical, and other general support” from Microsoft—offer ideas “at the intersection of theoretical physics, computer science, and philosophy of science with a discussion from all three perspectives,” they write, teasing the bigness and multidisciplinary nature of the research.
Here’s how it works: our universe obeys a whole bunch of laws of physics, but the researchers say other possible laws of physics seem equally likely, given the way mathematics works in the universe. So if a group of candidate laws were equally likely, how did we end up with the laws we actually have?
The scientists explain:
“The notion of ‘learning’ as we use it is more than moment-to-moment, brute adaptation. It is a cumulative process that can be thought of as theorizing, modeling, and predicting. For instance, the DNA/RNA/protein system on Earth must have arisen from an adaptive process, and yet it foresees a space of organisms much larger than could be called upon in any given moment of adaptation.”
We can draw an analogy with the research of Charles Darwin, who studied the many ways animals specialize in order to thrive in different environments. Why, then, do we have one monolithic body of physical laws rather than, say, many specialized sets of laws, the way Darwin found many specialized kinds of finches? This is an old question, dating back to at least 1893, when a philosopher first posited “natural selection,” but for the laws of the universe.
In the paper, the scientists define a slew of terms including how they’re defining “learning” in the context of the universe around us. The universe is made of systems that each have processes to fulfill every day, they say.
Each system is surrounded by an environment made of different other systems. Imagine standing in a crowd of people (remember that?), where your immediate environment is just made of other people. Each of their environments is made of, well, you and other stuff.
Evolution is already a kind of learning, so when we suggest the universe has used natural selection as part of the realization of physics, we’re invoking that specific kind of learning. (Does something have to be conscious in order to learn? Only under a carefully chosen definition of learning. Organisms and systems constantly show learning outcomes, such as greater success or a higher rate of reproduction.)
The researchers explain this distinction well:
“In one sense, learning is nothing special; it is a causal process, conveyed by physical interactions. And yet we need to consider learning as special to explain events that transpire because of learning.”
Consider the expression “You never learn,” which suggests that outcomes for a specific person and activity are still bad. We’re using that outcome to say learning hasn’t happened. What if the person is trying to change their outcomes and just isn’t succeeding? We’re gauging learning based on visible outcomes only.
If you’re interested in the nitty gritty, the full, 79-page study defines a ton of fascinating terms and introduces some wild and wonderful arguments using them. The scientists’ goal is to kick off a whole new arm of cosmological research into the idea of a learning universe.
In upcoming research, scientists will attempt to show the universe has consciousness. Yes, really. No matter the outcome, we’ll soon learn more about what it means to be conscious—and which objects around us might have a mind of their own.
What will that mean for how we treat objects and the world around us? Buckle in, because things are about to get weird.
What Is Consciousness?
The basic definition of consciousness intentionally leaves a lot of questions unanswered. It’s “the normal mental condition of the waking state of humans, characterized by the experience of perceptions, thoughts, feelings, awareness of the external world, and often in humans (but not necessarily in other animals) self-awareness,” according to the Oxford Dictionary of Psychology.
Scientists simply don’t have one unified theory of what consciousness is. We also don’t know where it comes from, or what it’s made of.
However, one loophole of this knowledge gap is that we can’t exhaustively say other organisms, and even inanimate objects, don’t have consciousness. Humans relate to animals and can imagine, say, dogs and cats have some amount of consciousness because we see their facial expressions and how they appear to make decisions. But just because we don’t “relate to” rocks, the ocean, or the night sky, that isn’t the same as proving those things don’t have consciousness.
This is where a philosophical stance called panpsychism comes into play, writes All About Space’s David Crookes:
“This claims consciousness is inherent in even the tiniest pieces of matter — an idea that suggests the fundamental building blocks of reality have conscious experience. Crucially, it implies consciousness could be found throughout the universe.”
It’s also where physics enters the picture. Some scientists have posited that the thing we think of as consciousness is made of micro-scale quantum physics events and other “spooky action at a distance,” somehow fluttering inside our brains and generating conscious thoughts.
The Free Will Conundrum
One of the leading minds in physics, 2020 Nobel laureate and black hole pioneer Roger Penrose, has written extensively about quantum mechanics as a suspected vehicle of consciousness. In 1989, he wrote a book called The Emperor’s New Mind, in which he claimed “that human consciousness is non-algorithmic and a product of quantum effects.”
Let’s quickly break down that statement. What does it mean for human consciousness to be “algorithmic”? Well, an algorithm is simply a series of predictable steps to reach an outcome, and in the study of philosophy, this idea plays a big part in questions about free will versus determinism.
Are our brains simply cranking out math-like processes that can be telescoped in advance? Or is something wild happening that allows us true free will, meaning the ability to make meaningfully different decisions that affect our lives?
Within philosophy itself, the study of free will dates back centuries. But the overlap with physics is much newer. And what Penrose claimed in The Emperor’s New Mind is that consciousness isn’t strictly causal because, on the tiniest level, it’s a product of unpredictable quantum phenomena that don’t conform to classical physics.
So, where does all that background information leave us? If you’re scratching your head or having some uncomfortable thoughts, you’re not alone. But these questions are essential to people who study philosophy and science, because the answers could change how we understand the entire universe around us. Whether or not humans do or don’t have free will has huge moral implications, for example. How do you punish criminals who could never have done differently?
Consciousness Is Everywhere
In physics, scientists could learn key things from a study of consciousness as a quantum effect. This is where we rejoin today’s researchers: Johannes Kleiner, mathematician and theoretical physicist at the Munich Center For Mathematical Philosophy, and Sean Tull, mathematician at the University of Oxford.
Kleiner and Tull are following Penrose’s example, in both his 1989 book and a 2014 paper where he detailed his belief that our brains’ microprocesses can be used to model things about the whole universe. The theory they work with, integrated information theory (IIT), was originally proposed by the neuroscientist Giulio Tononi; it’s an abstract, “highly mathematical” form of the philosophy we’ve been reviewing.
In IIT, consciousness is everywhere, but it accumulates in places where it’s needed to help glue together different related systems. This means the human body is jam-packed with a ton of systems that must interrelate, so there’s a lot of consciousness (or phi, as the quantity is known in IIT) that can be calculated. Think about all the parts of the brain that work together to, for example, form a picture and sense memory of an apple in your mind’s eye.
The revolutionary thing in IIT isn’t related to the human brain—it’s that consciousness isn’t biological at all, but rather is simply this value, phi, that can be calculated if you know a lot about the complexity of what you’re studying.
If your brain has almost countless interrelated systems, then the entire universe must have virtually infinite ones. And if that’s where consciousness accumulates, then the universe must have a lot of phi.
Hey, we told you this was going to get weird.
“The theory consists of a very complicated algorithm that, when applied to a detailed mathematical description of a physical system, provides information about whether the system is conscious or not, and what it is conscious of,” Kleiner told All About Space. “If there is an isolated pair of particles floating around somewhere in space, they will have some rudimentary form of consciousness if they interact in the correct way.”
Kleiner and Tull are working on turning IIT into this complex mathematical algorithm—setting down the standard that can then be used to examine how conscious things operate.
Think about the classic philosophical comment, “I think, therefore I am,” then imagine two geniuses turning that into a workable formula where you substitute in a hundred different number values and end up with your specific “I am” answer.
The next step is to actually crunch the numbers, and then to grapple with the moral implications of a hypothetically conscious universe. It’s an exciting time to be a philosopher—or a philosopher’s calculator.
The dendritic arms of some human neurons can perform logic operations that once seemed to require whole neural networks.
The information-processing capabilities of the brain are often reported to reside in the trillions of connections that wire its neurons together. But over the past few decades, mounting research has quietly shifted some of the attention to individual neurons, which seem to shoulder far more computational responsibility than previously imagined.
The latest in a long line of evidence comes from scientists’ discovery of a new type of electrical signal in the upper layers of the human cortex. Laboratory and modeling studies have already shown that tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. But now it seems that individual dendritic compartments can also perform a particular computation — “exclusive OR” — that mathematical theorists had previously categorized as unsolvable by single-neuron systems.
“I believe that we’re just scratching the surface of what these neurons are really doing,” said Albert Gidon, a postdoctoral fellow at Humboldt University of Berlin and the first author of the paper that presented these findings in Science earlier this month.
The discovery marks a growing need for studies of the nervous system to consider the implications of individual neurons as extensive information processors. “Brains may be far more complicated than we think,” said Konrad Kording, a computational neuroscientist at the University of Pennsylvania, who did not participate in the recent work. It may also prompt some computer scientists to reappraise strategies for artificial neural networks, which have traditionally been built based on a view of neurons as simple, unintelligent switches.
The Limitations of Dumb Neurons
In the 1940s and ’50s, a picture began to dominate neuroscience: that of the “dumb” neuron, a simple integrator, a point in a network that merely summed up its inputs. Branched extensions of the cell, called dendrites, would receive thousands of signals from neighboring neurons — some excitatory, some inhibitory. In the body of the neuron, all those signals would be weighted and tallied, and if the total exceeded some threshold, the neuron fired a series of electrical pulses (action potentials) that directed the stimulation of adjacent neurons.
At around the same time, researchers realized that a single neuron could also function as a logic gate, akin to those in digital circuits (although it still isn’t clear how much the brain really computes this way when processing information). A neuron was effectively an AND gate, for instance, if it fired only after receiving some sufficient number of inputs.
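That logic-gate picture is easy to make concrete. Below is a minimal sketch of such a threshold unit, often called a McCulloch-Pitts neuron; the weights and threshold are illustrative, not taken from any particular study:

```python
def threshold_neuron(inputs, weights, threshold):
    """A 'dumb' point neuron: weight and sum the inputs,
    fire (output 1) only if the total reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the unit fires only when
# both inputs are active: it computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron((a, b), (1, 1), threshold=2))
```

Lowering the threshold to 1 turns the same unit into an OR gate, which is why this simple model made neurons look like interchangeable digital parts.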
Networks of neurons could therefore theoretically perform any computation. Still, this model of the neuron was limited. Not only were its guiding computational metaphors simplistic, but for decades, scientists lacked the experimental tools to record from the various components of a single nerve cell. “That’s essentially the neuron being collapsed into a point in space,” said Bartlett Mel, a computational neuroscientist at the University of Southern California. “It didn’t have any internal articulation of activity.” The model ignored the fact that the thousands of inputs flowing into a given neuron landed in different locations along its various dendrites. It ignored the idea (eventually confirmed) that individual dendrites might function differently from one another. And it ignored the possibility that computations might be performed by other internal structures.
This compartmentalization of signals meant that separate dendrites could be processing information independently of one another. “This was at odds with the point-neuron hypothesis, in which a neuron simply added everything up regardless of location,” Mel said.
That prompted the neuroscientist Christof Koch and others, including Gordon Shepherd at the Yale School of Medicine, to model how the structure of dendrites could in principle allow neurons to act not as simple logic gates, but as complex, multi-unit processing systems. They simulated how dendritic trees could host numerous logic operations through a series of complex hypothetical mechanisms.
Later, Mel and several colleagues looked more closely at how the cell might be managing multiple inputs within its individual dendrites. What they found surprised them: The dendrites generated local spikes, had their own nonlinear input-output curves and had their own activation thresholds, distinct from those of the neuron as a whole. The dendrites themselves could act as AND gates, or as a host of other computing devices.
Mel, along with his former graduate student Yiota Poirazi (now a computational neuroscientist at the Institute of Molecular Biology and Biotechnology in Greece), realized that this meant that they could conceive of a single neuron as a two-layer network. The dendrites would serve as nonlinear computing subunits, collecting inputs and spitting out intermediate outputs. Those signals would then get combined in the cell body, which would determine how the neuron as a whole would respond.
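A caricature of that two-layer view fits in a few lines of Python. Everything below (the sigmoid nonlinearity, the grouping of inputs onto two dendrites, the weights and thresholds) is an illustrative assumption, not the published model:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def two_layer_neuron(inputs, dendrite_groups, soma_weights, soma_threshold):
    """Sketch of the two-layer view: each dendrite applies its own
    nonlinearity to its subset of inputs, then the cell body (soma)
    thresholds the combined dendritic outputs."""
    dendrite_outputs = [sigmoid(sum(inputs[i] for i in group) - 1.0)
                        for group in dendrite_groups]
    drive = sum(w * d for w, d in zip(soma_weights, dendrite_outputs))
    return 1 if drive >= soma_threshold else 0

# Four synaptic inputs land on two dendrites (inputs 0-1 and 2-3).
groups = [(0, 1), (2, 3)]
print(two_layer_neuron((0, 0, 0, 0), groups, (1, 1), 0.9))  # silent
print(two_layer_neuron((1, 1, 1, 1), groups, (1, 1), 0.9))  # fires
```

Structurally this is exactly a small two-layer artificial network, except that all of it lives inside one cell.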
Whether the activity at the dendritic level actually influenced the neuron’s firing and the activity of neighboring neurons was still unclear. But regardless, that local processing might prepare or condition the system to respond differently to future inputs or help wire it in new ways, according to Shepherd.
Whatever the case, “the trend then was, ‘OK, be careful, the neuron might be more powerful than you thought,’” Mel said.
Shepherd agreed. “Much of the power of the processing that takes place in the cortex is actually subthreshold,” he said. “A single-neuron system can be more than just one integrative system. It can be two layers, or even more.” In theory, almost any imaginable computation might be performed by one neuron with enough dendrites, each capable of performing its own nonlinear operation.
In the recent Science paper, the researchers took this idea one step further: They suggested that a single dendritic compartment might be able to perform these complex computations all on its own.
Unexpected Spikes and Old Obstacles
Matthew Larkum, a neuroscientist at Humboldt, and his team started looking at dendrites with a different question in mind. Because dendritic activity had been studied primarily in rodents, the researchers wanted to investigate how electrical signaling might be different in human neurons, which have much longer dendrites. They obtained slices of brain tissue from layers 2 and 3 of the human cortex, which contain particularly large neurons with many dendrites. When they stimulated those dendrites with an electrical current, they noticed something strange.
They saw unexpected, repeated spiking — and those spikes seemed completely unlike other known kinds of neural signaling. They were particularly rapid and brief, like action potentials, and arose from fluxes of calcium ions. This was noteworthy because conventional action potentials are usually caused by sodium and potassium ions. And while calcium-induced signaling had been previously observed in rodent dendrites, those spikes tended to last much longer.
Stranger still, feeding more electrical stimulation into the dendrites lowered the intensity of the neuron’s firing instead of increasing it. “Suddenly, we stimulate more and we get less,” Gidon said. “That caught our eye.”
To figure out what the new kind of spiking might be doing, the scientists teamed up with Poirazi and a researcher in her lab in Greece, Athanasia Papoutsi, who jointly created a model to reflect the neurons’ behavior.
The model found that the dendrite spiked in response to two separate inputs — but failed to do so when those inputs were combined. This was equivalent to a nonlinear computation known as exclusive OR (or XOR), which yields a binary output of 1 if one (but only one) of the inputs is 1.
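A cartoon of that finding makes the XOR behavior concrete. The window-shaped activation below is a deliberate simplification of the graded calcium-spike response (all numbers are illustrative), not the team's biophysical model:

```python
def dendritic_spike(total_drive, low=0.5, high=1.5):
    """Toy dendritic nonlinearity: the compartment spikes once the
    drive is strong enough, but the response is suppressed again
    when the drive grows too large."""
    return 1 if low <= total_drive <= high else 0

# Each active input contributes a drive of 1.  Either input alone
# falls inside the window; both together overshoot it -- XOR from
# a single unit.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", dendritic_spike(a + b))
```

A monotonic threshold unit can never do this, because its output only ever grows with its input; the suppression at high drive is what makes the single compartment XOR-capable.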
This finding immediately struck a chord with the computer science community. XOR functions were for many years deemed impossible in single neurons: In their 1969 book Perceptrons, the computer scientists Marvin Minsky and Seymour Papert offered a proof that single-layer artificial networks could not perform XOR. That conclusion was so devastating that many computer scientists blamed it for the doldrums that neural network research fell into until the 1980s.
Neural network researchers did eventually find ways of dodging the obstacle that Minsky and Papert identified, and neuroscientists found examples of those solutions in nature. For example, Poirazi already knew XOR was possible in a single neuron: Just two dendrites together could achieve it. But in these new experiments, she and her colleagues were offering a plausible biophysical mechanism to facilitate it — in a single dendrite.
“For me, it’s another degree of flexibility that the system has,” Poirazi said. “It just shows you that this system has many different ways of computing.” Still, she points out that if a single neuron could already solve this kind of problem, “why would the system go to all the trouble to come up with more complicated units inside the neuron?”
Processors Within Processors
Certainly not all neurons are like that. According to Gidon, there are plenty of smaller, point-like neurons in other parts of the brain. Presumably, then, this neural complexity exists for a reason. So why do single compartments within a neuron need the capacity to do what the entire neuron, or a small network of neurons, can do just fine? The obvious possibility is that a neuron behaving like a multilayered network has much more processing power and can therefore learn or store more. “Maybe you have a deep network within a single neuron,” Poirazi said. “And that’s much more powerful in terms of learning difficult problems, in terms of cognition.”
Perhaps, Kording added, “a single neuron may be able to compute truly complex functions. For example, it might, by itself, be able to recognize an object.” Having such powerful individual neurons, according to Poirazi, might also help the brain conserve energy.
Larkum’s group plans to search for similar signals in the dendrites of rodents and other animals, to determine whether this computational ability is unique to humans. They also want to move beyond the scope of their model to associate the neural activity they observed with actual behavior. Meanwhile, Poirazi now hopes to compare the computations in these dendrites to what happens in a network of neurons, to suss out any advantages the former might have. This will include testing for other types of logic operations and exploring how those operations might contribute to learning or memory. “Until we map this out, we can’t really tell how powerful this discovery is,” Poirazi said.
Though there’s still much work to be done, the researchers believe these findings mark a need to rethink how they model the brain and its broader functions. Focusing on the connectivity of different neurons and brain regions won’t be enough.
The new results also seem poised to influence questions in the machine learning and artificial intelligence fields. Artificial neural networks rely on the point model, treating neurons as nodes that tally inputs and pass the sum through an activity function. “Very few people have taken seriously the notion that a single neuron could be a complex computational device,” said Gary Marcus, a cognitive scientist at New York University and an outspoken skeptic of some claims made for deep learning.
Although the Science paper is but one finding in an extensive history of work that demonstrates this idea, he added, computer scientists might be more responsive to it because it frames the issue in terms of the XOR problem that dogged neural network research for so long. “It’s saying, we really need to think about this,” Marcus said. “The whole game — to come up with how you get smart cognition out of dumb neurons — might be wrong.”
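The XOR problem Marcus refers to is easy to state: XOR is not linearly separable, so a single point neuron (a weighted sum passed through a threshold) cannot compute it, while a two-layer network of the very same units can. A minimal sketch in Python, with weights chosen by hand purely for illustration (not taken from the paper):

```python
# A "point neuron": weighted sum of inputs plus a bias, passed
# through a step activation (fires 1 if the sum is positive).
def point_neuron(inputs, weights, bias):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# No single point neuron can compute XOR, because no line separates
# {(0,1), (1,0)} from {(0,0), (1,1)} in the input plane.
# But a tiny two-layer network of point neurons can:
def xor_network(x1, x2):
    h_or   = point_neuron((x1, x2), (1, 1), -0.5)     # fires if x1 OR x2
    h_nand = point_neuron((x1, x2), (-1, -1), 1.5)    # fires unless both fire
    return point_neuron((h_or, h_nand), (1, 1), -1.5)  # AND of the two

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```

The hidden layer here is what the dendrite findings suggest a single biological neuron may provide on its own.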
“This is a super clean demonstration of that,” he added. “It’s going to speak above the noise.”
Few creatures have captured the attention of both the general public and scientists as thoroughly as a peculiar-looking salamander known as the axolotl. Native only to Lake Xochimilco, south of Mexico City, axolotls are less and less frequently found in the wild. However, they are relatively abundant in captivity, with pet enthusiasts raising them for their alien features, such as the striking, fringy crown they wear on their heads. Researchers also keep large colonies of axolotls in captivity because of the many unique properties that make them attractive subjects of study.
Perhaps the most notable and potentially useful of these characteristics is the axolotl’s uncanny ability to regenerate. Unlike humans and other animals, axolotls don’t heal large wounds with the fibrous tissue that composes scars. Instead, they simply regrow their injured part.
“It regenerates almost anything after almost any injury that doesn’t kill it,” said Yale researcher Parker Flowers in a statement. This capability is remarkably robust, even for salamanders. Where regular salamanders are known to regrow lost limbs, axolotls have been observed regenerating ovaries, lung tissues, eyes, and even parts of the brain and spinal cord.
Obviously, figuring out how these alien-looking salamanders manage this magic trick is of great interest to researchers. Doing so could reveal a method for providing humans with a similar regenerative capability. But identifying the genes involved in this process has been tricky — the axolotl genome is 10 times larger than a human’s, making it the largest animal genome sequenced to date.
Fortunately, Flowers and colleagues recently discovered a means of more easily navigating this massive genome and, in the process, identified two genes involved in the axolotl’s remarkable regenerative capacity.
A new role for two genes
We’ve understood the basic process of regeneration in axolotls for a while now. After a limb is severed, for instance, blood cells clot at the site, and skin cells start to divide and cover the exposed wound. Then, nearby cells travel to the site and congregate in a blob called the blastema. The blastema then differentiates into the cells needed to rebuild the missing body part and grows outward according to the appropriate limb structure, resulting in a new limb identical to its severed predecessor.
But which genes code for this process, and what mechanisms guide it, is less clear. Building on previous work using CRISPR/Cas9, Flowers and colleagues were able to imprint regenerated cells with a kind of genetic barcode that enabled them to trace the cells back to their governing genes. In this way, they were able to identify and track 25 genes suspected to be involved in the regeneration process. From these 25, they identified two genes related to the axolotls’ tail regeneration: the catalase and fetub genes.
Although the researchers stressed that many more genes likely drive this complicated process, the finding does have important implications for human beings — namely, that humans also possess genes similar to the two identified in this study. But the same gene can do very different work across species, and even within a single animal. The human equivalent gene FETUB, for example, produces proteins that regulate bone resorption, regulate insulin and hepatocyte growth factor receptors, respond to inflammation, and more. In the axolotl, regulating the regenerative process appears to be another of its duties.
Since humans possess the same genes that enable axolotls to regenerate, researchers are optimistic that one day we will be able to speed up wound healing or even completely replicate the axolotl’s incredible ability to regenerate organs and limbs. With continued research such as this, it’s only a matter of time until this strange salamander gives up its secrets.
This theory has far-reaching implications and has changed our perception of space and time. What physicists discovered is that we are all continuously moving not just through three-dimensional space but through four-dimensional spacetime. Every object, in fact, moves through spacetime at the same constant speed: the speed of light. It sounds astounding, but it’s true.
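The “constant speed through spacetime” claim can be made precise with a standard special-relativity identity (this derivation is illustrative and not taken from the article): proper time τ is defined by the Minkowski interval, and the four-velocity built from it always has magnitude c.

```latex
% Proper time from the Minkowski interval:
%   c^2 \, d\tau^2 = c^2 \, dt^2 - dx^2 - dy^2 - dz^2
% Four-velocity: u^\mu = dx^\mu / d\tau
\begin{align*}
u^\mu u_\mu
  &= c^2\!\left(\frac{dt}{d\tau}\right)^{\!2}
   - \left(\frac{dx}{d\tau}\right)^{\!2}
   - \left(\frac{dy}{d\tau}\right)^{\!2}
   - \left(\frac{dz}{d\tau}\right)^{\!2} \\
  &= \frac{c^2\,dt^2 - dx^2 - dy^2 - dz^2}{d\tau^2}
   = \frac{c^2\,d\tau^2}{d\tau^2}
   = c^2 .
\end{align*}
```

An object at rest “spends” all of this speed moving through time; the faster it moves through space, the slower its clock ticks, so the total spacetime speed stays fixed at c.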