If information carries mass, could it be the dark matter physicists are craving?
The existence of dark energy and dark matter was inferred in order to correctly predict the expansion of the universe and the rotational velocities of galaxies. In this view, dark energy would supply the outward push expanding the universe (it accounts for the Hubble constant in the leading theories), while dark matter would supply the additional gravitational pull needed to stabilize galaxies and clusters of galaxies, since there isn’t enough ordinary mass to hold them together. Among other hypotheses, dark energy and dark matter are believed to be related to vacuum fluctuations, and huge efforts have been devoted to detecting them. The fact that no evidence has yet been found calls for a change of perspective, one that could come from information theory.
How could we measure the mass of information? Dr. Melvin Vopson, of the University of Portsmouth, has a hypothesis he calls the mass-energy-information equivalence. It extends the existing information-energy equivalence by proposing that information has mass. Earlier work connecting information to energy includes Shannon’s classical information theory, Wheeler’s application of information ideas to quantum mechanics, and Landauer’s principle, which predicts that erasing one bit of information releases a tiny amount of heat. Therefore, through Einstein’s equivalence between mass and energy, information, once created, has mass. The figure below depicts the extended equivalence principle.
In order to find the mass of digital information, one would start with an empty data storage device, measuring its total mass with a highly sensitive device. Once the information is recorded in the device, its mass is measured again. The next step is to erase one file and measure again. The limiting step is the fact that such an ultra-sensitive device doesn’t exist yet. In his paper published in the journal AIP Advances, Vopson proposes that this device could be in the form of an interferometer similar to LIGO, or a weighing machine like a Kibble balance. In the same paper, Vopson describes the mathematical basis for the mechanism and physics by which information acquires mass, and formulates this powerful principle, proposing a possible experiment to test it.
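For a feel for the numbers, here is a back-of-the-envelope sketch in Python of the Landauer-plus-Einstein chain described above; the 300 K storage temperature and the fully loaded 1-terabyte drive are illustrative assumptions, not figures taken from Vopson’s paper:

```python
# Mass of one bit via Landauer's principle plus E = mc^2.
# The 300 K storage temperature is an illustrative assumption.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
C = 299792458.0         # speed of light, m/s
T = 300.0               # assumed storage temperature, K

energy_per_bit = K_B * T * math.log(2)   # Landauer limit, ~2.9e-21 J
mass_per_bit = energy_per_bit / C**2     # ~3.2e-38 kg

# a hypothetical fully loaded 1-terabyte drive (8e12 bits of data)
mass_of_data = 8e12 * mass_per_bit       # ~2.6e-25 kg
```

The answer, around 10⁻³⁸ kg per bit, makes clear why no existing balance can register the change and why an instrument at the edge of current sensitivity would be needed.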
In regard to dark matter, Vopson says that his estimate of the ‘information bit content’ of the universe is very close to the number of bits of information that the visible universe would need to contain to make up all the missing dark matter, as estimated by M.P. Gough and published in 2008.
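As a rough illustration of the kind of estimate involved (not Gough’s or Vopson’s actual calculation), one can ask how many bits at the Landauer mass per bit would add up to the missing dark matter. The CMB temperature and the dark matter mass below are loose, order-of-magnitude assumptions:

```python
# Order-of-magnitude only: how many Landauer-mass bits would account for
# the dark matter? Both inputs below are loose assumptions.
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
C = 299792458.0        # speed of light, m/s
T_CMB = 2.73           # assumed temperature: the cosmic microwave background, K
M_DARK = 1e52          # assumed dark matter mass of the observable universe, kg

mass_per_bit = K_B * T_CMB * math.log(2) / C**2   # ~2.9e-40 kg
bits_needed = M_DARK / mass_per_bit               # ~1e91-1e92 bits
```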
Vopson is applying for a grant in order to design and build the measurement device and perform the experiments. We are so looking forward to his results!
RSF in perspective
Both dark matter and dark energy have been inferred as a consequence of neglecting spin in the structure of space-time. In the frame of the Generalized Holographic approach, spin is the natural source of centrifugal and centripetal force that emerges from the gradient density across scales, just as a hurricane emerges due to pressure and temperature gradients. The vacuum energy of empty space – the classical or cosmological vacuum – has been estimated to be about 10⁻⁹ joules per cubic meter. However, the vacuum energy density at the quantum scale is about 10¹¹³ joules per cubic meter. This discrepancy of 122 orders of magnitude between the vacuum densities at micro and cosmological scales is known as the vacuum catastrophe. This extremely large density gradient in the Planck field originates spin at all scales.
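The famous 122 is simply the logarithm of the ratio of the two densities quoted above:

```python
# The "vacuum catastrophe" gap, as an order-of-magnitude computation.
import math

rho_cosmological = 1e-9    # vacuum energy density at cosmic scale, J/m^3
rho_quantum = 1e113        # vacuum energy density at the quantum scale, J/m^3

orders_of_magnitude = math.log10(rho_quantum / rho_cosmological)  # 122.0
```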
Additionally, the holographic model explains mass as an emergent property of an information transfer potential between the information-energy stored in a confined volume and the information-energy in the surface or boundary of that volume, with respect to the size or volume of a bit of information. Each bit of information-energy voxelating the surface and volume is spinning at an extremely fast speed. Space is composed of these voxels, named Planck Spherical Units (PSU), each of which is a quantum of action. The expressed or unfolded portion of the whole information is what we call mass. For more details on how the holographic approach explains dark matter and dark energy, please see our RSF article on the Vacuum Catastrophe (https://resonance.is/the-vacuum-catastrophe/).
Light propagates through the atomic cloud shown in the center and then falls onto the SiN membrane shown on the left. As a result of the interaction with light, the precession of the atomic spins and the vibration of the membrane become quantum correlated. This is the essence of entanglement between the atoms and the membrane. Credit: Niels Bohr Institute
A team of researchers from the University of Copenhagen’s Niels Bohr Institute has successfully entangled two very distinct quantum particles. The findings, which were reported in Nature Physics, have various possible applications in ultra-precise sensing and quantum communication.
Quantum communication and quantum sensing are both based on entanglement. It’s a quantum link between two items that allows them to act as if they’re one quantum object.
The researchers were able to create entanglement between a mechanical oscillator—a vibrating dielectric membrane—and a cloud of atoms, each atom acting as a tiny magnet, or “spin,” as physicists call it. By joining these disparate entities with photons, or light particles, they were able to entangle the two. The membrane—or mechanical quantum systems in general—can be used to process quantum information, while the atomic spins—or atomic systems in general—can be used to store it.
Professor Eugene Polzik, the project’s leader, says: “We’re on our way to pushing the boundaries of entanglement’s capabilities with this new technique. The larger the objects, the further away they are, and the more different they are, the more intriguing entanglement becomes from both a basic and an applied standpoint. Entanglement between highly diverse things is now conceivable thanks to the new result.”
To picture entanglement in this experiment, imagine the position of the vibrating membrane and the tilt of the total spin of all the atoms, the latter similar to a spinning top. The two objects are correlated when, even though each moves randomly, they are observed travelling right or left at the same moment. Such correlated motion is normally limited by the so-called zero-point motion—the residual, uncorrelated motion of all matter that persists even at absolute zero temperature—which in turn limits how precisely we can know either system.
Eugene Polzik’s team entangled the systems in their experiment, which means they moved in a correlated way with more precision than zero-point motion allows. “Quantum mechanics is a double-edged sword—it gives us amazing new technology, but it also restricts the precision of measurements that would appear simple from a classical standpoint,” explains Michał Parniak, a team member. Even if they are separated by a large distance, entangled systems can maintain perfect correlation, a fact that has perplexed academics since quantum physics’ inception more than a century ago.
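As a loose classical analogy (this is not entanglement itself, which beats even the quantum limit), two noisy systems sharing a common motion illustrate how a joint variable can be far quieter than either system alone:

```python
# Classical analogy only: two systems share a common random motion plus their
# own independent noise. Each looks noisy alone, but their difference is quiet.
import random
import statistics

rng = random.Random(0)
n = 10_000
common = [rng.gauss(0, 1.0) for _ in range(n)]     # shared motion
a = [c + rng.gauss(0, 0.3) for c in common]        # system A: shared + own noise
b = [c + rng.gauss(0, 0.3) for c in common]        # system B: shared + own noise
diff = [x - y for x, y in zip(a, b)]

var_single = statistics.pvariance(a)    # ~1.1: each system alone is noisy
var_joint = statistics.pvariance(diff)  # ~0.2: the shared part cancels
```

In the real experiment, the entangled membrane-plus-spins system goes further: the correlations are tighter than zero-point motion itself permits for any classical pair.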
Christoffer Østfeldt, a Ph.D. student, elaborates: “Consider the many methods for manifesting quantum states as a zoo of diverse realities or circumstances, each with its own set of features and potentials. If, for example, we want to construct a gadget that can take advantage of the many attributes they all have and perform different functions and accomplish different tasks, we’ll need to invent a language that they can all understand. For us to fully utilise the device’s capabilities, the quantum states must be able to communicate. This entanglement of two zoo elements has demonstrated what we are presently capable of.”
Quantum sensing is one application where entangling very different quantum objects pays off. Different objects are sensitive to different external forces: mechanical oscillators, for example, are employed in accelerometers and force sensors, while atomic spins are used in magnetometers. When only one of the two entangled objects is subject to an external perturbation, entanglement permits that perturbation to be measured with a sensitivity not restricted by the object’s zero-point fluctuations.
The approach has the potential to be applied to sensing with both small and large oscillators in the near future. The first detection of gravitational waves, performed by the Laser Interferometer Gravitational-wave Observatory (LIGO), was one of the most significant scientific breakthroughs of recent years. LIGO detects and monitors extremely faint waves produced by deep-space astronomical events such as black hole mergers and neutron star mergers. The waves can be seen because they shake the interferometer’s mirrors. However, quantum physics limits LIGO’s sensitivity, since the laser interferometer’s mirrors are likewise disturbed by zero-point fluctuations. These fluctuations produce noise that obscures the tiny movements of the mirrors induced by gravitational waves.
It is theoretically possible to entangle the LIGO mirrors with an atomic cloud and so cancel the reflectors’ zero-point noise in the same manner that the membrane noise is cancelled in the current experiment. Due to their entanglement, the mirrors and atomic spins have a perfect correlation that can be used in such sensors to almost eliminate uncertainty. It’s as simple as taking data from one system and applying what you’ve learned to the other. In this method, one may simultaneously learn about the position and momentum of LIGO’s mirrors, entering a so-called quantum-mechanics-free subspace and moving closer to unlimited precision in motion measurements. A model experiment demonstrating this principle is on the way at Eugene Polzik’s laboratory.
Two new studies examine ways we could engineer human wormhole travel.
What if we could cut paths through the vastness of space to make a network of tunnels linking distant stars, somewhat like subway stations here on Earth? The tunnels are what physicists call wormholes, strange funnel-like folds in the very fabric of spacetime that would be—if they exist—major shortcuts for interstellar travel. You can visualize it in two dimensions like this: Take a piece of paper and bend it in the middle so that it makes a U shape. If an imaginary flat little bug wants to go from one side to the other, it needs to slide along the paper. Or, if there were a bridge between the two sides of the paper, the bug could go straight between them, a much shorter path. Since we live in three dimensions, the entrances to the wormholes would be more like spheres than holes, connected by a four-dimensional “tube.” It’s much easier to write the equations than to visualize this! Amazingly, because the theory of general relativity links space and time into a four-dimensional spacetime, wormholes could, in principle, connect distant points in space, or in time, or both.
The idea of wormholes is not new. Its origins reach back to 1935 (and even earlier), when Albert Einstein and Nathan Rosen published a paper constructing what became known as an Einstein-Rosen bridge. (The name ‘wormhole’ came up later, in a 1957 paper by Charles Misner and John Wheeler, Wheeler also being the one who coined the term ‘black hole.’) Basically, an Einstein-Rosen bridge is a connection between two distant points of the universe or possibly even different universes through a tunnel that goes into a black hole. Exciting as the possibility is, the throats of such bridges are notoriously unstable and any object with mass that ventures through it would cause it to collapse upon itself almost immediately, closing the connection. To force the wormholes to stay open, one would need to add a kind of exotic matter that has both negative energy density and pressure—not something that is known in the universe. (Interestingly, negative pressure is not as crazy as it seems; dark energy, the fuel that is currently accelerating the cosmic expansion, does it exactly because it has negative pressure. But negative energy density is a whole other story.)
If wormholes exist, if they have wide mouths, and if they can be kept open (three big but not impossible ifs) then it’s conceivable that we could travel through them to faraway spots in the universe. Arthur C. Clarke used them in “2001: A Space Odyssey”, where the alien intelligences had constructed a network of intersecting tunnels they used as we use the subway. Carl Sagan used them in “Contact” so that humans could confirm the existence of intelligent ETs. “Interstellar” uses them so that we can try to find another home for our species.
Two recent papers try to get around some of these issues. Jose Luis Blázquez-Salcedo, Christian Knoll, and Eugen Radu use normal matter with electric charge to stabilize the wormhole, but the resulting throat is still submicroscopic in width, so not useful for human travel. It is also hard to justify net electric charges in black hole solutions, as they tend to get neutralized by surrounding matter, similar to how we get shocked with static electricity in dry weather. Juan Maldacena and Alexey Milekhin’s paper is titled ‘Humanly Traversable Wormholes’, raising the stakes right off the bat. However, they admit up front that “in this paper, we revisit the question [of humanly traversable wormholes] and we engage in some ‘science fiction.’” The first ingredient is the existence of some kind of matter (the “dark sector”) that interacts with normal matter (stars, us, frogs) only through gravity. Another point is that to support the passage of human-size travelers, the model needs to exist in five dimensions, that is, with one extra space dimension. When all is set up, the wormhole connects two black holes with a magnetic field running through it. The whole thing needs to spin to stay stable, and it must be kept completely isolated from particles that might fall in and compromise its design. Oh yes, and it requires an extremely low temperature as well, ideally absolute zero, an unattainable limit in practice.
Maldacena and Milekhin’s paper is an amazing tour through the power of speculative theoretical physics. They are the first to admit that the object they construct is very implausible, and they have no idea how it could form in nature. In their defense, pushing the limits (or going beyond the limits) of understanding is what we need to expand the frontiers of knowledge. For those who dream of humanly traversable wormholes, let’s hope that more realistic solutions will become viable in the future, even if not the near future. Or maybe the aliens that have built them will tell us how.
Quantum physics isn’t quite magic, but it requires an entirely novel set of rules to make sense of the quantum universe.
The most powerful idea in all of science is this: The universe, for all its complexity, can be reduced to its simplest, most fundamental components. If you can determine the underlying rules, laws, and theories that govern your reality, then as long as you can specify what your system is like at any moment in time, you can use your understanding of those laws to predict what things will be like both in the far future as well as the distant past. The quest to unlock the secrets of the universe is fundamentally about rising to this challenge: figuring out what makes up the universe, determining how those entities interact and evolve, and then writing down and solving the equations that allow you to predict outcomes that you have not yet measured for yourself.
In this regard, the universe makes a tremendous amount of sense, at least in concept. But when we start talking about what, precisely, it is that composes the universe, and how the laws of nature actually work in practice, a lot of people bristle when faced with this counterintuitive picture of reality: quantum mechanics. That’s the subject of this week’s Ask Ethan, where Rajasekaran Rajagopalan writes in to inquire:
“Can you please provide a very detailed article on quantum mechanics, which even a… student can understand?”
Let’s assume you’ve heard about quantum physics before, but don’t quite know what it is just yet. Here’s a way that everyone can — at least, to the limits that anyone can — make sense of our quantum reality.
Before there was quantum mechanics, we had a series of assumptions about the way the universe worked. We assumed that everything that exists was made out of matter, and that at some point, you’d reach a fundamental building block of matter that could be divided no further. In fact, the very word “atom” comes from the Greek ἄτομος, which literally means “uncuttable,” or as we commonly think about it, indivisible. These uncuttable, fundamental constituents of matter all exerted forces on one another, like the gravitational or electromagnetic force, and the confluence of these indivisible particles pushing and pulling on one another is what was at the core of our physical reality.
The laws of gravitation and electromagnetism, however, are completely deterministic. If you describe a system of masses and/or electric charges, and specify their positions and motions at any moment in time, those laws will allow you to calculate — to arbitrary precision — what the positions, motions, and distributions of each and every particle was and will be at any other moment in time. From planetary motion to bouncing balls to the settling of dust grains, the same rules, laws, and fundamental constituents of the universe accurately described it all.
Until, that is, we discovered that there was more to the universe than these classical laws.
1.) You can’t know everything, exactly, all at once. If there’s one defining characteristic that separates the rules of quantum physics from their classical counterparts, it’s this: you cannot measure certain quantities to arbitrary precisions, and the better you measure them, the more inherently uncertain other, corresponding properties become.
Measure a particle’s position to a very high precision, and its momentum becomes less well-known.
Measure the angular momentum (or spin) of a particle in one direction, and you destroy information about its angular momentum (or spin) in the other two directions.
Measure the lifetime of an unstable particle, and the less time it lives for, the more inherently uncertain the particle’s rest mass will be.
These are just a few examples of the weirdness of quantum physics, but they’re sufficient to illustrate the impossibility of knowing everything you can imagine knowing about a system all at once. Nature fundamentally limits what’s simultaneously knowable about any physical system, and the more precisely you try and pin down any one of a large set of properties, the more inherently uncertain a set of related properties becomes.
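To put rough numbers on the tradeoff, here is a minimal sketch of the position-momentum uncertainty relation, Δx · Δp ≥ ħ/2; the one-angstrom confinement is just an illustrative choice:

```python
# The position-momentum tradeoff in numbers: pin a particle's position down,
# and quantum mechanics forces a minimum uncertainty on its momentum.
HBAR = 1.054571817e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31    # electron mass, kg

def min_momentum_uncertainty(delta_x):
    """Smallest momentum uncertainty compatible with position uncertainty delta_x."""
    return HBAR / (2 * delta_x)

# confine an electron to 1 angstrom (1e-10 m, roughly the size of an atom)...
dp = min_momentum_uncertainty(1e-10)   # ~5.3e-25 kg*m/s
# ...and its velocity becomes uncertain by hundreds of kilometers per second
dv = dp / M_ELECTRON                   # ~5.8e5 m/s
```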
2.) Only a probability distribution of outcomes can be calculated: not an explicit, unambiguous, single prediction. Not only is it impossible to know all of the properties, simultaneously, that define a physical system, but the laws of quantum mechanics themselves are fundamentally indeterminate. In the classical universe, if you throw a pebble through a narrow slit in a wall, you can predict where and when it will hit the ground on the other side. But in the quantum universe, if you do the same experiment but use a quantum particle instead — whether a photon, an electron, or something even more complicated — you can only describe the possible set of outcomes that will occur.
Quantum physics allows you to predict what the relative probabilities of each of those outcomes will be, and it allows you to do it for as complicated a quantum system as your computational power can handle. Still, the notion that you can set up your system at one point in time, know everything that’s possible to know about it, and then predict precisely how that system will have evolved at some arbitrary point in the future is no longer true in quantum mechanics. You can describe what the likelihood of all the possible outcomes will be, but for any single particle in particular, there’s only one way to determine its properties at a specific moment in time: by measuring them.
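A minimal sketch of this indeterminism is the Born rule: the theory hands you amplitudes, and the only prediction is the distribution of outcomes, |amplitude|², over many repeated runs. The example state below is an arbitrary illustrative choice:

```python
# Born rule sketch: outcomes are sampled with probability |amplitude|^2.
# Each individual shot is a definite result; only the tally is predictable.
import random
from collections import Counter

def measure(amplitudes, shots, seed=42):
    """Sample measurement outcomes with probability |amplitude|^2."""
    rng = random.Random(seed)
    probs = [abs(a) ** 2 for a in amplitudes]
    return Counter(rng.choices(range(len(amplitudes)), weights=probs, k=shots))

# a qubit prepared with amplitudes (sqrt(3)/2, 1/2): P(0) = 0.75, P(1) = 0.25
counts = measure([3 ** 0.5 / 2, 0.5], shots=10_000)
# counts[0] lands near 7500 and counts[1] near 2500, but no single shot is predictable
```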
3.) Many things, in quantum mechanics, will be discrete, rather than continuous. This gets to what many consider the heart of quantum mechanics: the “quantum” part of things. If you ask the question “how much” in quantum physics, you’ll find that there are only certain quantities that are allowed.
Particles can only come in certain electric charges: in increments of one-third the charge of an electron.
Particles that bind together form bound states — like atoms — and atoms can only have explicit sets of energy levels.
Light is made up of individual particles, photons, and each photon only has a specific, finite amount of energy inherent to it.
In all of these cases, there’s some fundamental value associated with the lowest (non-zero) state, and then all other states can only exist as some sort of integer (or fractional integer) multiple of that lowest-valued state. From the excited states of atomic nuclei to the energies released when electrons fall into their “hole” in LED devices to the transitions that govern atomic clocks, some aspects of reality are truly granular, and cannot be described by continuous changes from one state to another.
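Hydrogen makes the granularity concrete: its allowed energies follow Eₙ = −13.6 eV / n², nothing in between is permitted, and transitions between levels produce sharp spectral lines. A quick sketch:

```python
# Quantization in numbers: hydrogen's discrete energy levels and the photon
# emitted when the electron drops from one allowed level to another.
RYDBERG_EV = 13.605693   # hydrogen ionization energy, eV

def energy_level(n):
    """Energy of hydrogen's nth allowed level, in eV (n = 1, 2, 3, ...)."""
    return -RYDBERG_EV / n ** 2

# the n = 3 -> n = 2 jump emits the famous red Balmer-alpha line
photon_ev = energy_level(3) - energy_level(2)   # ~1.89 eV
wavelength_nm = 1239.84 / photon_ev             # ~656 nm, in the red
```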
4.) Quantum systems exhibit both wave-like and particle-like behaviors. And which one you get — get this — depends on if or how you measure the system. The most famous example of this is the double slit experiment: passing a single quantum particle, one-at-a-time, through a set of two closely-spaced slits. Now, here’s where things get weird.
If you don’t measure which particle goes through which slit, the pattern you’ll observe on the screen behind the slits will show interference, where each particle appears to be interfering with itself along the journey. The pattern revealed by many such particles shows interference, a purely quantum phenomenon.
If you do measure which slit each particle goes through — particle 1 goes through slit 2, particle 2 goes through slit 2, particle 3 goes through slit 1, etc. — there is no interference pattern anymore. In fact, you simply get two “lumps” of particles, one each corresponding to the particles that went through each of the slits.
It’s almost as if everything exhibits wave-like behavior, with its probability spreading out over space and through time, unless an interaction forces it to be particle-like. But depending on which experiment you perform and how you perform it, quantum systems exhibit properties that are both wave-like and particle-like.
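The wave-like half of the story can be put in numbers with the standard two-slit interference formula; the wavelength and slit geometry below are illustrative choices, and the single-slit envelope is ignored for simplicity:

```python
# Two-slit interference: bright where the path difference is a whole number of
# wavelengths, dark where it is a half wavelength (far-field, small angles).
import math

def two_slit_intensity(x, wavelength, slit_sep, screen_dist):
    """Relative intensity at position x on the screen (between 0 and 1)."""
    path_diff = slit_sep * x / screen_dist
    return math.cos(math.pi * path_diff / wavelength) ** 2

lam, d, L = 500e-9, 10e-6, 1.0    # green light, 10 micron slit spacing, 1 m screen
bright = two_slit_intensity(0.0, lam, d, L)               # central fringe: 1.0
dark = two_slit_intensity(lam * L / (2 * d), lam, d, L)   # first dark fringe: ~0
```

Measuring which slit each particle took erases this pattern, leaving just the two classical “lumps.”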
5.) The act of measuring a quantum system fundamentally changes the outcome of that system. According to the rules of quantum mechanics, a quantum object is allowed to exist in multiple states all at once. If you have an electron passing through a double slit, part of that electron must be passing through both slits, simultaneously, in order to produce the interference pattern. If you have an electron in a conduction band in a solid, its energy levels are quantized, but its possible positions are continuous. Same story, believe it or not, for an electron in an atom: we can know its energy level, but asking “where is the electron” is something we can only answer probabilistically.
So you get an idea. You say, “okay, I’m going to cause a quantum interaction somehow, either by colliding it with another quantum or passing it through a magnetic field or something like that,” and now you have a measurement. You know where the electron is at the moment of that collision, but here’s the kicker: by making that measurement, you have now changed the outcome of your system. You’ve pinned down the object’s position, you’ve added energy to it, and that causes a change in momentum. Measurements don’t just “determine” a quantum state, but create an irreversible change in the quantum state of the system itself.
6.) Entanglement can be measured, but superpositions cannot. Here’s a puzzling feature of the quantum universe: you can have a system that’s simultaneously in more than one state at once. Schrodinger’s cat can be alive and dead at once; two water waves colliding at your location can cause you to either rise or fall; a quantum bit of information isn’t just a 0 or a 1, but rather can be some percentage “0” and some percentage “1” at the same time. However, there’s no way to measure a superposition; when you make a measurement, you only get one state out per measurement. Open the box: the cat is dead. Observe the object in the water: it will rise or fall. Measure your quantum bit: get a 0 or a 1, never both.
But whereas superposition is different effects or particles or quantum states all superimposed atop one another, entanglement is different: it’s a correlation between two or more different parts of the same system. Entanglement can extend to regions both within and outside of one another’s light-cones, and basically states that properties are correlated between two distinct particles. If I have two entangled photons, and I wanted to guess the “spin” of each one, I’d have 50/50 odds. But if I measured the spin of one, I would know the other’s spin to more like 75/25 odds: much better than 50/50. There isn’t any information getting exchanged faster than light, but beating 50/50 odds in a set of measurements is a surefire way to show that quantum entanglement is real and affects the information content of the universe.
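Here is a toy sketch of the same-axis correlation described above. To be clear about the hedge: this is not a Bell test. Real experiments compare measurements along different axes, which is where the better-than-50/50 statistics come from and which a classical model like this one cannot reproduce:

```python
# Toy model: pairs prepared so the two spins always come out opposite when
# measured along the same axis. Each side alone is a fair coin; knowing one
# outcome instantly tells you the other.
import random

rng = random.Random(7)

def entangled_pair():
    a = rng.choice([+1, -1])   # either outcome is equally likely on its own...
    return a, -a               # ...but the partner is always the opposite

pairs = [entangled_pair() for _ in range(1000)]
anticorrelated = sum(1 for a, b in pairs if a == -b)   # all 1000 pairs
ups_on_left = sum(1 for a, _ in pairs if a == +1)      # ~500: random marginal
```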
7.) There are many ways to “interpret” quantum physics, but our interpretations are not reality. This is, at least in my opinion, the trickiest part of the whole endeavor. It’s one thing to be able to write down equations that describe the universe and agree with experiments. It’s quite another thing to accurately describe just exactly what’s happening in a measurement-independent way.
I would argue that this is a fool’s errand. Physics is, at its core, about what you can predict, observe, and measure in this universe. Yet when you make a measurement, what is it that’s occurring? And what does that mean about reality? Is reality:
a series of quantum wavefunctions that instantaneously “collapse” upon making a measurement?
an infinite ensemble of quantum waves, where measurement “selects” one of those ensemble members?
a superposition of forwards-moving and backwards-moving potentials that meet up, now, in some sort of “quantum handshake?”
an infinite number of possible worlds, where each world corresponds to one outcome, and yet our universe will only ever walk down one of those paths?
If you believe this line of thought is useful, you’ll answer, “who knows; let’s try to find out.” But if you’re like me, you’ll think this line of thought offers no knowledge and is a dead end. Unless you can find an experimental benefit of one interpretation over another — unless you can test them against each other in some sort of laboratory setting — all you’re doing in choosing an interpretation is presenting your own human biases. If it isn’t the evidence doing the deciding, it’s very hard to argue that there’s any scientific merit to your endeavor at all.
If you were to only teach someone the classical laws of physics that we thought governed the universe as recently as the 19th century, they would be utterly astounded by the implications of quantum mechanics. There is no such thing as a “true reality” that’s independent of the observer; in fact, the very act of making a measurement alters your system irrevocably. Additionally, nature itself is inherently uncertain, with quantum fluctuations being responsible for everything from the radioactive decay of atoms to the initial seeds of structure that allow the universe to grow up and form stars, galaxies, and eventually, human beings.
The quantum nature of the universe is written on the face of every object that now exists within it. And yet, it teaches us a humbling point of view: that unless we make a measurement that reveals or determines a specific quantum property of our reality, that property will remain indeterminate until such a time arises. If you take a course on quantum mechanics at the college level, you’ll likely learn how to calculate probability distributions of possible outcomes, but it’s only by making a measurement that you determine which specific outcome occurs in your reality. As unintuitive as quantum mechanics is, experiment after experiment continues to prove it correct. While many still dream of a completely predictable universe, quantum mechanics, not our ideological preferences, most accurately describes the reality we all inhabit.
It appears a quantum computer rivalry is growing between the U.S. and China.
Physicists in China claim they’ve constructed two quantum computers with performance speeds that outrival competitors in the U.S., debuting a superconducting machine, in addition to an even speedier one that uses light photons to obtain unprecedented results, according to a recent study published in the peer-reviewed journals Physical Review Letters and Science Bulletin.
China has exaggerated the capabilities of its technology before, but such soft spins are usually tagged to defense tech, which means this new feat could be the real deal.
China’s quantum computers still make a lot of errors
The photonic machine, called Jiuzhang 2, can calculate in a single millisecond a task that the fastest conventional computer in the world would take a mind-numbing 30 trillion years to complete. The breakthrough was revealed during an interview with the research team broadcast on China’s state-owned CCTV on Tuesday, which could make the news suspect; but with two peer-reviewed papers, it’s important to take this seriously. Pan Jianwei, lead researcher of the studies, said that Zuchongzhi 2, a 66-qubit programmable superconducting quantum computer, is an incredible 10 million times faster than Google’s 53-qubit Sycamore, making China’s new machine the fastest in the world and the first to beat Google’s since Sycamore debuted two years ago.
The Zuchongzhi 2 is an improved version of a previous machine completed three months ago. The Jiuzhang 2, a different quantum computer that runs on light, has fewer applications but can run at blinding speeds 100 sextillion times faster than the biggest conventional computers of today. In case you missed it, that’s a one with 23 zeroes behind it. But while the features of these new machines hint at a computing revolution, they won’t hit the marketplace anytime soon. As things stand, the two machines can only operate in pristine environments, and only for hyper-specific tasks. And even with special care, they still make lots of errors. “In the next step we hope to achieve quantum error correction with four to five years of hard work,” said Professor Pan of the University of Science and Technology of China, in Hefei, in the eastern province of Anhui.
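The headline numbers are easy to sanity-check: a single millisecond against 30 trillion years works out to a ratio of roughly 10²⁴, the same ballpark as the “100 sextillion” figure quoted for Jiuzhang 2:

```python
# Sanity-checking the claimed speedup: "a single millisecond" versus
# "30 trillion years" on a classical machine.
SECONDS_PER_YEAR = 365.25 * 24 * 3600     # ~3.16e7 seconds

classical_s = 30e12 * SECONDS_PER_YEAR    # ~9.5e20 seconds
quantum_s = 1e-3                          # one millisecond
speedup = classical_s / quantum_s         # ~9.5e23
```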
China’s quantum computers could power the next-gen advances of the coming decades
“Based on the technology of quantum error correction, we can explore the use of some dedicated quantum computers or quantum simulators to solve some of the most important scientific questions with practical value,” added Pan. The circuits of the Zuchongzhi have to be cooled to very low temperatures to enable optimal performance on a complex task called a random walk, a model that corresponds to the tactical movements of pieces on a chessboard.
The applications for this task include calculating gene mutations, predicting stock prices, air flows in hypersonic flight, and the formation of novel materials. Considering the rapidly increasing relevance of these processes as the fourth industrial revolution picks up speed, it’s no exaggeration to say that quantum computers will be central in key societal functions, from defense research to scientific advances to the next generation of economics.
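The “random walk” task can be sketched in miniature. A discrete-time quantum walk on a line with a Hadamard coin (a standard textbook construction, not the circuit Zuchongzhi 2 actually runs) spreads ballistically, in proportion to the number of steps, where a classical coin-flip walker spreads only as the square root:

```python
# Discrete-time quantum walk on a line with a Hadamard coin. The walker's
# amplitudes interfere, so its spread grows linearly in the number of steps,
# versus sqrt(steps) for a classical random walker.
import math
from collections import defaultdict

S = 1 / math.sqrt(2)

def hadamard_walk(steps):
    """Return the position probability distribution after `steps` steps."""
    amp = defaultdict(complex)       # (position, coin) -> amplitude
    amp[(0, 0)] = S
    amp[(0, 1)] = 1j * S             # this coin state keeps the walk symmetric
    for _ in range(steps):
        new = defaultdict(complex)
        for (x, c), a in amp.items():
            # Hadamard coin flip, then step left (coin 0) or right (coin 1)
            new[(x - 1, 0)] += S * a
            new[(x + 1, 1)] += S * a if c == 0 else -S * a
        amp = new
    prob = defaultdict(float)
    for (x, _), a in amp.items():
        prob[x] += abs(a) ** 2
    return prob

def spread(prob):
    """Standard deviation of a position distribution."""
    mean = sum(x * p for x, p in prob.items())
    return math.sqrt(sum((x - mean) ** 2 * p for x, p in prob.items()))

steps = 50
quantum_spread = spread(hadamard_walk(steps))   # grows ~linearly with steps
classical_spread = math.sqrt(steps)             # ~7.1 for a coin-flip walker
```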
Scientists often refer to the neutrino as the “ghost particle.” “Neutrinos were one of the most abundant particles at the origin of the universe and remain so today. Fusion reactions in the sun produce vast armies of them, which pour down on the Earth every day. Trillions pass through our bodies every second, then fly through the Earth as though it were not there. Although it was first postulated almost a century ago and first detected 65 years ago, neutrinos remain shrouded in mystery because of their reluctance to interact with matter,” said Alessandro Lovato, a nuclear physicist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory. Lovato is a member of a research team from four national laboratories that has built a model to address one of the many mysteries about neutrinos: how they interact with atomic nuclei, complicated systems made of protons and neutrons (“nucleons”) bound together by the strong force. This knowledge is essential to unraveling an even bigger mystery: why, during their journey through space or matter, neutrinos magically transform from one into another of three possible types, or “flavors.” To study these oscillations, two sets of experiments have been carried out at Fermi National Accelerator Laboratory (MiniBooNE and NOvA). In these experiments, scientists generate an intense stream of neutrinos in a particle accelerator, then send them to particle detectors over a long period of time (MiniBooNE) or five hundred miles from the source (NOvA). Knowing the original distribution of neutrino flavors, the experimentalists collect data on the interactions of the neutrinos with the atomic nuclei in the detectors.
From that information, they can calculate any changes in neutrino flavors over time or distance. In the case of the MiniBooNE and NOvA detectors, the nuclei are of the isotope carbon-12, which has six protons and six neutrons. “Our team came into the picture because these experiments require a very accurate model of the interactions of neutrinos with the detector nuclei over a large energy range,” said Noemi Rocco, a postdoc in Argonne’s Physics division and at Fermilab. Given the elusiveness of neutrinos, achieving a comprehensive description of these reactions is a formidable challenge. The team’s nuclear physics model of neutrino interactions with a single nucleon and with a pair of them is the most accurate so far. “Ours is the first approach to model these interactions at such a microscopic level,” said Rocco. “Earlier approaches were not so fine-grained.” One of the team’s important findings, based on calculations carried out on the now-retired Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), was that the nucleon-pair interaction is crucial to modeling neutrino interactions with nuclei accurately. The ALCF is a DOE Office of Science user facility. “The larger the nuclei in the detector, the greater the likelihood the neutrinos will interact with them,” said Lovato. “In the future, we plan to extend our model to data from larger nuclei, namely those of oxygen and argon, in support of experiments planned in Japan and the U.S.” Rocco added, “For those calculations, we will rely on even more powerful ALCF computers: the existing Theta system and the upcoming exascale machine, Aurora.” The scientists hope that, eventually, a complete picture of flavor oscillations will emerge for both neutrinos and their antiparticles, called antineutrinos.
That knowledge may shed light on why the universe is built from matter instead of antimatter — one of the fundamental questions about the universe.
Though progress is being made, our brains remain organs of many mysteries. Among these are the exact workings of neurons, with some 86 billion of them in the human brain. Neurons are interconnected in complicated, labyrinthine networks across which they exchange information in the form of electrical signals. We know that signals exit an individual neuron through a fiber called an axon, and also that signals are received by each neuron through input fibers called dendrites.
Understanding the electrical capabilities of dendrites in particular — which, after all, may be receiving signals from countless other neurons at any given moment — is fundamental to deciphering neurons’ communication. It may surprise you to learn, though, that much of everything we assume about human neurons is based on observations made of rodent dendrites — there’s just not a lot of fresh, still-functional human brain tissue available for thorough examination.
For a new study published January 3 in the journal Science, however, scientists got a rare chance to explore some neurons from the outer layer of human brains, and they discovered startling dendrite behaviors that may be unique to humans, and may even help explain how our billions of neurons process the massive amount of information they exchange.
A puzzle, solved?
Electrical signals weaken with distance, and that poses a riddle to those seeking to understand the human brain: Human dendrites are known to be about twice as long as rodent dendrites, which means that a signal traversing a human dendrite could be much weaker on arriving at its destination than one traveling a rodent’s much shorter dendrite. Says paper co-author Matthew Larkum, a biologist at Humboldt University in Berlin, speaking to LiveScience: “If there was no change in the electrical properties between rodents and people, then that would mean that, in the humans, the same synaptic inputs would be quite a bit less powerful.” Chalk up another strike against the value of animal-based human research. The only way this would not be true is if the signals being exchanged in our brains are not the same as those in a rodent. This is exactly what the study’s authors found.
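The intuition about distance can be made concrete with passive cable theory, in which a signal decays roughly exponentially along a dendrite. The length constant below is an illustrative placeholder, not a measured human value:

```python
import math

def attenuation(distance_um, length_constant_um=300.0):
    """Passive cable-theory attenuation: a voltage signal decays
    roughly as exp(-distance / lambda) along a dendrite.
    The default length constant is illustrative only."""
    return math.exp(-distance_um / length_constant_um)

# A signal traveling twice as far arrives disproportionately weaker:
print(round(attenuation(300), 3))   # 0.368
print(round(attenuation(600), 3))   # 0.135
```

Doubling the distance does not halve the signal; it squares the attenuation factor, which is why dendrite length matters so much.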
The researchers worked with brain tissue sliced for therapeutic reasons from the brains of tumor and epilepsy patients. Neurons were resected from the disproportionately thick layers 2 and 3 of the cerebral cortex, a feature special to humans. In these layers reside incredibly dense neuronal networks.
Without blood-borne oxygen, though, such cells last only for about two days, so Larkum’s lab had no choice but to work around the clock during that period to get the most information from the samples. “You get the tissue very infrequently, so you’ve just got to work with what’s in front of you,” says Larkum. The team made holes in dendrites into which they could insert glass pipettes. Through these, they sent ions to stimulate the dendrites, allowing the scientists to observe their electrical behavior.
In rodents, two types of electrical spikes have been observed in dendrites: a short, one-millisecond spike with the introduction of sodium, and spikes that last 50 to 100 times longer in response to calcium.
In the human dendrites, one type of behavior was observed: super-short spikes occurring in rapid succession, one after the other. This suggests to the researchers that human neurons are “distinctly more excitable” than rodent neurons, allowing signals to successfully traverse our longer dendrites.
In addition, the human neuronal spikes — though they behaved somewhat like rodent spikes prompted by the introduction of sodium — were found to be generated by calcium, essentially the opposite of what is seen in rodents.
An even bigger surprise
The study also reports a second major finding. Looking to better understand how the brain utilizes these spikes, the team programmed computer models based on their findings. (The brain slices they’d examined could not, of course, be put back together and switched on somehow.)
The scientists constructed virtual neuronal networks, each of whose neurons could be stimulated at thousands of points along its dendrites, to see how each handled so many input signals. Previous, non-human research has suggested that neurons add these inputs together, holding onto them until the number of excitatory input signals exceeds the number of inhibitory signals, at which point the neuron fires the sum of them from its axon out into the network.
However, this isn’t what Larkum’s team observed in their model. Neurons’ output was inverse to their inputs: The more excitatory signals they received, the less likely they were to fire off. Each had a seeming “sweet spot” when it came to input strength.
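One way to picture the contrast is a toy response curve: a classic threshold unit fires ever more readily as drive increases, while a “sweet spot” unit peaks at a preferred input strength and then falls off. All of the numbers below are illustrative, not fitted to the study’s data:

```python
import math

def point_neuron(total_input, threshold=1.0):
    """Classic point-neuron: fire once summed input crosses a threshold."""
    return 1.0 if total_input >= threshold else 0.0

def sweet_spot_neuron(total_input, preferred=1.0, width=0.3):
    """Toy version of the observed behavior: the response peaks near a
    preferred input strength and falls off for stronger drive."""
    return math.exp(-((total_input - preferred) / width) ** 2)

for drive in (0.5, 1.0, 2.0):
    print(drive, point_neuron(drive), round(sweet_spot_neuron(drive), 3))
```

In the threshold model, more excitation never hurts; in the sweet-spot model, overdriving the unit silences it, which is the counterintuitive pattern the team saw.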
What the researchers believe is going on is that dendrites and neurons may be smarter than previously suspected, processing input information as it arrives. Mayank Mehta of UC Los Angeles, who’s not involved in the research, tells LiveScience, “It doesn’t look that the cell is just adding things up — it’s also throwing things away.” This could mean each neuron is assessing the value of each signal to the network and discarding “noise.” It may also be that different neurons are optimized for different signals and thus tasks.
Much in the way that octopuses distribute decision-making across a decentralized nervous system, the implication of the new research is that, at least in humans, it’s not just the neuronal network that’s smart, it’s all of the individual neurons it contains. This would constitute exactly the kind of computational super-charging one would hope to find somewhere in the amazing human brain.
In fascinating new research, cosmologists explain the history of the universe as one of autodidactic, or self-teaching, algorithms.
The scientists, including physicists from Brown University and the Flatiron Institute, say the universe has probed all the possible physical laws before landing on the ones we observe around us today. Could this wild idea help inform scientific research to come?
In their novella-length paper, published to the pre-print server arXiv, the researchers—who received “computational, logistical, and other general support” from Microsoft—offer ideas “at the intersection of theoretical physics, computer science, and philosophy of science with a discussion from all three perspectives,” they write, teasing the bigness and multidisciplinary nature of the research.
Here’s how it works: Our universe observes a whole bunch of laws of physics, but the researchers say other possible laws of physics seem equally likely, given the way mathematics works in the universe. So if a group of candidate laws were equally likely, then how did we end up with the laws we really have?
The scientists explain:
“The notion of ‘learning’ as we use it is more than moment-to-moment, brute adaptation. It is a cumulative process that can be thought of as theorizing, modeling, and predicting. For instance, the DNA/RNA/protein system on Earth must have arisen from an adaptive process, and yet it foresees a space of organisms much larger than could be called upon in any given moment of adaptation.”
We can analogize to the research of Charles Darwin, who studied all the different ways animals specialized in order to thrive in different environments. For example, why do we have one monolithic body of laws of physics, rather than, say, a bunch of specialized kinds of finches? This is an old question that dates back to at least 1893, when a philosopher first posited “natural selection,” but for the laws of the universe.
In the paper, the scientists define a slew of terms including how they’re defining “learning” in the context of the universe around us. The universe is made of systems that each have processes to fulfill every day, they say.
Each system is surrounded by an environment made up of other systems. Imagine standing in a crowd of people (remember that?), where your immediate environment is just made of other people. Each of their environments is made of, well, you and other stuff.
Evolution is already a kind of learning, so when we suggest the universe has used natural selection as part of the realization of physics, we’re invoking that specific kind of learning. (Does something have to have consciousness in order to learn? You need to carefully define learning in order to make that the case. Organisms and systems constantly show learning outcomes, like more success or a higher rate of reproduction.)
The researchers explain this distinction well:
“In one sense, learning is nothing special; it is a causal process, conveyed by physical interactions. And yet we need to consider learning as special to explain events that transpire because of learning.”
Consider the expression “You never learn,” which suggests that outcomes for a specific person and activity are still bad. We’re using that outcome to say learning hasn’t happened. What if the person is trying to change their outcomes and just isn’t succeeding? We’re gauging learning based on visible outcomes only.
If you’re interested in the nitty gritty, the full, 79-page study defines a ton of fascinating terms and introduces some wild and wonderful arguments using them. The scientists’ goal is to kick off a whole new arm of cosmological research into the idea of a learning universe.
In upcoming research, scientists will attempt to show the universe has consciousness. Yes, really. No matter the outcome, we’ll soon learn more about what it means to be conscious—and which objects around us might have a mind of their own.
What will that mean for how we treat objects and the world around us? Buckle in, because things are about to get weird.
What Is Consciousness?
The basic definition of consciousness intentionally leaves a lot of questions unanswered. It’s “the normal mental condition of the waking state of humans, characterized by the experience of perceptions, thoughts, feelings, awareness of the external world, and often in humans (but not necessarily in other animals) self-awareness,” according to the Oxford Dictionary of Psychology.
Scientists simply don’t have one unified theory of what consciousness is. We also don’t know where it comes from, or what it’s made of.
However, one loophole of this knowledge gap is that we can’t exhaustively say other organisms, and even inanimate objects, don’t have consciousness. Humans relate to animals and can imagine, say, dogs and cats have some amount of consciousness because we see their facial expressions and how they appear to make decisions. But just because we don’t “relate to” rocks, the ocean, or the night sky, that isn’t the same as proving those things don’t have consciousness.
This is where a philosophical stance called panpsychism comes into play, writes All About Space’s David Crookes:
“This claims consciousness is inherent in even the tiniest pieces of matter — an idea that suggests the fundamental building blocks of reality have conscious experience. Crucially, it implies consciousness could be found throughout the universe.”
It’s also where physics enters the picture. Some scientists have posited that the thing we think of as consciousness is made of micro-scale quantum physics events and other “spooky actions at a distance,” somehow fluttering inside our brains and generating conscious thoughts.
The Free Will Conundrum
One of the leading minds in physics, 2020 Nobel laureate and black hole pioneer Roger Penrose, has written extensively about quantum mechanics as a suspected vehicle of consciousness. In 1989, he wrote a book called The Emperor’s New Mind, in which he claimed “that human consciousness is non-algorithmic and a product of quantum effects.”
Let’s quickly break down that statement. What does it mean for human consciousness to be “algorithmic”? Well, an algorithm is simply a series of predictable steps to reach an outcome, and in the study of philosophy, this idea plays a big part in questions about free will versus determinism.
Are our brains simply cranking out math-like processes that can be telescoped in advance? Or is something wild happening that allows us true free will, meaning the ability to make meaningfully different decisions that affect our lives?
Within philosophy itself, the study of free will dates back at least centuries. But the overlap with physics is much newer. And what Penrose claimed in The Emperor’s New Mind is that consciousness isn’t strictly causal because, on the tiniest level, it’s a product of unpredictable quantum phenomena that don’t conform to classical physics.
So, where does all that background information leave us? If you’re scratching your head or having some uncomfortable thoughts, you’re not alone. But these questions are essential to people who study philosophy and science, because the answers could change how we understand the entire universe around us. Whether or not humans do or don’t have free will has huge moral implications, for example. How do you punish criminals who could never have done differently?
Consciousness Is Everywhere
In physics, scientists could learn key things from a study of consciousness as a quantum effect. This is where we rejoin today’s researchers: Johannes Kleiner, mathematician and theoretical physicist at the Munich Center For Mathematical Philosophy, and Sean Tull, mathematician at the University of Oxford.
Kleiner and Tull are following Penrose’s example, in both his 1989 book and a 2014 paper where he detailed his belief that our brains’ microprocesses can be used to model things about the whole universe. The theory they are formalizing, integrated information theory (IIT), was originally proposed by the neuroscientist Giulio Tononi; it’s an abstract, “highly mathematical” form of the philosophy we’ve been reviewing.
In IIT, consciousness is everywhere, but it accumulates in places where it’s needed to help glue together different related systems. This means the human body is jam-packed with a ton of systems that must interrelate, so there’s a lot of consciousness (or phi, as the quantity is known in IIT) that can be calculated. Think about all the parts of the brain that work together to, for example, form a picture and sense memory of an apple in your mind’s eye.
The revolutionary thing in IIT isn’t related to the human brain—it’s that consciousness isn’t biological at all, but rather is simply this value, phi, that can be calculated if you know a lot about the complexity of what you’re studying.
If your brain has almost countless interrelated systems, then the entire universe must have virtually infinite ones. And if that’s where consciousness accumulates, then the universe must have a lot of phi.
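Computing real phi is notoriously involved, but the core intuition (a whole that carries information its parts alone do not) can be sketched with plain mutual information between two halves of a toy system. This is a crude stand-in, not the actual IIT algorithm:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information (in bits) between two halves of a tiny system,
    estimated from samples. A crude stand-in for IIT's phi: it only
    captures the intuition that 'integration' means the joint state
    carries information the parts alone do not."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

# Two perfectly correlated binary parts: maximally "integrated" (1 bit)
print(mutual_information([(0, 0), (1, 1)] * 50))                  # 1.0
# Independent parts: no integration (0 bits)
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0
```

Real phi goes much further, searching over all partitions of a system for the one that loses the least, but the flavor is the same: integration is what the whole knows beyond its parts.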
Hey, we told you this was going to get weird.
“The theory consists of a very complicated algorithm that, when applied to a detailed mathematical description of a physical system, provides information about whether the system is conscious or not, and what it is conscious of,” Kleiner told All About Space. “If there is an isolated pair of particles floating around somewhere in space, they will have some rudimentary form of consciousness if they interact in the correct way.”
Kleiner and Tull are working on turning IIT into this complex mathematical algorithm—setting down the standard that can then be used to examine how conscious things operate.
Think about the classic philosophical comment, “I think, therefore I am,” then imagine two geniuses turning that into a workable formula where you substitute in a hundred different number values and end up with your specific “I am” answer.
The next step is to actually crunch the numbers, and then to grapple with the moral implications of a hypothetically conscious universe. It’s an exciting time to be a philosopher—or a philosopher’s calculator.
The dendritic arms of some human neurons can perform logic operations that once seemed to require whole neural networks.
The information-processing capabilities of the brain are often reported to reside in the trillions of connections that wire its neurons together. But over the past few decades, mounting research has quietly shifted some of the attention to individual neurons, which seem to shoulder much more computational responsibility than once seemed imaginable.
The latest in a long line of evidence comes from scientists’ discovery of a new type of electrical signal in the upper layers of the human cortex. Laboratory and modeling studies have already shown that tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. But now it seems that individual dendritic compartments can also perform a particular computation — “exclusive OR” — that mathematical theorists had previously categorized as unsolvable by single-neuron systems.
“I believe that we’re just scratching the surface of what these neurons are really doing,” said Albert Gidon, a postdoctoral fellow at Humboldt University of Berlin and the first author of the paper that presented these findings in Science earlier this month.
The discovery marks a growing need for studies of the nervous system to consider the implications of individual neurons as extensive information processors. “Brains may be far more complicated than we think,” said Konrad Kording, a computational neuroscientist at the University of Pennsylvania, who did not participate in the recent work. It may also prompt some computer scientists to reappraise strategies for artificial neural networks, which have traditionally been built based on a view of neurons as simple, unintelligent switches.
The Limitations of Dumb Neurons
In the 1940s and ’50s, a picture began to dominate neuroscience: that of the “dumb” neuron, a simple integrator, a point in a network that merely summed up its inputs. Branched extensions of the cell, called dendrites, would receive thousands of signals from neighboring neurons — some excitatory, some inhibitory. In the body of the neuron, all those signals would be weighted and tallied, and if the total exceeded some threshold, the neuron fired a series of electrical pulses (action potentials) that directed the stimulation of adjacent neurons.
At around the same time, researchers realized that a single neuron could also function as a logic gate, akin to those in digital circuits (although it still isn’t clear how much the brain really computes this way when processing information). A neuron was effectively an AND gate, for instance, if it fired only after receiving some sufficient number of inputs.
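That classic point-neuron model is simple enough to fit in a few lines; with equal weights and a threshold of 2, the very same unit behaves as an AND gate (a minimal sketch, with illustrative weights):

```python
def point_neuron(inputs, weights, threshold):
    """The classic 'dumb' point-neuron: weight the incoming signals,
    sum them, and fire (output 1) only if the total crosses a threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With equal weights and a threshold of 2, the unit acts as an AND gate:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", point_neuron([a, b], [1, 1], threshold=2))
# Only the input (1, 1) produces output 1
```

Lowering the threshold to 1 turns the same unit into an OR gate, which is what makes the logic-gate analogy so natural.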
Networks of neurons could therefore theoretically perform any computation. Still, this model of the neuron was limited. Not only were its guiding computational metaphors simplistic, but for decades, scientists lacked the experimental tools to record from the various components of a single nerve cell. “That’s essentially the neuron being collapsed into a point in space,” said Bartlett Mel, a computational neuroscientist at the University of Southern California. “It didn’t have any internal articulation of activity.” The model ignored the fact that the thousands of inputs flowing into a given neuron landed in different locations along its various dendrites. It ignored the idea (eventually confirmed) that individual dendrites might function differently from one another. And it ignored the possibility that computations might be performed by other internal structures.
Experiments eventually showed that electrical signals can stay compartmentalized within a cell: separate dendrites can process information independently of one another. “This was at odds with the point-neuron hypothesis, in which a neuron simply added everything up regardless of location,” Mel said.
That prompted Christof Koch and other neuroscientists, including Gordon Shepherd at the Yale School of Medicine, to model how the structure of dendrites could in principle allow neurons to act not as simple logic gates, but as complex, multi-unit processing systems. They simulated how dendritic trees could host numerous logic operations, through a series of complex hypothetical mechanisms.
Later, Mel and several colleagues looked more closely at how the cell might be managing multiple inputs within its individual dendrites. What they found surprised them: The dendrites generated local spikes, had their own nonlinear input-output curves and had their own activation thresholds, distinct from those of the neuron as a whole. The dendrites themselves could act as AND gates, or as a host of other computing devices.
Mel, along with his former graduate student Yiota Poirazi (now a computational neuroscientist at the Institute of Molecular Biology and Biotechnology in Greece), realized that this meant that they could conceive of a single neuron as a two-layer network. The dendrites would serve as nonlinear computing subunits, collecting inputs and spitting out intermediate outputs. Those signals would then get combined in the cell body, which would determine how the neuron as a whole would respond.
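A minimal sketch of that two-layer picture, with illustrative (not fitted) weights and thresholds: each dendrite applies its own nonlinearity to its inputs, and the soma combines the subunit outputs and applies a final threshold.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def two_layer_neuron(dendrite_inputs, dendrite_weights,
                     soma_weights, soma_threshold):
    """Sketch of the neuron-as-two-layer-network idea: each dendrite
    is a nonlinear subunit; the soma sums the subunit outputs and
    applies its own threshold. All parameters are illustrative."""
    subunit_outputs = [
        sigmoid(sum(w * x for w, x in zip(ws, xs)))
        for xs, ws in zip(dendrite_inputs, dendrite_weights)
    ]
    total = sum(w * o for w, o in zip(soma_weights, subunit_outputs))
    return 1 if total >= soma_threshold else 0

# Two dendrites with two inputs each; only strong drive to a subunit
# pushes the soma over its threshold:
print(two_layer_neuron([[1, 1], [0, 0]], [[2, 2], [2, 2]], [1, 1], 1.2))  # 1
print(two_layer_neuron([[0, 0], [0, 0]], [[2, 2], [2, 2]], [1, 1], 1.2))  # 0
```

The key design point is that the nonlinearity sits inside each dendrite, before the soma ever sees the signal, exactly what the point model leaves out.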
Whether the activity at the dendritic level actually influenced the neuron’s firing and the activity of neighboring neurons was still unclear. But regardless, that local processing might prepare or condition the system to respond differently to future inputs or help wire it in new ways, according to Shepherd.
Whatever the case, “the trend then was, ‘OK, be careful, the neuron might be more powerful than you thought,’” Mel said.
Shepherd agreed. “Much of the power of the processing that takes place in the cortex is actually subthreshold,” he said. “A single-neuron system can be more than just one integrative system. It can be two layers, or even more.” In theory, almost any imaginable computation might be performed by one neuron with enough dendrites, each capable of performing its own nonlinear operation.
In the recent Science paper, the researchers took this idea one step further: They suggested that a single dendritic compartment might be able to perform these complex computations all on its own.
Unexpected Spikes and Old Obstacles
Matthew Larkum, a neuroscientist at Humboldt, and his team started looking at dendrites with a different question in mind. Because dendritic activity had been studied primarily in rodents, the researchers wanted to investigate how electrical signaling might be different in human neurons, which have much longer dendrites. They obtained slices of brain tissue from layers 2 and 3 of the human cortex, which contain particularly large neurons with many dendrites. When they stimulated those dendrites with an electrical current, they noticed something strange.
They saw unexpected, repeated spiking — and those spikes seemed completely unlike other known kinds of neural signaling. They were particularly rapid and brief, like action potentials, and arose from fluxes of calcium ions. This was noteworthy because conventional action potentials are usually caused by sodium and potassium ions. And while calcium-induced signaling had been previously observed in rodent dendrites, those spikes tended to last much longer.
Stranger still, feeding more electrical stimulation into the dendrites lowered the intensity of the neuron’s firing instead of increasing it. “Suddenly, we stimulate more and we get less,” Gidon said. “That caught our eye.”
To figure out what the new kind of spiking might be doing, the scientists teamed up with Poirazi and a researcher in her lab in Greece, Athanasia Papoutsi, who jointly created a model to reflect the neurons’ behavior.
The model found that the dendrite spiked in response to two separate inputs — but failed to do so when those inputs were combined. This was equivalent to a nonlinear computation known as exclusive OR (or XOR), which yields a binary output of 1 if one (but only one) of the inputs is 1.
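The observed behavior (spiking to either input alone, but not to both together) amounts to a window on total input strength, and a window is enough to implement XOR. A toy model, with illustrative thresholds:

```python
def dendritic_xor(a, b, low=0.5, high=1.5):
    """Toy model of the graded dendritic spike: the compartment fires
    when the drive is strong enough, but the spike is suppressed when
    the drive gets too strong. With unit inputs, this window on
    a + b implements XOR. Thresholds are illustrative."""
    drive = a + b
    return 1 if low <= drive <= high else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", dendritic_xor(a, b))
# 0 0 -> 0,  1 0 -> 1,  0 1 -> 1,  1 1 -> 0
```

No monotonic threshold unit can produce this table, which is exactly why XOR was thought to require more than one layer.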
This finding immediately struck a chord with the computer science community. XOR functions were for many years deemed impossible in single neurons: In their 1969 book Perceptrons, the computer scientists Marvin Minsky and Seymour Papert offered a proof that single-layer artificial networks could not perform XOR. That conclusion was so devastating that many computer scientists blamed it for the doldrums that neural network research fell into until the 1980s.
Neural network researchers did eventually find ways of dodging the obstacle that Minsky and Papert identified, and neuroscientists found examples of those solutions in nature. For example, Poirazi already knew XOR was possible in a single neuron: Just two dendrites together could achieve it. But in these new experiments, she and her colleagues were offering a plausible biophysical mechanism to facilitate it — in a single dendrite.
“For me, it’s another degree of flexibility that the system has,” Poirazi said. “It just shows you that this system has many different ways of computing.” Still, she points out that if a single neuron could already solve this kind of problem, “why would the system go to all the trouble to come up with more complicated units inside the neuron?”
Processors Within Processors
Certainly not all neurons are like that. According to Gidon, there are plenty of smaller, point-like neurons in other parts of the brain. Presumably, then, this neural complexity exists for a reason. So why do single compartments within a neuron need the capacity to do what the entire neuron, or a small network of neurons, can do just fine? The obvious possibility is that a neuron behaving like a multilayered network has much more processing power and can therefore learn or store more. “Maybe you have a deep network within a single neuron,” Poirazi said. “And that’s much more powerful in terms of learning difficult problems, in terms of cognition.”
Perhaps, Kording added, “a single neuron may be able to compute truly complex functions. For example, it might, by itself, be able to recognize an object.” Having such powerful individual neurons, according to Poirazi, might also help the brain conserve energy.
Larkum’s group plans to search for similar signals in the dendrites of rodents and other animals, to determine whether this computational ability is unique to humans. They also want to move beyond the scope of their model to associate the neural activity they observed with actual behavior. Meanwhile, Poirazi now hopes to compare the computations in these dendrites to what happens in a network of neurons, to suss out any advantages the former might have. This will include testing for other types of logic operations and exploring how those operations might contribute to learning or memory. “Until we map this out, we can’t really tell how powerful this discovery is,” Poirazi said.
Though there’s still much work to be done, the researchers believe these findings mark a need to rethink how they model the brain and its broader functions. Focusing on the connectivity of different neurons and brain regions won’t be enough.
The new results also seem poised to influence questions in the machine learning and artificial intelligence fields. Artificial neural networks rely on the point model, treating neurons as nodes that tally inputs and pass the sum through an activation function. “Very few people have taken seriously the notion that a single neuron could be a complex computational device,” said Gary Marcus, a cognitive scientist at New York University and an outspoken skeptic of some claims made for deep learning.
Although the Science paper is but one finding in an extensive history of work that demonstrates this idea, he added, computer scientists might be more responsive to it because it frames the issue in terms of the XOR problem that dogged neural network research for so long. “It’s saying, we really need to think about this,” Marcus said. “The whole game — to come up with how you get smart cognition out of dumb neurons — might be wrong.”
“This is a super clean demonstration of that,” he added. “It’s going to speak above the noise.”