This Simple Experiment Could Challenge Standard Quantum Theory

A deceptively simple experiment that involves making precise measurements of the time it takes for a particle to go from point A to point B could spark a breakthrough in quantum physics. The findings could focus attention on an alternative to standard quantum theory called Bohmian mechanics, which posits an underworld of unseen waves that guide particles from place to place.

A new study, by a team at the Ludwig Maximilian University of Munich (LMU) in Germany, makes precise predictions for such an experiment using Bohmian mechanics, a theory formulated by theoretical physicist David Bohm in the 1950s and augmented by modern-day theorists. Standard quantum theory fails in this regard, and physicists have to resort to assumptions and approximations to calculate particle transit times.

“If people knew that a theory that they love so much—standard quantum mechanics—cannot make [precise] predictions in such a simple case, that should at least make them wonder,” says theorist and LMU team member Serj Aristarhov.

It is no secret that the quantum world is weird. Consider a setup that fires electrons at a screen. You cannot predict exactly where any given electron will land to form, say, a fluorescent dot. But you can predict with precision the spatial distribution, or pattern, of dots that takes shape over time as the electrons land one by one. Some locations will have more electrons; others will have fewer. But this weirdness hides something even stranger. All else being equal, each electron will reach the detector at a slightly different time, its so-called arrival time. Just like the positions, the arrival times will have a distribution: some arrival times will be more common, and others will be less so.

But textbook quantum physics has no mechanism for precisely predicting this temporal distribution. “Normal quantum theory is only concerned with ‘where’; they ignore the ‘when,’” says team member and theorist Siddhant Das. “That’s one way to diagnose that there’s something fishy.”

There is a deep reason for this curious shortcoming. In standard quantum theory, a physical property that can be measured is called an “observable.” The position of a particle, for example, is an observable. Each and every observable is associated with a corresponding mathematical entity called an “operator.” But the standard theory has no such operator for observing time. In 1933 Austrian theoretical physicist Wolfgang Pauli showed that quantum theory could not accommodate a time operator, at least not in the standard way of thinking about it. “We conclude therefore that the introduction of a time operator … must be abandoned fundamentally,” he wrote.
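
Pauli’s objection can be stated in a few lines. The sketch below uses standard textbook notation that does not appear in the article itself: it assumes a self-adjoint time operator T canonically conjugate to the Hamiltonian H and derives a contradiction.

```latex
% Sketch of Pauli's argument (standard notation; an illustrative
% reconstruction, not a quotation from Pauli's 1933 text).
% Suppose a self-adjoint operator T satisfies the canonical relation
\begin{align}
  [T, H] = i\hbar
  \quad\Longrightarrow\quad
  e^{i\varepsilon T/\hbar}\, H\, e^{-i\varepsilon T/\hbar} = H - \varepsilon
  \quad \text{for every real } \varepsilon .
\end{align}
% The unitary e^{-i\varepsilon T/\hbar} would then shift the energy
% spectrum by an arbitrary amount, so the spectrum of H would have to be
% the entire real line. Physical Hamiltonians are bounded from below,
% so no such self-adjoint T can exist.
```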


But measuring particle arrival times, or their “time of flight,” is an important aspect of experimental physics. Such measurements are made, for example, with detectors at the Large Hadron Collider and with instruments called mass spectrometers, which use this information to calculate the masses and momenta of particles, ions and molecules.

Even though such calculations concern quantum systems, physicists cannot use unadulterated quantum mechanics all the way through. “You would have no way to come up with [an unambiguous] prediction,” Das says.

Instead they resort to assumptions to arrive at answers. For example, in one method, experimenters assume that once the particle leaves its source, it behaves classically, meaning it follows Newton’s equations of motion.

This results in a hybrid approach—one that is part quantum, part classical. It starts with the quantum perspective, where each particle is represented by a mathematical abstraction called a wave function. Identically prepared particles will have identical wave functions when they are released from their source. But measuring the momentum of each particle (or, for that matter, its position) at the instant of release will yield different values each time. Taken together, these values follow a distribution that is precisely predicted by the initial wave function. Starting from this ensemble of values for identically prepared particles, and assuming that a particle follows a classical trajectory once it is emitted, the result is a distribution of arrival times at the detector that depends on the initial momentum distribution.
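
The hybrid recipe can be sketched numerically. The following is a minimal illustration, with made-up toy parameters, natural units in which Planck’s constant and the particle mass equal 1, and a Gaussian initial momentum distribution; it samples momenta quantum-mechanically and then propagates each particle classically to a detector at distance L.

```python
import numpy as np

rng = np.random.default_rng(0)

L_det = 100.0            # detector distance from the source (toy value)
p0, sigma_p = 5.0, 0.5   # mean and spread of the initial momentum distribution

# Step 1 (quantum): sample momenta from the distribution |phi(p)|^2
# predicted by the initial wave function (a Gaussian in this sketch).
p = rng.normal(p0, sigma_p, size=100_000)
p = p[p > 0]             # keep particles heading toward the detector

# Step 2 (classical): each particle then follows Newton's equations,
# so for force-free motion its arrival time is simply t = m * L / p.
t_arrival = L_det / p    # m = 1 in these units

print(f"mean arrival time: {t_arrival.mean():.2f}")
```

With these toy numbers the mean arrival time comes out close to L/p0 = 20, and the spread of the arrival-time histogram directly reflects the initial momentum spread.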

Standard theory is also often used for another quantum mechanical method for calculating arrival times. As a particle flies toward a detector, its wave function evolves according to the Schrödinger equation, which describes a particle’s changing state over time. Consider the one-dimensional case of a detector that is a certain horizontal distance from an emission source. The Schrödinger equation determines the wave function of the particle and hence the probability of detecting that particle at that location, assuming that the particle crosses the location only once (there is, of course, no clear way to substantiate this assumption in standard quantum mechanics). Using such assumptions, physicists can calculate the probability that the particle will arrive at the detector at a given time (t) or earlier.

“From the perspective of standard quantum mechanics, it sounds perfectly fine,” Aristarhov says. “And you expect to have a nice answer from that.”

There is a hitch, however. To go from the probability that the arrival time is less than or equal to t to the probability that it is exactly t involves calculating a quantity that physicists call the quantum flux, or quantum probability current: a measure of how the probability of finding the particle at the detector location changes with time. This works well in most cases, because it is hard to find wave functions for which the quantity becomes appreciably negative. But nothing “prohibits this quantity from being negative,” Aristarhov says. “And this is a disaster.” A negative quantum flux leads to negative probabilities, and probabilities can never be less than zero.
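
The flux recipe can also be sketched numerically. In this illustrative snippet (again with assumed toy parameters and natural units, hbar = m = 1), a free Gaussian wave packet evolves under the Schrödinger equation, and the arrival-time density at a far-field detector is read off as the probability current J(L, t); integrating that current over time should recover a total probability of about 1.

```python
import numpy as np

p0, sigma_p = 5.0, 0.5   # mean momentum and momentum spread (toy values)
L = 100.0                # detector position, far from the source at x = 0

# Momentum-space Gaussian amplitude, normalized so |phi|^2 integrates to 1.
p = np.linspace(p0 - 8 * sigma_p, p0 + 8 * sigma_p, 4001)
dp = p[1] - p[0]
phi = (2 * np.pi * sigma_p**2) ** -0.25 * np.exp(-((p - p0) ** 2) / (4 * sigma_p**2))

def psi(x, t):
    # Free Schroedinger evolution: each momentum component acquires
    # the phase e^{-i p^2 t / 2}.
    return (phi * np.exp(1j * (p * x - 0.5 * p**2 * t))).sum() * dp / np.sqrt(2 * np.pi)

def flux(x, t, dx=1e-4):
    # Probability current J = Im(psi* d_x psi), derivative by central differences.
    w = psi(x, t)
    dw = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return (np.conj(w) * dw).imag

# The arrival-time density is Pi(t) = J(L, t); integrate it over a window
# around the expected arrival time t ~ L / p0 = 20.
ts = np.linspace(12.0, 28.0, 400)
J = np.array([flux(L, t) for t in ts])
total = J.sum() * (ts[1] - ts[0])
print(f"flux stays non-negative in the far field: {bool((J >= 0).all())}")
print(f"integrated arrival probability: {total:.3f}")
```

Because this packet is entirely right-moving and the detector sits far away, the current stays non-negative here; the negative-flux trouble arises only for less benign wave functions and near-field setups.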

Using the Schrödinger evolution to calculate the distribution of arrival times works only when the quantum flux is positive, a condition that is guaranteed only when the detector is in the “far field,” at a considerable distance from the source, and the particle is moving freely in the absence of potentials. When experimentalists measure such far-field arrival times, both the hybrid and quantum flux approaches make similar predictions that tally well with experimental findings. But neither makes clear predictions for “near field” cases, in which the detector is very close to the source.


Dissatisfied with this flawed status quo, in 2018 Das and Aristarhov, along with their then Ph.D. adviser Detlef Dürr, an expert on Bohmian mechanics at LMU who died earlier this year, and their colleagues, began working on Bohmian-based predictions of arrival times. Bohm’s theory holds that each particle is guided by its wave function. Unlike standard quantum mechanics, in which a particle is considered to have no precise position or momentum prior to a measurement—and hence no trajectory—particles in Bohmian mechanics are real and have squiggly trajectories described by precise equations of motion (albeit ones that differ from Newton’s equations of motion).

Among the researchers’ first findings was that far-field measurements would fail to distinguish between the predictions of Bohmian mechanics and those of the hybrid or quantum flux approaches. This is because, over large distances, Bohmian trajectories become straight lines, so the hybrid semi-classical approximation holds. Also, for straight far-field trajectories, the quantum flux is always positive, and its value is predicted exactly by Bohmian mechanics. “If you put a detector far enough [away], and you do Bohmian analysis, you see that it coincides with the hybrid approach and the quantum flux approach,” Aristarhov says.
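
The guiding idea can be made concrete with a small numerical sketch (toy parameters, natural units with hbar = m = 1): a single Bohmian trajectory is integrated for a free Gaussian wave packet using the guiding equation, in which the particle’s velocity is the probability current divided by the probability density. At late times the trajectory straightens into a nearly constant-velocity line, as described above.

```python
import numpy as np

p0, sigma_p = 5.0, 0.5   # mean momentum and momentum spread (toy values)
p = np.linspace(p0 - 8 * sigma_p, p0 + 8 * sigma_p, 2001)
dp = p[1] - p[0]
phi = (2 * np.pi * sigma_p**2) ** -0.25 * np.exp(-((p - p0) ** 2) / (4 * sigma_p**2))

def psi(x, t):
    # Free evolution: each momentum component picks up the phase e^{-i p^2 t / 2}.
    return (phi * np.exp(1j * (p * x - 0.5 * p**2 * t))).sum() * dp / np.sqrt(2 * np.pi)

def velocity(x, t, dx=1e-4):
    # Bohmian guiding equation: v = Im(psi* d_x psi) / |psi|^2.
    w = psi(x, t)
    dw = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return (np.conj(w) * dw).imag / abs(w) ** 2

# Integrate dx/dt = v(x, t) with a simple Euler scheme from x(0) = 0.5.
x, dt = 0.5, 0.01
xs = []
for step in range(2001):  # t runs from 0 to 20
    xs.append(x)
    x += velocity(x, step * dt) * dt

# Far from the source the trajectory straightens out: the average velocity
# over t in [10, 15] nearly equals that over t in [15, 20].
v_a = (xs[1500] - xs[1000]) / 5.0
v_b = (xs[2000] - xs[1500]) / 5.0
print(f"late-time velocities: {v_a:.3f} vs {v_b:.3f}")
```

Repeating this for an ensemble of starting points drawn from the initial probability density reproduces the standard quantum position statistics at all later times, which is how Bohmian mechanics matches conventional predictions.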

The key, then, is to do near-field measurements, but those have been considered impossible. “The near-field regime is very volatile. It’s very sensitive to the initial wave function shape you have created,” Das says. Also, “if you come very close to the region of initial preparation, the particle will just be detected instantaneously. You cannot resolve [the arrival times] and see the differences between this prediction and that prediction.”

To avoid this problem, Das and Dürr proposed an experimental setup that would allow particles to be detected far away from the source while still generating unique results that could distinguish the predictions of Bohmian mechanics from those of the more standard methods.

Conceptually, the team’s proposed setup is rather simple. Imagine a waveguide, a cylindrical pathway that confines the motion of a particle (an optical fiber is such a waveguide for photons of light, for example). On one end of the waveguide, prepare a particle, ideally an electron or some other particle of matter, in its lowest-energy, or ground, state and trap it in a bowl-shaped electric potential well. This well is actually the composite of two adjacent potential barriers that collectively create the parabolic shape. If one of the barriers is switched off, the particle remains blocked on one side by the barrier left in place but is free to escape from the well into the waveguide on the other.

Das pursued the painstaking task of fleshing out the experiment’s parameters, performing calculations and simulations to determine the theoretical distribution of arrival times at a detector placed far away from a source along a waveguide’s axis. After a few years of work, he had obtained clear results for two different types of initial wave functions associated with particles such as electrons. Each wave function can be characterized by something called its spin vector. Imagine an arrow associated with the wave function that can be pointing in any direction. The team looked at two cases: one in which the arrow points along the axis of the waveguide and another in which it is perpendicular to that axis.

The team showed that, when the wave function’s spin vector is aligned along the waveguide’s axis, the distributions of arrival times predicted by the quantum flux method and by Bohmian mechanics are identical. But both differ significantly from the prediction of the hybrid approach.

When the spin vector is perpendicular, however, the distinctions become starker. With help from their LMU colleague Markus Nöth, the researchers showed that in this case the Bohmian arrival times have a sharp cutoff: every Bohmian trajectory strikes the detector at or before a certain maximum time. “This was very unexpected,” Das says.

Again, the Bohmian prediction differs significantly from the predictions of the semi-classical hybrid theory, which do not exhibit such a sharp arrival-time cutoff. And crucially, in this scenario, the quantum flux is negative, meaning that calculating arrival times using Schrödinger evolution becomes impossible. The standard quantum theorists “put their hands up when [the quantum flux] becomes negative,” Das says.


Quantum theorist Charis Anastopoulos of the University of Patras in Greece, an expert on arrival times, who was not involved with this work, is both impressed and circumspect. “The setup they are proposing seems plausible,” he says. And because each approach to calculating the distribution of arrival times involves a different way of thinking about quantum reality, a clear experimental finding could jolt the foundations of quantum mechanics. “It will vindicate particular ways of thinking. So in this way, it will have some impact,” Anastopoulos says. “If it [agrees with] Bohmian mechanics, which is a very distinctive prediction, this would be a great impact, of course.”

At least one experimentalist is gearing up to make the team’s proposal a reality. Before Dürr’s death, Ferdinand Schmidt-Kaler of the Johannes Gutenberg University Mainz in Germany had been in discussions with him about testing arrival times. Schmidt-Kaler is an expert on a type of ion trap in which electric fields are used to confine a single calcium ion. An array of lasers is used to cool the ion to its quantum ground state, where the momentum and position uncertainties of the ion are at their minimum. The trap is a three-dimensional bowl-shaped region created by the combination of two electric potentials; the ion sits at the bottom of this “harmonic” potential. Switching off one of the potentials creates conditions similar to what is required by the theoretical proposal: a barrier on one side and a sloping electric potential on the other side. The ion moves down that slope, accelerates and gains velocity. “You can have a detector outside the trap and measure the arrival time,” Schmidt-Kaler says. “That is what made it so attractive.”

For now, his group has done experiments in which the researchers eject the ion out of its trap and detect it outside. They showed that the time of flight depends on the particle’s initial wave function. The results were published in New Journal of Physics this year. Schmidt-Kaler and his colleagues have also performed as-yet-unpublished tests in which the ion exits the trap only to be reflected back in by an “electric mirror” and recaptured, a process the setup achieves with 98 percent efficiency, he says. “We are underway,” Schmidt-Kaler says. “Of course, it is not tuned to optimize this measurement of the time-of-flight distribution, but it could be.”

That is easier said than done. The detector outside the ion trap will likely be a sheet of laser light, and the team will have to measure the ion’s interaction with the light sheet to nanosecond precision. The experimentalists will also need to switch off one half of the harmonic potential with similar temporal precision—another serious challenge. These and other pitfalls abound on the tortuous path that must be traversed between theoretical prediction and experimental realization.

Still, Schmidt-Kaler is excited about the prospects of using time-of-flight measurements to test the foundations of quantum mechanics. “This has the attraction of being completely different from other [kinds of] tests. It really is something new,” he says. “This will go through many iterations. We will see the first results, I hope, in the next year. That’s my clear expectation.”

Meanwhile Aristarhov and Das are reaching out to others, too. “We really hope that the experimentalists around the world notice our work,” Aristarhov says. “We will join forces to do the experiments.”

And a conclusion written by Dürr in a yet-to-be-published paper features final words that could almost be an epitaph: “It should be clear by now that the chapter on time measurements in quantum physics can only be written if genuine quantum mechanical time-of-flight data become available,” he wrote. Which theory will the experimental data pick out as correct—if any? “It’s a very exciting question,” Dürr added.

What Epigenetics Is and Why It Matters for the Future

MADRID, Dec. 27 (EDIZIONES) – Just as we cannot alter the meaning of the words in a dictionary, the genes we inherit from our parents, and those we will pass on as an inheritance to our children, contain precise instructions that our body cannot help but obey.

If genes were words, the epigenome would be the grammar that gives the words meaning and allows them to be arranged into coherent sentences. “Grammar, however, is far more versatile and malleable. Within certain limits we can manipulate it to write anything from simple cookbooks to sublime poetry full of emotion and feeling, using the same vocabulary. Epigenetics does the same: its job is to regulate the workings of all our genes and so shape the course of our lives.”

So says David Bueno i Torrens, a doctor of biology and a researcher and professor of genetics at the University of Barcelona, in an interview with Infosalus on the occasion of the publication of his book ‘Epigenoma para cuidar tu cuerpo y tu vida’ (Plataforma Editorial).

Specifically, the term epigenetics was coined in 1953 to refer to the study of the interactions between genes and environmental factors that take place in organisms. Epigenetic modifications build up over time and are sometimes also erased. Unlike genes, they are not permanent but temporary, although they very often last a lifetime.

A good part of them do depend on us and on our lifestyle. Depending on what that lifestyle is like, and on the unpredictable chances life deals us, some epigenetic modifications will take hold rather than others. In some cases they even depend on our own thoughts.

“They are like traffic signs placed on our genome. The genome contains all the instructions for our body to form and function, from fertilization to old age. As with any instruction manual, it has to be read properly, and epigenetics would be like the syntactic rules that allow all the information to be read correctly: they say when to use each word, in what quantity, or when to stop using it, for example. It is like having a road, which would be our genome, and putting up a sign that limits the speed, another that says to drive at 50, and so on. The road is the same, but it will work differently because you have limited the speed, or placed a stop sign, or made a direction mandatory. They are signs that allow the genome to function correctly,” explains Bueno i Torrens, who is also a science communicator.

Specifically, he says these are particular molecules that attach to DNA, or to the proteins that accompany it, and are put in place according to environmental conditions, helping to regulate the genome.

For example, he says, a person on a sugar-rich diet needs to produce more enzymes to break those sugars down, so the genes that manage sugars are more active because they carry signals that keep them more active. “It is a way of adapting how the genome works to the life each person leads,” Bueno i Torrens maintains.

In his view, the epigenome matters because many of these environmental factors, the modifications we introduce, have been shown to favor some aspects of the genome or harm others. “Toxic substances such as tobacco smoke have been seen to cause epigenetic modifications in several dozen genes so that the lungs get used to breathing that polluted air. The side effect is that the chances of developing lung cancer rise. When a smoker quits, he may think he is free of that risk, but these epigenetic modifications have been shown to persist in the genes of his lungs for some 20 years, and that is where the medical importance lies,” the biologist explains.

Another example, he notes, is excessive fat consumption: through epigenetic modifications it activates certain genes so that you can digest fats better, and as a consequence the chances of developing diabetes later in life increase.

“This lets us see that epigenetic modifications lie at the origin of many diseases and helps explain why certain people develop certain illnesses. Epigenetics is still at the research stage, and the medical field where it is most advanced is cancer. Many cancerous processes have been shown to have an epigenetic origin, and there are tests that, depending on which epigenetic modifications a patient carries, indicate which cancer treatment will work best for that patient, something that is starting to be used,” the expert says.


Even so, in this geneticist’s view, the first task for epigenetics is to identify which modifications can cause disease. “It can be used as a diagnostic method and as a prognostic one,” he notes.

The second is to determine the origin of these modifications and which environmental factors make them more common. “It is obvious that tobacco smoke causes disease, as does alcohol, but there are also other habits that we do not yet know produce epigenetic modifications that can turn out to be harmful,” the specialist adds.

Third, he believes, comes developing epigenetic drugs that can redirect these modifications when they have gone wrong. “Say you have epigenetic modifications that make you prone to mental disorders, cancer or diabetes, for example; if we can identify which ones they are, a drug could change the epigenome, at the very least to lessen the severity of the disorder,” David Bueno i Torrens adds.
