Those, more or less, are the alien laws that explain everything from hydrogen atoms to lasers and transistors, and from which no hint of an experimental deviation has ever been found, from the 1920s until today. But could this really be how the universe operates? Is the “bedrock layer of reality” a giant wave of complex numbers encoding potentialities—until someone looks? And what do we mean by “looking,” anyway?

There are different interpretive camps within quantum mechanics, which have squabbled with each other for generations, even though, by design, they all lead to the same predictions for any experiment that anyone can imagine doing. One interpretation is Many Worlds, which says that the different possible configurations of a system (when far enough apart) are literally parallel universes, with the “weight” of each universe given by its amplitude. In this view, the whole concept of measurement—and of the amplitude waves collapsing on measurement—is a sort of illusion, playing no fundamental role in physics. All that ever happens is linear evolution of the entire universe’s amplitude wave—including a part that describes the atoms of your body, which (the math then demands) “splits” into parallel copies whenever you think you’re making a measurement. Each copy would perceive only itself and not the others. While this might surprise people, Many Worlds is seen by many (certainly by its proponents, who are growing in number) as the *conservative* option: the one that adds the least to the bare math.

A second interpretation is Bohmian mechanics, which agrees with Many Worlds about the reality of the giant amplitude wave, but supplements it with a “true” configuration that a physical system is “really” in, regardless of whether or not anyone measures it. The amplitude wave pushes around the “true” configuration in a way that precisely matches the predictions of quantum mechanics. A third option is Niels Bohr’s original “Copenhagen Interpretation,” which says—but in many more words!—that the amplitude wave is just something in your head, a tool you use to make predictions. In this view, “reality” doesn’t even exist prior to your making a measurement of it—and if you don’t understand that, well, that just proves how mired you are in outdated classical ways of thinking, and how stubbornly you insist on asking illegitimate questions.

But wait: if these interpretations (and others that I omitted) all lead to the same predictions, then how could we ever decide which one is right? More pointedly, does it even mean anything for one to be right and the others wrong, or are these just different flavors of optional verbal seasoning on the same mathematical meat? In his recent quantum mechanics textbook, the great physicist Steven Weinberg reviews the interpretive options, ultimately finding all of them wanting. He ends with the hope that new developments in physics will give us better options. But what could those new developments be?

In the last few decades, the biggest new thing in quantum mechanics has been the field of quantum computing and information. The goal here, you might say, is to “put the giant amplitude wave to work”: rather than obsessing over its true nature, simply exploit it to do calculations faster than is possible classically, or to help with other information-processing tasks (like communication and encryption). The key insight behind quantum computing was articulated by Richard Feynman in 1982: to write down the state of n interacting particles each of which could be in either of two states, quantum mechanics says you need 2^{n} amplitudes, one for every possible configuration of all n of the particles. Chemists and physicists have known for decades that this can make quantum systems prohibitively difficult to simulate on a classical computer, since 2^{n} grows so rapidly as a function of n.
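
Feynman’s point about exponential scaling is easy to make concrete. The sketch below (plain Python; the function name is ours, and it assumes each amplitude is stored as a 16-byte double-precision complex number) computes how much classical memory a full n-particle state vector would need:

```python
# Memory needed to store every amplitude of an n-qubit state.
# Assumes each amplitude is a double-precision complex number
# (16 bytes); this is a back-of-the-envelope sketch, not tied
# to any particular simulator.

def state_vector_bytes(n):
    """Bytes required to store all 2^n complex amplitudes."""
    return (2 ** n) * 16

print(state_vector_bytes(10))  # 16,384 bytes: trivial
print(state_vector_bytes(30))  # ~17 GB: a large workstation
print(state_vector_bytes(50))  # ~18 petabytes: hopeless classically
```

Fifty particles already overwhelm any classical computer on Earth; a few hundred overwhelm any computer that could fit in the visible universe.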

But if so, then why not build computers that would themselves take advantage of giant amplitude waves? If nothing else, such computers could be useful for simulating quantum physics! What’s more, in 1994, Peter Shor discovered that such a machine would be useful for more than physical simulations: it could also be used to factor large numbers efficiently, and thereby break most of the cryptography currently used on the Internet. Genuinely useful quantum computers are still a ways away, but experimentalists have made dramatic progress, and have already demonstrated many of the basic building blocks.

I should add that, for my money, the biggest application of quantum computers will be neither simulation nor codebreaking, but simply proving that this is possible at all! If you like, a useful quantum computer would be the most dramatic demonstration imaginable that our world really does need to be described by a gigantic amplitude wave, that there’s no way around that, no simpler classical reality behind the scenes. It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.

But if quantum computers provide a new regime in which to probe quantum mechanics, that raises an even broader question: could the field of quantum computing somehow clear up the generations-old debate about the interpretation of quantum mechanics? Indeed, could it do that even before useful quantum computers are built?

At one level, the answer seems like an obvious “no.” Quantum computing could be seen as “merely” a proposed application of quantum mechanics as that theory has existed in physics books for generations. So, to whatever extent all the interpretations make the same predictions, they also agree with each other about what a quantum computer would do. In particular, if quantum computers are built, you shouldn’t expect any of the interpretive camps I listed before to concede that its ideas were wrong. (More likely, each camp will claim that its ideas were vindicated!)

At another level, however, quantum computing makes certain aspects of quantum mechanics more salient—for example, the fact that it takes 2^{n} amplitudes to describe n particles—and so might make some interpretations seem more natural than others. Indeed that prospect, more than any application, is why quantum computing was invented in the first place. David Deutsch, who’s considered one of the two founders of quantum computing (along with Feynman), is a diehard proponent of the Many Worlds interpretation, and saw quantum computing as a way to convince the world (at least, this world!) of the truth of Many Worlds. Here’s how Deutsch put it in his 1997 book “The Fabric of Reality”:

Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^{500} or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^{80} atoms in the entire visible universe, an utterly minuscule number compared with 10^{500}. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?

As you might imagine, not all researchers agree that a quantum computer would be “psychological evidence” for Many Worlds, or even that the two things have much to do with each other. Yes, some researchers reply, a quantum computer would take exponential resources to simulate classically (using any known algorithm), but all the interpretations agree about that. And more pointedly: thinking of the branches of a quantum computation as parallel universes might lead you to imagine that a quantum computer could solve hard problems in an instant, by simply “trying each possible solution in a different universe.” That is, indeed, how most popular articles explain quantum computing, but it’s also wrong!

The issue is this: suppose you’re facing some arbitrary problem—like, say, the Traveling Salesman problem, of finding the shortest path that visits a collection of cities—that’s hard because of a combinatorial explosion of possible solutions. It’s easy to program your quantum computer to assign every possible solution an equal amplitude. At some point, however, you need to make a measurement, which returns a single answer. And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!
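
A toy simulation (ordinary Python, no quantum library) makes the point: spread equal amplitude over every candidate solution, square the amplitudes to get probabilities per the Born rule, and the “measurement” is nothing but a uniform random draw:

```python
import random

# Toy model of measuring a uniform superposition over 8 candidate
# solutions. Probabilities come from squaring the amplitudes (the
# Born rule); with equal amplitudes, the measurement outcome is no
# better than a classical random guess.

n = 8
amplitudes = [1 / n ** 0.5] * n            # equal amplitude on each answer
probs = [abs(a) ** 2 for a in amplitudes]  # each 1/8: a uniform distribution

outcome = random.choices(range(n), weights=probs)[0]  # a random index
```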

For this reason, the only hope for a quantum-computing advantage comes from *interference*: the key aspect of amplitudes that has no classical counterpart, and indeed, that taught physicists that the world has to be described with amplitudes in the first place. Interference is customarily illustrated by the double-slit experiment, in which we shoot a photon at a screen with two slits in it, and then observe where the photon lands on a second screen behind it. What we find is that there are certain “dark patches” on the second screen where the photon never appears—and yet, if we close one of the slits, then the photon can appear in those patches. In other words, *decreasing* the number of ways for the photon to get somewhere can *increase* the probability that it gets there! According to quantum mechanics, the reason is that the amplitude for the photon to land somewhere can receive a positive contribution from the first slit, and a negative contribution from the second. In that case, if both slits are open, then the two contributions cancel each other out, and the photon never appears there at all. (Because the probability is the amplitude squared, both negative and positive amplitudes correspond to positive probabilities.)
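
In code, the double-slit bookkeeping is just adding amplitudes before squaring (the specific numbers below are illustrative, not measured values):

```python
# Amplitude contributions for a photon reaching one "dark patch":
# positive via slit 1, negative via slit 2. The probability comes
# from squaring the *total* amplitude, not from adding probabilities.

slit1, slit2 = 0.5, -0.5

p_both_open = abs(slit1 + slit2) ** 2  # contributions cancel: 0.0
p_slit1_only = abs(slit1) ** 2         # close slit 2: 0.25

# Closing a slit *raises* the photon's chance of landing here,
# from 0.0 to 0.25 -- something no classical probabilities can do.
```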

Likewise, when designing algorithms for quantum computers, the goal is always to choreograph things so that, for each wrong answer, some of the contributions to its amplitude are positive and others are negative, so on average they cancel out, leaving an amplitude close to zero. Meanwhile, the contributions to the right answer’s amplitude should reinforce each other (being, say, all positive, or all negative). If you can arrange this, then when you measure, you’ll see the right answer with high probability.
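
The smallest possible illustration is a single qubit run through the Hadamard transform twice, simulated below with plain arithmetic (a two-line “simulator” of our own, not a library call): the two paths leading to the outcome 1 carry opposite signs and cancel, while the paths to 0 reinforce.

```python
import math

def hadamard(state):
    """Apply the Hadamard transform to a one-qubit state (a0, a1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = (1.0, 0.0)        # start in |0>
state = hadamard(state)   # equal superposition: both outcomes 50/50
state = hadamard(state)   # interference: certainly |0> again

# The amplitude of |1> received contributions +1/2 and -1/2, which
# cancelled; the contributions to |0> were both +1/2 and reinforced.
```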

It was precisely by orchestrating such a clever interference pattern that Peter Shor managed to devise his quantum algorithm for factoring large numbers. To do so, Shor had to exploit extremely specific properties of the factoring problem: it was not just a matter of “trying each possible divisor in a different parallel universe.” In fact, an important 1994 theorem of Bennett, Bernstein, Brassard, and Vazirani shows that what you might call the “naïve parallel-universe approach” never yields an exponential speed improvement. The naïve approach can reveal solutions in only the square root of the number of steps that a classical computer would need, an important phenomenon called the Grover speedup. But that square-root advantage turns out to be the limit: if you want to do better, then like Shor, you need to find something special about your problem that lets interference reveal its answer.
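
The gap between the Grover speedup and an exponential one is worth seeing numerically (a back-of-the-envelope sketch; the step counts ignore constant factors):

```python
import math

# Unstructured search over N candidates: classical brute force checks
# ~N of them, Grover's algorithm needs ~sqrt(N) steps, and the BBBV
# theorem says that, without exploiting problem structure the way
# Shor's algorithm does, ~sqrt(N) is the best possible.

N = 10 ** 12
classical_steps = N            # a trillion checks
grover_steps = math.isqrt(N)   # a million quantum steps

# A big help -- but still exponential in the number of digits of N,
# nothing like Shor's exponential-to-polynomial jump for factoring.
```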

What are the implications of these facts for Deutsch’s argument that only Many Worlds can explain how a quantum computer works? At the least, we should say that the “exponential cornucopia of parallel universes” almost always hides from us, revealing itself only in very special interference experiments where all the “universes” collaborate, rather than any one of them shouting above the rest. But one could go even further. One could say: To whatever extent the parallel universes do collaborate in a huge interference pattern to reveal (say) the factors of a number, to that extent they never had separate identities as “parallel universes” at all—even according to the Many Worlds interpretation! Rather, they were just one interfering, quantum-mechanical mush. And from a certain perspective, all the quantum computer did was to linearly transform the way in which we measured that mush, as if we were rotating it to see it from a more revealing angle. Conversely, whenever the branches do act like parallel universes, Many Worlds itself tells us that we only observe one of them—so from a strict empirical standpoint, we could treat the others (if we liked) as unrealized hypotheticals. That, at least, is the sort of reply a modern Copenhagenist *might* give, if she wanted to answer Deutsch’s argument on its own terms.

There are other aspects of quantum information that seem more “Copenhagen-like” than “Many-Worlds-like”—or at least, for which thinking about “parallel universes” too naïvely could lead us astray. So, for example, suppose Alice sends n quantum-mechanical bits (or qubits) to Bob, and Bob then measures the qubits in any way he likes. How many classical bits can Alice transmit to Bob that way? If you remember that n qubits require 2^{n} amplitudes to describe, you might conjecture that Alice could achieve an incredible information compression—“storing one bit in each parallel universe.” But alas, an important result called Holevo’s Theorem says that, because of the severe limitations on what Bob learns when he measures the qubits, such compression is impossible. In fact, by sending n qubits to Bob, Alice can reliably communicate only n bits (or 2n bits, if Alice and Bob shared quantum correlations in advance), essentially no better than if she’d sent the bits classically. So for this task, you might say, the amplitude wave acts more like “something in our heads” (as the Copenhagenists always said) than like “something out there in reality” (as the Many-Worlders say).

But the Many-Worlders don’t need to take this lying down. They could respond, for example, by pointing to other, more specialized communication problems, which it’s been proven Alice and Bob can solve using exponentially fewer qubits than classical bits. Here’s one example of such a problem, drawing on a 1999 theorem of Ran Raz and a 2010 theorem of Boaz Klartag and Oded Regev: Alice knows a vector in a high-dimensional space, while Bob knows two orthogonal subspaces. Promised that the vector lies in one of the two subspaces, Bob must figure out which one holds it. Quantumly, Alice can encode the components of her vector as amplitudes—in effect, squeezing n numbers into exponentially fewer qubits. And crucially, after receiving those qubits, Bob can measure them in a way that doesn’t reveal everything about Alice’s vector, but does reveal which subspace it lies in, which is the one thing Bob wanted to know.

So, do the Many Worlds become “real” for these special problems, but retreat back to being artifacts of the math for ordinary information transmission?

To my mind, one of the wisest replies came from the mathematician and quantum information theorist Boris Tsirelson, who said: “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” In other words, this is a new ontological category, one that our pre-quantum intuitions simply don’t have a good slot for. From this perspective, the contribution of quantum computing is to delineate for which tasks the giant amplitude wave acts “real and Many-Worldish,” and for which other tasks it acts “formal and Copenhagenish.” Quantum computing can give both sides plenty of fresh ammunition, without handing an obvious victory to either.

So then, is there any interpretation that flat-out doesn’t fare well under the lens of quantum computing? While some of my colleagues will strongly disagree, I’d put forward Bohmian mechanics as a candidate. Recall that David Bohm’s vision was of real particles, occupying definite positions in ordinary three-dimensional space, but which are jostled around by a giant amplitude wave in a way that perfectly reproduces the predictions of quantum mechanics. A key selling point of Bohm’s interpretation is that it restores the determinism of classical physics: all the uncertainty of measurement, we can say in his picture, arises from lack of knowledge of the initial conditions. I’d describe Bohm’s picture as striking and elegant—as long as we’re only talking about one or two particles at a time.

But what happens if we try to apply Bohmian mechanics to a quantum computer—say, one that’s running Shor’s algorithm to factor a 10,000-digit number, using hundreds of thousands of particles? We can do that, but if we do, talking about the particles’ “real locations” will add spectacularly little insight. The amplitude wave, you might say, will be “doing all the real work,” with the “true” particle positions bouncing around like comically irrelevant fluff. Nor, for that matter, will the bouncing be completely deterministic. The reason for this is technical: it has to do with the fact that, while particles’ positions in space are continuous, the 0’s and 1’s in a computer memory (which we might encode, for example, by the spins of the particles) are discrete. And one can prove that, if we want to reproduce the predictions of quantum mechanics for discrete systems, then we need to inject randomness at many times, rather than only at the beginning of the universe.

But it gets worse. In 2005, I proved a theorem that says that, in any theory like Bohmian mechanics, if you wanted to calculate the entire trajectory of the “real” particles, you’d need to solve problems that are thought to be intractable even for quantum computers. One such problem is the so-called collision problem, where you’re given a cryptographic hash function (a function that maps a long message to a short “hash value”) and asked to find any two messages with the same hash. In 2002, I proved that, at least if you use the “naïve parallel-universe” approach, any quantum algorithm for the collision problem requires at least ~H^{1/5} steps, where H is the number of possible hash values. (This lower bound was subsequently improved to ~H^{1/3} by Yaoyun Shi, exactly matching an upper bound of Brassard, Høyer, and Tapp.) By contrast, if (with godlike superpower) you could somehow see the whole histories of Bohmian particles, you could solve the collision problem almost instantly.
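
To make the collision problem itself concrete, here is a classical, birthday-style search over a toy hash function (the hash below is an arbitrary stand-in we made up; for a well-mixed hash this approach takes roughly sqrt(H) evaluations, versus Shi’s ~H^{1/3} quantum bound):

```python
# Collision problem: find two *different* messages with the same hash.
# toy_hash is a made-up stand-in for a real cryptographic hash, with
# only H = 1000 possible hash values so the search finishes quickly.

H = 1000

def toy_hash(message):
    """Hash a bytes object down to one of H possible values."""
    return (len(message) * 2654435761 + sum(message)) % H

def find_collision():
    """Birthday-style search: remember hashes until one repeats."""
    seen = {}
    m = 0
    while True:
        msg = m.to_bytes(8, "big")
        h = toy_hash(msg)
        if h in seen:
            return seen[h], msg   # two distinct messages, same hash
        seen[h] = msg
        m += 1

m1, m2 = find_collision()
```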

What makes this interesting is that, if you ask to see the locations of Bohmian particles at any one time, you won’t find anything that you couldn’t have easily calculated with a standard, garden-variety quantum computer. It’s only when you ask for the particles’ locations at *multiple* times—a question that Bohmian mechanics answers, but that ordinary quantum mechanics rejects as meaningless—that you’re able to see multiple messages with the same hash, and thereby solve the collision problem.

My conclusion is that, if you believe in the reality of Bohmian trajectories, you believe that Nature does even more computational work than a quantum computer could efficiently simulate—but then it hides the fruits of its labor where no one can ever observe it. Now, this sits uneasily with a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. (Admittedly, some people would probably argue that the Many Worlds interpretation violates my “aftershave principle” even more flagrantly than Bohmian mechanics does! But that depends, in part, on what we count as “observation”: just our observations, or also the observations of any parallel-universe doppelgängers?)

Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing. In the end, asking how quantum computing affects the interpretation of quantum mechanics is sort of like asking how classical computing affects the debate about whether the mind is a machine. In both cases, there was a range of philosophical positions that people defended before a technology came along, and most of those positions still have articulate defenders after the technology. So, by that standard, the technology can’t be said to have “resolved” much! Yet the technology is so striking that even the idea of it—let alone the thing itself—can shift the terms of the debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived. This might, more generally, be the main way technology affects philosophy.

**Go Deeper**

*Editor’s picks for further reading*

Nature News: Quantum Physics: What is really real?

Science writer Zeeya Merali discusses new experiments designed to rule out, or confirm, different interpretations of quantum mechanics.

The Nature of Reality: Debating the Meaning of Quantum Mechanics

An introduction to some of the leading interpretations of quantum mechanics.

**Also by Scott Aaronson**

The Nature of Reality: Is there Anything Beyond Quantum Computing?

Are quantum computers the ultimate limit of computing, or are there devices that could tackle problems too hard for quantum computers?

Quantum Computing Since Democritus

Described by its author as “a candidate for the weirdest book ever to be published by Cambridge University Press,” Scott Aaronson’s book is a journey through the past and present of math, physics, and computer science.

Shtetl-Optimized

Scott Aaronson has been blogging about quantum computing for almost ten years. Read his latest thoughts, and archived posts, here.

Why Philosophers Should Care About Computational Complexity

In this essay, discover connections between computational complexity theory and philosophical questions like the nature of mathematical knowledge, the problem of logical omniscience, the foundations of quantum mechanics, and more.

Yet most researchers think general relativity is wrong.

To be more precise: most believe it is *incomplete*. After all, the other forces of nature are governed by quantum physics; gravity alone has stubbornly resisted a quantum description. Meanwhile, a small but vocal group of researchers thinks that phenomena such as dark matter are actually failures of general relativity, requiring us to look at alternative ideas.

Astronomical observations show that there isn’t enough ordinary matter to account for the behavior of galaxies and other objects. The fix is dark matter, particles invisible to light but endowed with gravity. However, none of our detectors or experiments have ever seen a dark matter particle directly, leading some to doubt that dark matter actually exists. Just as Newton’s theory of gravity is “good enough” for most familiar situations and reveals its limitations only in extreme situations or upon the most detailed examination, maybe what we call dark matter is actually a breakdown of general relativity.

It’s a tantalizing thought, but as Perimeter Institute physicist John Moffat points out, “It’s not easy to modify Einstein’s theory!” The problem is that general relativity (“GR”) is too good: its predictions match observations so closely that any modification risks spoiling the agreement. The “classic tests” of GR—the small shifts in the orbit of Mercury, the bending of the paths of light around the Sun, and the shift in the frequency of light as it moves in and out of gravitational fields—are precise enough that they can be used to judge any alternative idea.

That hasn’t stopped maverick scientists like Moffat from looking at alternatives to GR. The rotation of spiral galaxies inspired a particular modification to gravity that lingers like a fungus in the basement of astronomy: “modified Newtonian dynamics,” or MOND. As the name suggests, it’s a change to Newton’s law of gravity rather than general relativity, and it does very well at describing the motion of stars and gas in spiral galaxies without the need for dark matter. However, MOND fails for some other types of galaxies and for galaxy clusters, and—because it isn’t compatible with relativity—it cannot explain the “classic tests” of GR, much less the evolution of the universe as a whole.

Nevertheless, MOND is successful enough in galaxies to inspire some theorists to try to modify *it*, in hopes of making predictions that more closely match nature. One of these carries the science-fiction-villain name TeVeS (standing for “tensor-vector-scalar,” the mathematical quantities it depends on), which garnered some attention about 10 years ago for being able to both reproduce the predictions of MOND and still pass the classic GR tests. However, TeVeS still required some kind of dark matter to fit observations of galaxy clusters and cosmological data, and it is mathematically far more difficult to work with than Einstein’s theory.

John Moffat was also motivated to modify GR by the problem of dark matter, but is uninterested in reproducing MOND because of its observational failures. Instead, his modification of general relativity involves allowing the strength of gravity to vary slightly in space and time and changing the way gravity acts over long distances.

While light and matter still move along paths dictated by the curvature of spacetime, just as they do in GR, in Moffat’s theory, that curvature is shaped by something called a vector field, which itself carries mass. For objects like Earth or the Sun, we won’t see a measurable difference, but stars positioned toward the edges of their home galaxies will feel a larger gravitational force than predicted by Newtonian gravity. Farther out still, the force strength drops off in proportion to the mass of the vector field. To Moffat, this is particularly appealing because it reproduces the behavior of dark matter in galaxies, and because the field is part of gravity rather than a form of matter, it explains why we haven’t seen dark matter in our detectors. So far, his theory seems to be able to explain many observed properties of galaxies, galaxy clusters, and other observations.

“That’s not good enough,” says Moffat. “A theory should predict something that the other theories or other paradigm cannot reproduce. Then you know you’re on the right track.” That thought turned him toward another major GR prediction: black holes, objects so massive and compact that their curvature prevents light from escaping.

Few astrophysicists doubt that black holes exist: We know of a large number of very massive, very dense objects in the cosmos, for which the black hole hypothesis is the only one that fits. However, we have yet to “see” the event horizon, the boundary separating the exterior of a black hole from its interior—where nothing can escape back into the outside Universe. That’s the goal of the Event Horizon Telescope (EHT), which is actually made of six observatories scattered around the world, observing the same objects in concert. Working together, they can create real images of whatever is right outside the black hole at the center of the Milky Way, a new frontier where GR’s most exotic effects could be measured.

One of those effects is the black hole’s “shadow,” says MIT astronomer Shep Doeleman, one of the lead researchers on the EHT. The shadow is created as light orbits close to, but not quite at, the event horizon; the EHT would see it as a faint ring with a dark interior. GR makes very specific predictions about the shape and size of that ring—which was a dramatic visual effect in the movie Interstellar. But if Moffat’s theory is correct, then the black hole’s shadow could be significantly larger than predicted by GR. That wouldn’t prove him right—unfortunately, science is rarely as clear-cut as that—but, combined with the ability to describe galaxies without dark matter, it would likely get people’s attention.

Moffat’s theory, like TeVeS and other modifications, is significantly more complicated to use than GR, and there is no handy visual metaphor, like general relativity’s curved spacetime, to help deepen our intuition about how it works. That’s not to say the theory is *wrong*: sometimes familiarity can make complex things seem simpler, as happened with GR itself.

Also, there’s a mismatch between the rogue gravitational theorists and the astronomers. Saavik Ford, an astrophysicist at CUNY and the American Museum of Natural History, says, “GR is the ocean we swim in.” Rather than comparing their observations to every theory out there, Ford and her colleagues look for anything that might be a discrepancy: “No one is saying it’s either GR or this other thing.”

The ultimate arbiter of a theory, after all, is nature. If one of the dark matter experiments found particles with the right properties, then the motive to modify GR would diminish; if more and more experiments fail to find dark matter, then researchers are likely to pay more attention to alternative theories, perhaps even ones that are unorthodox or complex.

**Go Deeper**

*Editor’s picks for further reading*

John W. Moffat

Find books, papers, and media appearances by Perimeter Institute physicist John Moffat at his personal web site.

Kavli Institute for Theoretical Physics: Dark Matter vs Modified Gravity

Caltech physicist Sean Carroll delivers an hour-long lecture on dark matter and the pros and cons of modified gravity theories.

Quantum Diaries: How do we know dark matter exists?

CERN particle physicist Pauline Gagnon outlines the evidence for dark matter.

The answer to this mystery could come from a decades-old quest exploring the structure of the neutron. Although neutrons are electrically neutral, they are made of particles known as quarks that are positively or negatively charged. Overall, these electric charges cancel out in neutrons, but there is the possibility they are not equally distributed—for instance, perhaps the north poles of neutrons are slightly positive, while the south poles are negative, explains Tim Chupp, a physicist at the University of Michigan who helped test the new shield and design experiments for it. The distribution of charge within the neutron is called its electric dipole moment.

If there is a tiny imbalance of electric charge within the neutron, this could be evidence of a special kind of asymmetry, called a charge parity (or “CP”) violation, that explains why there is more matter than antimatter in the universe. “This asymmetry would have had a strong effect in the early universe, shaping how the universe evolved a microsecond to a nanosecond after the Big Bang,” says physicist Peter Fierlinger of the Technical University of Munich, who led the overall effort that developed the new shield. But to detect it, physicists must work with neutrons that are shielded from the natural magnetic field of the Earth and the artificial magnetic fields of motors, electronics, and all the ubiquitous devices that produce their own magnetic fields.

Experiments conducted since the 1950s have not found a neutron electric dipole moment, but the new shield could improve the sensitivity of such measurements a hundredfold, down to the theoretically predicted scale of this phenomenon.

These exquisitely precise measurements are made possible by a magnetic shield that is ten times better than previous state-of-the-art shields. Previous research typically used a trial-and-error approach to find designs that magnetically shielded their interiors. Instead, Fierlinger and his colleagues used computer models of how magnetic fields interact with matter to optimize elements of their room’s design, including the spacing and thickness of the layers making up the walls.

The result is a shield constructed like a Russian nesting doll, made of four one-millimeter-thick sheets of a soft nickel-iron alloy that are extraordinarily easy to magnetize and demagnetize, and act like a sponge to absorb outside magnetic fields. Within a one-cubic-meter space inside this haven, items would experience as little as a tenth of a billionth of a tesla—about 500,000 times less than the magnetic field typically felt at the surface of the Earth, and some ten times less than the magnetic field that suffuses “empty” space in the solar system. The scientists detailed their research May 12 in the Journal of Applied Physics.
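The quoted factors are easy to sanity-check, assuming a typical surface geomagnetic field of about 50 microtesla (my figure, not one from the paper):

```python
# Rough check of the quoted shielding factor, assuming a typical
# surface geomagnetic field of ~50 microtesla (an assumed value).
earth_field = 50e-6   # tesla, typical at Earth's surface
residual = 0.1e-9     # tesla: "a tenth of a billionth of a tesla"
print(round(earth_field / residual))  # → 500000, matching the article
```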

The neutron is slightly magnetic, and so experiences a torque in the presence of an external magnetic field. If the neutron has an electric dipole moment, the amount of torque it experiences from an external magnetic field will change if an electric field is also applied. The lasers surrounding the shield are part of instruments known as optical atomic magnetometers that help measure and correct for the effects of extraneous magnetic fields.
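To see why such extreme shielding is needed, here is a standard back-of-envelope comparison for Ramsey-type EDM searches; the field strengths and the hypothetical EDM value below are my assumptions for illustration, not numbers from this experiment:

```python
# Standard Ramsey-experiment figures of merit (all values assumed for
# illustration). Flipping the electric field shifts the neutron's spin
# precession frequency by delta_nu = 4*d*E/h if the neutron has EDM d.
h = 6.626e-34             # Planck constant, J*s
e = 1.602e-19             # elementary charge, C
d = 1e-27 * e * 1e-2      # hypothetical EDM of 1e-27 e*cm, converted to C*m
E_field = 1e6             # electric field of 10 kV/cm, in V/m
delta_nu = 4 * d * E_field / h
print(delta_nu)           # ~1e-8 Hz: the sought-after signal

# Compare with the shift caused by even the shield's tiny residual field:
mu_n = 9.66e-27           # neutron magnetic moment magnitude, J/T
B = 1e-10                 # tesla, roughly the field left inside the shield
print(2 * mu_n * B / h)   # ~3e-3 Hz: why stray fields must be tracked precisely
```

Even inside the shield, the magnetic shift dwarfs the hypothetical EDM signal, which is why the surrounding magnetometers are essential.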

“This magnetic shield will help us perform tests of the Standard Model with unprecedented precision,” says physicist Wolfgang Korsch at the University of Kentucky, who did not take part in this research. “Any deviation from the predictions of the Standard Model will clearly hint at new physics.”

“These low-energy precision measurements are very complementary with the experiments conducted at high-energy atom smashers such as the LHC,” says theoretical physicist Michael Ramsey-Musolf at the University of Massachusetts at Amherst, who did not participate in this study. “They’re both ways to solve the mystery of physics beyond the Standard Model.”

There are many other discoveries this magnetic shield could lead to, the researchers add. For example, it could help detect magnetic monopoles, long-theorized, hitherto-unseen particles that each possess only one magnetic pole, either north or south, unlike every known particle, each of which has two. Magnetic sensors inside the shield could detect the faint, unique magnetic signals from any monopoles zipping through the room.

“The symmetries in nature may make beautiful patterns, but what are really interesting are the imperfections or breaking of symmetries—that’s where interesting physics lies,” says Chupp.

The shield may also open up unexpected avenues of research. “It could help detect the magnetic fields generated by electric currents in the brain,” Korsch says. “If they could actually put a person in there to do those measurements, I’d find that quite exciting.”

**Go Deeper**

*Editor’s picks for further reading*

PhysicsWorld: Extraordinary magnetic shield could reveal neutron’s electric dipole moment

A profile of the new magnetic shield and its scientific potential.

Symmetry: Explain it in 60 seconds: CP violation

A concise explanation of CP violation from Yosef Nir of the Weizmann Institute of Science.

Wikipedia: Neutron Electric Dipole Moment

An overview of the theoretical implications of the neutron electric dipole moment and the experimental searches for its value.

Now, replace the beaten drum with a gravitational disturbance, such as the sudden collapse of a stellar core into an ultra-compact neutron star or black hole. Einstein predicted that such a collapse would create gravitational waves. But do these waves carry energy and, if so, how is that energy distributed—that is, what is its density (energy per unit volume) from point to point?

Gravitational energy is notoriously hard to define. In Einstein’s equations of general relativity, celebrating their 100th anniversary this year, gravitation and energy are on opposite sides. One side of the equations—call it the geometric side—represents gravitation as distortions in the fabric of spacetime. The other—call it the material side—describes the matter and energy in each region, including all forms of non-gravitational radiation, such as light. General relativity informs us that matter and energy compel spacetime to warp, creating what we feel as gravity. For example, the Sun’s mass distorts spacetime and generates a gravitational well in its vicinity. In short, matter and energy are the *cause* of gravitation’s *effect*.
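In standard notation (not spelled out in the article), the two sides the author describes are:

```latex
\underbrace{G_{\mu\nu}}_{\text{geometric side: spacetime curvature}}
\;=\; \frac{8\pi G}{c^{4}}\,
\underbrace{T_{\mu\nu}}_{\text{material side: matter and energy}}
```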

Where then does gravitational energy fit in? Is it a cause, an effect, or both? Einstein’s equations of general relativity do not offer a clear answer. Gravitational energy doesn’t neatly fit on either the geometric or material side of the equations.

Einstein recognized the situation early on, and developed a separate formula for measuring the energy and momentum of gravitational fields. Known as the Einstein energy-momentum complex, it determines the gravitational energy and momentum within any region of spacetime, given its geometric structure.

Due to mathematical limitations of Einstein’s definition, other physicists began to develop independent energy-momentum formulas. These include formulations by French-Greek physicist Achilles Papapetrou, Russian physicists Lev Landau and Evgeny Lifshitz, American physicist Steven Weinberg, and others. Each of these complexes obeys conservation laws, meaning that energy and momentum can be transformed but not lost. For basic cases, such as determining the energy of a non-rotating black hole of mass *M*, they beautifully match each other in predicting an energy of *E = Mc²*. Thus, they conform to what might be expected for a relativistic definition of energy.

Yet one prediction made by these formulas is most unsettling. In 1955, Nathan Rosen, a former assistant of Einstein, applied several different energy-momentum complexes to a particular model of gravitational waves and calculated its energy in each case to be zero. He consequently proposed that gravitational waves don’t carry energy and thereby cannot really exist in nature. His words carried special weight, since he and Einstein had worked on that very subject. Rosen offered his hypothesis at a Bern conference celebrating the jubilee of special relativity.

Few physicists believed Rosen’s conjecture. All forms of radiation carry energy; why should gravity be different? As Richard Feynman argued two years later in his “sticky bead argument,” presented at a general relativity conference in Chapel Hill, a gravitational wave could jostle a bead on a stick, moving it up and down and, through friction, generate heat—a form of energy—in the process. If the gravitational wave didn’t carry the energy, he argued, where else could it have arisen? Something must have made the bead hot. Feynman did not try to explain why the energy-momentum complexes yielded a value of zero for the energy of gravity waves; presumably, he simply thought they were incomplete or wrong.

Canadian physicist Fred Cooperstock, who had worked for a year with Rosen, takes the value of zero seriously. But while Rosen argued that gravitational waves don’t exist at all, Cooperstock argues that they are real, but carry no energy. Cooperstock’s unorthodox hypothesis is that gravitational energy exists only where the material side of Einstein’s equations is non-zero; that is, in places with matter or (non-gravitational) energy in the first place. Consequently, all empty regions of spacetime have zero gravitational energy. That precludes gravitational waves carrying energy through the void. (If there is no true void, such a point may be moot.) In his view, gravitational waves convey geometric information (ripples in curvature), but not energy, from one point to another. Fluctuations ripple through spacetime, causing notable effects, while somehow carrying no energy.

“I’ve never seen anyone prove that information must carry energy,” Cooperstock says. It is like an elderly woman texting her daughter to bring home a sizable bag of groceries. The text message carries information, but not energy, yet it triggers some heavy lifting.

A breakthrough came in the 1970s, when Russell Hulse and Joseph Taylor detected and measured the properties of the first-known binary pulsar system, PSR 1913+16. They demonstrated that the system’s orbital period is declining with time, at just the rate expected if the system is radiating energy away as gravitational waves, as Einstein’s theory predicts. It was the first indirect evidence of gravitational waves, and it won them the Nobel Prize. But skeptics like Cooperstock argue that fluctuations in the curvature of spacetime caused the results without actually conveying energy through space.

Today, several laboratories around the world are racing to detect gravitational waves directly. Leading the pack is the LIGO (Laser Interferometer Gravitational-Wave Observatory) project, recently upgraded to Advanced LIGO, with detectors in Hanford, Washington and Livingston, Louisiana. MiniGRAIL, a spherical gravitational wave detector in Leiden, the Netherlands, is trying to detect gravitational waves using an ultra-cold, 1,300-kilogram copper alloy sphere. A space-based mission called LISA (Laser Interferometer Space Antenna) is currently being planned.

Despite numerous efforts, as we celebrate the 100th anniversary of general relativity, gravitational energy remains an elusive construct. It has become an even weightier matter than Einstein first thought. However, if astronomers discover gravitational waves and can map out their energy, the burden of proof will finally be lifted. Understanding gravitational energy would help place it on the same footing as other natural interactions, such as electromagnetism, and will bring science closer to a modern-day “holy grail”: uniting gravity with the other forces of nature.

**Go Deeper**

*Editor’s picks for further reading*

Nature of Reality: There’s More Than One Way to Hunt for Gravitational Waves

Jennifer Ouellette explores the diverse methods with which researchers are searching for direct evidence of gravitational waves.

TED: The Sound the Universe Makes

In this video, astrophysicist Janna Levin explains how gravitational waves are made and LIGO’s role in searching for them.

Wikipedia: The Sticky Bead Argument

The history of the sticky bead argument and its influence on physicists’ thinking about gravitational waves and energy.

In 1937, the science fiction writer Olaf Stapledon imagined one answer: an enormous, spherical solar collector, built to encircle an energy-hungry civilization’s home star like a giant mylar balloon. This hypothetical mega-structure would grab every last photon of sunlight, providing enough energy to run whatever future technologies engineers could dream up. In 1960, physicist Freeman Dyson fleshed out the scheme: instead of a giant balloon, he speculated, an advanced civilization might crumble up its solar system’s uninhabited planets to create a swarm of rocks that could gather solar energy more efficiently. Dyson also pointed out that, if such a sphere or swarm existed, it would look to us like an unusually dark star, radiating waste heat in the infrared.

“Dyson spheres,” as they’re called (to Dyson’s chagrin), have become sci-fi staples. But they have also gotten some (semi) serious attention from scientists searching for evidence of intelligent life beyond Earth. In two studies, published in 2004 and 2008, Richard Carrigan, a researcher at Fermilab, searched for lopsided, infrared-heavy spectra among some quarter-million infrared sources in a database amassed by the IRAS satellite. IRAS, launched in 1983, surveyed about 96 percent of the sky. The result: no Dyson spheres–or, at least, none that he could confidently distinguish from other potential lookalikes.

If a civilization is sophisticated enough to build a Dyson sphere around one star, though, why should it stop there? Why not outfit a whole galaxy with Dyson spheres? As Jason Wright, assistant professor of astronomy and astrophysics at Penn State, wrote:

Consider a space-faring civilization that can colonize nearby stars in ships that travel at “only” 0.1% the speed of light (our fastest spacecraft travel at about 1/10 this speed). Even if they stop for 1,000 years at each star before launching ships to colonize the next nearest stars, they will still spread to the entire galaxy in 100 million years, which is 1/100 of the age of the Milky Way.

That is, an advanced civilization can fan out across its home galaxy pretty quickly, cosmically speaking, and a galaxy overrun with Dyson spheres and other energy-collecting super-structures would have a global surplus of mid-infrared radiation. With that in mind, Wright and his colleagues have been searching for evidence of such supercivilizations by looking for galaxies whose spectra skew to the infrared. Their campaign, called Glimpsing Heat from Alien Technologies Survey (G-HAT), scoured some 100 million objects observed by NASA’s Wide-field Infrared Survey Explorer (WISE) satellite. In a paper published in April, lead author Roger Griffith reported that, from all those millions, they found 50 galaxies showing infrared excesses that could maybe, possibly be due to alien technology–but, far more likely, are due to natural astrophysical processes. (Incidentally, as Lee Billings reported in Scientific American, the G-HAT team wasn’t able to secure funding from the usual government sources; their work is supported by a grant from the private Templeton Foundation.)
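Wright’s colonization arithmetic above is easy to reproduce; the star spacing and galaxy size below are my rough assumptions, not figures from his post:

```python
# Back-of-envelope version of Wright's estimate. Star spacing and the
# galaxy's size are assumed round numbers, not figures from his post.
hop = 5.0                 # light-years between neighboring stars (assumed)
travel = hop * 1000       # years per hop: at 0.1% of c, one light-year takes 1,000 years
wait = 1000               # years spent at each star before launching again
galaxy = 50_000           # light-years, rough radius of the Milky Way disk
hops = galaxy / hop
total_years = hops * (travel + wait)
print(total_years / 1e6)  # → 60.0 (million years), the same order as quoted
```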

Things may be looking a little bleak for Dyson spheres—and intelligent ET in general, if you’re guided by the Fermi paradox—but there’s some consolation from a pair of researchers in Turkey, who point out that alien engineers might not choose to put their Dyson spheres around sunlike stars in the first place. Instead, they argue, superintelligent engineers would build their Dyson spheres around dim stellar embers called white dwarfs. These mini-Dyson spheres would be all but undetectable.

Why white dwarfs? First, they’re cooler than stars like the sun, so, assuming that you want to live on or near the Dyson sphere and that you don’t want to be burned to a crisp, a Dyson sphere should be placed much closer to a white dwarf than to a sun-like star. That means that the sphere itself could be a lot smaller and, potentially, easier to build.

Meanwhile, Zaza Osmanov, a researcher at the Free University of Tbilisi in Georgia, has proposed that super-advanced extraterrestrials might build Dyson spheres around pulsars, rapidly rotating neutron stars that emit focused beams of radiation from their poles. To capture this energy, you wouldn’t need an entire sphere: a smaller ring, coinciding with the path of the pulsar’s beam, would do the job.

It’s all extremely speculative, of course, and many would argue that searches for the signature of Dyson spheres, rings, and swarms are so unlikely to turn up any answers that they aren’t worth the computing time. But, as Wright puts it, there’s only one way to make a discovery: “You gotta look.”

**Go Deeper**

*Editor’s picks for further reading*

NOVA: Eavesdropping on ET

In this NOVA podcast, SETI astronomer Seth Shostak explains why he thinks it’s just a matter of time before we find evidence of other intelligent life in the universe.

Popular Mechanics: Cosmic Megastructures

Read up on the engineering challenges behind imagined cosmic megastructures, including Dyson spheres, space colonies, and more.

SETI Institute: SETI 101

A short history of the search for extraterrestrial intelligence, with links to more information on the Fermi paradox and the social implications of a confirmed detection.

That’s the story of inflation, and it’s the prevailing narrative for how our universe came to be. It has a lot going for it: It explains why the universe has such a uniform temperature, why spacetime appears to be flat, and why physicists can’t find any magnetic monopoles. In the last decade, inflation’s predictions have lined up neatly with observations from telescopes like Planck and WMAP, which have mapped minuscule deviations in the cosmic microwave background radiation, an electromagnetic “echo” from near the time of the Big Bang. Though inflation has its critics, it remains the leading theory.

Follow inflation to what many theorists think is its logical conclusion, though, and things get very strange. That’s because many versions of inflation lead straight to a multiverse: that is, a cosmos in which our universe is just one of many universes, each with different laws and fundamental constants of physics. The idea is controversial, not least because there is no guarantee that we would ever be able to prove or disprove the existence of these other universes. Now, a team of theorists has shown that a collision between universes would create gravitational waves that could imprint a unique polarization signal on the sky, potentially providing observational evidence for the existence of other universes.

Jonathan Braden, who worked on the paper while he was a graduate student at the University of Toronto, compares the multiverse to a pot of simmering water. As the water boils, air bubbles big and small spontaneously pop into existence and jiggle about. Now imagine that our entire known universe is one of those air bubbles, swimming through the “water” of the universe’s native vacuum energy, as other bubbles emerge around it. The analogy isn’t perfect: For one thing, the energy that drives the creation of new bubble universes isn’t thermal energy, like the heat of a stove, but the inevitable fluctuations that are built in to the principles of quantum mechanics. Even stranger, the “pot”—the space in which the bubble universes are emerging—is constantly getting bigger, and the water supply is always being replenished.

In this ever-simmering universe, bubbles may occasionally bump into each other. If our universe were part of such a collision some time in the distant past, it could leave a telltale circular “bruise” on the cosmic microwave background (CMB) radiation. Astronomers first scanned the CMB’s tiny temperature variation for this telltale mark back in 2011, using measurements from NASA’s Wilkinson Microwave Anisotropy Probe, but found nothing. A second analysis also came up empty-handed.

Now, Braden and his collaborators Dick Bond (University of Toronto) and Laura Mersini-Houghton (University of North Carolina-Chapel Hill) argue that, when it comes to finding evidence for bubble universe collisions, those temperature variations may be only half the picture. In a new paper, they show that the collision should also imprint a distinctive polarization pattern in the CMB. Like the signal famously found and lost by the BICEP2 experiment last year, which astronomers initially thought came from the inflation process itself, the polarization would take a form called B-modes, and would be generated by gravitational waves. But, unlike the primordial B-modes from inflation, which should be coming from everywhere at once, the collision signal would be localized to a single disk of sky.

The new prediction comes from considering nuances in the shape of colliding bubbles. Traditionally, researchers have approximated the bubbles as perfect spheres. In reality, though, those spheres would have little bumps and ridges. Braden compares each bubble to a raised relief globe: from far away, it looks perfectly smooth, but up close mountain ranges and valleys become visible.

“Those mountains and valleys begin to grow very quickly once the bubbles collide,” says Braden. In the disk where the two bubbles intersect, “It looks like someone just splatter-painted it–it’s like a Jackson Pollock painting.” Most exciting for scientists, the collision should also produce gravitational radiation. That radiation could be observed today as localized B-mode polarization that matches up with the temperature “bruise” in the CMB.

The odds of picking up such a signal from a telescope like BICEP2, which only observes a small piece of the sky, are low, points out Braden. That’s because the collision, if it happened, would have to be coincident with the telescope’s field of view. “Ideally an all-sky satellite experiment designed to very precisely measure polarization is the best hope,” says Braden. Today, Braden is working at University College London in the laboratory of Hiranya Peiris, who has participated in previous searches for evidence of bubble collisions. He anticipates that she and her colleagues will be eager to take up the search for the new polarization signal.

If the signal is a no-show, that doesn’t rule out the existence of other universes. But if observers do detect and confirm it, it would be revolutionary. “You might not see anything,” says Braden. “But if you do, it’s giving you an observational handle on physics you probably can’t get in any other way.”

**Go Deeper**

*Editor’s picks for further reading*

arXiv: Eternal Inflation and Its Implications

Alan Guth, who pioneered the theory of cosmic inflation, provides an in-depth look at its implications for the creation of a multiverse of “bubble” or “pocket” universes.

Early Universe @UCL: Eternal Inflation and Colliding Universes

An introduction to inflation, eternal inflation, and what goes in to the modeling of bubble collisions.

Quanta: Multiverse Collisions May Dot the Sky

Science writer Jennifer Ouellette goes inside the search for evidence of bubble collisions.

At the center of this quagmire is the “wave function.” Using the wave function, better known by its mathematical nickname, ψ (“psi”), physicists can calculate the probability that a quantum measurement will have a particular outcome. The success of this procedure has allowed us to control the subatomic world with unprecedented precision: You can thank (or curse) quantum theory for your iPads, smartphones, and laptops. Yet, unlike classical physics, quantum mechanics can’t deliver a single, definite answer to a simple question about the outcome of a measurement. Instead, it returns a probability distribution representing many different possible outcomes. It’s only after you make a measurement that you observe a stable, predictable, classical outcome. At this point, the wave function is said to have “collapsed.”
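The procedure described here is short enough to show concretely; the amplitudes below are arbitrary examples of mine, not values from any experiment:

```python
import numpy as np

# Born-rule sketch with made-up amplitudes: a qubit's wave function is a
# pair of complex numbers, and squared magnitudes give outcome probabilities.
psi = np.array([1 + 1j, 1 + 0j])     # arbitrary, unnormalized amplitudes
psi = psi / np.linalg.norm(psi)      # normalize so probabilities sum to 1
probs = np.abs(psi) ** 2
print(np.round(probs, 3))            # → [0.667 0.333]
```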

To some, this suggests that there is a gap between the real, physical universe and whatever it is that the wave function is describing. So, what does the wave function actually represent? And what, if anything, is actually collapsing? Now, theorists and experimenters are bringing new insight (and new data) to this devilishly complex debate.

**The wave function debate**

Philosophers use the word “ontic” to describe real objects and events in the universe, things that exist regardless of whether anyone observes them. If you think of the universe as a video game, the so-called “ψ-ontic” view holds that the wave function is the source code. From this perspective, the wave function does indeed correspond directly to physical reality, containing a complete description of what philosophers call “the furniture of the world.” For these “ψ-ontologists” (as their opponents playfully call them), quantum theory, and reality itself, is ultimately about how the wave function unfolds over time, according to the Schrödinger Equation. In the quantum realist view, ψ is, in some sense, “all there is.”

To many thinkers in this camp, nothing extraordinary happens at the moment the wave function collapses. The apparently instantaneous collapse is actually just a very rapid process that occurs as a formerly-isolated quantum system interacts with its surrounding environment.

By contrast, the alternative “ψ-epistemic” view holds that the wave function represents at most our limited knowledge of the state of the system—not the source code, but just what you can learn about the source code, if it exists, from a particular round of the game. Some ψ-epistemologists believe an actual ontic state still exists even if the wave function is just a convenient computational tool that doesn’t capture all of the underlying reality. Others in the ψ-epistemic camp contend that the physical ontic state may not even exist in a meaningful way without an observer present: the game doesn’t exist if there’s no one there to play it. Most of the following discussion will adopt a “realist” position, which holds that there is a real, physical, world that exists independent of the observer, regardless of whether or not the wave function captures the whole story.

In the ψ-epistemic view, wave function collapse is not an actual physical process. Instead, it represents the near-instantaneous updating of our knowledge about the state of the system. This seems to give the observer some kind of special status, which may or may not be desirable, depending on your perspective. As a bonus, in this view, uncomfortable quantum superpositions, like those that put Schrödinger’s cat into mortal purgatory, are mere mathematical mirages, sums of possibilities, not actualities. Even if we are temporarily ignorant of it, there may really be only one actual fact of the matter, at a given time, about the questionable vital status of Schrödinger’s cat. It is only our knowledge that seems to change discontinuously, not the cat’s actual state.

**New insights**

Is the wave function objective reality or just subjective knowledge? With such diametrically opposing views, it is no wonder that the two camps can’t “collapse” onto the same meaning for ψ. Now, recent theoretical work by the British physicists Matthew Pusey, Jonathan Barrett, and Terry Rudolph (PBR) has presented the strongest theoretical evidence to date in favor of the ψ-ontic view. The trio of theorists have shown that—with certain assumptions—the ψ-epistemic view contradicts the predictions of quantum mechanics. In light of the astounding empirical success of quantum theory, this seems to suggest that the wave function really does correspond to an objective physical reality, and the ψ-epistemic team is out of luck.

Not so fast, say the skeptics. Remember those “certain assumptions” I mentioned? One of those assumptions is that systems prepared independently have independent physical ontic states; that is, that a photon in Vienna, for example, has absolutely nothing to do with a photon in Cambridge. But almost everything we have ever been able to access experimentally has a fairly recent shared causal history. Even if you agree that, in practice, a quantum system prepared in Vienna is approximately independent of a quantum system prepared in Cambridge, the Earth is a cosmically small place, and light takes only a few milliseconds to travel between the two cities. Furthermore, the atoms in everything on Earth all emerged from a shared cosmic causal past, stretching all the way back to the big bang, nearly 14 billion years ago.
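The scale of “cosmically small” is easy to put in numbers; the city-to-city distance below is my rough figure, not a surveyed one:

```python
# Light-travel times on Earth; the Vienna-Cambridge distance is a rough
# assumed figure.
c = 299_792.458             # speed of light, km/s
vienna_cambridge = 1_250.0  # km, approximate great-circle distance
earth_diameter = 12_742.0   # km, mean diameter of the Earth
print(vienna_cambridge / c * 1e3)  # ≈ 4.2 milliseconds between the two labs
print(earth_diameter / c * 1e3)    # ≈ 42.5 milliseconds across the whole planet
```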

So, how can we know for sure that no parts of each experimental apparatus or quantum system are quantum mechanically entangled with one another, even if only to a tiny degree? Each system is certainly entangled with its local environment, and by considering larger and larger parts of the surrounding environment, it doesn’t take long until the wider environment encompasses both experiments.

While these concerns might not affect the results of most quantum experiments in a noticeable way, the PBR theorem requires that the two systems be prepared completely independently. Any violation of this assumption, no matter how small, would invalidate the theorem’s conclusions. In fact, questioning the seemingly reasonable assumption of preparation independence, and even whether scientists have complete free will to set up their experiments, is one of the main motivations that led my colleagues at MIT and me to propose an experiment that uses causally disconnected quasars to choose experimental settings in a test of Bell’s inequality.

**Back to the laboratory**

Late last year, a team of experimental physicists including lead author Martin Ringbauer, working in the group of Professor Andrew White at the University of Queensland, performed an experiment designed to test whether the ψ-ontic or ψ-epistemic picture gives the best explanation for certain quantum experiments, without having to make the same assumptions that PBR did. The key issue is that certain quantum states called “orthogonal” are relatively easy to distinguish experimentally: for example, a photon with “horizontal” polarization versus another with “vertical” polarization. Other “non-orthogonal” quantum states, like two different combinations of both horizontal and vertical polarization, cannot be distinguished perfectly, even if the experimenter knows what the possibilities are in advance.
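The limit on distinguishing non-orthogonal states can be made quantitative. Here is a sketch using the Helstrom bound, the textbook optimum for telling apart two equally likely pure states in a single measurement; the example states are my own:

```python
import numpy as np

# Helstrom bound: best achievable success probability when distinguishing
# two equally likely pure states a and b with a single measurement.
def best_success(a, b):
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    overlap = abs(np.vdot(a, b)) ** 2          # squared inner product
    return 0.5 * (1 + np.sqrt(1 - overlap))

h_pol = np.array([1.0, 0.0])   # "horizontal" polarization
v_pol = np.array([0.0, 1.0])   # "vertical" polarization
diag = np.array([1.0, 1.0])    # equal mix of horizontal and vertical

print(best_success(h_pol, v_pol))  # → 1.0: orthogonal, perfectly distinguishable
print(best_success(h_pol, diag))   # ≈ 0.854: non-orthogonal, errors unavoidable
```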

The ψ-ontic and ψ-epistemic views tell very different stories about why non-orthogonal quantum states are so hard to tell apart in the lab. In the ψ-ontic view, the quantum state is uniquely determined by the ontic state. But in the ψ-epistemic picture, more than one quantum wave function can represent the same ontic reality. Think of the old chestnut about the tree falling in the forest: assuming for the moment that the tree does have an ontic state even in the absence of an observer, that ontic state can be either “fallen” or “not fallen,” and the quantum state can be “sound” or “no sound.” A quantum state of “no sound” can correspond to two different realities—one in which it didn’t fall, and one in which no one was there to hear it—so knowing the quantum state alone doesn’t tell you the true ontic state.

We can show this visually using a graph like the one below. Assuming that there really is some underlying “reality” (a subject for another day), the ψ-ontic model says that the wave functions of two independent states can’t overlap. But in the ψ-epistemic model, on the right, two different wave functions can correspond to the same ontic state, represented by the purple area where the curves of the wave functions *do* overlap.

Now, imagine that, instead of overlapping two-dimensional curves, we had overlapping three-dimensional spheres. (For extra credit, and a guaranteed headache, you can even try imagining four-dimensional overlapping hyperspheres.) The more dimensions you add, the smaller the relative size of the overlap. In quantum mechanics, this means that as you measure more parameters of your system—not just polarization but the direction of motion, for instance—it’s harder to find two wave functions that represent the same ontic reality.
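The shrinking overlap is easy to see numerically. A Monte Carlo sketch (my own construction, not the authors’): two unit balls in d dimensions whose centers sit one radius apart, and the fraction of one ball that also lies inside the other:

```python
import numpy as np

# Fraction of a unit d-ball that overlaps a second unit d-ball whose
# center sits one radius away, estimated by Monte Carlo sampling.
rng = np.random.default_rng(0)

def overlap_fraction(d, n=200_000):
    # Uniform points in the unit d-ball: random direction, radius ~ U^(1/d).
    v = rng.normal(size=(n, d))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    pts = v * (rng.random(n) ** (1.0 / d))[:, None]
    pts[:, 0] -= 1.0            # measure distances to the second ball's center
    return np.mean(np.linalg.norm(pts, axis=1) <= 1.0)

for d in (1, 2, 3, 8):
    print(d, overlap_fraction(d))   # the overlap fraction shrinks as d grows
```

The exact fractions depend on the geometry chosen, but the downward trend with dimension is the point the paragraph makes.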

Ringbauer and his colleagues tested this out by measuring several states of specially-prepared photons, each with either three or four parameters. Adding a new quantum state is like adding an extra sphere to the set. When adding more spheres, and/or increasing the number of dimensions, it becomes even harder to find places where all the spheres overlap. With this analogy in mind, the Queensland group found that as they increased the number of parameters for each quantum state and increased the number of states they were trying to distinguish between, their experimental results increasingly diverged from the predictions of a well-defined ψ-epistemic model. Their experimental results thus strongly conflict with the ψ-epistemic picture’s “overlap” model—a major strike against the ψ-epistemic viewpoint.

The new results aren’t totally free of controversial assumptions, though. For example, Ringbauer and colleagues have to assume that there are such things as objective physical properties, independent of observers. (That is, that the moon exists even when you’re not looking at it, as Einstein once said.) Their argument also hinges on the specific way they define a physical model to be ψ-ontic or ψ-epistemic, adapting an expanded framework originally developed by John Bell in 1964 when deriving his famous Bell’s theorem. But they do avoid the assumption of preparation independence that was required for the PBR theorem. Overall, this is an elegant approach to attacking a deep foundational issue with experimental data.

If new results like these help us to better understand the nature of reality, many physicists will undoubtedly utter a ψ of relief. In all seriousness, I expect (and hope) that a combination of new theoretical ideas and real-world experiments will help reconcile these two seemingly incompatible views on the wave function. Both camps have many points in their favor and both seem to be at least partially right.

In the quest to understand the true Nature of Reality, we must continually question our most basic assumptions, admit and quantify our ignorance, and be explicit about what we are assuming. All of this is required in order to edge ever closer to finally grasping the meaning of the complex mathematical workhorse of quantum theory: the century-old, yet still misunderstood, wave function.

**Go Deeper**

*Author’s picks for further reading*

arXiv: Measurements on the reality of the wavefunction

In this preprint of their Nature Physics paper published in 2015, Martin Ringbauer and his colleagues in the group of Andrew White (Queensland) describe their experiment bolstering the ψ-ontic view of the wave function.

arXiv: QBism, the Perimeter of Quantum Bayesianism

A broad overview of Quantum Bayesianism, both philosophical and technical, by one of its leading proponents, Christopher Fuchs (UMass Boston).

arXiv: On the reality of the quantum state

In this 2012 paper, later published in Nature Physics, Matthew Pusey (Perimeter Institute), Jonathan Barrett (Oxford) and Terry Rudolph (Imperial) present their novel “PBR” no-go theorem supporting the ψ-ontic view.

arXiv: A Synopsis of the Minimal Modal Interpretation of Quantum Theory

Jacob Barandes (Harvard) and David Kagan (UMass Dartmouth) present a synopsis of an explicitly realist quantum interpretation with both ψ-ontic and ψ-epistemic features.

Matt Leifer: Can the Quantum State be Interpreted Statistically?

An excellent explanation of the PBR theorem, and the basic issues surrounding ψ-ontic and ψ-epistemic models, by quantum foundations expert Matt Leifer (Perimeter Institute).

Nature News: Physics: QBism puts the scientist back into science

An accessible article by eminent quantum theorist, and converted Quantum Bayesian, N. David Mermin (Cornell) about how one of the most prominent ψ-epistemic views, QBism, helps demystify both quantum mechanics and classical physics, including our subjective perception of time.

Quanta: Is the Quantum State Real? An Extended Review of ψ-ontology Theorems

A thorough recent review article also by Matt Leifer (Perimeter Institute) on the most important results regarding ψ-ontic and ψ-epistemic models in the technical literature.

This passage from the 2010 book “The Grand Design” set off a firestorm (or at least a brushfire) of controversy. Has philosophy been eclipsed by science in the quest for understanding reality? Is philosophy just dressed-up mysticism, disconnected from scientific understanding?

Many questions about the nature of reality cannot be properly pursued without contemporary physics. Inquiry into the fundamental structure of space, time and matter must take account of the theory of relativity and quantum theory. Philosophers accept this. In fact, several leading philosophers of physics hold doctorates in physics. Yet they chose to affiliate with philosophy departments rather than physics departments because so many physicists strongly discourage questions about the nature of reality. The reigning attitude in physics has been “shut up and calculate”: solve the equations, and do not ask questions about what they mean.

But putting computation ahead of conceptual clarity can lead to confusion. Take, for example, relativity’s iconic “twin paradox.” Identical twins separate from each other and later reunite. When they meet again, one twin is biologically older than the other. (Astronaut twins Scott and Mark Kelly are about to perform this experiment: when Scott returns from a year in orbit in 2016 he will be several milliseconds younger than Mark, who is staying on Earth—the orbiting clock falls behind by roughly 25 microseconds each day.) No competent physicist would make an error in computing the magnitude of this effect.
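The magnitude of the effect can indeed be computed on the back of an envelope. The sketch below uses rough, illustrative orbital values (ISS-like altitude of 400 km; a circular orbit) and combines the two leading relativistic terms: the orbiting clock runs slow because of its speed, while it runs fast because it sits higher in Earth's gravitational potential.

```python
import math

# Rough estimate of the relativistic ageing difference for an astronaut in
# low Earth orbit. All values below are approximate and for illustration only.
c  = 2.998e8    # speed of light, m/s
GM = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R  = 6.371e6    # Earth's mean radius, m
h  = 4.0e5      # orbital altitude, m (~400 km, ISS-like)

r = R + h
v = math.sqrt(GM / r)  # circular orbital speed, ~7.7 km/s

# Special-relativistic term: the moving (orbiting) clock runs slow.
sr_per_day = (v**2 / (2 * c**2)) * 86400
# Gravitational term: the higher clock runs fast, partially canceling the above.
gr_per_day = (GM * (1 / R - 1 / r) / c**2) * 86400

net_per_day = sr_per_day - gr_per_day  # seconds the orbiting twin falls behind per day
print(f"velocity term : {sr_per_day * 1e6:.1f} microseconds/day")
print(f"gravity term  : {gr_per_day * 1e6:.1f} microseconds/day")
print(f"net           : {net_per_day * 1e6:.1f} microseconds/day "
      f"(~{net_per_day * 340 * 1e3:.1f} ms over a 340-day mission)")
```

The velocity term dominates at this altitude, so the orbiting twin ends up younger by a few milliseconds over a year—an unambiguous calculation, whatever one's interpretation of it.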

But even the great Richard Feynman did not always get the *explanation* right. In “The Feynman Lectures on Physics,” he attributes the difference in ages to the acceleration one twin experiences: the twin who accelerates ends up younger. But it is easy to describe cases where the opposite is true, and even cases where neither twin accelerates but they end up different ages. The calculation can be right and the accompanying explanation wrong.

If your goal is only to calculate, this might be sufficient. But understanding existing theories and formulating new ones requires more. Einstein arrived at the theory of relativity by reflecting on conceptual problems rather than on empirical ones. He was primarily bothered by explanatory asymmetries in classical electromagnetic theory. Physicists before Einstein knew, for instance, that moving a magnet in or near a coil of wire would induce an electric current in the coil. But the classical explanation for this effect appeared to be entirely different when the motion was ascribed to the magnet as opposed to the coil; the reality is that the effect depends only on the relative motion of the two. Resolving the explanatory asymmetry required rethinking the notion of simultaneity and rejecting the classical account of space and time. It required the theory of relativity.

Comprehending quantum theory is an even deeper challenge. What does quantum theory imply about “the nature of reality?” Scientists do not agree about the answer; they even disagree about whether it is a sensible question.

The problems surrounding quantum theory are not mathematical. They stem instead from the unacceptable terminology that appears in presentations of the theory. Physical theories ought to be stated in precise terminology, free of ambiguity and vagueness. John Bell provides a list of insufficiently clear concepts in his essay “Against ‘measurement’”:

Here are some words which, however legitimate and necessary in application, have no place in a formulation with any pretension to physical precision:

system, apparatus, environment, microscopic, macroscopic, reversible, irreversible, observable, information, measurement.

Textbook expositions of quantum theory make free use of these forbidden terms. But how, in the end, are we to determine whether something is a “system”, or is large enough to count as “macroscopic,” or whether an interaction constitutes a “measurement?” Bell’s fastidiousness about language is the outward expression of his concern about concepts. Sharp physical theories cannot be built out of vague notions.

Philosophers strive for conceptual clarity. Their training instills certain habits of thought—sensitivity to ambiguity, precision of expression, attention to theoretical detail—that are essential for understanding what a mathematical formalism might suggest about the actual world. Philosophers also learn to spot the gaps and elisions in everyday arguments. These gaps provide entry points for conceptual wedges: nooks where overlooked alternatives can take root and grow. The “shut up and calculate” ethos does not promote this critical attitude toward arguments; philosophy does.

What philosophy offers to science, then, is not mystical ideas but meticulous method. Philosophical skepticism focuses attention on the conceptual weak points in theories and in arguments. It encourages exploration of alternative explanations and new theoretical approaches. Philosophers obsess over subtle ambiguities of language and over what follows from what. When the foundations of a discipline are secure this may be counter-productive: just get on with the job to be done! But where secure foundations (or new foundations) are needed, critical scrutiny can suggest the way forward. The search for ways to marry quantum theory with general relativity would surely benefit from precisely articulated accounts of the foundational concepts of these theories, even if only to suggest what must be altered or abandoned.

Philosophical skepticism arises from the theory of knowledge, the branch of philosophy called “epistemology.” Epistemology studies the grounds for our beliefs and the sources of our concepts. It often reveals tacit presuppositions that may prove wrong, sources of doubt about how much we really know. Having started with Hawking, let’s let Einstein have the last word:

How does it happen that a properly endowed natural scientist comes to concern himself with epistemology? Is there no more valuable work in his specialty? I hear many of my colleagues saying, and I sense it from many more, that they feel this way. I cannot share this sentiment….

Concepts that have proven useful in ordering things easily achieve such an authority over us that we forget their earthly origins and accept them as unalterable givens. Thus they come to be stamped as “necessities of thought,” “a priori givens,” etc. The path of scientific advance is often made impassable for a long time through such errors. For that reason, it is by no means an idle game if we become practiced in analyzing the long commonplace concepts and exhibiting those circumstances upon which their justification and usefulness depend, how they have grown up, individually, out of the givens of experience. By this means, their all-too-great authority will be broken.

**Go Deeper**

*Editor’s picks for further reading*

3:AM Magazine: on the foundations of physics

Tim Maudlin talks about the relationship between physics and philosophy in this interview with Richard Marshall.

The Nature of Reality: Debating the Meaning of Quantum Mechanics

Discover some of the many competing ways physicists interpret the equations of quantum mechanics.

New York Academy of Sciences: Transcending Matter: Physics and Ultimate Meaning

Panelists Tim Maudlin, Priya Natarajan, Adam Frank, and David Kaiser discuss the intersection of physics and philosophy.

Eugene Wigner wrote these words in his 1960 article “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” The Nobel prize-winning physicist’s report still captures the uncanny ability of mathematics not only to describe and explain, but to predict phenomena in the physical world.

How is it possible that all the phenomena observed in classical electricity and magnetism can be explained by means of just four mathematical equations? Moreover, physicist James Clerk Maxwell (after whom those four equations of electromagnetism are named) showed in 1864 that the equations predicted that varying electric or magnetic fields should generate certain propagating waves. These waves—the familiar electromagnetic waves (which include light, radio waves, x-rays, etc.)—were eventually detected by the German physicist Heinrich Hertz in a series of experiments conducted in the late 1880s.

And if that is not enough, the modern mathematical theory which describes how light and matter interact, known as quantum electrodynamics (QED), is even more astonishing. In 2010 a group of physicists at Harvard University determined the magnetic moment of the electron (which measures how strongly the electron interacts with a magnetic field) to a precision of less than one part in a trillion. Calculations of the electron’s magnetic moment based on QED reached about the same precision and the two results agree! What is it that gives mathematics such incredible power?

The puzzle of the power of mathematics is in fact even more complex than the above examples from electromagnetism might suggest. There are actually two facets to the “unreasonable effectiveness,” one that I call *active* and another that I dub *passive*. The active facet refers to the fact that when scientists attempt to light their way through the labyrinth of natural phenomena, they use mathematics as their torch. In other words, at least some of the laws of nature are formulated in directly applicable mathematical terms. The mathematical entities, relations, and equations used in those laws were developed for a specific application. Newton, for instance, formulated the branch of mathematics known as calculus because he needed this tool for capturing motion and change, breaking them up into tiny frame-by-frame sequences. Similarly, string theorists today often develop the mathematical machinery they need.

Passive effectiveness, on the other hand, refers to cases in which mathematicians developed abstract branches of mathematics with absolutely no applications in mind; yet decades, or sometimes centuries later, physicists discovered that those theories provided necessary mathematical underpinnings for physical phenomena. Examples of passive effectiveness abound. Mathematician Bernhard Riemann, for example, discussed in the 1850s new types of geometries that you would encounter on surfaces curved like a sphere or a saddle (instead of the flat plane geometry that we learn in school). Then, when Einstein formulated his theory of General Relativity (in 1915), Riemann’s geometries turned out to be precisely the tool he needed!

At the core of this math mystery lies another argument that mathematicians, philosophers, and, most recently, cognitive scientists have had for a long time: Is math an invention of the human brain? Or does math exist in some abstract world, with humans merely discovering its truths? The debate about this question continues to rage today.

Personally, I believe that by asking simply whether mathematics is discovered or invented, we forget the possibility that mathematics is an intricate combination of inventions *and* discoveries. Indeed, I posit that humans invent the mathematical concepts—numbers, shapes, sets, lines, and so on—by abstracting them from the world around them. They then go on to discover the complex connections among the concepts that they had invented; these are the so-called theorems of mathematics.

I must admit that I do not know the full, compelling answer to the question of what is it that gives mathematics its stupendous powers. That remains a mystery.

**Go Deeper**

*Editor’s picks for further reading*

NOVA: The Great Math Mystery

Is math invented by humans, or is it the language of the universe? NOVA takes on this question in a new film premiering April 15, 2015 at 9pm on most PBS stations.

NOVA: Describing Nature with Math

How do scientists use mathematics to define reality? And why? Peter Tyson investigates two millennia of mathematical discovery.

The Washington Post: The Structure of Everything

Learn more about the “unreasonable effectiveness of mathematics” in this review of Mario Livio’s book “Is God a Mathematician?”

Who will be the next Einstein? Will his ingenious contributions ever be surpassed? Is there anyone brilliant enough to complete his dream of a unified theory of nature? Despite being an accomplished physicist, Nobel laureate, and Renaissance man, Einstein’s sometime ally, sometime adversary Erwin Schrödinger never came close to matching Einstein’s international fame. If anything, his cat has taken all the glory—at least as a cultural meme.

Though Einstein was his collaborator and mentor, Schrödinger briefly imagined himself successor to the throne when, in January 1947, he thought he had found a theory of everything. Jumping the gun, he announced to the Royal Irish Academy that his new General Unitary Theory superseded Einstein’s general theory of relativity. The international press picked up on Schrödinger’s startling announcement, asking if he might have fulfilled Einstein’s quest. However, Schrödinger’s unification attempt turned out, like so many others, to be a false start, with no experimental evidence to support it.

Schrödinger certainly hasn’t been the only one to try to fill Einstein’s shoes. Since 1919, when the public first tasted the theory of relativity through the announcement of the solar eclipse measurements, it has had an insatiable appetite for news about Einstein and possible successors. While he was alive, as we’ve seen, the press trumpeted every unified field theory he proposed as if it were a major breakthrough. After his death, stories about brilliant individuals tantalizingly close to completing his mission have continued to make headlines. All in all, Einstein, his unfinished quest, and the question of who might inherit his throne have served as touchstones for almost a century.

Research scientists know that progress in any field is usually incremental, taking place over years or even decades. Groundbreaking discoveries are few and far between. Often a scientist needs to be lucky enough to be in the right place at the right time to make a mark. Most scientific research today is completed by large teams, rather than by single individuals.

Yet the myth persists of the lone genius changing everything around us. Type “next Einstein” into any Internet search engine and expect to be bombarded with results—everything from recipes for educational success to claims made in resumes or personal ads. Here are a few assorted examples of recent media musings: Will the next Einstein be a “surfer dude”? Is he a child prodigy with an exceptional IQ? What if the next Einstein is a computer? Could a smartphone application identify him? Or might an old-fashioned DVD designed for little ones do the trick? As a 2009 New York Times headline advised with tongue firmly in cheek, “No Einstein in Your Crib? Get a Refund!”

The formula that produced Einstein was a perfect match between crucial scientific problems that demanded radical solutions, exceptional insights that often overturned commonly held beliefs, an ironically photogenic visage (who knew that rumpled sweaters, a Brillo pad mustache, and a mop of unruly gray hair could be so compelling?), and the omnipresent glare of the camera. His rise to fame coincided, more or less, with the golden age of Hollywood, when cinematic newsreels projected the latest fashions, feats, and foibles of celebrities. Like Douglas Fairbanks, Mary Pickford, Charlie Chaplin, the Barrymores, and countless other stars of the cinema in the twenties, thirties, and forties, Einstein traipsed across the screens of thousands of Main Street movie houses. The public viewed him stopping on his strolls to wave to admirers, giving speeches about current affairs, headlining benefits for various charities, and occasionally reporting progress in his research. Hungry to fill their quota of human interest stories, reporters lapped up news about the German Jewish scientist like scrawny cats with spilled milk.

It is not clear if that formula will ever be repeated. For one thing, there has been an explosion of publications. Many theories vie for prominence—far more than in the days of Einstein and Schrödinger. Yet testing these approaches demands ever higher energies, and hence increasingly expensive and time-consuming projects, such as the Large Hadron Collider (LHC) near Geneva, Switzerland. Unlike, for example, the eclipse measurements, experimental science has generally proceeded far more slowly and cautiously, requiring far greater quantities of data before announcing results. In high-energy physics, teams typically involve hundreds of researchers rather than lone pioneers. At the same time, the media have diversified, so not everyone’s eyes are fixed on the same scientific celebrities.

Peter Higgs, one of the recipients of the 2013 Nobel Prize in physics, has become a rare contemporary example of a well-known, accomplished theorist. Yet his name recognition hardly rivals Einstein’s. The particle named after him, the Higgs boson, has come to be known colloquially as the “God particle.” When it was discovered in 2012 at the LHC, much of his press coverage was shared with a divine being. (To India’s dismay, its native son Satyendra Bose hardly got any mention.)

As the LHC is restarted with collision energies amped up to 13 TeV, the physics community will be sifting through its collected data searching for hints of new theories. If interesting results are found, undoubtedly eager theorists will propose new unification ideas. Let’s hope that the media will look at these suggestions with a critical eye and wait for solid evidence, if it emerges, before declaring that the next Einstein has arrived.
