In this video pencast, theorist Delia Schwartz-Perlov explains what physicists are really talking about when they talk about extra dimensions of space. Could our universe actually contain unseen dimensions, and could these extra dimensions help unify quantum theory and gravity?
On July 4, 2012, the CERN auditorium was full. That’s not unusual; the room often hosts scientific presentations to packed houses. What was unusual was that this seminar was watched by millions of people worldwide, including reporters from high-impact media outlets like BBC, CNN, and The New York Times.
So what was the announcement that caused a hectic world to briefly pause and listen? A new subatomic particle had been discovered, and its properties were consistent with those predicted for the long-sought Higgs boson. Discovering the Higgs boson would provide the experimental evidence needed to confirm the existence of the Higgs field, which is thought to give fundamental subatomic particles their mass.
Physicists were careful not to claim that they had conclusively discovered the Higgs boson. The Higgs boson was predicted in 1964 to have a litany of very specific properties. Until scientists can demonstrate that the newly discovered particle matches all of the predictions, there remains the possibility that the new particle is something wholly unexpected. Of the properties that had been tested prior to the seminar, all pointed to this being the Higgs, which is why scientists said “consistent with the Higgs boson.” To use a sensory metaphor: what was found looked and smelled like the Higgs boson, but nobody had yet been able to taste or touch it. So some uncertainty remained. This uncertainty still remains today, and it will be some time before scientists can definitively state that the observed particle is the Higgs boson.
But let’s imagine that the discovered particle, which is a boson of mass about 125 times that of the proton, is the Higgs boson. What then?
You’d think scientists would celebrate (and we did…more than a few champagne corks were popped), but once the confetti settled, there were some furrowed brows. Nobody understood why the mass of the Higgs boson was so low. Here’s the source of the conundrum.
A Higgs boson doesn’t always exist as a Higgs boson. Like other quantum particles, it can change forms. For instance, it can briefly convert into a pair of top quarks before coalescing back into a Higgs boson. These evanescent top quarks are called “virtual particles” and are just an example of the several kinds of particles into which a Higgs boson can temporarily fluctuate. So, if you want to predict the mass of the Higgs, you have to take all of these possible forms into account.
Higgs bosons can spontaneously convert into pairs of other subatomic particles. These pairs exist only for a very short time, but their existence will alter the mass of the Higgs boson.
Mathematically, we split the mass of the Higgs into two parts: its “theoretical” mass—that is, the mass it would have if it didn’t fluctuate into different particles—plus the effect of the fluctuations. (For the technically brave, I put the equation that describes this in a footnote¹.)
To make things even more complicated, the effect of the fluctuations itself comes in two pieces. These two terms are multiplied together, not added. The first term involves the maximum energy for which the Higgs theory applies. This works out to be a huge number, about 10³⁸ GeV².
The second term is, roughly speaking, the sum of the effect of the bosons (W, Z & Higgs) minus the sum of the effect of the fermions (top quark). Let’s call this the fermion/boson sum.
So, let’s take a bird’s-eye view of the whole equation. The mass of the Higgs is equal to the theoretical mass plus a monstrously large number multiplied by the fermion/boson sum. Unless the fermion/boson sum is practically zero, the observed mass of the Higgs boson should be huge.
The only way to escape this conclusion is to somehow balance the fermion/boson sum to be exceedingly small. And to have the balance so perfect is utterly unnatural, as if we added up all the monthly paychecks of everyone in the United States, subtracted their monthly bills, and found that the two huge numbers canceled out almost exactly.
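To get a feel for the numbers, here is a rough Python sketch of the arithmetic. The particle masses are measured values; the cutoff of 10³⁸ GeV² assumes the theory holds up to extremely high energies, and the technical constant k is ignored, so this is an order-of-magnitude illustration only:

```python
# Rough arithmetic behind the Higgs "naturalness" puzzle.
# Measured particle masses in GeV (rounded).
m_z, m_w, m_h, m_top = 91.2, 80.4, 125.0, 173.0

# The fermion/boson sum from the footnoted equation, in GeV^2:
fb_sum = m_z**2 + 2 * m_w**2 + m_h**2 - 4 * m_top**2

# If the theory holds up to extremely high energies, the cutoff factor
# is of order 10^38 GeV^2 (an assumption; the constant k is ignored here).
cutoff_sq = 1e38

correction = abs(fb_sum) * cutoff_sq   # naive size of the fluctuation term
observed_sq = m_h**2                   # observed mass squared, ~1.6e4 GeV^2

print(f"fermion/boson sum: {fb_sum:.0f} GeV^2")
print(f"correction is ~{correction / observed_sq:.0e} times the observed mass squared")
```

With the measured masses, the sum is tens of thousands of GeV² rather than zero, so without some new cancellation mechanism the fluctuation term overwhelms the observed mass by dozens of orders of magnitude.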
That doesn’t happen in bookkeeping, and it shouldn’t happen in physics, either; unless, that is, there is some new and as-yet-undiscovered physical principle that enforces it. Thus, the small mass of the Higgs boson all but ensures that there is new physics to be discovered. Otherwise, we have to “tune” the masses of these particles to very precise values. Such precise balancing strikes physicists as deeply unnatural, which has led theorists to propose a series of ways in which this cancellation could occur naturally.
The most popular is a principle called supersymmetry. At the core of supersymmetry is the idea that, for every known fermion (quarks and leptons), there is a cousin boson (called squarks and sleptons) that we haven’t yet discovered. Similarly, for every known boson (e.g. photon, W, Z, gluon and Higgs boson), there is a cousin, also-undiscovered, fermion (called a photino, wino, zino, gluino and Higgsino). Because every fermion has a cousin boson (and vice versa), the fermion/boson sum is identically zero. Each particle cancels out exactly the effect of the cousin particle predicted by supersymmetry.
There are many technical issues that need to be addressed, not the least of which is that the predicted cousin particles have never been observed. But, so far, scientists can get around that problem. Thus supersymmetry remains an interesting idea.
If the particle found in July of 2012 is the Higgs boson, it definitely brings with it a very puzzling problem. As physicists begin to accept that the Higgs boson has likely been found, they are turning their attention to this most unnatural quandary. The main focus of the LHC is now becoming a search for a natural solution to this difficult question: Why is the Higgs so light?
¹The actual equation is the following: Mass(Higgs, observed)² = Mass(Higgs, theoretical)² + [kΛ]² × [Mass(Z boson)² + 2 × Mass(W boson)² + Mass(Higgs, theoretical)² – 4 × Mass(top quark)²]. Here, k is a technical constant and Λ is the maximum energy to which the theory applies.
A new eye is now open to the cosmos. The Dark Energy Camera, which saw first light on September 12, 2012, is a spectacular new scientific facility with the grandest of goals: no less than understanding the evolution and fate of the entire universe.
For every telescope, “first light” is the moment when the optics and camera are assembled into a single instrument and turned to the night sky for the first time. But first light is just the beginning. While it often yields a spectacular photo or two, single photographs rarely lead to substantive results. Modern measurements require a subtle understanding of the equipment’s idiosyncrasies and the operators and scientists must spend a while familiarizing themselves with their instrument’s performance. After the facility has been put through its paces, real research begins. On January 9, Joshua Frieman, leader of the Dark Energy Survey (DES) collaboration, announced at the 221st meeting of the American Astronomical Society in Long Beach, California, that the team is wrapping up this getting-to-know-you phase, known as the commissioning period. They have already made interesting scientific observations, including discovering distant supernovae and clusters of galaxies.
The 570 megapixel Dark Energy Camera is hooked up to the venerable four-meter Blanco Telescope at the Cerro Tololo Inter-American Observatory, located in the Chilean Andes. Together, they will complete a study of the sky called the Dark Energy Survey, which may bring us closer to an answer to one of the deepest mysteries in cosmology: What is dark energy?
This question has been vexing astronomers since 1998, when they discovered that, contrary to expectations, the expansion of the universe wasn’t slowing down—it was speeding up! Cosmologists accounted for this surprising behavior by invoking a form of repulsive gravity first imagined by Einstein, who abandoned the idea when Hubble’s observation of the expanding universe made it seem unnecessary. Today, in the absence of a specific explanation, astronomers describe it with the generic term “dark energy.”
The Dark Energy Survey will help scientists probe the nature of dark energy. Over the course of 525 nights spread across five years, astronomers will survey a quarter of the southern sky to a depth of billions of light years, revealing how the cosmic expansion rate has changed over nearly nine billion years.
The Dark Energy Survey studies the universe in four distinct ways:
It looks for 4,000 distant supernovae. By measuring each supernova’s brightness, which reveals its distance, and its redshift, the change in its color due to the expansion of the universe, and then comparing the two, astronomers will get a good handle on the cosmic expansion history.
The camera will also study patterns in the spatial distribution of galaxies that were set by a phenomenon called baryon acoustic oscillations. When the universe was smaller and hotter, pressure waves from the Big Bang caused the cosmos to ring like a bell. About 370,000 years after the Big Bang, the universe cooled below a critical temperature, freezing these vibrations into patterns that are still blazoned across the sky in the cosmic microwave background and in the distribution of galaxies. This process is analogous to flash-freezing the ripples on the surface of a pond. By comparing the apparent size of the ripples with their initial size, which can be calculated using information about the conditions that prevailed in the early cosmos, astronomers can provide crucial data on the shape of space itself: whether it is flat or curved and, if curved, exactly how.
The camera will also have the capacity to study the size and makeup of vast clusters of galaxies. Because the properties of dark energy help determine how and when these clusters formed, by studying their history, we can gain new insight into dark energy.
Finally, the Dark Energy Camera will see how light from distant clusters of galaxies is bent by the mass lying between those clusters and our telescopes. By tracking the size and shape of galaxy clusters over time, this gravitational lensing will tell us more about how dark energy has shaped the distribution of matter throughout the universe. In total, the camera will be able to track three hundred million galaxies!
Through these four distinct strategies—each with different strengths and weaknesses—the survey will provide independent measurements of the dark energy of the universe.
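The supernova strategy in the first item above rests on the inverse-square law: if you know a “standard candle’s” intrinsic luminosity, its measured brightness gives its distance. Here is a minimal sketch, with illustrative numbers rather than real DES measurements:

```python
import math

def luminosity_distance_m(luminosity_watts, flux_w_per_m2):
    """Invert the inverse-square law, flux = L / (4 * pi * d**2)."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

# A Type Ia supernova peaks at roughly 10^36 watts; suppose we measure
# a flux of 1e-16 W/m^2 (an illustrative number, not a DES measurement).
d_meters = luminosity_distance_m(1e36, 1e-16)
d_light_years = d_meters / 9.461e15   # meters per light year

print(f"inferred distance: {d_light_years:.1e} light years")
```

For these illustrative numbers the distance comes out at a few billion light years, which is exactly the depth the survey targets. (The real analysis must also correct the apparent brightness for cosmic expansion, which is where the redshift measurement comes in.)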
The portion of the sky that the DES will study in detail is observable from Chile from September to February. Since first light, the collaboration has put their equipment through its paces. To get an early glimpse at a complete set of data, the DES collaboration will spend the rest of the 2012-2013 observation season studying a little under 5% of the region they will eventually explore. Using this strategy, they will have as good a measurement on a small portion of the sky after just a few nights of observation as they will over their entire target after five years. This will allow a relatively quick analysis of a subset of the sky, and the caliber of this small study will already be world-class. The final shakedown is expected to be completed in February, and in September of 2013 the survey will start in earnest, hopefully leading to new insights into the nature of dark energy.
Stay tuned. This is a very exciting time.
The center of the Milky Way galaxy lends its awesome beauty to the skyline above the telescope domes at Cerro Tololo. The Large and Small Magellanic Clouds grace the upper left corner of the photo. (Photo credit: Reidar Hahn)
Minute Physics: 2011 Nobel Prize: Dark Energy
In this video explainer, guest narrator Sean Carroll explains dark energy and cosmic expansion in honor of the 2011 Nobel Prize in physics.
NOVA scienceNOW: Cosmic Perspective: Dark Matter
In this short video, astrophysicist Neil deGrasse Tyson muses on just how much we don't know about the mysterious components of the universe, dark energy and dark matter.
Embedded within the laws of physics are roughly 30 numbers—including the masses of the elementary particles and the strengths of the fundamental forces—that must be specified to describe the universe as we know it. Why do these numbers take the values that they do? We have not been able to derive them from any other laws of physics. Yet, it’s plausible that changing just a few of these parameters would have resulted in a starkly different universe: one without stars or galaxies and even without a diversity of stable atoms to combine into the fantastically complex molecules that compose our bodies and our world. Put another way, if these fundamental parameters had been different from the time of the Big Bang onward, our universe would be a far less complex place. This is called the “fine-tuning observation.” The fine-tuning problem is to explain why these parameters take such special values.
As someone whose thought is both fueled and constrained by the scientific tradition, I am only interested in explanations that are scientific. This means a candidate explanation must be three things: confirmable, falsifiable, and unique. By “confirmable,” I mean that the hypothesis must lead to further consequences, otherwise unexpected or surprising, which could be confirmed by novel but possible experiments. “Falsifiable” means that it is possible to specify a novel but doable experiment that would invalidate the hypothesis if the experimental result contradicted the predictions of the hypothesis. “Unique” means that there are no other simpler or more plausible hypotheses that make the same prediction.
Any explanation that fails these tests should be abandoned. After all, it is possible to imagine a multitude of possible non-scientific explanations for almost any observation. Unless we accept the stricture that hypotheses must be confirmable, falsifiable, and unique, no rational debate is possible; the proponents of the various explanations will never change their minds.
Yet several of the most popular explanations for the fine-tuning problem fail these tests. One such hypothesis is that there is a god who made the world and chose the values of the parameters so that intelligent life would arise. This is widely believed, but it fails the test for a scientific explanation.
Another hypothesis that fails the test is what we may call the “anthropic multiverse.” Though there are many variations on this theme, the essential idea is that our universe is one of a large or infinite set of worlds that exist simultaneously, each with different, random values for those 30-some physical parameters. Hence, our universe has the very rare property of having parameters that give rise to sufficient complexity to make it hospitable to intelligent life. To connect this hypothesis with observations we have to limit ourselves to the study of the subpopulation of universes in which we could live. This is called using the anthropic principle.
The anthropic multiverse cannot make any falsifiable predictions, though. Here is one proof: We can divide all the parameters that define each universe into two classes. First, there are those that matter to the existence of life—change one of those, and your universe is no longer hospitable to life. But since we already know, or could deduce from our existence, the values of these parameters, they can’t be used to falsify predictions of the anthropic multiverse. Second, there are parameters that don’t matter to the development of intelligent life. Those parameters can take any value and still yield up a universe teeming with life. These parameters are distributed randomly, so they might take any value in our universe. Because any and every value is allowed, this second set of parameters can’t be used to falsify predictions of the anthropic multiverse either.
Even if an anthropic multiverse is fundamentally unscientific, though, that does not mean we need to throw out all multiverse theories. One way to make a multiverse theory scientific is to suggest that complex universes like ours must be typical in the population of universes. Now we can make predictions without invoking the anthropic principle. For instance, in my first book, “The Life of the Cosmos,” I proposed the “theory of cosmological natural selection,” which predicts that the parameters of physics are fine-tuned to produce many black holes, as is indeed the case in our universe, as we can see from its great chemical and astrophysical complexity. It turns out that a universe that makes many stars, and hence many black holes, is also filled with the oxygen and carbon needed for life. This theory made a few falsifiable predictions that have so far held up, despite several opportunities over the last two decades for real observations to contradict them. One of these predictions is that no neutron star can have more than twice the mass of the sun. These predictions involve properties that are otherwise very surprising; were they to be confirmed, there would be no other explanation on the table.
There is one property of the inflationary multiverse that just plausibly could be confirmed, were certain parameters very delicately tuned: the observation of patterns in the cosmic microwave background that could be explained by other bubble universes having collided with ours. But if this is not observed, it doesn’t falsify the hypothesis; it just means those parameters are not so finely tuned.
Readers of popular science may have encountered claims that some predictions of the anthropic principle have been confirmed. However, I argue that those claims are based on at least three different kinds of fallacies. First, there is the assumption that the properties we observe around us—for example, the fact that carbon and galaxies are plentiful in the universe—are essential for life. However, we can seek an explanation for why galaxies are plentiful without needing to assert that galaxies are helpful for life. We can see by observation that galaxies are plentiful without having to be in one. Second, many of these anthropic arguments make untestable claims about properties of hypothetical universes that will remain forever unobserved. Finally, there is the “inverse gambler’s fallacy”: Observing a single trial with an improbable outcome and deducing that it must be one of a large number of trials.
Defenders of theistic explanations assert that it might be the case that there is a god who made the universe and tuned its parameters so that we could exist. It might. Similarly, defenders of anthropic multiverse scenarios assert that it might just be the case that our universe is one of a vast collection of worlds with random laws and parameters. This also might be true. But science is not about what might be true, it is about what can convincingly be argued for by rational argument from public evidence. If we weaken this standard to admit the anthropic multiverse, we open the door to equally unscientific theistic explanations. The proponents of each can (and do) argue with each other, but they will never convince each other, for they have given up the method and criteria that are necessary to make a convincing case for a claim in science. Meanwhile, the fine-tuning observation is a challenge that requires a scientific explanation.
Go Deeper: Editor's picks for further reading
FQXi: Our Not-So-Special Universe
In this blog post, Zeeya Merali reviews recent papers questioning whether our universe is really as fine-tuned as we thought.
These days it often seems that if a theory has loose ends, its dangling threads are surreptitiously tied together out of view within the hidden fabric of a parallel universe. While some researchers recoil from introducing unseen aspects to a theory, others find that the invisible knots create an irresistibly pretty package.
Depending on one’s taste, there are so many types of parallel universes to choose from—alternative cosmos galore. If extra dimensions are not your thing, maybe bifurcating timelines would work. If an endless array of gigantic bubble universes seems intimidating, then perhaps a nursery of baby universes is more endearing. While there is not yet a GPS device or app to navigate through the cartography of scientifically sanctioned parallel possibilities, perhaps this guide to all things alternative will help.
Detlev van Ravenswaay / Photo Researchers, Inc
Let’s start with the oldest, most basic idea and work our way toward newer, more complex models:
What if? Here is the simplest way to transport yourself to a parallel universe: Just imagine all the ways in which our universe might have turned out differently. Each of these might-have-been realities represents a parallel universe. The mathematician Gottfried Leibniz posited that we live in the “best of all possible worlds” (famously satirized by Voltaire in "Candide") and that all these other, unrealized, possibilities for creation would have been less desirable. His perspective has persisted for three centuries as a way of explaining why the cosmos is the way it is. Contemporary physicists who make use of the so-called Anthropic Principle argue that if the universe’s conditions were slightly different, it couldn’t have supported intelligent life, and we wouldn’t be here today to speculate about it. For example, if the inflationary era, a fleeting period of ultra-rapid growth in the very early universe, had continued for a long enough time, the stable structures we see in the cosmos today, such as stars and galaxies, couldn’t have formed. The super-quick expansion would have ripped them apart.
Alternative realities made possible by time travel: Science fiction writers relish the intricate plots woven by introducing time travellers into a story. Einstein’s general theory of relativity weaves space and time into a single fabric and hence hypothetically permits travel to the past, though the mechanics of such a journey are still largely beyond us. In recent decades, backward time travel ideas have been explored in serious articles published in reputable physics journals. If journeying back in time is possible, what would happen if someone changed history? Would they launch a new timeline, and hence a new universe, in which the chain of events was different? The answer won’t be known until backward time travel is either achieved or ruled out.
Sum over histories: Physicist Richard Feynman had a practical, no-nonsense approach to physics, supporting notions that are potentially testable. Yet his approach to quantum field theory introduced the startling concept of reality as a weighted sum of alternative histories. For example, according to Feynman’s formulation, if two electrons approach each other, deflect and scatter, their overall behavior from start to finish must take into account every possible intermediate path—weighted according to each path’s likelihood. It is like assessing how tired someone will be after taking a walk in the woods by assuming that they somehow split up and took every possible route from entrance to exit—assigning more weight to the shortest (and therefore likeliest) paths, but still taking all of them into account.
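The hiking analogy can be made concrete with a toy weighted average. This is only an illustration of the idea of weighting every path; real quantum mechanics sums complex amplitudes, not simple probabilities, and the exponential weights below are an invented stand-in:

```python
import math

# Toy "weighted sum over histories" for the hiker: average the distance
# walked over every route, giving shorter routes more weight.
# (Illustrative weights only; not actual quantum amplitudes.)
routes_km = [2.0, 3.0, 5.0, 8.0]   # every path from entrance to exit

weights = [math.exp(-d) for d in routes_km]   # shorter path, bigger weight
total_weight = sum(weights)
expected_km = sum(w * d for w, d in zip(weights, routes_km)) / total_weight

print(f"expected distance walked: {expected_km:.2f} km")
```

The answer lands close to the shortest route, because that path dominates the weighting, yet every route contributes something—just as in Feynman’s formulation.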
Many-worlds interpretation of quantum mechanics: While Feynman did not assert that the ghostly alternative histories he described represented actual parallel universes, a young graduate student, Hugh Everett III (who shared the same research advisor as Feynman, John Wheeler), made the case that they are. Everett proposed a fundamental reinterpretation of quantum mechanics in which each time that particles interact, reality bifurcates into a set of parallel streams, each representing a different possible outcome. Researchers observing the outcome of such quantum experiments would similarly split up into multiple selves—each thinking that he or she is the only one. For example, suppose a physicist named Eve wants to measure the position of an electron and there are three possible outcomes. Upon taking the measurement, she would instantly divide into three distinct selves, each recording a different result. Each version of Eve would be convinced that she was the real one—wholly unaware of her near-doppelgangers.
Copycat regions of the universe: We now turn from the exceedingly small to the incomprehensibly large. If the universe is infinite, as many cosmologists surmise, then if you travel far enough you will eventually reach regions nearly identical to ours. That’s because if you take a finite number of elements and mix them into an infinite number of combinations, eventually chance will reproduce one of the previous arrangements. It is like playing tic-tac-toe—play enough times and you are bound to repeat yourself. Hence somewhere, by pure chance, there could be a near-parallel Earth where a nearly-identical version of you is reading this article on a parchment scroll illuminated by a glowworm.
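The tic-tac-toe analogy is really the pigeonhole principle: a finite set of arrangements must repeat once you exceed its size. A quick sketch of the counting:

```python
import itertools

# A tic-tac-toe board has 3^9 = 19,683 possible arrangements of X, O,
# and blank squares. Any list of 19,684 boards must therefore contain
# a repeat: the pigeonhole principle behind "copycat" regions.
all_boards = set(itertools.product("XO.", repeat=9))
print(f"distinct boards: {len(all_boards)}")

# However you choose 19,684 boards, at least two must be identical,
# because there are only 19,683 distinct possibilities to draw from.
```

An infinite universe built from finitely many particle arrangements in any finite region plays the same game on a vastly larger board, which is why near-copies of our region become inevitable.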
Bubble Universes and Baby Universes: In general relativity, an energy field of the right variety can trigger space to grow explosively. Researchers use this phenomenon to explain how the universe expanded so rapidly during the inflationary era. However, they’ve come to realize that if explosive expansion took place in one part of space, it probably happened elsewhere, too. Hence, myriad bubble universes could have emerged from the primordial cosmic sea of energy. We would never have access to other bubble universes, though, because they would have since moved away from us well beyond the limits of observation. Baby universes represent a related idea, in which universes would be seeded in the extreme conditions of black holes. The embryonic regions of space would then grow into successor universes in their own right.
Higher Dimensions: For this type of parallel universe, we move beyond the three dimensions of space itself and consider the possibility of a higher, unseen dimension. While such a scenario sounds a bit like "The Twilight Zone," higher dimensions are a vital part of string theory and other attempts at unifying the natural forces. If a higher dimension exists beyond space and time, why can’t we travel through it? Theorists hypothesize that the particles of matter and light cling to our three-dimensional space, preventing us from entering or even observing the extra dimension.
While our bodies have remained in our own universe, our minds have completed an excursion through a weird assortment of parallel universe possibilities. Do any of these types of parallel universes exist? If so, how are they connected? Suggestions for testing these various hypotheses are too numerous to recount in this post. I refer the reader to several interesting proposals:
I was once interviewed by a local radio station about particle physics, gravitation, cosmology, things like that. It was 2005, the centenary of Albert Einstein’s “miraculous year” of 1905, in which he published a handful of papers that turned the world of physics on its head. I did my best to explain some of these abstract concepts, waving my hands up and down, which I can’t help but do even when I know I’m on the radio.
The interviewer seemed happy, but after we finished and he was packing up his recording gear, a lightbulb went off in his head. He asked if I would answer one more question. I said sure, and he once again deployed his microphone and headphones. The question was simple: “Why should anybody care?” None of this research is going to lead to a cure for cancer or a cheaper smartphone, after all.
The answer I came up with still makes sense to me: “When you’re six years old, everyone asks these questions. Why is the sky blue? Why do things fall down? Why are some things hot and others cold? How does it all work?” We don’t have to learn how to become interested in science—children are natural scientists. That innate curiosity is beaten out of us by years of schooling and the pressures of real life. We start caring about how to get a job, meet someone special, raise our own kids. We stop asking how the world works, and start asking how we can make it work for us. Later I found actual studies showing that kids love science until sometime between the ages of ten and fourteen.
These days, after pursuing science seriously for more than four hundred years, we actually have quite a few answers to offer the six-year-old inside each of us. We know so much about the physical world that the unanswered questions are to be found in remote places and extreme environments. That’s physics, anyway; in fields like biology or neuroscience, we have no difficulty at all asking questions to which the answers are still elusive. But physics—at least the subfield of “elementary” physics, which looks for the basic building blocks of reality—has pushed the boundaries of understanding so far that we need to build giant accelerators and telescopes just to gather new data that won’t fit into our current theories.
Over and over again in the history of science, basic research—pursued just for the sake of curiosity, not for any immediate tangible benefit—has proven, almost despite itself, to lead to enormous tangible benefits. Way back in 1831, Michael Faraday, one of the founders of our modern understanding of electromagnetism, was asked by an inquiring politician about the usefulness of this newfangled “electricity” stuff. His apocryphal reply: “I know not, but I wager that one day your government will tax it.” (Evidence for this exchange is sketchy, but it’s a sufficiently good story that people keep repeating it.) A century later, some of the greatest minds in science were struggling with the new field of quantum mechanics, driven by a few puzzling experimental results that ended up overthrowing the basic foundations of all of physics. It was fairly abstract at the time, but subsequently led to transistors, lasers, superconductivity, light-emitting diodes, and everything we know about nuclear power (and nuclear weapons). Without this basic research, our world today would look like a completely different place.
Even general relativity, Einstein’s brilliant theory of space and time, turns out to have down-to-earth applications. If you’ve ever used a global positioning system (GPS) device to find directions somewhere, you’ve made use of general relativity. A GPS unit, which you might find in your cell phone or car navigation system, takes signals from a series of orbiting satellites and uses the precise timing of those signals to triangulate its way to a location here on the ground. But according to Einstein, clocks in orbit (and therefore in a weaker gravitational field) tick just a bit faster than those at sea level. A small effect, to be sure, but it builds up. If relativity weren’t taken into account, GPS signals would gradually drift away from being useful—over the course of just one day, your location would be off by a few miles.
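The GPS numbers can be checked on the back of an envelope. Here is a simplified Python sketch using standard constants; it assumes a circular orbit and ignores the Earth’s rotation and geoid corrections, so the result is approximate:

```python
import math

# Back-of-the-envelope GPS clock corrections (circular orbit assumed).
GM = 3.986004e14        # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8        # speed of light, m/s
r_earth = 6.371e6       # mean Earth radius, m
r_gps = 2.6561e7        # GPS orbital radius, m
day = 86400.0           # seconds per day

# General relativity: weaker gravity in orbit makes the clock run fast.
grav_fast = GM / c**2 * (1 / r_earth - 1 / r_gps) * day

# Special relativity: orbital speed makes the same clock run slow.
v_orbit = math.sqrt(GM / r_gps)
speed_slow = v_orbit**2 / (2 * c**2) * day

net_offset = grav_fast - speed_slow   # seconds gained per day
drift_km = c * net_offset / 1000      # position error if uncorrected

print(f"net offset: {net_offset * 1e6:.1f} microseconds/day")
print(f"position drift: {drift_km:.1f} km/day")
```

The two effects pull in opposite directions, but gravity wins: the net offset is roughly 38 microseconds per day, and multiplying by the speed of light gives a positional drift of about 11 kilometers (several miles) per day.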
But technological applications, while important, are ultimately not the point for me or any of the experimentalists who spend long hours building equipment and sifting through data. They’re great when they happen, and we won’t turn up our nose if someone uses the Higgs boson to find a cure for aging. But it’s not why we are looking for it. We’re looking because we are curious. The Higgs is the final piece to a puzzle we’ve been working on solving for an awful long time. Finding it is its own reward.
Einstein’s special theory of relativity calls for radical renovation of common-sense ideas about time. Different observers, moving at constant velocity relative to one another, require different notions of time, since their clocks run differently. Yet each such observer can use his “time” to describe what he sees, and every description will give valid results, using the same laws of physics. In short: According to special relativity, there are many quite different but equally valid ways of assigning times to events.
Einstein himself understood the importance of breaking free from the idea that there is an objective, universal “now.” Yet, paradoxically, today’s standard formulation of quantum mechanics makes heavy use of that discredited “now.” Playing with paradoxes is part of a theoretical physicist’s vocation, as well as high-class recreation. Let’s play with this one.
First, some background. Despite special relativity’s freedom in assigning times, for each choice there is a definite ordering of events into earlier and later. In a classic metaphor, time flows like a river through all space, and the flow never reverses.1 Figures 1, 2, and 3 tell the central story.
To organize our thoughts, let us make a definite choice of time; in the jargon, let us fix a frame of reference. Then we can frame the history of the world as shown in Figure 1. Here time runs vertically, while space runs horizontally. Since we’re going to be considering several versions of time, we’ll name this one t1. For convenience in drawing, we are restricting attention to a one-dimensional slice of space—in other words, a line. One-dimensional “spaces” of events sharing the same value of time t1 would appear as horizontal lines (which I haven’t drawn). The meaning of the colored regions and their labels will be elucidated presently.
Observers moving at constant velocity with respect to our frame of reference will need to use their own physically appropriate, different versions of “time,” corresponding to how their clocks run. Figures 2 and 3 display the lines for which two different versions of time, t2 and t3, are constant. t2 is the appropriate measure of time for observers moving at a certain constant velocity toward the right, while t3 is the appropriate measure of time for observers moving at a certain velocity toward the left—that is, in our figures, in the horizontal, “spatial” direction—relative to our reference frame. For observers with higher speeds, the tilt of these lines will be steeper. But the tilt never exceeds 45 degrees, because 45 degrees corresponds to the limiting speed, namely the speed of light.
With this background, we are ready to appreciate the distinctions shown in Figure 1. In the center of the diagram is a blue point b representing a specific event. Some events—those that lie in the green future region of space-time—occur at a later time than b, whether we use t1, t2, t3, or any other allowed observer’s measure of time. We say that these events are in b’s causal future (or, if there is no danger of confusion, simply b’s future). What happens at b can affect events in b’s causal future, without upsetting any observer’s sense that a cause—b—must occur before its effect. Closely connected is the fact that signals from b can reach events in b’s future without ever exceeding the speed of light. We call such physically allowed signals “subluminal” signals.
Similarly, we can define b’s causal past, depicted in red. It consists of all events that can affect b. There is a nice symmetry here: If we draw the cones emanating from an event a in b’s causal past, we will find b in a’s upper (future) region. In other words, an event a is in b’s causal past if and only if b is in a’s causal future.
But many events fall into neither of those regions; they are neither in b’s causal future, nor in b’s causal past. We say that such events are “space-like” with respect to b. The event a, which appears in Figures 2 and 3, is of that kind. According to t2, a occurs after b; but according to t3, a occurs before b. Neither a nor b can send subluminal signals to the other.
In a similar way, we can consider the regions that are future, past, or space-like with respect to a. This leads us to a more elaborate division of space-time, illustrated in Figure 4. The orange region contains events in the common (causal) past of both a and b, the purple region their common future, and so forth. This colorful diagram hints at a potentially rich subject, the geometry of causation, that could be developed much further. (Specifically, it could add some spice to high-school geometry and analytical geometry courses, and provide material for independent projects.)
As we’ve seen, if a and b are space-like separated, then either can come before the other, according to different moving observers. So it is natural to ask: If a third event, c, is space-like separated with respect to both a and b, can all possible time-orderings, or “chronologies,” of a, b, c be achieved? The answer, perhaps surprisingly, is No. We can see why in Figures 5 and 6. Right-moving observers, who use up-sloping lines of constant time, similar to the lines of constant t2 in Figure 2, will see b come before both a and c (Figure 5). But c may come either after or before a, depending on how steep the slope is. Similarly, according to left-moving observers (Figure 6), a will always come before b and c, but the order of b and c varies. The bottom line: c never comes first, but other than that all time-orderings are possible.
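The claim that c never comes first can be checked directly with a Lorentz transformation. The sketch below uses units where the speed of light is 1; the event coordinates are illustrative choices of mine that make all three events mutually space-like, not values taken from the figures.

```python
# Check: for three mutually space-like events, every time-ordering
# except "c first" can be realized by some moving observer.
events = {"a": (0.0, 2.0), "b": (0.0, 0.0), "c": (0.5, 1.0)}  # (t, x), c = 1

# Verify that all pairs are space-like separated: |dt| < |dx|.
names = list(events)
for i in range(3):
    for j in range(i + 1, 3):
        (t1, x1), (t2, x2) = events[names[i]], events[names[j]]
        assert abs(t1 - t2) < abs(x1 - x2)

def ordering(v):
    """Time-order of the events for an observer moving at velocity v.
    Lorentz time is t' = gamma * (t - v * x); gamma > 0 never changes
    the order, so it is dropped."""
    return tuple(sorted(events, key=lambda e: events[e][0] - v * events[e][1]))

# Scan observer velocities below the speed of light.
seen = {ordering(i / 1000.0) for i in range(-999, 1000)}

assert all(order[0] != "c" for order in seen)  # c never comes first
assert len(seen) == 4                          # the other 4 of 6 orderings occur
print(sorted(seen))
```

Of the six conceivable chronologies of three events, exactly the four that do not put c first show up as the observer's velocity sweeps from nearly light speed leftward to nearly light speed rightward.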
These exercises in special relativity are entertaining in themselves, but there are also serious issues in play. They arise when we combine special relativity with quantum mechanics.
Two distinct kinds of difficulties arise as we attempt to combine those two great theories. They are the difficulties of construction and the difficulties of interpretation.
The difficulties of construction dominated 20th century physics. (One measure of this: By my conservative count, six separate Nobel Prizes, shared by 12 individuals, were awarded primarily for advances on this problem.) The tough issues that arose here, in the construction of relativistic quantum theories, are in some sense technical. Combining special relativity and quantum mechanics leads to quantum field theory, and the equations of quantum field theory are dicey to solve. If you try to solve those equations in a straightforward way, you find nonsensical results—for example, infinitely strong forces. In fact it emerged, after many adventures, that most quantum field theories really don’t make sense! They are mathematically inconsistent. Those that do make sense can only be defined using tricky mathematical procedures. Passing in silence over that epic, we reach the bottom line: After heroic struggles, the difficulties of construction were eventually (mostly) overcome, and today quantum field theory forms the foundation of our immensely successful Standard Model.
The difficulties of interpretation have a different flavor. Closely related to our issues with time-orderings, they arise because labeling events by time plays an absolutely central role in the conventional formulation of quantum mechanics.
The quantum state of the world is represented by its wave function, which is a mathematical object defined on surfaces of constant time. Furthermore, measurements “collapse” the wave function, introducing a drastic, discontinuous change. Suppose, for example, that we decide to use t1 as our time. Then a measurement at t1 = 0 changes the wave function everywhere at all times subsequent to t1 = 0.
But what if we had chosen t2 or t3? The occurrence of that sort of collapse implies a drastic difference between the formal quantum-mechanical descriptions based on different choices of reference frame. If we work with t2, then measurements at b will collapse the wave function at a, since b comes before a. For the same reason, measurements at a do not collapse the wave function at b. But if we work with t3, since the time-ordering between a and b is reversed, the situation is just the opposite!
Yet special relativity demands that either t2 or t3 can be used in a valid description of nature. Have we discovered a contradiction?
The point is that quantum-mechanical wave functions are tools for describing nature, rather than nature herself. Mathematically, quantum-mechanical wave functions contain a lot of excess (unobservable) baggage and redundancy, so that wave functions that look drastically different can nevertheless give the same results for most, or possibly all, feasible physical observations.
While it falls short of outright contradiction, there remains, it seems fair to say, considerable tension at the interface between quantum mechanics and special relativity. During the long struggle to construct quantum field theories, several physicists speculated that the infinitely strong forces they calculated were surface symptoms of a fundamentally rotten core, whose rottenness was indicated more directly by the difficulties with interpretation. It didn’t work out that way. We have been able to construct theories that are not only consistent but also immensely successful, despite their near-contradictions and excess baggage.
As new technologies for probing the nano-world render possible what were once purely thought experiments, we have a wonderful new opportunity to ask creative questions, confronting the paradoxes of quantum mechanics head on. Maybe we’ll find some surprising answers—that’s what makes paradoxes fun.
1 There are more speculative possibilities: that time exhibits cycles, or branches, or even has several dimensions of its own. In general relativity we let time bend together with space, and in describing the Big Bang and black holes we encounter singularities, where time begins or ends. This is fascinating stuff! But “flat, unidirectional” time is the basis for almost all practical physics, and it already provides rich food for thought, so that’s what I’ll be considering here.
Go Deeper Editor's picks for further reading
arXiv: Constraints on Chronologies
Read the author's technical paper on chronologies, written with theoretical particle physicist Alfred Shapere.
FQXi: Cheating the Causal Game
In this article, discover how researchers at the University of Vienna are deconstructing the physics of cause and effect.
Quantum physicists regularly ask you with a straight face to accept what seems to be complete nonsense. Particles are also waves; cats are alive and dead at the same time. But some of the most incredible creatures of the quantum realm get far less attention than Schrödinger’s famous cat. They’re called virtual particles, and they might be the reason the universe exists in the first place. In the pencast below, I’ll explain the basics of virtual particles. Then read on to learn more.
While the Big Bang theory explains how the universe has expanded and cooled since it began, it is quite silent on what “pulled the trigger,” so to speak. We simply don’t know what started the process. How could there be nothing at one moment and an entire baby universe the next?
It turns out that getting something from nothing is just business as usual for virtual particles. The most straightforward way to explain virtual particles is by an example. Consider a particle collision in which one electron hits another and the two scatter. In the classical view, the electric field from one electron interacts with the other and the two feel a repulsive force. However, this approach neglects Einstein’s Nobel Prize realization that light—and, by extension, every electromagnetic field—is quantized. So a quantum treatment of electron scattering needs to include not only the quantum nature of the electrons, but also the quantum nature of the photon. We now treat electron scattering as the two quantized electrons exchanging a quantized photon and, in the process, changing their directions.
So how do virtual particles enter in? Well, you can calculate the properties of the photon that must be emitted to scatter the electrons. Simple energy and momentum conservation considerations tell us what the energy and momentum of the photon must be. However, when you do the calculation, you find that the photon has a mass! Since photons are massless particles, this seems to invalidate the whole idea. It sure sounds like physicists are pulling your leg, just to see how long it will be before somebody is willing to say that the subatomic Emperor has no clothes.
As crazy as this seems, it is true. To see how, we need to invoke another hard-to-swallow axiom of quantum mechanics: the Heisenberg Uncertainty Principle, named after its inventor, Werner Heisenberg. In classical physics, energy and momentum are always conserved. But Heisenberg spotted a loophole in this rule: in the quantum realm, energy and momentum don’t have to be conserved, as long as the non-conservation doesn’t persist for very long. It’s kind of like having a shady accountant. If you audit the books, the amount of money you send him has to agree exactly with the amount of money he uses to pay your bills. But, while he has your funds, he is free to temporarily lend or borrow money, so that momentarily he will have the “wrong” amount of money. Further, the larger the amount of money loaned or borrowed, the shorter the time for which he can hold it. Similarly, in the quantum realm, energy and momentum can briefly be “wrong,” but the larger the discrepancy, the shorter the period of time for which it is allowed.
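The accountant's trade-off can be made quantitative with the energy-time uncertainty relation, which says roughly that the borrowed energy times the borrowing time is about Planck's constant. The numbers below are a back-of-the-envelope estimate, not a precise calculation.

```python
# How long can a virtual particle "borrow" a given amount of energy?
# Energy-time uncertainty: dE * dt ~ hbar (order-of-magnitude estimate).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV   = 1.602176634e-19   # joules per electron-volt
C    = 2.99792458e8      # speed of light, m/s

def borrow_time(delta_E_eV):
    """Rough lifetime of a fluctuation that borrows delta_E of energy."""
    return HBAR / (delta_E_eV * EV)

# A virtual photon borrowing 1 MeV of energy lives very briefly indeed...
dt = borrow_time(1e6)
# ...and even at light speed it can cover only a tiny fraction of an atom.
reach = C * dt
print(f"lifetime: {dt:.2e} s, maximum reach: {reach:.2e} m")
```

Borrow a million electron-volts and the loan must be repaid in well under a trillionth of a trillionth of a second, which is why the shady bookkeeping never shows up in everyday life.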
So in our example of electrons scattered by exchanging photons, the photon can briefly have the “wrong” amount of energy and momentum. Now, it is understandable if you find this a bit hard to take; perhaps an instance of physicists making stuff up to save their theories. And, truth be known, that would be my reaction if there were not an extensive list of experimental measurements that demonstrate that virtual particles exist. In fact, virtual particles play a critical role in most of the experiments performed at large particle physics laboratories like CERN, Fermilab and many other similar facilities.
While I’ve described the idea of a single virtual particle, the idea is actually much richer than that. Virtual particles also exist in association with real particles. For instance, suppose you have an ordinary, garden-variety electron. A reasonable mental image of the electron would be a little subatomic marble, carrying electrical charge, mass and spin. Anyone with even a cursory understanding of quantum mechanics knows that image is a bit dodgy, as electrons exhibit lots of crazy quantum behavior.
The life of an electron is much more complex than that, though. In addition to the usual quantum craziness, where an electron is both a particle and a wave and the position of the electron is generally indeterminate, electrons are surrounded by virtual particles. For instance, an electron can briefly emit a photon. That photon will be reabsorbed quickly, in such a way that the energy and momentum conservation laws aren’t violated. But it gets crazier than that. The virtual photon can also turn into a virtual electron/positron pair. Thus, for a brief moment, what was once just an electron becomes an electron plus an additional electron and positron. As long as the virtual particles recombine before the universe notices, it’s all within the rules. Indeed, an electron never exists as a single “bare” electron. Rather, it is always enshrouded in an ephemeral cloud of virtual particles, flickering in and out of existence, and vastly complicating what an electron “really” is.
It might seem far-fetched, but experiments can actually detect the presence of this cloud. That is because every electron acts like a mini-magnet. We can calculate exactly how strong the magnet should be. But when we make very precise measurements of its strength, we find that the measured magnetic moment is about 0.1% off from the simple prediction. It turns out that when you take the virtual cloud around the electron into account, the calculation reproduces this small 0.1% discrepancy, showing that the cloud is definitely present. In fact, the data and the prediction agree to nine digits!
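The leading piece of that 0.1% shift has a famously simple form, first worked out by Julian Schwinger in 1948: the fractional correction to the electron's magnetic moment from a single virtual photon is the fine-structure constant divided by two pi.

```python
# Leading effect of the electron's virtual-photon cloud on its magnetism:
# Schwinger's correction a = alpha / (2*pi), where a = (g - 2) / 2 is the
# fractional shift in the magnetic moment.
import math

ALPHA = 1.0 / 137.035999   # fine-structure constant (approximate)

a_schwinger = ALPHA / (2.0 * math.pi)
print(f"predicted shift: {a_schwinger:.6f}")
```

The result is about 0.00116, i.e., roughly 0.1%, exactly the size of the discrepancy described above; the remaining digits of agreement come from calculating ever more elaborate configurations of virtual particles.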
If your mind isn’t blown, wait…it gets crazier still. Empty space—that is, space that contains nothing—no energy, no charge, no matter, nothing—is filled with a writhing, active population of virtual particles that physicists call “the quantum foam,” with bubbles appearing and popping in wild abandon. At the subatomic level, space is never truly empty.
You’d think that if empty space were filled with a constant roiling boil of quantum activity, you’d see it. The fact that you don’t might give you yet more reason to disbelieve, but the effects of the quantum foam have been directly observed.
The first observation of the quantum foam came from tiny disturbances in the energy levels of the electron in a hydrogen atom. A second effect was predicted in 1948 by Hendrik Casimir and Dirk Polder. If the quantum foam were real, they reasoned, then the particles should exist everywhere in space. Further, since particles also have a wave nature, there should be waves everywhere. So they imagined two parallel metal plates, placed near one another. The quantum foam would exist both between the plates and outside of them. But because the plates were so close together, only short waves could fit between them, while waves of all lengths could exist outside. Because of this imbalance, the excess of waves outside the plates should overpower the smaller number of waves between them, pushing the two plates together. Decades after it was first predicted, this effect was observed qualitatively. It was measured accurately in 1997.
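For two ideal conducting plates, the predicted pull has a clean closed form: pressure equals pi squared times hbar times c, divided by 240 times the fourth power of the separation. The sketch below evaluates this textbook idealized formula; real experiments, including the 1997 measurement, had to correct for geometry and the plates' finite conductivity.

```python
# Casimir pressure between two ideal parallel conducting plates:
#   P = pi^2 * hbar * c / (240 * d^4)
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C    = 2.99792458e8      # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal plates separated by d meters."""
    return math.pi**2 * HBAR * C / (240.0 * d**4)

# At one micron separation the pull is on the order of a millipascal --
# tiny, but measurable with modern force-sensing techniques.
p = casimir_pressure(1e-6)
print(f"{p:.2e} Pa")
```

Note the fourth-power dependence on separation: halving the gap makes the force sixteen times stronger, which is why the effect only shows up when the plates are brought achingly close together.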
Quantum foam also has astrophysical implications. In 1974, Stephen Hawking was thinking about quantum mechanics and black holes. He realized that the quantum foam would exist near the event horizon of the black hole. If an electron/positron virtual pair popped into existence just outside the event horizon, one of the two particles might spiral down and get trapped in the black hole, while the other would escape. The escaping particle carries away energy, and when the books are balanced, the captured particle effectively carries negative energy, so the energy of the black hole gets slightly smaller. Over the eons, this “Hawking radiation” would cause the black hole to evaporate until it totally disappeared.
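Hawking's calculation assigns the black hole a temperature that is inversely proportional to its mass, which makes clear why the evaporation takes eons. A minimal evaluation of his formula, using standard constants:

```python
# Hawking temperature of a black hole of mass M:
#   T = hbar * c^3 / (8 * pi * G * M * k_B)
# Heavier holes are colder, so they radiate more feebly.
import math

HBAR  = 1.054571817e-34  # J*s
C     = 2.99792458e8     # m/s
G     = 6.67430e-11      # m^3 kg^-1 s^-2
K_B   = 1.380649e-23     # J/K
M_SUN = 1.989e30         # kg

def hawking_temp(mass_kg):
    return HBAR * C**3 / (8.0 * math.pi * G * mass_kg * K_B)

t_sun = hawking_temp(M_SUN)
print(f"{t_sun:.2e} K")
```

A solar-mass black hole glows at a few tens of billionths of a kelvin, far colder than the cosmic microwave background, so for now such a hole absorbs more energy than it radiates and its evaporation lies in the remote future.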
Virtual particles and the quantum foam are among the craziest of quantum phenomena. They have no classical analog and they certainly seem like something that physicists dreamed up to save the counterintuitive world of quantum mechanics. Borrowing from the movie “The Maltese Falcon,” quantum mechanics is said to be the dreams that stuff is made of, but virtual particles are no dreams. They have been experimentally observed, and indeed it could be that a quantum fluctuation similar to virtual particles was the thing that pulled the trigger on the creation of the universe itself: a crazy start for a universe where, we’re learning, the bizarre is the norm and dreams are reality.
Quantum mechanics is one of the most devilishly confusing theories ever devised. Cats that are simultaneously alive and dead, objects that are both particles and waves, subatomic particles that know whether you are looking at them or not—and, most bafflingly, these quantum effects can be erased when individual atoms, electrons, and photons interact with their environment.
That is what makes Serge Haroche and David Wineland’s Nobel Prize-winning work in physics so remarkable: They have achieved mastery of the microrealm. Both of them have spent decades trying to generate systems in which a single atom or a single photon can be studied.
David Wineland, at the National Institute of Standards and Technology (NIST) and the University of Colorado at Boulder, is an expert at trapping individual charged atoms (ions) with electric fields and holding them in an ultra-high vacuum. This mastery of individual atoms is achieved by artfully employing laser beams and laser pulses. The lasers can cool the atoms’ motion and can even couple the quantum state of that motion to the state of the electrons inside the atom. This is an extraordinary achievement.
Serge Haroche, of the Collège de France and the École Normale Supérieure in Paris, essentially does the opposite. He uses atoms to study individual photons. Using superconducting niobium, he has created two of the most reflective mirrors ever made. With the mirrors placed about an inch apart, he introduces a single photon, which bounces back and forth for over a tenth of a second until it eventually hits an imperfection in a mirror and is absorbed. While the photon is bouncing, it travels a distance equivalent to circling the entire globe.
In order to measure the photon, Haroche fires single rubidium atoms through his equipment. These atoms are of a special class called “Rydberg atoms,” in which the electrons “orbit” very far from the atomic nucleus. (Though we now know that atoms do not operate as mini solar systems, the analogy of orbits can still be a useful one.) By measuring the configuration of the Rydberg atom before and after it travels through his apparatus, Haroche can determine if there is a photon inside his equipment without absorbing or altering the photon.
These techniques have made it possible to probe quantum mechanics in more detail than ever before, with Haroche’s work making it possible to effectively make a movie of the transition of a photon from one state to another, a process that scientists call “the collapse of the wave function.” But these two scientists’ work has practical applications as well. For instance, if we are able to put equipment into a quantum state and read that state without destroying it, this opens the possibility of quantum computing. The potential power of quantum computing is enormous, and if we are able to actually accomplish it, this will change computing to the same degree that ordinary computing has changed the world since the 1940s. Quantum computing is still a ways in the future, but Haroche and Wineland’s work has brought it closer to fruition.
Wineland's work has also made possible a new generation of clocks that are 100 times more accurate than the best timekeepers in the world. These new clocks are precise to one part in 10^17. To give some context, if these clocks were started when the universe began 13.7 billion years ago, by now they would be off by a mere four seconds. Such accuracy is useful for communication and navigation, and could also enable even more stringent tests of Einstein’s theory of general relativity, which states that time runs slower in stronger gravitational fields. When people think about this effect, they usually invoke the mind-bending gravitational fields surrounding black holes, but these new clocks are so precise that the effect of time dilation due to gravity would be obvious if one of them were raised a mere foot off the surface of the Earth.
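The "four seconds over the age of the universe" claim is easy to check: multiply the fractional accuracy by the age of the universe expressed in seconds.

```python
# Sanity-check: a clock accurate to one part in 10^17, running since
# the Big Bang 13.7 billion years ago, drifts by only a few seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
age_s = 13.7e9 * SECONDS_PER_YEAR   # age of the universe in seconds
drift = age_s * 1e-17               # accumulated error in seconds
print(f"{drift:.1f} seconds")
```

The universe is roughly 4 x 10^17 seconds old, so an error of one part in 10^17 accumulates to just over four seconds.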
Haroche and Wineland’s work is of the highest caliber, with potential society-changing implications. In this year’s Nobel Ceremony on December 10, they will rightfully join their peers in the pantheon of great scientists.
If you were born on an isolated desert island in the middle of the ocean and had no communication with the outside world, your knowledge of geography would be limited. Peering through binoculars, gazing out in any direction, your view would be bounded by the sea’s horizon. Although you might speculate about what lies beyond the edge, you’d lack tangible evidence to support your hypothesis.
Confined to our planet and its environs, we face the same situation: We can see a portion of the universe, but we can only speculate about its full extent. We might surmise through its flat geometry that it continues indefinitely in all directions, like a prairie stretching out as far as the eye can see. (Flat in this context refers to a straight three-dimensional space, like an endless box.) However, our understanding of the actual universe is bounded by the edge of the observable universe. We cannot know for sure what lies beyond the enclave our instruments can detect.
Accordingly, we might wonder: How large is the part of the universe we’re potentially able to observe directly? At first glance, the answer might seem like a simple calculation. The speed of light is approximately 186,282 miles per second, or about 5.9 trillion miles per year. The time that has elapsed since the Big Bang is 13.75 billion years. Multiply the two figures and—voilà—we find that over the entire history of the universe, light could have travelled 13.75 billion light-years, or 81 billion trillion miles. But, in fact, that answer would be wrong.
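Here is that naive multiplication spelled out, using the round figures quoted above:

```python
# The naive estimate: light travel time since the Big Bang, in miles.
MILES_PER_LY = 5.9e12        # miles in one light-year (approximate)
AGE_YEARS    = 13.75e9       # years since the Big Bang

naive_radius_ly    = AGE_YEARS                 # 13.75 billion light-years
naive_radius_miles = AGE_YEARS * MILES_PER_LY  # "81 billion trillion" miles
print(f"{naive_radius_miles:.2e} miles")
```

The arithmetic is fine; as the next paragraphs explain, it is the physical picture behind it that is wrong.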
Let’s think about when the light was produced. From the time of the Big Bang to the era of recombination (when neutral hydrogen atoms formed) some 380,000 years later, the universe was opaque to light. Photons bounced between charged particles and didn’t travel very far. The reason is that charged particles interact with photons—either absorbing or emitting them. Only after the era of recombination could light journey through space. That is because photons can pass through neutral hydrogen gas without being diverted. Therefore, any estimate of the size of the observable universe must recognize that the farthest light we see was released after that pivotal era when space became transparent. (We may someday be able to detect neutrinos and other particles from before that era, pushing the timeline earlier and enlarging the realm of what is observable, but for now we are still limited.) The difference between the two times doesn’t change the calculation much, but is important to note.
Another adjustment is far more important. Since the primordial burst of creation, space has been stretching as the universe expands. A galaxy’s distance from us today is far greater than it was when it released the light. We can think, by analogy, of a relay race in which a girl tosses a ball to her teammate and then runs away from him. If the coach later asks the teammate about the longest throw he has caught, he would give a very different answer than if asked about the current position of the player who made that throw. Similarly, the distances traveled by photons do not reflect the much greater present-day distances of the sources that emitted them. Thus, we could potentially observe light sources that are much farther out than 13.75 billion light-years, if their light was released when they were close enough to Earth.
Yet another factor that expands the limit of the observable universe is its acceleration. Not only is the universe expanding; its growth has been speeding up. Data from the Hubble Space Telescope, the WMAP (Wilkinson Microwave Anisotropy Probe) satellite and other instruments have been used to pin down the rate of acceleration, along with the current expansion rate, the age of the universe, and other important cosmological parameters.
Taking advantage of this wealth of information, in 2005 a team of astrophysicists led by J. Richard Gott of Princeton performed a detailed calculation of the radius of the observable universe. Their answer was 45.7 billion light-years—more than three times bigger than our first, naïve estimate! Within this sphere lie hundreds of billions of galaxies, each with hundreds of billions of stars.1
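The kind of calculation Gott's team performed can be sketched with a one-dimensional integral: the present-day (comoving) distance to the recombination surface is the speed of light times the integral of dz / H(z), where H(z) is the expansion rate at redshift z. The parameter values below are round, WMAP-era numbers of my choosing, not the exact inputs the Princeton team used, so the answer comes out close to, but not precisely, their 45.7 billion light-years.

```python
# Comoving distance to the recombination surface:
#   D = c * Integral[ dz / H(z) ], z from 0 to ~1090,
# with H(z) = H0 * sqrt(Om*(1+z)^3 + Or*(1+z)^4 + OL).
C_KM_S = 299792.458          # speed of light, km/s
H0     = 70.0                # Hubble constant, km/s/Mpc (assumed)
OM, OR, OL = 0.27, 8.4e-5, 0.73   # matter, radiation, dark energy (assumed)
Z_REC  = 1090.0              # redshift of recombination
MPC_TO_GLY = 3.2616e-3       # 1 Mpc = 3.2616 million light-years

def hubble(z):
    """Expansion rate at redshift z, km/s/Mpc."""
    return H0 * (OM * (1 + z)**3 + OR * (1 + z)**4 + OL) ** 0.5

# Simple trapezoid-rule integration over redshift.
n, total = 200_000, 0.0
dz = Z_REC / n
for i in range(n):
    z_lo, z_hi = i * dz, (i + 1) * dz
    total += 0.5 * (1.0 / hubble(z_lo) + 1.0 / hubble(z_hi)) * dz

d_mpc = C_KM_S * total       # comoving distance in megaparsecs
d_gly = d_mpc * MPC_TO_GLY   # ...in billions of light-years
print(f"{d_gly:.1f} billion light-years")
```

The integral lands in the mid-40s of billions of light-years, more than three times the naive 13.75 billion, because each slice of the photon's journey gets stretched by all the expansion that happens afterward.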
Image credit: Andrew Colvin
Gott’s team calculated this radius by figuring out how far away from us a source would be today if the light we now observe from it was emitted during the recombination era. In our relay race analogy, that’s determining where someone must have stood if she threw a ball and we caught it, and then using her running speed to figure out where she must be right now.
Interestingly, as the universe expands, the size of the observable portion will grow—but only up to a point. Gott and his colleagues showed that eventually there will be a limit to the observable universe’s radius: 62 billion light-years. Because of the accelerating expansion of the universe, galaxies are fleeing from us (and each other) at an ever-hastening pace. Consequently, over time, more and more galaxies will move beyond the observable horizon. Turning once again to our relay race analogy, we imagine that if the players get faster and faster as the race goes on, there will be more and more players who were so far away when they first threw the ball that their throws will never have time to reach us.
Naturally not everything within the observable universe has been identified. It represents the spherical realm that contains all things that could potentially be known through their light signals. Or to draw from a famous comment by former Secretary of Defense Donald Rumsfeld, the observable universe contains “known unknowns,” such as dark matter, that could eventually be analyzed. Beyond the observable universe lie “unknown unknowns”: the subject of speculation rather than direct observation.
1The 45.7 billion light-year radius includes only light sources. If neutrinos and other particles that could penetrate the opaque conditions of the early universe are included, the value becomes 46.6 billion light-years.
What is The Nature of Reality? It's a blog, and it's pretty much the biggest question we could think of. We can't promise that we'll answer the question once and for all – in fact, we can pretty much guarantee that we won't – but we promise a space that welcomes big ideas about space, time, and the universe; where we take questions as seriously as we take answers; where physics is about way more than inclined planes and levers and pulleys. Stick around; we just might expand your cosmic horizons.