For decades, physicists have been trying to combine quantum physics and general relativity into a single, unified theory. One of the leading contenders is string theory, an elegant vision in which matter and the very forces of nature are vibrating and interacting filaments of energy. It sounds great, but there’s a problem: No one can really figure out a way to test string theory.
Now, we may finally be on the verge of experimentally confirming—or refuting—some key facets of string theory.
String theory describes nature on extremely small size scales and high energies that are all but inaccessible to modern physics. The ideal experiment would provide direct evidence of these strings behaving in ways uniquely predicted by the theory, but that’s not as easy as it sounds, for two reasons. First, Heisenberg’s uncertainty principle puts some fundamental limits on how precise measurements can be. On the tiny scale of string theory, these limits may make it impossible to point at data and declare, “Right there, that’s a string!” the way we can (or nearly can) with the Higgs boson.
Complicating matters further, string theory has so many variants that there are very few unique predictions from the theory, so scientists don’t even know what to look for. Because of the flexibility physicists have in defining the exact parameters of string theory in the high energy realm, some have predicted that there might be as many as 10^500 different variants of the theory, far too many to explore one by one.
Still, there are three fundamental pieces of the theory that could be put to the test in the near future. These results would not “prove” string theory, but could certainly be claimed as successes by many string theorists—and would help define some of those parameters.
The Search for Supersymmetry
Supersymmetry is one of the central concepts of string theory. Without supersymmetry, string theory is unable to describe the full range of particles observed in our universe. It can deal with photons, but not electrons. Supersymmetry bridges this divide. It predicts that all of the known particles possess supersymmetric partner particles, or superpartners. These superpartners are unstable and mostly vanished when the universe cooled down from the dense soup of the early universe, but as we crash particles together at ever-higher energies at the Large Hadron Collider, we should eventually stumble upon them.
Actually, we may already have our first evidence that can lead us toward confirming supersymmetry, with the potential discovery of the Higgs boson. Supersymmetry predicts not just a single Higgs, but an entire family of Higgs-like particles. In her slim volume "Higgs Discovery: The Power of Empty Space," Harvard theoretical physicist Lisa Randall describes a variant in which “some superpartners have big masses, whereas others do not.” As Randall explains it, under supersymmetry, “… if the Higgs boson exists, it is most likely part of a larger sector of new particles.” So if scientists are successful at discovering multiple Higgs-like particles, it’s very possible that we’ll end up with direct experimental evidence to support supersymmetry.
Measuring Extra Dimensions
String theory also claims that the universe contains extra dimensions, curled up on the same very tiny distances at which the strings exist—and subject to those pesky uncertainty principle limitations.
Or at least that’s the traditional stance. In 1998, a group of string theorists put forth the bold idea that these extra dimensions may not be so minuscule after all. They suggested that the extra dimensions could be as large as a millimeter! At this size scale, the LHC might have had a chance of exposing them despite the uncertainty principle.
Unfortunately, December 2010 results from the Large Hadron Collider have placed serious constraints on this intriguing model. If extra dimensions do exist, they must be smaller than a millimeter—but perhaps could still be large enough to be detected at the LHC. If discovered, the properties of these extra dimensions could help narrow in on the correct version of string theory.
The Holographic Principle and Superconductors
An important idea at work in string theory is the holographic principle, especially a version called the Maldacena duality, named for the theorist Juan Maldacena, who first proposed it in 1997.
The Maldacena duality is a specific way of relating two theories that, at first glance, seem quite different mathematically. You can sort of picture this by imagining a box that contains an entire three-dimensional universe. (For the purposes of this analogy, I’m just going to ignore the dimension of time.) Now imagine that the box’s two-dimensional surface contains information about what’s going on inside the box. The holographic principle basically tells us that the description on the two-dimensional surface can contain all of the same information as the whole three-dimensional universe itself. There is a perfect correspondence between these two models.
Maldacena conjectured that a quantum theory without gravity on a boundary surface corresponds to a full string theory, gravity included, in the space contained within that boundary: a huge boon to theorists, but not something that anyone—including Maldacena himself—would have thought had real practical applications.
And that just goes to show how little “anyone” knows when predicting the future course of science. In 2009, physicists showed that Maldacena’s duality could describe behaviors in high-temperature superconductors. While physicists understood low-temperature superconductors, they couldn’t explain how materials become superconductive at warmer temperatures. They knew it was linked to electrons entering a quantum critical state, the quantum phase change that turns the material into a superconductor, but couldn’t fully understand or model it. As condensed matter theorist Jan Zaanen described the situation, “It has always been assumed that once you understand this quantum critical state, you can also understand high temperature superconductivity. But, although the experiments produced a lot of information, we hadn't the faintest idea of how to describe this phenomenon.”
Then Zaanen’s team tried to explain quantum critical states with string theory. They created a string theory model, then applied the Maldacena duality to get a related version of the model—one which matched the experimental results surprisingly well. Maldacena has called this the most impressive and surprising outcome of his conjecture.
But for Zaanen, it is just the beginning. It “should be … viewed as the starting point of a novel line of enquiry for [the Maldacena duality] in general,” says Zaanen. Ideally, this approach will eventually result in testable predictions that could become the focus of experiments.
Even if, ultimately, the results of these experiments do not support string theory, they will have proven something important: That the pursuit of an interesting idea—even a wrong idea—can yield amazing insight into how the universe works.
Editor's picks for further reading
FQXi: Tying Down the Multiverse with String
Physicists Andrei Linde and Renata Kallosh are working at the intersection of string theory and cosmology.
NOVA: The Elegant Universe
Author-physicist Brian Greene presents the nuts, bolts, and sometimes outright nuttiness of string theory in this four-part NOVA special.
Not Even Wrong: Is String Theory Testable?
On his blog "Not Even Wrong," mathematician and physicist Peter Woit takes a critical eye to string theory.
We think of gravity as weighty—its omnipresent grasp pulling us down to the ground. Try to lift a piano up a flight of stairs and you can feel gravity’s resistance. (Laurel and Hardy showed this best!) Yet in a match with the other fundamental forces of nature—electromagnetism, the weak force and the strong force—gravity gets pummeled.
You can see gravity’s relative weakness simply by using an ordinary bar magnet to pick up paper clips from a desk. Battling the gravitational pull of all of Earth, the tiny magnet wins! In fact, gravity is a staggering 10^40 times weaker than electromagnetism. But why, among the fundamental forces, is gravity the runt of the litter? Explaining gravity’s relative feebleness is a profound challenge for physics, and an essential milestone on the road to a unified theory of all four forces.
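To get a feel for a number like 10^40, here is a quick back-of-envelope comparison (my own sketch, not from the article) of the electric and gravitational attraction between a proton and an electron; the exact ratio depends on which pair of particles you compare.

```python
# Rough sketch using round CODATA values; not a precision calculation.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9        # Coulomb constant, N m^2 C^-2
e = 1.602e-19        # elementary charge, C
m_p = 1.673e-27      # proton mass, kg
m_e = 9.109e-31      # electron mass, kg

# Both forces fall off as 1/r^2, so the ratio is the same at any distance.
ratio = (k_e * e**2) / (G * m_p * m_e)
print(f"electric/gravitational force ratio: {ratio:.2e}")  # ~2e39
```

For a proton-electron pair the ratio comes out around 10^39; comparing two electrons pushes it past 10^42, which is why quoted figures hover around 10^40.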
Uniting the four fundamental forces into a single unified theory is a longstanding scientific dream. On the face of it, these forces are very different. They each operate across different distances, with different strengths, determined by the properties of a special class of particles, called “exchange particles,” whose job it is to convey the forces.
Exchange particles are like Frisbees thrown between particle players; the process of tossing the Frisbee brings ordinary particles together—or in some cases pushes them apart. In electromagnetism, for example, electrons interact by exchanging photons. Because the photons have no rest mass, they travel through the vacuum at the speed of light, making electromagnetism a long-range force. Long-range means that it can operate over great distances. For example, terrestrial receivers can pick up radio signals (a type of electromagnetic radiation) transmitted by Voyager 1, situated at the edge of the solar system more than 11 billion miles away. The weak force, in contrast, is conveyed by massive exchange particles called the W+, W- and Z bosons. Because these exchange particles are so heavy, particles feel the weak force only on very short ranges—within atomic nuclei. The strong force, too, operates only on short ranges. But gravity, which theorists believe is carried by particles called gravitons traveling at the speed of light, is a long-range force.
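The link between carrier mass and range can be made quantitative: a force's reach is roughly the reduced Compton wavelength of its exchange particle, λ = ħ/(mc). A hedged estimate for the weak force, using the W boson's measured mass of about 80.4 GeV:

```python
# Sketch: range of a force from the Compton wavelength of its carrier.
hbar_c = 197.327            # hbar*c in MeV*fm (1 fm = 1e-15 m)
m_W = 80_400.0              # W boson mass-energy, ~80.4 GeV, in MeV

range_weak_fm = hbar_c / m_W          # in femtometers
range_weak_m = range_weak_fm * 1e-15  # in meters
print(f"approximate range of the weak force: {range_weak_m:.1e} m")
# A massless carrier (m -> 0) makes the range infinite, as for the photon.
```

The result, a few thousandths of a femtometer, is smaller than a proton, which is why the weak force is felt only inside atomic nuclei.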
Physicists have made great strides toward unifying electromagnetism with the weak interaction and, to some extent, with the strong interaction, by looking at how the forces behave at very high energies—energies that existed just moments after the Big Bang. Theoretical and experimental discoveries of the past few decades suggest that at these energies—above about 100 GeV (gigaelectronvolts)—the weak and electromagnetic forces behave as a single type of interaction, called electroweak, with identical range and strength. During this brief, hot period, the “Frisbees” that convey the weak force had no mass at all. But as the universe cooled, interactions with the Higgs field caused the W+, W- and Z bosons to acquire mass, differentiating them from massless photons and splitting what was once one force, the electroweak force, into two, electromagnetism and the weak force.
Although physicists have yet to develop a “grand unified” model that includes the strong force, they believe that, at even higher energies, it too may merge with the weak and electromagnetic forces.
But what about gravity? If you could draw the evolutionary family tree of the physical forces, you might see the split between electromagnetism and the weak force as something like the division of primates into various species. Gravity, though, represents a far more radical diversification. Reconciling gravity along with the other forces is something like showing how viruses and whales have common ancestry. Spanning the differences is possible, but tricky.
How far up the energy scale must we climb to find a point at which gravity could be unified with the other forces? The answer is a number called the Planck energy: around 10^28 eV. That’s about 10^17 times greater than the energy of electroweak unification—an enormous difference. One would need a machine almost a quintillion times more powerful than the Large Hadron Collider or Fermilab’s (retired) Tevatron to probe the Planck energy, putting experimental tests of this kind of unification well out of reach for the foreseeable future! The sheer magnitude of the Planck-to-electroweak energy ratio, related to the stark weakness of gravity, is called the hierarchy problem.
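As a sanity check on those exponents, the Planck energy can be computed directly from the fundamental constants as sqrt(ħc⁵/G). A rough sketch using round constant values:

```python
import math

# Back-of-envelope check of the Planck-to-electroweak hierarchy.
hbar = 1.0546e-34    # J s
c = 2.998e8          # m/s
G = 6.674e-11        # m^3 kg^-1 s^-2
eV = 1.602e-19       # joules per electronvolt

E_planck_eV = math.sqrt(hbar * c**5 / G) / eV
E_electroweak_eV = 100e9   # ~100 GeV electroweak scale

print(f"Planck energy: {E_planck_eV:.2e} eV")              # ~1.2e28 eV
print(f"hierarchy: {E_planck_eV / E_electroweak_eV:.1e}")  # ~1e17
```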
String theorists have proposed an innovative solution to the hierarchy problem: Perhaps gravity is weakened through its ability to travel across an extra dimension. What sounds like science fiction has become an active branch of scientific inquiry called the “braneworld hypothesis.” The braneworld hypothesis suggests that the observable universe lives within a four-dimensional (three dimensions of space and one dimension of time) membrane, or “brane.” Beyond this brane, extended along a fifth dimension, is a region called the “bulk.” Unique among exchange particles, gravitons are free to wander the bulk; everything else is stuck on our brane. So while most particles are like ants confined to the surface of a wooden picnic table (the brane), gravitons are like termites that can bore within (and visit) the bulk. (In the language of string theory, the difference is that gravitons are represented by closed strings; other particles are open strings and have ends that stick to our brane like a curved handle attached to a door.) Because photons cannot enter the bulk, it remains invisible.
An advantage of explaining gravity’s weakness through its dilution into the bulk and making the brane the venue for the other forces is that unification can take place at energies not much higher than the electroweak scale, rather than at the Planck scale, rendering the search for a unified theory much easier.
There are a number of variations of the braneworld idea. One model, proposed in 1998 by physicists Nima Arkani-Hamed, Savas Dimopoulos and Gia Dvali envisions a “large extra dimension” of about one millimeter throughout which gravity is spread. (String theorists are typically concerned with such tiny size scales that one millimeter does indeed qualify as “large.”) A second brane, parallel to ours, would constitute the opposite boundary of the bulk, confining gravitons to the region between the two limits. You can imagine this second brane as something like the underside of the picnic table. The extra dimension would then comprise the thickness of the table—the distance between its top and bottom—limiting the graviton “termites” to travel through a finite amount of wood.
However, this proposal modifies the law of gravity in measurable ways, which have since been ruled out experimentally. Another version, proposed by physicists Lisa Randall and Raman Sundrum, includes only one brane and posits that the warping of the structure of the bulk, along the direction of the extra dimension, would be enough to confine gravity to a limited region and dilute its strength. The theory predicts a measurable leakage of gravitons from our brane into the warped bulk, which could potentially be detected in collisions at the Large Hadron Collider through unexpected energy loss that finds no other explanation. Researchers have sought such telltale clues to test the Randall-Sundrum idea and other braneworld approaches. The tricky part is using statistical models to rule out more mundane effects that could mimic the leakage of energy (for example, the release of neutrinos). While LHC results have not yet confirmed the braneworld idea, the jury is still out as to whether or not gravity’s weakness is a result of its slick ability to leave the visible universe and travel through a higher dimension.
Editor's picks for further reading
CERN Courier: The Nobel path to a unified electroweak theory
In this article, CERN marks the anniversary of the 1979 Nobel Prize in physics, which honored the foundations of electroweak unification.
The Particle Adventure: What holds it together?
Explore the physics of force-carrying particles on this interactive web site.
Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions
In this book, physicist Lisa Randall explores extra dimensions, braneworlds, and the bulk.
Look out solid, liquid and gas: There’s a new form of matter in town. Actually, this “new” matter isn’t new at all—it is one of the most ancient forms of matter in the universe. Last seen more than 13 billion years ago, just millionths of a second after the Big Bang, this exotic stuff is making a comeback thanks to particle accelerators like the Relativistic Heavy Ion Collider (RHIC) on Long Island and the Large Hadron Collider (LHC) in Europe, where physicists can generate temperatures of more than a trillion degrees centigrade. These enormous temperatures allow scientists to push back the clock of the cosmos and witness matter in the extreme energy environment that existed within microseconds of the Big Bang.
At such high temperatures, the protons and neutrons inside atomic nuclei literally melt, releasing the quarks and gluons inside them and creating a form of matter called a quark-gluon plasma. You can think of the quarks as the “matter” particles and gluons as the particles of force that hold the protons and neutrons together. A reasonable mental image of a proton or neutron would be like a few flecks of Styrofoam (quarks) inside a lottery ball machine. The wind in the lottery machine is analogous to the force field, while the air molecules represent the gluons.
Under ordinary conditions, quarks and gluons are forever locked inside protons and neutrons. They’re like the water held frozen into ice cubes in a glass. But just as ice cubes can be melted into water when energy is added to the system (by pouring hot tea over them, for instance), allowing the molecules from one cube to mix with molecules from another, so too it is possible to melt protons and neutrons and let the quarks and gluons scamper around willy-nilly.
To melt protons and neutrons, you need to heat them up to about a trillion degrees. The only way to generate this kind of temperature is to smash together atomic nuclei at high velocities in huge particle accelerators. That’s what physicists are now doing at the LHC and the RHIC, accelerators that take atoms (lead and gold, as well as some others), strip off all of the electrons, and then slam the bare atomic nuclei together. The most violent of these collisions can heat up the nuclear matter enough to free the quarks and gluons to wander as they will. Though experimental calibration issues add some uncertainty to the mix, the current temperature record seems to belong to the ALICE experiment at the LHC, which measured an astounding 5.5 trillion degrees centigrade.
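To connect those trillion-degree figures to the energy units particle physicists use, temperature can be converted to energy per particle via E = k_B T. A rough sketch (the ALICE figure itself carries significant experimental uncertainty):

```python
# Sketch: translating kelvins into particle-physics energy units.
k_B = 1.381e-23      # Boltzmann constant, J/K
MeV = 1.602e-13      # joules per MeV

T_alice = 5.5e12     # ALICE's reported ~5.5 trillion degrees, in kelvin
E_MeV = k_B * T_alice / MeV
print(f"{T_alice:.1e} K is about {E_MeV:.0f} MeV per particle")
```

That works out to a few hundred MeV, comfortably above the roughly 2-trillion-degree threshold at which protons and neutrons are expected to melt.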
Studying the phase transitions of quark-gluon plasma allows us to understand the behavior of matter in the early universe, just fractions of a second after the Big Bang, as well as conditions that might exist inside neutron stars. The fact that these two disparate phenomena are related demonstrates just how deeply the cosmic and quantum worlds are intertwined. Credit: Brookhaven National Laboratory
While the first lab-made quark-gluon plasma was created in 2000, physicists are only now beginning to understand how this form of matter behaves. At the LHC and RHIC, they are mapping out in more detail the temperatures and pressures at which ordinary matter transforms into quark-gluon plasma. They are also tracing the boundaries between quark-gluon plasmas and even more exotic forms of matter like the stuff of neutron stars, which is thought to be so dense that, at the center of the stars, quarks get “smooshed” together into an exotic kind of solid.
In fact, physicists believe that there are many different phases of matter involving quarks. While I’ve focused on two states of matter, atomic nuclei and quark-gluon plasma, this just scratches the surface of the possible.
By studying quark-gluon plasma, physicists are able to explore a period in the history of the universe that has thus far eluded us—a period in which protons and neutrons, the basic ingredients of ordinary matter, were coalescing for the first time. Thanks to accelerators like the LHC and the RHIC, we are finally beginning to probe this pivotal chapter in the story of the cosmos.
Editor's picks for further reading
Brookhaven Lab: Quark-Gluon Plasma: a New State of Matter
In this video, physicist Peter Steinberg explains the nature of a quark-gluon plasma.
Physics Central: Quark-Gluon Plasma
Learn the basics of quark-gluon plasmas in this explainer from the American Physical Society.
Quark Matter 2012
Explore highlights from the August 2012 Quark Matter conference, held in Washington, DC.
The Nobel Prize may be the most prestigious award in science, but the new Fundamental Physics Prize is by far the world's most lucrative scientific award, and this August it instantly made its first winners multimillionaires. But the size of the payout isn’t the only difference between the two prizes: Unlike the Nobel, the Fundamental Physics Prize can be awarded for research that has not yet been verified by experiment. Is it foolhardy to extol work that later findings might prove wrong?
The Fundamental Physics Prize is the brainchild of Russian tycoon Yuri Milner, a one-time physics graduate student turned billionaire investor in internet companies such as Facebook, Twitter, Groupon and Zynga. Milner personally selected the inaugural class of nine winners, each of whom received $3 million, roughly three times as much as a Nobel grant.
Although some of the work recognized by this year’s awards has been experimentally verified (for example, the principles of quantum computers are firmly grounded in experiment), others, like string theory, which compares elementary particles to loops of vibrating string, and the holographic principle, which suggests that our three-dimensional reality is a projection of information stored on a far-off two-dimensional surface, live further out on a theoretical limb.
Ideas like these may not get experimental verification any time soon, either. Take string theory. As Fundamental Physics Prize winner Ashoke Sen, a string theorist at the Harish-Chandra Research Institute in India, explains, "Unfortunately the strings are so small that the energy required for seeing these structures is huge, much larger than what we have achieved in the present day accelerators.”
Yet many physicists (including, unsurprisingly, some Milner honorees) argue that unverified ideas, and even ideas that are unverifiable with today’s technology, are prizeworthy—even if future tests should prove them wrong.
"Many of the most important developments in physics involve subjects for which there is little hope of experimental verification anytime soon," argued cosmologist Alan Guth at the Massachusetts Institute of Technology, one of the winners of the Milner prize. Guth invented the theory of cosmological inflation, which suggests our universe expanded staggeringly just a sliver of a second after it was born. This rapid expansion would help explain, among other things, why the cosmos is so extraordinarily uniform on large scales, with only very tiny variations in the distribution of matter and energy.
In fact, two of the greatest theoretical breakthroughs of the 20th century—Albert Einstein's theories of special and general relativity—were never honored by a Nobel Prize, said Stanford cosmologist Andrei Linde, another winner of the Fundamental Physics Prize. These ideas changed the world by showing that mass and energy are equivalent and that gravity is a result of mass curving the fabric of space and time. Although Einstein was awarded the Nobel Prize in 1921, he was given the prize not for relativity but for describing how light was composed of discrete packets of energy now called photons, because the Nobel committee felt that relativity had not at the time been verified.
"More recently, we have the case of the Higgs particle, with almost 50 years between the theoretical advance and the experimental verification," Guth added. "If this kind of theoretical work is not respected, then progress in fundamental physics would suffer tremendously."
Even if theoretical research does lead to a dead end, it can help inform what ultimately prove to be successful ideas, said theoretical physicist Nima Arkani-Hamed at the Institute for Advanced Study in Princeton, New Jersey, a winner of the new prize who has investigated ideas such as extra dimensions of reality and new theories regarding the Higgs particle.
"One of the great developments in physics in the 20th century was the Standard Model of particle physics, which explains particles such as electrons and quarks and gluons," Arkani-Hamed said. "But before the Standard Model was known to work, there were people exploring lots of other theoretical possibilities that might be consistent with our world. Even if they didn't pan out, collecting ideas that have a chance of working may help lead to developments like the Standard Model."
“Wrong” ideas can advance science in other ways, too. "We should keep in mind that Newtonian mechanics was ultimately found to be incorrect, but it nonetheless was a momentous force in driving science forward," Guth said. "Today there are many developments in physics that are recognized by the community as being important, even though we cannot prove that they are correct."
As to how the prize-winners might spend their gains, other than paying taxes and mortgages, they often said they were still in shock over the award. "I continue to remind my students that they should not go into physics for the money," Guth said.
Editor's picks for further reading
Fundamental Physics Prize
The official web site of the Fundamental Physics Prize Foundation.
Nature: Theoretical physicists win massive awards
Geoff Brumfield talks with winners—and critics—of the new prize.
New York Times: 9 Scientists Receive a New Physics Prize
Kenneth Chang reports on the announcement of the Fundamental Physics Prize winners.
Where will you be in 10^100 years?
Yes, I know, we'll all be long gone by then. But if you could somehow stick around to experience the universe ten thousand trillion trillion trillion trillion trillion trillion trillion trillion years from now, what would it be like?
Answering that question is a professional hobby for astronomers Fred Adams and Gregory Laughlin. They divide the life of the universe into five distinct stages, beginning with, well, the beginning—the Big Bang and the short period of explosive expansion that followed, all the way through to the formation of the very first stars about one million years later. That’s followed by the second stage, which Adams and Laughlin dub the “stelliferous era”—the era during which stars generate most of the universe’s energy. We are creatures of the stelliferous era; this is the universe we recognize as home.
But while the stars are hitting their stride during the stelliferous era, dark energy—the mysterious energy that is causing the expansion of the universe to accelerate—is well on its way to cosmic domination. If the acceleration continues at its present rate, in another hundred billion years or so, most of the visible universe will pass beyond our cosmic horizon. Future denizens of the Milky Way will turn their telescopes to the sky and see just one galaxy: their own.
As Lawrence Krauss and Robert Scherrer pointed out in a 2007 paper, these future astronomers will see no evidence of cosmic expansion or the Big Bang. They will probably conclude that their universe is static; that it is as it has always been and always will be. Ironically, the very force that sculpted their universe—dark energy—will have erased its own fingerprints.
This idea troubled Harvard astronomer Avi Loeb, who imagined a future in which astronomers would look back on today’s cosmology textbooks (which would then be 100 billion years old) with the same combination of reverence and skepticism with which we view biblical origin stories today. “You will have all these textbooks, but their claims will be unverifiable,” says Loeb.
Loeb went looking for a way in which future astronomers could tease out the history of their universe. He found the answer in hypervelocity stars, stars traveling so fast that they escape the gravity of their home galaxy. Using their advanced telescopes to monitor these stars, says Loeb, future astronomers just might be able to probe the universe beyond their galactic boundaries.
But even those galactic boundaries will be erased in the course of time. Astronomers estimate that the longest-lived stars will begin to burn out some ten trillion years from now, throwing our universe into an era of cosmic twilight. Here, the universe is lit only by the feeble embers of white dwarfs and neutron stars, stellar corpses that will give off energy as they “hoover up” dark matter particles, says Adams. Though galaxies and galaxy clusters have managed to hold themselves together until now, a slow and steady stream of stars—the very same hypervelocity stars Loeb saw as cosmic ambassadors—will absent themselves from their galaxies until, over a period of about 10^20 years, galaxies will “evaporate” entirely, Adams and Laughlin calculate. The finely-woven tapestry of the universe will come undone.
A computer simulation of the cosmic web of dark matter and ordinary matter. Image credit: NASA, ESA, and E. Hallman (University of Colorado, Boulder)
Given sufficient time, even the protons and neutrons that make up the stuff of the universe will fall to pieces. How long will it take? That is still a mystery, though a combination of experiment and theory suggests that it will happen some time between 10^33 and 10^45 years after the Big Bang.
At that point, all that’s left of the stars and galaxies that once illuminated our universe will be a smattering of black holes. But even the reign of the black holes won’t last forever. As Stephen Hawking showed theoretically, black holes slowly leak out their contents via a process we now call Hawking radiation. Given enough time—as long as 10^100 years—even the biggest black holes will evaporate away.
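Hawking's result gives an evaporation time that grows as the cube of the black hole's mass, t ≈ 5120 π G² M³ / (ħ c⁴). A hedged estimate (the 10^11-solar-mass input is a stand-in for the largest known black holes):

```python
import math

# Sketch: black hole lifetimes from Hawking's evaporation formula,
# t = 5120 * pi * G^2 * M^3 / (hbar * c^4). Round constant values used.
G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.0546e-34    # J s
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
year = 3.156e7       # seconds per year

def evaporation_years(M_kg):
    """Approximate evaporation time of a black hole of mass M_kg, in years."""
    return 5120 * math.pi * G**2 * M_kg**3 / (hbar * c**4) / year

print(f"solar-mass hole:    {evaporation_years(M_sun):.1e} yr")       # ~2e67
print(f"10^11 solar masses: {evaporation_years(1e11 * M_sun):.1e} yr")
```

A solar-mass black hole lasts roughly 10^67 years; scaling up to the heaviest black holes pushes the lifetime to around 10^100 years, the figure quoted above.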
Only now will we enter what Adams and Laughlin dub the “dark” era. The dark era isn’t just very, very dark; it is also very, very boring. Next to nothing actually happens in the dark era. Thanks to the accelerating expansion of the universe, even humdrum particle collisions will become rarities.
Will the lonely monotony of the dark era ever end? Maybe. The same energy that has been driving the accelerating expansion of the universe could suddenly change character, a phenomenon theorists call vacuum energy decay. It happened once before—when the era of inflation ground to a halt soon after the Big Bang—and theorists believe that it should happen again.
“You could imagine a new start” for the universe, says Adams, in which matter gets a second chance to coalesce into stars, planets, even people. Or, the vacuum energy could decay before the universe ever makes it to the dark era. “If that happens,” says Loeb, “we’re back to a situation where once again we can see all those galaxies that we lost.”
Of course, these scenarios are a strong cocktail of science and speculation—and the further we look into the future, the more speculation is poured into the mix. So why study a universe that even our most distant descendants will never live to see?
The numerical models scientists use to project into the distant future can yield new insights into stellar life cycles—like how small, long-lived stars evolve into red giants—that we can’t observe progressing over the course of one (or many) lifetimes, says Adams. It also gives us a way “to gauge the cosmic importance of various aspects of the standard model,” says Loeb, by watching how they play out over time.
“It is part of our worldview to want to know what will happen,” adds Loeb. Yet I don’t think I’m alone in enjoying the fact that the next plot twist is, ultimately, a mystery.
Editor's picks for further reading
Astrobites: Avi Loeb and Freeman Dyson on the future of the universe
Can the universe be saved from the "dark era"? Astronomy blogger Nathan Sanders shares a conversation between Freeman Dyson and Avi Loeb on the prospect of "cosmic engineering."
FQXi: Predicting the End
Science writer Govert Schilling talks with Fred Adams and Greg Laughlin about how they became the authors of the future-biography of our universe.
The Five Ages of the Universe: Inside the Physics of Eternity
Fred Adams and Greg Laughlin had the bad fortune to publish this book just around the time that dark energy was discovered; their predictions therefore don't account for dark energy. Most of their conclusions about the distant future remain valid, though.
First, the good news: Despite earlier doomsday prognoses, the cosmos is not fated to implode. If you stay up late at night worrying that the entirety of creation will cave in on you in a universal Big Crunch, rest assured. Astronomical evidence suggests that the Big Bang expansion will never reverse itself; the universe will not collapse back down to a point. Rather, the universe, like ever-sprawling suburbs, will burgeon forever, its galaxies receding farther and farther away from each other.
Now, the bad news: Cosmic expansion is not only continuing, it is picking up its pace. This acceleration was discovered in 1998 through supernova observations conducted by Nobel Prize-winning physicists Saul Perlmutter, Brian Schmidt, and Adam Riess and their research teams. Though we don’t know what is causing the acceleration, one leading idea, called “phantom energy,” has ominous implications. If phantom energy continues to drive the universe faster and faster outward, it could literally tear space into shreds—a doomsday scenario called the “Big Rip.”
Phantom energy, proposed by Dartmouth physicist Robert Caldwell in 1999, is one possible variety of the mysterious stuff researchers call dark energy. Dark energy is the catchall phrase for the hidden agent of cosmic acceleration, and encompasses many different approaches. The simplest proposal adds an extra “antigravity” term, called a “cosmological constant,” to Einstein’s equations of general relativity. Einstein himself had proposed the term to stabilize his equations, but then hastily removed it after Edwin Hubble showed in 1929 that the universe was expanding—and not stable at all. Adding the term back into the equations, while assuming that the geometry of the universe is flat, leads to a prediction that the cosmic growth rate will gradually and steadily increase.
While a cosmological constant seems to model the accelerating universe nicely, at least according to current astronomical observations, many researchers are seeking a more tangible explanation. Instead of an extra term arbitrarily added to the gravitational equations, they are looking for an actual physical substance with repulsive properties. This substance must have a bizarre property called negative pressure. Though we never encounter negative pressure in everyday life, you can picture how it works by imagining poking a soap bubble and seeing it inflate rather than pop.
Gravitational researchers model the behavior of substances using an equation, called an equation of state, that relates the material’s pressure to its density. For conventional substances such as fluids and gases, both pressure and density are positive, so their ratio, usually called “w,” is positive as well. However, if a substance were to mimic a cosmological constant, its w ratio would be negative one. That is, if you try to squeeze it, it will get bigger, not smaller.
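To see what the w ratio does, here is a minimal sketch (illustrative code, not any group's research software) of the standard result from the Friedmann equations that a substance's energy density scales with the cosmic scale factor a as density ∝ a^(-3(1+w)):

```python
# Standard result from the Friedmann equations: a substance with
# equation-of-state ratio w = pressure / density has an energy density
# that scales with the cosmic scale factor a as rho ∝ a^(-3 * (1 + w)).

def density_scaling(w: float, a: float) -> float:
    """Density relative to today (a = 1) for equation-of-state ratio w."""
    return a ** (-3.0 * (1.0 + w))

# After the universe doubles in size (a = 2):
print(density_scaling(0.0, 2.0))   # ordinary matter (w = 0): dilutes to 0.125
print(density_scaling(-1.0, 2.0))  # cosmological constant (w = -1): stays 1.0
print(density_scaling(-1.2, 2.0))  # phantom energy (w < -1): density grows
```

Note the last case: for any w below negative one, the density actually increases as space expands. That runaway growth is the engine of the Big Rip.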
In 1998, shortly after the discovery of cosmic acceleration, Caldwell, along with astrophysicists Rahul Dave, then at Penn, and Paul Steinhardt of Princeton, argued for the greatest amount of flexibility in trying to describe dark energy. Borrowing the ancient Greek term for the “fifth essence,” they dubbed it “quintessence” and asserted that its equation of state could vary over time and space. Rather than assuming that w is negative one, they urged that its actual value be pinned down through astronomical observation.
This flexibility spurred Caldwell to think about the worst-case scenario imaginable for dark energy. He wondered what would happen if the amount of negative pressure exceeded the density—that is, if w was less than negative one. In our soap bubble analogy, larger negative pressure would mean that pressing on its exterior would cause it to puff up even faster. In the far future, he realized, gravity and all other forces would eventually lose potency, overwhelmed by the hulking menace of what he dubbed phantom energy. The reason is that unlike the steady density of the energy associated with a cosmological constant, the density of phantom energy would grow greater and greater, rising along with its negative pressure, building to a lethal crescendo—a Big Rip.
In Caldwell’s end game scenario, our distant descendants (relocated to other planets, perhaps) will notice the first sign of trouble billions of years from now as other galaxies recede beyond detection and disappear from the sky. (That would happen under the cosmological constant picture too, but is exacerbated in the case of phantom energy.) Eventually our local group of galaxies, including Andromeda, will become cosmic hermits. Then, tens of millions of years before the ultimate doomsday, the local group and then the Milky Way itself will break up like a dropped box of peanut brittle.
In the cosmic twilight times, all planetary systems will be torn apart and the stars and planets will explode. About thirty minutes later, all atoms will burst like fireworks. Then, in a mere fraction of a second, space itself will tear to shreds in the all-destroying Big Rip.1
If the devouring of reality by a ravenous shredder sounds revolting, you can console yourself with an alternative developed by physicist Pedro Gonzalez-Diaz of the Institute of Mathematics and Fundamental Physics in Madrid. He conjectures that phantom energy could feed one of the many wormholes that could exist in the fabric of spacetime. Over billions of years, the wormhole would swell up faster than the cosmic expansion rate and eventually engulf the material content of the whole universe—galaxies and such—sparing it from a Big Rip. Theoretically, then, everything could pass unscathed out the other end of the wormhole, emerging in another sector of reality, wherever that might be. Not exactly a joy ride, but at least it wouldn’t leave you in tatters!
The jury is still out on what causes cosmic acceleration. It could well be the case that the culprit is not phantom energy, but rather a gentler form of dark energy—such as a type of quintessence with a w value greater than or equal to negative one. If so, the expansion of space would increase more gradually, leaving the local group and the Milky Way gravitationally bound together as other galaxies eventually recede from view. While our enclave of space would be lonely, it would remain intact. In short, we’d experience a “Big Stretch” rather than a “Big Rip.”
1 A recent calculation by a team of researchers led by Chinese physicist XiaoDong Li uses current data to estimate that the Big Rip will take place 16.7 billion years from now. The dissolution of atoms would take place a mere 30 quintillionths of a second before the Big Rip.
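For readers who want to see where such numbers come from, the rip time can be estimated with the approximate formula from Caldwell, Kamionkowski, and Weinberg's 2003 paper. The sketch below uses assumed round values for the Hubble constant and the matter density, so its answer differs from Li's team's more careful figure:

```python
import math

# Approximate time remaining before a Big Rip, following the estimate of
# Caldwell, Kamionkowski & Weinberg (2003):
#   t_rip - t_now ≈ (2/3) / (|1 + w| * H0 * sqrt(1 - Omega_m))
# H0 and Omega_m below are assumed round values, not measurements.

H0_KM_S_MPC = 70.0          # Hubble constant, km/s/Mpc (assumed)
SEC_PER_GYR = 3.156e16      # seconds in a billion years
KM_PER_MPC = 3.086e19       # kilometers in a megaparsec
H0 = H0_KM_S_MPC / KM_PER_MPC * SEC_PER_GYR   # H0 in units of 1/Gyr

def time_to_rip(w: float, omega_m: float = 0.3) -> float:
    """Gigayears until the Big Rip, for phantom dark energy with w < -1."""
    return (2.0 / 3.0) / (abs(1.0 + w) * H0 * math.sqrt(1.0 - omega_m))

print(round(time_to_rip(-1.5), 1))   # roughly 22 Gyr for w = -1.5
```

The closer w sits to negative one, the longer the reprieve; as w approaches negative one exactly, the rip time diverges and we are back to the gentler cosmological-constant future.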
Editor's picks for further reading
NBCNews.com: Will universe end in a ‘big rip’?
Space writer Robert Roy Britt talks with cosmologists about the possibility of a Big Rip.
Open Yale Courses: Dark Energy, the Accelerating Universe, and the Big Rip
In this video, Yale astronomer Charles Bailyn reviews the evidence for dark matter and dark energy and presents the possibility of a Big Rip.
When I see those victorious Olympic athletes all bedecked on the podium, beaming their gold-medal smiles and crying their gold-medal tears, I can’t help thinking: Now what?
And now that the coming-out party for the Higgs (or the Higgs-like boson, if you must) is over—the bubbly popped, the headlines receded—are physicists asking themselves the same question?
Certainly, physicists are not crying into their champagne. The discovery of a new boson right where the Higgs should be is a scientific tour-de-force. “It confirms, as it completes, the Standard Model of fundamental physics,” Frank Wilczek wrote here on the morning of the announcement.
And yet, science thrives on observations that don’t match up with predictions. Dark energy and dark matter, two of the greatest discoveries in a century of astrophysics, were hit upon because of the yawning gap between prediction and observation. If the universe is a puzzle, dark energy and dark matter are odd-shaped pieces that puckishly refuse to be wedged into place and, in their refusal, open up the possibility that the puzzle is actually richer and more complex than we ever anticipated. The Higgs, on the other hand, snaps right into place with a satisfying “Eureka!”
But if the puzzle of the Standard Model is now complete, where does that leave physics?
“There’s this huge looming question: The Standard Model works impeccably, but it leaves a lot of things unexplained,” says David Kaiser, a physicist and science historian at MIT. The Standard Model does not account for gravity, for instance, and it provides no explanation for why the physical constants take the particular values that they do. Like the periodic table of the elements, the Standard Model is an utterly faithful census of the ingredients that make up our universe. But while we know the elegant atomic underpinnings of the motley periodic table, we are still seeking the deeper laws that are expressed in the Standard Model.
“I always felt the best possible thing for the LHC would be to not see the Higgs,” says Peter Woit, a theorist at Columbia University. That would have cracked the Standard Model wide open, perhaps giving scientists a glimpse of the deeper physics underlying it. In this sense, says Woit, “The Standard Model is a victim of its own success.” Though it fails to answer some fundamental questions about our universe, it is so impervious to experimental contradiction—so perfect in its predictions—that physicists may soon find themselves at an impasse.
“If this is really the Higgs, then we have completed the Standard Model,” says physicist Peter Fisher of MIT. “We have created this model that describes exquisitely the world around us. We could legitimately say that, as a field of endeavor, we’ve done all there is to be done, and ask: Is this a place to stop and reassess?”
Physicists do have some guesses at what may lie beyond the Standard Model. There’s supersymmetry, for one, which suggests that elementary particles have mirror-image “superpartners” that differ in spin. Yet, to the surprise of some physicists, even the LHC has been unable to turn up any evidence of these superpartners. That suggests that, if superpartners are out there, they don’t possess the neat mirror-image symmetry we expected. Instead, the mirror that divides “us” from “them” may be warped.
“With the Higgs, you knew exactly what to look for,” says Woit. But the mirror of supersymmetry, if it exists, “could be warped in any arbitrary way,” leaving physicists to pursue an almost limitless game of hide-and-seek. And what if the superpartners—or other hints of new physics—are hiding where the LHC can’t find them?
But the story of the Higgs isn’t over yet. Over the coming months, physicists on the CMS and ATLAS teams will look to see whether this thing they have found decays in the ways they expect. Perhaps the new boson will turn out to be not so “vanilla” after all. Historically, it is often the “one last measurement to nail it down” that ends up taking physics in a new direction, Kaiser points out.
To Nobel prize-winning physicist Frank Wilczek, finding the new boson is just the beginning. “Having won this glorious battle, I'm psyched up for complete victory. We need to see some of the new particles that low-energy supersymmetry predicts. I think that will eventually happen at the LHC.”
“There is also room for gratuitous, but not perverse, speculation about the Higgs being a ‘portal’ into hidden sectors—hypothetical worlds of particles that have neither strong nor weak nor electromagnetic interactions,” adds Wilczek.
Yet Steve Ahlen, a Boston University physicist who helped build the ATLAS detector, thinks that the story of the quest for the Higgs has a somewhat different moral: “The most impressive thing about the success of the LHC, CMS and ATLAS is that thousands of people from all over the world, supported by tax dollars from many hundreds of millions of people, achieved success without the promise of fortune, power or fame, but for the simple joy of observing the beautiful world we live in. I think there is an important lesson to be learned from that.”
The recent buzz over the discovery of a new boson that might be the long-sought quantum of the Higgs field has led some to forget that the Large Hadron Collider at CERN isn’t a single-purpose facility. Two large experiments each engage approximately 3,000 physicists in a concentrated effort to better understand the rules that govern the universe in which we live. These collaborations can study many different phenomena. One of the most tantalizing of these phenomena is called supersymmetry—and it could pick up where the Higgs left off.
There are many mysteries remaining in physics. While the Standard Model states that the Higgs boson gives subatomic particles their mass, it is quite silent on the specific masses held by each particle. Further, it doesn’t explain why so many different types of particles exist, nor does it explain why there are three forces and not two or 20.
Subatomic particles have a property called spin which can be usefully (and misleadingly) imagined as each particle being a tiny spinning ball, though the reality is that spin is an inherent property of these particles in the same way that electric charge is. Particles are divided into two classes based on their spin: Particles with half-number spin (1/2, 3/2, 5/2 and so on) are called fermions, and particles with whole-number spin are called bosons.
Supersymmetry proposes a new rule to govern the relationship between fermions and bosons. According to supersymmetry, the equations that describe the universe should work in exactly the same way if all fermion and boson terms are swapped. This implies that, for every particle known in the Standard Model, there should be an as-yet-undiscovered cousin particle. These cousin particles are identical to the known particles in every way except that they have different spin.
If supersymmetry is right, then the existing fermion quarks have boson cousins called “squarks”; each lepton has a supersymmetric cousin called a slepton. For bosons, the naming convention is a little different: The bosons of the Standard Model (the gluon, photon, and W and Z bosons) have supersymmetric fermion cousins called the gluino, photino, wino and zino.
Though none of these particles has yet been observed, their very absence does offer us one important insight: If supersymmetry exists, it is not, in fact, perfectly symmetric. Recall that I said that the supersymmetric cousins of the familiar particles of the Standard Model were the same in every way except for their spin. This means that the selectron would have the same mass as the familiar electron and the up squark would have the same mass as the up quark. However, were this true, we would have discovered them already. Given that we haven’t, we can categorically say that supersymmetry in its ideal form has already been falsified.
However, it could be that supersymmetry is mostly true, but “broken.” In the same way that an imperfect top might spin smoothly for a while, only to wobble and finally settle with a preferred side touching the ground, the universe might obey a supersymmetry that is mostly, but not exactly, true. Just what mechanism breaks the symmetry between the Standard Model particles and their supersymmetric cousins is not known, although many ideas have been proposed.
Given this complication, you are to be forgiven if you are suspicious of the whole idea. So why the interest in supersymmetry? Why have more than ten thousand scientific papers (both experimental and theoretical) been written on the subject?
While there are several reasons to find the idea intriguing, one topical example is the way in which supersymmetry is thought to be linked to the Higgs boson. While we remain unsure if the boson we found in July is the Higgs boson, the new boson has a mass of about 125 GeV, or about 133 times heavier than a proton. This is an utterly unnatural value for the mass of the Higgs boson.
Why unnatural? The Higgs boson gains its own mass (in part) through its interaction with the other subatomic particles: the quarks and leptons and force-carrying bosons. These particles should have a huge influence on the mass of the Higgs—on the scale of 10^15 GeV. That’s over ten trillion times the observed mass of the new boson. So why isn’t the Higgs weighing in at that enormous mass?
First, we are helped because the contributions from the fermions and bosons are of opposite sign, so they can cancel each other out. But without invoking supersymmetry, it seems pretty suspicious that they would be so close in value. It’s uncanny, like a big bank simultaneously taking in a deposit of about a trillion dollars and making a loan of almost exactly the same amount, down to a few bucks.
Supersymmetry can explain this quite easily, though. After all, for each particular fermion (say, an electron), there is a corresponding boson (a selectron). Given the symmetry and the fact that fermions and bosons contribute with opposite signs, it is easy to see how these two corresponding particles could cancel each other out exactly. If supersymmetry were in fact perfectly symmetric, they would cancel each other perfectly, and the mass of the Higgs boson would come solely from its interaction with other Higgs bosons.
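The bank analogy can be made numerical with a toy calculation. The numbers below are schematic stand-ins, not real quantum field theory values, but they show the scale of the coincidence:

```python
# Toy illustration of the fine-tuning ("naturalness") problem. The two
# contributions below are schematic: corrections of order
# (10^15 GeV)^2 = 10^30 GeV^2, with opposite signs for bosons and
# fermions, that must cancel to leave a Higgs mass of only 125 GeV.

boson_loops = 10**30                 # schematic boson contribution (GeV^2)
fermion_loops = -(10**30 - 125**2)   # schematic fermion contribution (GeV^2)

higgs_mass_sq = boson_loops + fermion_loops
print(higgs_mass_sq)                 # 15625, i.e. (125 GeV)^2
print(higgs_mass_sq / boson_loops)   # ~1.6e-26: one part in 10^26
```

Unbroken supersymmetry would pair every fermion loop with a boson loop of equal size and opposite sign, making this near-perfect cancellation automatic rather than a one-in-10^26 coincidence.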
This example is but one of the myriad phenomena that supersymmetry can explain. You should remember that we don’t know that supersymmetry is actually present in the universe; just because it works on paper doesn’t make it real. It makes it a cool idea. However, scientists at the Large Hadron Collider are hot on supersymmetry’s trail. If supersymmetry is the answer to why the mass of the Higgs boson is small but not zero, we will find it at the LHC.
Editor's picks for further reading
Nature's Blueprint: Supersymmetry and the Search for a Unified Theory of Matter and Force
Theoretical astrophysicist Dan Hooper's book on supersymmetry and the LHC's role in the search for evidence of supersymmetry.
Scientific American: Is Supersymmetry Dead?
Davide Castelvecchi asks what it will mean for physics if the LHC does not turn up evidence of supersymmetry.
Supersymmetry: Unveiling the Ultimate Laws of Nature
Physicist Gordon Kane's accessible 2001 book on supersymmetry.
When a team of astronomers in 1992 released the first full-sky map of the cosmic microwave background—also known as the afterglow of the big bang—George Smoot, one of the group’s leaders and later a Nobel laureate, said, “If you’re religious, this is like looking at God.”
Mystical undertones stir passions and risk muddying our understanding of science. But whatever one’s views, it is an intriguing coincidence that a possible key to reading Smoot’s words comes to us from none other than Dante Alighieri’s “Paradiso,” written in the early years of the 14th century. The cosmic microwave background, or CMB, shows us a slice of the universe as it looked more than 13.7 billion years ago, and the structure of that universe bears a striking resemblance to that of Dante’s heaven—at least according to some commentators. It is as if the poet had presaged some of the most striking developments of modern mathematics and cosmology six centuries before they emerged.
“Paradiso,” the third and final part of the “Divine Comedy,” narrates an allegorical journey in which Dante ascends from Earth, visits heaven, and eventually gets to behold the creator himself. First, Dante crosses a series of concentric spheres, all centered at Earth, which hold the planets, Sun, moon, and the stars. The next sphere he reaches is one that encloses the entire physical universe. As he crosses it, he steps into the spiritual realm.
The otherworld, however, also has a geometric structure, and it is completely symmetrical to that of the physical world, with nine concentric spheres inhabited by angels and the souls of the most virtuous dead. But instead of growing ever larger, these spheres grow ever smaller. And at the center, Dante says, sits God, occupying a single point and emanating a blinding light.
Thus Dante’s entire universe—both physical and spiritual—consists of two sets of concentric spheres, one centered at Earth, the other at God. If you were to point a laser vertically up toward the sky from any point on Earth, you’d be pointing it straight at that single point where Dante places God.
In a sense, then, the successive spheres of the spiritual world enclose all of the physical spheres, Dante seems to imply, even though they get smaller and smaller as you move farther away from Earth and closer to God. Such a geometry seems impossible, and the passage has mystified commentators for centuries. In fact, these bizarrely nested spheres are both mathematically and physically possible. To discover why, we have to turn to mathematics that wouldn’t be discovered until centuries after Dante’s death.
In the geometry of our everyday experience, also known as Euclidean geometry, if we draw a sphere around us, the larger the sphere’s radius, the larger its circumference; more precisely, doubling the radius of a sphere doubles its circumference. But this is an empirical fact and not a logical necessity: there is such a thing as non-Euclidean geometry, in which it is perfectly allowable for a sphere to have a circumference that is not proportional to its radius.
Moreover, non-Euclidean geometry is not just a bizarre, abstract invention of mathematicians. In fact, Einstein showed in his theory of general relativity that the geometry of the universe itself is fundamentally non-Euclidean. This is what allows space to twist and bend like a cosmic contortionist.
The discrepancy between the real world and Euclidean geometry is tiny in ordinary situations—a satellite’s orbit around Earth, for example, may be a few inches shorter than Euclidean geometry would predict—but becomes substantial in extreme situations, such as around black holes.
Dante’s universe, then, can be interpreted as an extreme case of non-Euclidean geometry, one in which the circumferences of concentric spheres don’t just grow at a different pace than their radii; at some point the spheres actually stop growing altogether and start shrinking instead. That’s crazy, you say. And yet, modern cosmology tells us that that’s the structure of the cosmos we actually see in our telescopes.
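One way to make this geometry precise, in line with the reading some commentators give Dante, is the "hypersphere," or 3-sphere. A short sketch of how spheres behave on it:

```python
import math

# On a 3-sphere of curvature radius R, the sphere you can draw at geodesic
# distance r from yourself has circumference C(r) = 2 * pi * R * sin(r / R).
# It grows until r = pi*R/2, then shrinks back toward zero at r = pi*R:
# larger "radius," smaller sphere, just like Dante's nested heavens.

def circumference(r: float, R: float = 1.0) -> float:
    return 2.0 * math.pi * R * math.sin(r / R)

for r in [0.5, 1.57, 2.5, 3.0]:
    print(f"r = {r:4.2f}   C = {circumference(r):.3f}")
# The circumference peaks near r = pi/2 and then DECREASES as r grows,
# vanishing at the antipodal point r = pi, where Dante places God.
```

Every point on a 3-sphere has a single antipode, which is why a laser fired in any direction from Earth would, in Dante's scheme, point at the same spot.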
We can think of the observable universe as being made of concentric spheres, just like Dante’s universe. Because light travels at a finite speed, we see distant galaxies as they were in the past, at the time when they emitted the light that we now receive from them. By definition, light covers one light-year of distance every year. Thus, for example, we can picture all galaxies that we see as they were one billion years ago as residing on a sphere centered at our position with a radius of one billion light-years. (These spheres are of course not solid objects, and they are not absolute but relative to the observer, contrary to those in Dante’s 14th-century cosmology.)
Now, the universe we see all sprang up from a very small region of space, and has been expanding ever since. Cosmologists have placed the beginning of time at about 13.7 billion years ago. That means that our game of drawing concentric spheres cannot be pushed to an arbitrary distance. But it also has another consequence. As the radius of the spheres pushes close to that magic number of 13.7 billion and change, we are looking at smaller and smaller regions of space, despite the fact that those regions still span our entire field of view, in all directions of the sky.
In fact, when astronomers map the CMB, they are mapping a sphere that surrounds us and that is very close to that initial moment—at roughly 400,000 years after the big bang—and thus has a “radius” of around 13.7 billion light-years. But its circumference is a lot smaller than what you would expect from Euclidean geometry—more than a thousand times smaller. Spheres that are even closer to the big bang are even smaller, until our field of view converges to that single point we call the big bang. Theoretically, we could cast a laser in any direction and still aim at that single point.
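You can check this shrinking-spheres picture with a rough numerical integration in the standard flat expanding-universe model. The cosmological parameters below are assumed round values, so treat the output as an order-of-magnitude sketch:

```python
import math

# Rough flat-LambdaCDM estimate of the size of the sphere we see as the CMB.
# Assumed round parameters: H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_L = 0.7.
H0 = 70.0
OMEGA_M, OMEGA_L = 0.3, 0.7
HUBBLE_DIST_GLY = 299792.458 / H0 * 3.262e-3  # c/H0 in billions of light-years

def E(z: float) -> float:
    """Dimensionless expansion rate H(z)/H0 for a flat universe."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance_gly(z: float, steps: int = 100000) -> float:
    """Comoving distance to redshift z, by trapezoidal integration of c/H."""
    dz = z / steps
    total = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, steps):
        total += 1.0 / E(i * dz)
    return HUBBLE_DIST_GLY * total * dz

z_cmb = 1100.0                               # redshift of last scattering
d_comoving = comoving_distance_gly(z_cmb)    # ~45 Gly in today's coordinates
r_at_emission = d_comoving / (1.0 + z_cmb)   # physical radius when light left
print(round(d_comoving, 1), round(r_at_emission, 3))
```

The sphere of last scattering, as it physically was when the light left it, had a radius of only a few hundredths of a billion light-years: far smaller than a Euclidean sphere whose "radius" is the 13.7-billion-year light-travel distance.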
One very bizarre consequence of the non-Euclidean nature of the observable universe is that distant objects appear larger than their true size. For the first 10 billion light years or so, galaxies look smaller if they are farther away, but beyond that distance they instead start taking up a larger and larger field of view in the sky, as if space itself acted like a magnifying lens. In practice, the effect is exceedingly difficult to actually observe in our telescopes, because at those distances galaxies look extremely faint. But in recent years astronomers have begun several projects to detect the magnification effect in their observations, not by looking at the apparent size of galaxies but at their spacing. To do so, they map hundreds of thousands of galaxies over a range of distances spanning many billions of light-years. “You look at where the galaxies formed, not at how big they are,” explains astronomer Tamara Davis of the University of Queensland, who participates in one such mapping effort called WiggleZ.
Of course, Dante lived five centuries before any mathematicians ever dreamed of notions of curved geometries. We may never know if his strange spheres were a mathematical premonition or esoteric symbolism or simply a colorful literary device.
Editor's picks for further reading
Non-Euclidean Geometry Online: a Guide to Resources
Mircea Pitici's brief introduction to non-Euclidean geometry.
The Poetry of the Universe
Mathematician Robert Osserman's volume of "math for poets."
The World of Dante
Explore Dante's writing with interactive maps, images, music, and more.
You’ve just started reading this post. The decision is done. Seconds have ticked by and you’ve chosen to use them clicking on this article, rather than pursuing hang gliding, water skiing, mountain climbing, chocolate sampling, or countless other options. Sure you could do those things later, but what was “now” is already gone. If only you had a wormhole time machine and could go back in time to undo your choice! But how to make a wormhole time machine? Read on if you’d like some suggestions from the world of theoretical physics.
Step in to my time machine. Credit: NASA/Les Bossinas (Cortez III Service Corp.), via Wikimedia Commons
Flash back to the late 1980s—with your imagination, not a time machine just yet. The extraordinary astronomer and science communicator Carl Sagan, fresh off his award-winning PBS series Cosmos, decided to write a science fiction novel about interstellar travel, "Contact." Needing a way for his protagonist to travel quickly to another planet, he asked his friend Caltech astrophysicist Kip Thorne for advice.
Thorne is an expert in general relativity, Einstein’s masterful theory of gravity. The equations of general relativity serve as a recipe for how nature kneads the dough of spacetime (space and time combined) into various shapes—from as flat as a pancake to as curvy as a croissant. These shapes determine how other things move. Just as an ant at a picnic would take a more winding route around an apple than across a napkin, objects in the universe (planets, comets, and so forth) veer along curved paths in warped regions. What distorts these sectors of spacetime is the amount and distribution of mass and energy. For example, the gravitational well of the solar system is carved out by the mass of the Sun.
In extreme cases, a glop of mass concentrated in a small enough region will tear the fabric of spacetime, causing what is called a singularity—a point of infinite density where spacetime seems to reach a dead end. Such is the case with what is called the Schwarzschild solution of Einstein’s equations of general relativity, used to describe the ultra-dense, collapsed stellar cores known as black holes. However, as Einstein and his assistant Nathan Rosen showed in 1935, one can mathematically extend the Schwarzschild solution across an “Einstein-Rosen bridge” and link it to another region of spacetime. In the 1960s, the creative Princeton physicist John Wheeler, who was Thorne’s PhD advisor, dubbed these connections “wormholes,” imagining a worm taking a shortcut by crossing an apple’s interior. (Wheeler also coined the term “black hole.”)
When Sagan contacted Thorne he was envisioning something like a Schwarzschild wormhole connecting two otherwise distant parts of space—an interstellar Chunnel, so to speak. But Thorne realized that a Schwarzschild wormhole wouldn’t do. For one thing, it was unstable to matter, meaning that the gravitational effect of even the slightest drop of mass would cause it to collapse. Therefore it would close off if a spaceship tried to enter—that is, if the space voyagers could make it that far. If the wormhole entrance lay in the bowels of a black hole, the travelers would encounter deadly radiation, bone-crushing gravitational forces, and enough stomach-churning acceleration to make even the Dangerous Sports Club give it a miss.
Thorne asked his then-student Michael Morris to help him come up with an alternative. They crafted a novel solution of Einstein’s equations of general relativity that would represent a wormhole that could be traversable by human voyagers, such as the fictional heroine of "Contact." The solution was custom-designed to eliminate the nasty aspects of navigating into a black hole and allow for a relatively quick, comfortable ride. After passing into the wormhole’s “mouth” (as its entrance was called) and journeying through its “throat” (as its passageway was called), a voyager would find herself emerging from another mouth somewhere in another part of space. Instead of traveling hundreds of years or more to reach another star, if all went well, she’d swiftly arrive in its vicinity.
Morris and Thorne realized that their scheme was extremely hypothetical—requiring a virtually inconceivable engineering feat. For one thing, the amount of mass needed to create the wormhole was comparable to that of a galaxy. Moreover, a new type of negative mass material, called “exotic matter,” would be necessary to prop open the wormhole’s throat and prevent it from collapsing. No known substance has negative mass.
Offering some cause for optimism, physicist Matt Visser of Victoria University of Wellington soon found a way to minimize the amount of exotic matter required. As he and others have pointed out, exotic matter has features in common with the energy of the quantum vacuum, the bedrock state of particle physics, which has a repulsive pressure. Perhaps a future civilization could mine enough of this energy to suffice for wormhole construction. A hypothetical energy called “phantom energy,” a type of dark energy with a considerable amount of negative pressure, used to explain the acceleration of the universe’s expansion, also holds promise as a potential way to stabilize wormholes.
Shortly after Morris and Thorne published their first paper they collaborated with Ulvi Yurtsever, another of Thorne’s PhD students at Caltech, on another remarkable article showing how a wormhole could be used as a time machine. The key would be to speed up one of the mouths of the wormhole to close to the speed of light while leaving the other one fixed. According to the phenomenon of time dilation, an aspect of Einstein’s special theory of relativity, time in the vicinity of a near-light-speed object will slow down significantly relative to a stationary observer. Therefore, while the fixed mouth ages 100 years, the high-speed mouth, if it is fast enough, might experience only one year. If the calendar reads 2112 for the former, it would read 2013 for the latter. Now suppose a space traveler sails into the fixed mouth in 2112. If passage through the throat is quick enough, she would emerge through the moving mouth in 2013.
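The speed required for that hundredfold slowdown follows directly from the time-dilation formula; a quick sketch:

```python
import math

# Time dilation: a clock moving at speed v runs slow by the factor
# gamma = 1 / sqrt(1 - (v/c)^2). For the moving wormhole mouth to age
# 1 year while the fixed mouth ages 100 years, we need gamma = 100.

gamma = 100.0
beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v as a fraction of light speed c
print(f"required speed: {beta:.6f} c")  # about 0.99995 of light speed
```

So the mouth must be hauled around at well over 99.99 percent of light speed, one more reason this remains an engineering project for a very distant future.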
If you are still thinking about all the things you could have done if you hadn’t clicked on this post, you now know the answer. Assuming you have an advanced spaceship and a CPS device (Cosmic Positioning System), simply find a wormhole, journey through it, go back to the time before you started reading this, and convince yourself to go surfing instead. You are cautioned, however, that your actions would create a paradox¹, because if you never read the article you wouldn’t know how to go back in time (or at least wouldn’t have the need). Proceed to the past at your own risk!
¹ To avoid paradoxes such as meeting yourself in the past and convincing yourself never to pursue time travel, or going back in time and accidentally eliminating your ancestors, some physicists have asserted that backward time travel is impossible. Stephen Hawking, for example, postulated the Chronology Protection Conjecture to shield the past from tampering. Igor Novikov of Moscow State University and the Lebedev Physics Institute in Russia has argued, in what he called the Self-Consistency Principle, that past-directed temporal voyages are fine as long as the altered past is consistent with the present—that is, it was really supposed to happen. For example, if you go back in time and convince Carl Sagan that wormholes wouldn’t fit into his novel, maybe that’s just the incentive he needed to contact Kip Thorne and check if they would, leading to what actually happened. Finally, there are some who speculate that backward time travel could lead to a bifurcation of time into parallel realities.
In any case, the work of Thorne, Morris, Yurtsever, Novikov, Hawking, Visser and others has propelled the discussion of time travel and wormholes from fanciful science fiction into serious, peer-reviewed—albeit highly speculative—science. Who knows, perhaps someday our civilization will be advanced enough to test such far-reaching hypotheses and create or find actual wormholes. Only time will tell—and if wormholes exist, we have all the time in the world.
Editor's Picks for Further Reading
Daily Mail: Stephen Hawking: How to Build a Time Machine
Stephen Hawking on wormholes and the paradoxes of time travel.
Space Time Travel: Flight Through a Wormhole
Explore computer-generated images of a hypothetical trip through a wormhole.
Wikipedia: Wormholes in Fiction
From "A Wrinkle in Time" to "Fringe," discover how writers of books, television, and movies have used wormholes in their storytelling.