Imagine describing our universe to an alien from an alternate dimension. Where would you start?
You might reasonably begin by explaining that we live in three dimensions of space and one dimension of time. Space and time are so fundamental to our understanding of the universe that they are woven into nearly every equation in physics. They are the words in which we speak the language of nature—so tried, tested, and true that we don’t even know how to talk about the cosmos without engaging space and time in the conversation.
But what if it turns out that space and time are not the fundamental infrastructure of our cosmos—what if they are themselves products of some deeper physics?
This idea is called emergence. We see it in nature, as when fish school or birds flock. If you were only to study an individual fish or bird, you would never predict how they would come together as a group. Yet each one “knows” simple rules that, when combined, create a wide range of agile and elegant behaviors. Could it be that physicists have been studying flocks all along, not realizing that it’s the birds that are truly fundamental?
“There aren’t many things in quantum gravity that everyone agrees on,” says Eleanor Knox, a philosopher at King’s College London who specializes in the philosophy of physics. “Yet the one thing many people seemed to agree on in quantum gravity was that we were going to have to cope with space and time not being fundamental.”
It sounds radical, but physics has a long and proud history of spearheading exactly this kind of coup. “Historically, whenever we thought something was fundamental, it turns out that it is not,” says Nathan Seiberg, a theoretical physicist at the Institute for Advanced Study. Kepler, for instance, believed that the Platonic solids were the fundamental constituents of the universe. Today we know better. In the 17th century, scientists thought that cold was a substance that could flow from one place to another, chilling your doorstep or the tip of your nose. Now we understand that heat and cold are just another way of talking about the statistical properties of a collection of molecules. Of course, that doesn’t mean that it feels any less real when you burn your tongue on your hot cocoa.
So why are physicists picking on space? Relativity delivered the first strike. “In relativity, space and time are not rigid. They are dynamic,” says Seiberg. Building all of physics on such a malleable infrastructure is akin to constructing your house on a foundation of Jello.
More alarmingly to theorists, our ability to measure features in space is intrinsically limited. A ruler can’t measure distances smaller than the width of its painted markings; the resolution of a microscope is constrained by the wavelength of the light in which it makes images; even scanning tunneling microscopes are limited by the physical size of their probe tips.
Can’t we just build a better microscope? “It’s not because we don’t have the budget to build a powerful enough machine,” explains Seiberg. If we somehow tried to make an infinitely small measuring device, that device would become so dense that it would warp the fabric of space. The conclusion: “Space itself is ambiguous,” says Seiberg. Strike two.
Space also took a hit from an unlikely foe: the hologram. We think of holograms as the dazzling, silvery images on postcards and credit cards: two-dimensional objects that project three-dimensional pictures. More generally, though, a hologram is anything—even an equation—that encodes an extra dimension’s worth of information. It turns out that you can write equations that describe our universe perfectly well using different combinations of spatial dimensions, creating mathematical holograms that are indistinguishable from reality. Like a book that can be translated into many disparate languages without losing a syllable of meaning, our universe seems to tell a story that is independent of the words in which we have always chosen to express it.
Finally, physicists have known for some time that their descriptions of space start to break down when they’re applied to the strange-but-true environments inside black holes and close to the time of the big bang. In such cases, the familiar equations start popping out infinities—nonsense answers that suggest that the equations are missing some essential machinery. “Something else should kick in,” says Seiberg.
But what is that something else? “I don’t think I have an answer to that,” says Seiberg. Knox also leaves the door open to as-yet-unknown possibilities: “Whatever it is that’s fundamental, it’s not the stuff we have a handle on right now.” Moreover, Seiberg adds that though theorists have assembled a strong case that space is emergent, time presents a more difficult problem. “In order to understand emergent time, we need a complete revolution in the way we think about physics.”
Letting go of space and time without ready replacements may seem like a surefire way to plunge into the abyss of abstraction. But it may be only by loosening our grip that we can come to grasp what is truly fundamental.
Editor's picks for further reading
Discover Magazine: Newsflash: Space and Time May Not Exist
If time isn't fundamental, what is it?
FQXi: Breaking the Universe's Speed Limit
John Donoghue investigates the possibility that the speed of light is not a constant.
FQXi: Melting Spacetime
Joanna Karczmarek investigates how space and time could emerge from deeper physics.
Are we living in someone else’s fantasy?
The Chinese philosopher Zhuangzi posed this question more than two thousand years ago when he recalled waking from a dream unsure whether he was a man who dreamed he was a butterfly or a butterfly dreaming that he was a man. Today, with the advent of computers that can simulate cells, cities, and even solar systems, philosophers and scientists are asking this ancient question in a new way: Are we living in a computer simulation?
This question is more than just the premise of "The Matrix." It's a conjecture that lives at the intersection of humanity and technology—and though it might seem like philosophy, it spurs ambitious new questions about what computers are capable of and about the nature of reality itself. As theorists begin to think of our universe as nothing more than a vast collection of information, can we ever truly know whether our reality is as “real” as we think it is?
The philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, posed the latest iteration of this ancient question in a 2003 paper. His "simulation argument" begins with the observation that modern computers have improved at an exponential rate since their invention. If computing power continues to grow at this pace, advanced civilizations will one day be able to build titanic, densely-packed supercomputers capable of doing everything from beating the stock market to predicting the weather months or years in advance. “Post-human” programmers might even use these machines to simulate entire civilizations, vast electronic worlds that would put today’s computer games to shame.
What would it take to create this kind of simulation?
When it comes to simulating a person, scientists estimate it might take 10¹⁷ operations per second—that's one followed by 17 zeroes—to simulate a human brain, based on the number of neurons in the brain and the rate at which those neurons “talk” to each other. Assuming that simulating the sensory events a person experiences—every taste, sound, smell, touch and sight that is coded in our neurons—takes about 100 million bits per second, and that approximately 100 billion humans have lived on Earth to date, Bostrom estimates it might take 10³⁶ calculations in total to create a simulation of the whole of human history that is indistinguishable from reality.
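Bostrom's figure is simple enough to check on the back of an envelope. Here is a rough reconstruction, using the upper-end estimate of 10¹⁷ operations per second per brain from the text; the average lifespan of roughly 50 years is my own assumption, not one of Bostrom's stated inputs:

```python
import math

# Rough reconstruction of Bostrom's estimate. Inputs are assumptions:
# the article's upper-end 10^17 ops/sec per brain, ~100 billion humans
# ever born, and an assumed ~50-year average lifespan.
OPS_PER_BRAIN_PER_SEC = 1e17
HUMANS_EVER = 1e11
AVG_LIFESPAN_SECONDS = 50 * 365 * 24 * 3600   # ~1.6 billion seconds

total_ops = OPS_PER_BRAIN_PER_SEC * HUMANS_EVER * AVG_LIFESPAN_SECONDS
print(f"Total operations: ~10^{math.log10(total_ops):.0f}")
```

With these inputs the total lands near 10³⁷; Bostrom's 10³⁶ comes from somewhat lower per-brain estimates, so the two agree to within an order of magnitude, which is all a back-of-the-envelope argument like this needs.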
That’s just to simulate the parts of the universe that humans can sense. What about the microscopic structure of the Earth's interior or the subtle features of distant stars? These little details could be safely omitted until a simulated person needed to observe them. In addition, to save computing power, maybe not every person in a simulation would be fully simulated. Perhaps some of the characters in the simulation would be "zombies or 'shadow-people'—humans simulated at a level sufficient for the fully simulated people to not notice anything suspicious," Bostrom writes in his paper.
So how close are we to achieving this dream (or nightmare)? Today’s most powerful supercomputers operate at roughly 10 petaflops—that is, 10¹⁶ calculations per second. A planet-sized computer based on current electronics might carry out 10⁴² operations per second. Bostrom also notes that quantum physicist Seth Lloyd of MIT has calculated that a 1-kilogram "ultimate laptop" that operates at the known limits of physics might be capable of 5 × 10⁵⁰ operations per second. So, the planet-sized computer might be able to simulate all of human history in a millionth of a second; the ultimate laptop, in a couple of quadrillionths of a second.
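Dividing the total operation count by each machine's speed reproduces those timings; the figures below come from the text, and only the arithmetic is added here:

```python
# Timing estimates from the figures quoted above (all rough assumptions):
# ~10^36 operations for a full human-history simulation,
# a planet-sized computer at 10^42 ops/sec,
# and Seth Lloyd's "ultimate laptop" at 5 x 10^50 ops/sec.
TOTAL_OPS = 1e36
PLANET_COMPUTER_OPS = 1e42
ULTIMATE_LAPTOP_OPS = 5e50

planet_seconds = TOTAL_OPS / PLANET_COMPUTER_OPS    # one millionth of a second
laptop_seconds = TOTAL_OPS / ULTIMATE_LAPTOP_OPS    # a couple of femtoseconds

print(f"Planet computer: {planet_seconds:.0e} seconds")
print(f"Ultimate laptop: {laptop_seconds:.0e} seconds")
```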
Given that fully simulating every person who has ever lived might only take a tiny fraction of an advanced civilization's resources, Bostrom reasons that the number of computer-generated minds buzzing away inside simulations could vastly outnumber the total sum of real minds that have ever lived. If that is true, the odds are that we are simulated, not real. It may even be possible that our simulators are themselves simulated, and their simulators are simulated, and so on. "Reality may thus contain many levels," Bostrom says.
This does not prove that we live in a simulation, Bostrom emphasizes. There are a number of caveats that could stop this bizarre future before it starts. One glum possibility is that civilizations might very well go extinct or collapse—say, by annihilating themselves in a nuclear war—before they can develop supercomputers of such immense power. Another thought is that civilizations simply have no desire to commit the vast resources needed to create supercomputers. Or perhaps advanced civilizations might not indulge in such simulations—maybe they would be ethically opposed to simulating minds and their suffering, or they might prefer to entertain themselves with machines that directly stimulate their brain's pleasure centers. "Personally, I assign less than 50 percent probability to the simulation hypothesis—rather something like in the 20 percent region, perhaps, maybe," Bostrom writes, although he describes this as a gut feeling rather than part of his logical argument.
Unless the simulators decide to make themselves known, there may be no way to prove or disprove the simulation argument. Some have suggested looking for "glitches" in the simulation, but such glitches would be more plausibly explained as hallucinations, visual illusions, fraud or self-deception. Even if errors did pop up, a smart simulator could simply wipe any memory of the anomaly from our simulated brains.
If we are living in a computer simulation, how should we live our lives? "The simulation hypothesis currently does not seem to have any radical implications for how one should live," Bostrom said. Still, "it helps to shed light on, among other things, the prospects of our species."
Also, thinking of the universe as a computer may actually be a helpful approach in science. "You can start thinking about what kind of computer it is, what kind of operations can it do, what kinds of problems can it solve," said theoretical computer scientist Scott Aaronson at MIT. "That's an extraordinarily fruitful way of thinking about the universe that has led to the whole field of quantum computers—devices based on the quantum physics that explains how the fundamental building blocks of the universe behave."
We may never know whether we are living in someone else's fantasy; whether we’re the man or the butterfly. But if we do one day develop supercomputers capable of simulating minds and universes, perhaps our creations will be able to answer the question for us.
ONE night in June 2007, I got to watch astronomer Sandra Faber put the 10-meter Keck II telescope through its paces. She was observing galaxies in a region of the sky called the Extended Groth Strip, in the direction of the constellation Ursa Major. We sat in the cozy confines of the telescope control room, far below the telescope’s perch near the 13,796-foot-high summit of the Mauna Kea volcano in Hawaii.
Around midnight, Faber wrapped up her observations and we stepped out for a few minutes under the night sky. “I take comfort in the fact that it is a beautiful universe, and we belong here and that we fit,” Faber mused. “This is our home.”
Faber, a professor at the University of California, Santa Cruz, was referring to the idea that there is something uncannily perfect about our universe. The laws of physics and the values of physical constants seem, as Goldilocks said, “just right.” If even one of a host of physical properties of the universe had been different, stars, planets, and galaxies would never have formed. Life would have been all but impossible.
Take, for instance, the neutron. It is 1.00137841870 times heavier than the proton, which is what allows it to decay into a proton, electron and neutrino—a process that determined the relative abundances of hydrogen and helium after the big bang and gave us a universe dominated by hydrogen. If the neutron-to-proton mass ratio were even slightly different, we would be living in a very different universe: one, perhaps, with far too much helium, in which stars would have burned out too quickly for life to evolve, or one in which protons decayed into neutrons rather than the other way around, leaving the universe without atoms. So, in fact, we wouldn’t be living here at all—we wouldn’t exist.
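The energetics behind that decay can be checked with standard textbook particle masses (these values are well established, but the check itself is mine, not part of the article): the neutron can decay into a proton plus an electron only because its rest energy exceeds theirs combined.

```python
# Check that free-neutron decay (n -> p + e + antineutrino) is energetically
# allowed: the neutron's rest energy must exceed proton + electron.
# Masses in MeV/c^2 (standard textbook values).
M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511

ratio = M_NEUTRON / M_PROTON
energy_released = M_NEUTRON - (M_PROTON + M_ELECTRON)   # carried off by decay products

print(f"Neutron/proton mass ratio: {ratio:.5f}")          # ~1.00138
print(f"Energy released in decay: {energy_released:.3f} MeV")
```

The margin is less than a tenth of a percent of the neutron's mass, which is exactly the knife-edge the fine-tuning argument turns on: shift the ratio slightly and the decay either stops or runs the other way.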
Examples of such “fine-tuning” abound. Tweak the charge on an electron, for instance, or change the strength of the gravitational force or the strong nuclear force just a smidgen, and the universe would look very different, and likely be lifeless. The challenge for physicists is explaining why such physical parameters are what they are.
This challenge became even tougher in the late 1990s when astronomers discovered dark energy, the little-understood energy thought to be driving the accelerating expansion of our universe. All attempts to use known laws of physics to calculate the expected value of this energy lead to answers that are 10¹²⁰ times too high, causing some to label it the worst prediction in physics.
“The great mystery is not why there is dark energy. The great mystery is why there is so little of it,” said Leonard Susskind of Stanford University, at a 2007 meeting of the American Association for the Advancement of Science. “The fact that we are just on the knife edge of existence, [that] if dark energy were very much bigger we wouldn’t be here, that’s the mystery.” Even a slightly larger value of dark energy would have caused spacetime to expand so fast that galaxies wouldn’t have formed.
That night in Hawaii, Faber declared that there were only two possible explanations for fine-tuning. “One is that there is a God and that God made it that way,” she said. But for Faber, an atheist, divine intervention is not the answer.
“The only other approach that makes any sense is to argue that there really is an infinite, or a very big, ensemble of universes out there and we are in one,” she said.
This ensemble would be the multiverse. In a multiverse, the laws of physics and the values of physical parameters like dark energy would be different in each universe, each the outcome of some random pull on the cosmic slot machine. We just happened to luck into a universe that is conducive to life. After all, if our corner of the multiverse were hostile to life, Faber and I wouldn’t be around to ponder these questions under the stars.
This “anthropic principle” infuriates many physicists, for it implies that we cannot really explain our universe from first principles. “It’s an argument that sometimes I find distasteful, from a personal perspective,” says Lawrence Krauss of Arizona State University in Tempe, Arizona, author of A Universe From Nothing. “I’d like to be able to understand why the universe is the way it is, without resorting to this randomness.”
And he’s not the only one who feels this way. Nobel laureate Steven Weinberg of the University of Texas at Austin once told me, “I would, and most physicists would, prefer not to have to rely on anything like the anthropic principle, but actually to be able to calculate things.”
Nonetheless, there is growing and grudging acceptance of the multiverse, especially because it is predicted by a theory that was developed to solve one of the most frustrating of fine-tuning problems of all—the flatness of our universe.
Spacetime today is flat, not curved—meaning that two rays of light that start out parallel stay parallel, neither converging nor diverging. This has been confirmed to exquisite precision by measurements of the cosmic microwave background, the radiation left over from the big bang. That means that a cosmological parameter called Omega, which dials in the curvature of spacetime, is very close to one. But for today’s universe to have an Omega anywhere near one, its value just one second after the big bang had to equal one to a precision of about fourteen decimal places. This smacked of fine-tuning.
But in 1979, the physicist Alan Guth, now of MIT, discovered a way to get that value of Omega without fine-tuning. Guth showed that in the instants after the big bang, the universe would have undergone a period of exponential expansion. This sudden expansion, which Guth called “inflation,” would have rendered our observable universe flat regardless of the value of Omega before inflation began.
Imagine starting with a small balloon whose surface is curved and blowing it up some forty orders of magnitude. Any small piece of the balloon’s surface will now look flat. In the inflationary view, that’s what happened to our universe—our local patch of spacetime looks flat regardless of the curvature of spacetime before inflation began.
Some physicists believe that inflation continues today in distant pockets of spacetime, generating one new universe after another, each with different physical properties. Inflation, therefore, walks both sides of the fine-tuning line: It lends credence to the anthropic principle by predicting a multiverse, but it also reminds us that parameters we once thought were fine-tuned, like Omega, can be explained by a more fundamental theory. “The history of physics has had that a lot,” says Krauss. “Certain quantities have seemed inexplicable and fine-tuned, and once we understand them, they don’t seem so fine-tuned. We have to have some historical perspective.”
We’ll gain such perspective only after we have a fundamental theory of everything—or perhaps when we detect signs of other universes. The urge to understand our universe from first principles and not ascribe it to some divine force compels us to seek scientific explanations for what seems to be an incredible stroke of luck.
Editor's picks for further reading
FQXi: The Patchwork Multiverse
Raphael Bousso examines links between string theory, dark energy, and the multiverse.
FQXi: Testing the Multiverse
Hiranya Peiris looks for evidence of other universes in the cosmic microwave background radiation.
Skeptical Inquirer: Anthropic Design: Does the Cosmos Show Evidence of Purpose?
Victor Stenger provides a critical analysis of the "so-called anthropic coincidences."
TED: Why Is Our Universe Fine-Tuned for Life?
In this video, Brian Greene asks why our universe appears so exquisitely tuned for life.
There is a moment in a dream when you realize that things don’t add up: You know that geese don’t speak English, and yet there you are chatting away with one about the price of gasoline. You’re sure that you never went to flight school, so why are you piloting this Cessna over Dubuque?
Then you wake up.
But sometimes, you don’t. Sometimes, as you confront two seemingly unassailable, clashing truths, you realize that you’re not dreaming at all—you’ve just encountered a paradox, and it’s a wake-up of an entirely different kind.
For centuries, paradoxes have helped physicists and philosophers challenge and deepen their understanding of how our world works. Paradoxes reveal assumptions and prejudices we never knew we had and open hidden hatches into new physics.
“There’s no getting around it: The universe is really strange, and paradoxes hit you with that,” says Anthony Aguirre, a physicist at the University of California, Santa Cruz.
For Aguirre, that’s a good thing: “That feeling of mystery is really what’s exciting in physics. You know there is something fun, interesting, and potentially important to be gained by going down that road.”
“Sometimes I consider that my knowledge is broken up into tectonic plates of understanding on the Earth of my total knowledge—a small part of the total universe of possible understanding,” says physicist Robert Nemiroff, who gives a special lecture on paradoxes to his students at Michigan Technological University. “Sometimes, I learn something that demands that two plates collide—both plates cannot be used to understand this new thought. This new thought can frequently be coined as a paradox. If resolved, these plates can lock into a larger plate of greater understanding, if I am lucky.”
“Paradoxes heighten what’s at stake conceptually,” says MIT science historian David Kaiser, who adds that physicists like Niels Bohr, John Wheeler, and Albert Einstein all deployed paradoxes strategically to underline mathematical contradictions that others deemed inconsequential. “Paradoxes are one way of grappling with what the equations really say.”
Here are the stories of three paradoxes from far-flung times and places in the history of physics and math. Though they have all been resolved, they remind us just how weird the universe really is. It is a dream from which we will never wake up. But who would want to?
How is it that anyone or anything ever moves from point A to point B? This simple question is the crux of a paradox first posed by the ancient Greek philosopher Zeno, and it has made generations of math students question the nature of reality every time they walk across a room.
Here’s one rendition of the paradox: Say you want to walk down the hallway from your bedroom to your bathroom. First, you have to cover half the distance between the rooms. Next, you’ll need to cross half the remaining distance. As you continue down the hall, you will always have half the previous distance left to cross. Though you will move ever closer to the threshold of the bathroom door, you will never actually reach it.
Obviously, we don’t spend our entire lives stranded in hallways. Why not? The answer is at the heart of calculus: It turns out that infinitely long sequences of numbers can actually have finite sums. This means that even though we must cross an infinite number of progressively smaller “chunks” of space on our way to the bathroom, the time it takes to do so is finite. That’s why we eventually get there.
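A few lines of code make the point concrete: summing Zeno's halved distances never exceeds the length of the hallway, and the partial sums close in on it.

```python
# Zeno's halving distances form a geometric series: 1/2 + 1/4 + 1/8 + ...
# After any finite number of terms the sum is 1 - (1/2)^n, which approaches
# 1 (the full hallway) as n grows. At a constant walking speed, the time
# spent on each chunk shrinks in the same proportion, so the total time
# taken is finite as well.
partial_sum = 0.0
for n in range(1, 51):
    partial_sum += 0.5 ** n

print(partial_sum)   # within 10^-15 of 1 after just 50 terms
```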
Yet Zeno’s paradox also reflects one of the biggest questions in physics today: Is space—or spacetime—continuous, or is it broken up into discrete chunks? In Zeno’s world, space was continuous: It could be subdivided into smaller pieces on and on into infinity. Yet we know this isn’t how matter works. If you split a cookie in half over and over again (as many guilty sweet-tooths have no doubt tried at home) you will eventually be left with the indivisible components (electrons and quarks) of one atom. Eat them or don’t, but you cannot divide them in half.
This is also the moral of the story of quantum mechanics: The energy contained in all the particles that make up the universe is quantized. Why should spacetime be any different? In fact, some of the leading theories of quantum gravity predict that, on the tiniest scales, spacetime should break down into discrete chunks. Like a pointillist painting, spacetime may look perfectly smooth from afar, but up close it dissolves into pixels. There are currently experiments in the works to test this prediction.
Why is the sky dark at night? Before you say that it’s because the sun has set, consider that it took the greatest minds in science more than three hundred years to resolve this paradox. Everyone from Einstein to Edgar Allan Poe was swept up by this apparently simple puzzle.
The history of Olbers’ paradox goes back at least as far as Johannes Kepler, who posed it in 1610. If the universe is infinite, argued Kepler, containing an infinite number of stars distributed evenly across the sky, then every point on the night sky should be illuminated by starlight. The brightness of any individual star, as seen from Earth, fades in proportion to the square of its distance from Earth, but the number of stars at a given distance from Earth increases in proportion to the square of the distance from Earth, so it is a wash. The night sky, therefore, should be just as bright as the daylight sky. To Kepler, this meant that the universe must not be infinite after all.
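Kepler's shell-by-shell bookkeeping is easy to reproduce. In the sketch below (star density and luminosity are in arbitrary units of my choosing), the r² growth in the number of stars exactly cancels the 1/r² dimming, so every shell delivers the same flux to Earth—and infinitely many shells would add up to a blazing sky:

```python
import math

# Olbers'/Kepler accounting: a thin shell of stars at radius r contains
# ~r^2 stars, while each star's apparent brightness falls as 1/r^2.
# The two factors cancel, so each shell contributes equal light.
STAR_DENSITY = 1.0   # stars per unit volume (arbitrary units)
LUMINOSITY = 1.0     # per-star luminosity (arbitrary units)

def shell_flux(r, thickness=1.0):
    n_stars = STAR_DENSITY * 4 * math.pi * r**2 * thickness
    flux_per_star = LUMINOSITY / (4 * math.pi * r**2)
    return n_stars * flux_per_star

for r in (10, 100, 1000):
    print(r, shell_flux(r))   # same flux from every shell, near and far
```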
In 1823, Heinrich Olbers drew up a different solution to the paradox that now bears his name. Olbers argued that as the light from each star makes its way toward Earth, it runs into interstellar dust and gas that absorb some of its energy. Stars that are sufficiently far away from Earth would therefore be “cut off” from us.
The problem with Olbers’ logic is that dust and gas must spit back out the energy that they have absorbed, leaving us with the same problem we started with.
This is where Poe enters the picture. In his prose poem “Eureka,” published in 1848, he inserted time into the equation: What if light from distant stars just hasn’t had enough time to reach us yet? Poe wrote:
...yet so far removed from us are some of the "nebulae" that even light, speeding with this velocity, could not and does not reach us, from those mysterious regions, in less than 3 millions of years.
He may have had the details wrong, but the idea was right: Because stars have not been shining forever, and because it takes time for light to travel from here to there, there is a certain horizon from beyond which light has not yet reached us. Today, we know that the universe had a beginning (the Big Bang) and that the universe is expanding, causing light from distant stars to get stretched out (“redshifted”) beyond the visible part of the spectrum and into the infrared and radio bands, compounding the dark-sky effect.
“Olbers’ paradox is based on such a mind-numbingly simple observation,” says Aguirre. “What impresses me is the sheer amount of time that went by as people came up with one complicated and wrong solution after the next.”
The Twin Paradox
When Einstein proposed that time and space were elastic, it was weird enough. But the twin paradox challenges our understanding of Einstein’s rules even further. Imagine two twins, one a space traveler and the other an avowed homebody. The traveler sets out on a mission to a distant planet in a newfangled rocket that zips along at close to the speed of light. It’s a round-trip journey, so when she gets back, she is eager for a reunion with her twin. She wants to share the amazing stories of her voyage, of course, but she’s also looking forward to gloating about one favorable side effect of life in the cosmic fast lane: Because time passes more slowly for objects traveling close to the speed of light, the traveling twin anticipates that though she has hardly aged a bit, her stay-at-home sister will be sporting many years’ worth of new wrinkles. (Both twins are a bit vain.)
The stay-at-home sister is just as excited to see her twin. She knows her special relativity, too, and reasons that from her point of view, she was traveling close to light speed aboard spaceship Earth, while her sister sat complacently aboard her stationary vessel. Therefore, she will be the young-looking one, and will have fun counting up all her sister’s gray hairs.
And here is the paradox: Einstein tells us that no observer is “more correct” than any other, but the sisters can’t both be younger than each other. Still, Einstein wasn’t wrong. The solution to the paradox is that the sisters’ journeys were not actually identical. The traveling twin did not keep up a constant velocity throughout her entire trip. She accelerated to get up to speed, and then she changed directions—another kind of acceleration—before decelerating to get back into orbit around Earth. So the traveler, not her stay-at-home sister, gets the anti-aging benefits of time dilation.
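The asymmetry shows up in the arithmetic of time dilation. Here is a minimal sketch, ignoring the acceleration phases and using illustrative numbers of my own (90 percent of light speed, a 20-year round trip as measured on Earth):

```python
import math

# Time dilation for the traveling twin, treating the trip as constant-speed
# cruising both ways (the acceleration phases are what break the symmetry,
# but the aging difference itself follows from the Lorentz factor).
v_fraction = 0.9       # speed as a fraction of the speed of light (assumed)
earth_years = 20.0     # round-trip duration measured on Earth (assumed)

gamma = 1 / math.sqrt(1 - v_fraction**2)   # Lorentz factor
traveler_years = earth_years / gamma       # proper time aboard the rocket

print(f"Lorentz factor: {gamma:.2f}")      # ~2.29
print(f"Traveler ages {traveler_years:.1f} years; "
      f"Earth twin ages {earth_years:.0f}")
```

At 90 percent of light speed the traveler comes home having aged under nine years while her sister has aged twenty, and the gap grows sharply as the speed approaches that of light.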
Paradox solved? In one sense, yes. But in another sense, says Aguirre, the twin paradox reveals a deeper conundrum within the laws of relativity. The crux of the resolution, he says, is that even while velocity is relative, acceleration is absolute. “Where does this absolute non-accelerated reference frame come from? Einstein tried to do away with it in general relativity, but even a century later the question is largely open.”
Will today’s paradoxes be tomorrow’s truisms? Paradoxes arise when equations clash with our intuition about reality, says Kaiser. But intuition can change. “Newtonian mechanics did not look or sound ‘intuitive’ in the 17th century,” Kaiser points out. Today, we take it for granted that Newton’s conceptions of speed, gravity, and mass are in harmony with human intuition, but perhaps intuition itself has been reshaped by Newton’s view of the world. Will the apparent paradoxes of quantum theory and relativity one day feel just as natural as Newton’s laws? Is it possible to “grow out” of a paradox?
“Even once you know the golden thread that unravels the seeming contradiction, paradoxes are still appealing,” says Kaiser, who thinks of them as a kind of mental bodybuilding. “Paradoxes are like a much more satisfying version of Sudoku or a crossword puzzle.” The best part: “In the process we might really learn something about how the universe works.”
Editor's picks for further reading
arXiv: Paradox in Physics, the Consistency of Inconsistency
In this article, Dragoljub A. Cucic classifies paradoxes in physics and reviews their utility.
Edge: The Paradox
In this essay, Anthony Aguirre argues that a better understanding of paradoxes would improve our cognitive toolkit.
FQXi: Black Holes: Paradox Regained
Stephen Hawking conceded his bet on the black hole information paradox, but the debate continues.
Why are qubits like cats?
They’re strange and contradictory—and they don’t like to be herded.
Qubits are the essential elements of quantum computers. Conventional computers symbolize data as a series of ones and zeroes—binary digits known as bits. Quantum computers use quantum bits, or “qubits,” that don’t just toggle on or off like the transistors in conventional computers, but can be both on and off simultaneously. In principle, quantum computers can vastly outperform traditional computers. But actually building such a machine is a challenge akin to—you guessed it—herding cats.
To understand how quantum computers work, you can start by opening your eyes. If you see something, you pretty much know it's there, right? However, if you can't see something—if it's hidden in a box, for instance, or it's too small to see—you might very well imagine that it might be anywhere, or nowhere. This realm of uncertainty is more than just a fanciful idea; it is the backbone of quantum physics, and it is exactly the odd life that the elementary building blocks of the universe live if you aren't looking. Atoms and subatomic particles live in states of flux known as "superpositions" where they can, for instance, exist in two or more places at once, or spin in two opposite ways simultaneously. However, once a particle gets disturbed by its surroundings, its superposition "collapses" so that it is in just one of the many possible states represented by the superposition.
Quantum computers are based on objects in superposition—those qubits that can read both “zero” and “one” simultaneously. The more qubits you "entangle," or link together so that they operate in perfect unison, the more potential on/off combinations your quantum computer can run at the same time. A quantum computer with just 300 such qubits could run more calculations in an instant than there are atoms in the universe.
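That "300 qubits" comparison is easy to sanity-check: n entangled qubits span 2^n basis states, and 2^300 comfortably exceeds the roughly 10^80 atoms usually estimated for the observable universe. A quick sketch (the atom count is a standard order-of-magnitude estimate, not a figure from this article):

```python
# Sanity check of the "300 qubits" claim: n entangled qubits span 2**n
# basis states; compare with the commonly cited ~10**80 atoms in the
# observable universe (a rough order-of-magnitude estimate).
n_qubits = 300
n_states = 2 ** n_qubits          # exact integer arithmetic in Python
atoms_in_universe = 10 ** 80      # standard rough estimate

print(f"2^300 is about 10^{len(str(n_states)) - 1}")  # about 10^90
print(n_states > atoms_in_universe)                   # True
```

Even this modest qubit count outstrips any conceivable classical tally by ten orders of magnitude.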
Superpositions are extraordinarily fragile, though. The easiest chunks of matter to coax into superpositions are usually very small, because their activity is easier to control, or very cold, because their low energy makes it unlikely they will interact with their environment—for instance, super-cooled rubidium or ytterbium atoms.
So far quantum computers are only capable of fairly rudimentary behavior, such as figuring out what numbers multiply together to get 15. But these are just toy versions of the powerhouses that quantum computers could one day become. Today, militaries, intelligence agencies, corporations and universities worldwide are competing to develop quantum computers that live up to that promise.
Conventional computers symbolize data as a series of ones and zeroes, binary digits known as bits. This code is conveyed via transistors, which are electronic switches that are flicked either on or off to represent a one or a zero, and is the basis for all the calculations associated with traditional computers. In contrast, quantum computers are based on objects in superpositions—quantum bits or "qubits" that aren't just on or off, but can be both on and off simultaneously. The more qubits are "entangled," or linked together so that they operate in perfect unison, the more potential on-off combinations they can run at the same time. A quantum computer with just 300 such qubits could run more calculations in an instant than there are atoms in the universe.
"A quantum computer will allow humankind to perform tasks that are far beyond what the best classical supercomputer can do," said physicist Matteo Mariantoni at the University of California at Santa Barbara.
Certain tasks thought impossible for regular computers could be accomplished quickly by quantum computers. For instance, a quantum computer could easily factor a number hundreds of digits long. This is a math problem that’s too difficult for even the best computers today, which is why online encryption of credit card numbers and passwords depends on it. The National Security Agency (NSA) and others in the intelligence community are therefore racing to build a quantum computer that’s up to the task before anyone else does. "My bet is the first quantum computer will appear in a lab related to the NSA," said Raymond Laflamme, executive director of the Institute for Quantum Computing at the University of Waterloo in Canada.
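To feel the asymmetry that encryption relies on, here is a toy classical factorizer using trial division (a deliberately naive sketch, nothing like Shor's quantum algorithm): its cost grows with the square root of the number, which is exponential in the digit count, so hundred-digit numbers are hopelessly out of reach for this approach.

```python
def trial_division(n):
    """Return the prime factors of n by brute-force trial division.

    Takes O(sqrt(n)) steps, which is exponential in the number of
    digits of n -- a hint at why factoring hundred-digit numbers is
    infeasible for classical machines.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))    # [3, 5] -- the early quantum computers' benchmark
print(trial_division(2021))  # [43, 47]
```

Each extra digit multiplies the work by roughly a factor of three; a quantum factoring algorithm would sidestep that exponential wall entirely.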
Quantum computers could be good for more than just hacking. Since they are quantum systems, they can be used to simulate other quantum systems, helping scientists investigate how complex molecules behave. Such work "could revolutionize the pharmaceutical industry," Mariantoni said. Quantum simulations could also help "solve mysteries of physics," such as superconductivity, the phenomenon where electrons zip without resistance through objects, said Markus Greiner, a physicist at Harvard University. In the process, such research could also help develop novel materials with fantastic, unforeseen new properties, he added.
Research teams across the world are pursuing a wide variety of different methods to create quantum computers. Qubits are being made from electrically charged atoms held in place by electrical fields, from photons of light, and from superconducting circuits, among many other architectures. An intriguing development in quantum computing is qubits that don't need super-cold temperatures, but rather can exist at room temperature. For instance, impurities within diamond can stay in superposition because their placement within such a pure crystal insulates them from outside disturbances.
Although scientists have created basic working quantum computers with a few qubits for nearly 20 years now, more advanced versions with hundreds of qubits that can outperform classical computers will likely take decades to materialize. Superpositions are delicate and easily broken, and the problem of keeping them isolated gets harder with each additional qubit. Moreover, it's not certain which architecture is optimal. "It's not even clear there will be a winner. Maybe we'll see hybrid devices that take advantage of several architectures, or a different approach altogether," said Jeremy O'Brien, director of the Center for Quantum Photonics at the University of Bristol in England. "However, personally, I'm very confident that an all-optical approach with single photons is the leading one."
Although it might seem as if development of quantum computers is slow, "Charles Babbage conceptualized and designed the first programmable computer in the 1830s," said physicist David Awschalom. "Nevertheless, it took until the second half of the 20th century for a recognizable electronic computer to arrive."
Still, benefits from spinoff technologies should appear long before a quantum computer more powerful than a conventional supercomputer does. For instance, research into quantum computers is now pioneering ways to resist hacking. When qubits are entangled, they stay in sync instantaneously as if they were one, even if they are at separate ends of the universe, a seemingly impossible connection Einstein dubbed "spooky action at a distance." If anyone tried to eavesdrop on communications involving qubits, the disturbance would immediately be obvious. Such research is now helping develop a new kind of extraordinarily secure cryptography. "Those applications may outpace the development of a quantum computer that will break current cryptographic schemes, which is a good thing for information security," Awschalom said.
The first quantum computers will probably live in labs or server farms. However, according to experimental physicist Ian Walmsley at the University of Oxford in England, "As they get easier to build and designs get more sophisticated, I think we'll see them in the office—maybe in the home."
If you find it hard to imagine a quantum computer sitting on your desk, Laflamme suggests looking to history: “If you went back to the 1950s and asked if you really needed a computer in the house, I think you'd get a similar answer."
Will we ever realize the sci-fi dream of human teleportation? Physicists have already successfully teleported tiny objects. (See Beam Me Up, Schrödinger for more on the mechanics of quantum teleportation.) What will it take to extend the technique to a living, breathing human being?
Quantum teleportation is possible because of two quantum phenomena that are utterly foreign to our everyday experience: entanglement and superposition. Entanglement is the connection that links the quantum states of two particles, even when they are separated: The two particles can be described only by their joint properties.
Though there is no classical analogue for entanglement, in his book Dance of the Photons, physicist Anton Zeilinger imagined how entanglement might work if it could be applied to a pair of ordinary dice instead of a pair of subatomic particles: “The science fiction Quantum Entanglement Generator produces pairs of entangled dice. These dice do not show any number before they are observed." In other words, they are in a superposition of states where there is an equal chance of producing any number between one and six. "When one die is observed, it randomly chooses to show a number of dots. Then, the other distant die instantly shows the same number.”
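Zeilinger's dice picture can be mimicked in a few lines of code with shared randomness (a hypothetical sketch; real entanglement cannot be reproduced by any hidden shared value, as Bell's theorem shows, so this captures only the perfect correlation of the analogy, not the quantum mechanics):

```python
import random

def entangled_dice_pair(rng=random):
    """Classical sketch of Zeilinger's 'entangled dice': neither die has
    a value until 'observed', and both then show the same random face.

    Caveat: real entanglement is NOT a shared hidden value (Bell's
    theorem rules that out); this mimics only the perfect correlation.
    """
    outcome = rng.randint(1, 6)   # decided only at observation time
    return outcome, outcome       # both dice show the same face

for _ in range(5):
    a, b = entangled_dice_pair()
    print(a, b)                   # always a matching pair
```

However far apart you imagine the two dice to be, the correlation in this toy model is always perfect, just as in the quote above.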
This works no matter how far apart the dice are. They can be sitting beside each other or on opposite ends of the universe. In either case, when particle A over here is measured to be in one of many possible states, we can infer the state of particle B over there, even though no energy, no mass, and no information travels between A and B when the first one is observed. The state of particle B simply is what it is. The difficult concept is that B’s state corresponds with the state of the measured particle A.
Entanglement is so confounding that in the early days of quantum theory, when entanglement was supported only by thought experiments and math on paper, Einstein famously derided it as “spooky action at a distance.” Today, though, entanglement has been thoroughly tested and verified. In fact, entangling particles isn’t even the hard part: For physicists, the most difficult task is maintaining the entanglement. An unexpected particle from the surrounding environment—something as insubstantial as a photon—can jostle one of the entangled particles, changing its quantum state. These interactions must be carefully controlled or else this fragile connection will be broken.
If entanglement is one gear in the quantum machinery of teleportation, the second critical gear is superposition. Remember the thought experiment about Schrödinger’s cat? A cat, a flask of poison, and a radioactive source are all placed in a sealed box. If the source decays and emits a particle, then the flask breaks and the cat dies. While the box is closed, we can’t know whether the cat is living or dead. Moreover, the cat can be considered both alive and dead until the box is opened: The cat will stay in a superposition of the two states until we look in the box and observe that the cat is either alive or dead.
Schrödinger never tried this on a real cat—in fact, he drew up the thought experiment just to demonstrate the apparently preposterous implications of quantum theory—but today scientists have demonstrated that superposition is real using systems that are increasingly large (albeit still much smaller than a cat). In 2010, a group of researchers at the University of California, Santa Barbara demonstrated superposition in a tiny mechanical resonator—like a tuning fork, it vibrates at a characteristic frequency, but just like the cat it doesn't exist in a single position until measured. Last year, another group of researchers demonstrated quantum superposition in molecules composed of as many as 430 atoms.
Before superposition and entanglement appear in a human-scale teleporter, if ever, they will be harnessed for multiple applications in computing. Quantum cryptography uses entanglement to encode messages and detect eavesdropping. Because observation perturbs entanglement, eavesdropping destroys information carried by entangled particles. And if two people each receive entangled particles, they can generate an entirely secure key. Quantum cryptography is an active area of research and some systems are already on the market.
Quantum mechanical superposition and entanglement could also be exploited to make faster and more powerful computers that store information in quantum states, known as “qubits,” instead of traditional electronic bits. Quantum computers could solve problems that are intractable for today’s computers. Whether it’s possible to make a working quantum computer is still in question, but roughly two dozen research groups around the world are avidly investigating methods and architectures.
So we know how to teleport one particle. But what if we want to make like Captain Kirk and teleport an entire human being?
Remember that we wouldn't be moving Kirk's molecules from one place to another. He would interact with a suite of previously-entangled particles, and when we read the quantum state we would destroy the complex quantum information that makes his molecules into him while instantly providing the information required to recreate his quantum state from other atoms in a distant location.
Quantum mechanics doesn’t forbid it. The rules of quantum mechanics still apply whether you’re talking about a system of two particles or a human being made of 10²⁷ atoms. “The size doesn’t matter in and of itself,” says Andrew Cleland, a physicist at the University of California, Santa Barbara. Macroscopic systems like superconductors and Bose-Einstein condensates show quantum effects even though they can be arbitrarily large.
From an engineering standpoint, though, teleporting larger objects becomes an increasingly tough problem. Cleland comments, “Taking any object and putting it in a quantum state is hard. Two is multiply hard.” Maintaining entanglement between particles requires isolating them from interactions that would break their entanglement. We don’t want Captain Kirk to end up like The Fly, so we need to keep the particles absolutely isolated.
What if we start with something simpler: Instead of teleporting a person, can we teleport a much smaller living thing—like a virus?
In 2009, Oriol Romero-Isart of the Max-Planck-Institut für Quantenoptik in Germany and his colleagues proposed just such an experiment. Using current technology, it should be possible to demonstrate superposition in a virus, they argued. They didn’t try it, but laid out a procedure: First, store the virus in a vacuum to reduce interactions with the environment, and then cool it to its quantum ground state before pumping it with enough laser light to create a superposition of two different energy states.
This is possible in theory because some viruses can survive cold and vacuum. But humans are hot, and that thermal energy is a problem. “We have quadrillions of quantum states superimposed at the same time, dynamically changing,” says Cleland. Not only are we hot, but we interact strongly with our environment: We touch the ground, we breathe. Ironically, our need to interact with our environment, our sheer physicality, could come between us and the dream of human teleportation.
If the journey were really more important than the destination, then we wouldn’t dream about teleportation. Going from here to there, instantly, is a staple of science fiction and comic books. It is wish-fulfillment for anyone who has had a car break down in the middle of nowhere or waited in a crowded airport the day before Thanksgiving. Even "Star Trek"’s Captain Kirk, who had a starship and planetary shuttle at his disposal, bypassed trips by jumping into the ship’s teleporter.
The good news for would-be Captain Kirks is that teleportation is real. By exploiting quantum mechanical effects that are based on rigorous mathematics and bolstered by decades of laboratory experiments, physicists have demonstrated that teleportation works. The bad news: So far, it only works on very tiny objects. Teleporting an entire human being is still a long way off. Engineering complexities may stop humans from ever being teleported.
Why can’t we teleport? After all, we can scan the surface of an object, transmit the data at the speed of the Internet, and recreate it as many times as we like using a 3D printer. Shouldn’t we be able to just improve the resolution of the scanners and printers (and the fidelity of the transmission) until we can print a living, breathing person from a data file?
Most of us think in terms of classical mechanics, the rules of reality that make sense at size scales humans can touch. And if reality were based on classical mechanics, then the print-me-to-Paris approach might work. But these familiar rules turn out to be just a special case of the much stranger principles of quantum mechanics. In the classical world, it is possible to have complete information about every atom that makes up your body, but quantum physics imposes restrictions on how much we can know. In the quantum world, there is a limit to the resolution with which we can measure an object, and measuring an object changes it. If we want to teleport, we have to play by the rules of quantum mechanics.
Teleportation has been demonstrated with tiny particles. The process is less glamorous than what you’ve seen in the Enterprise’s transporter room, though.
Here’s how it works. Imagine three particles, X, Y, and Z. We want to teleport particle Z from here to there. Each particle is in a particular quantum state, though we don’t know exactly what state—to measure it would be to destroy the information. So we have to take a gentler approach. We’ll start by putting particles X and Y into a bizarre sort of quantum co-dependency called “entanglement”—more on that in the next blog post. For now, just keep in mind that no matter how far they may travel from each other, particle X and particle Y retain an intimate connection that links their (still unknown) quantum states.
So, when researchers send particle Y someplace over there, it is still coupled to particle X. The next step in the experiment is to let particles Z and X interact. That gives the physicists a way to compare their quantum states. This measurement inevitably, unavoidably, changes the states of the particles: the original state of particle Z is destroyed. But the physicists can use the information revealed by that comparison to transform the distant particle Y into a perfect copy of particle Z. Particle Z has teleported from here to there.
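For a single qubit, the whole protocol can be simulated on a classical computer. The sketch below (a NumPy state-vector simulation; the qubit labels and the random test state are illustrative choices, not details from the experiments described here) entangles X and Y, performs the joint measurement on Z and X, applies the classical corrections to Y, and checks that Y ends up in Z's original state:

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2, dtype=complex)
H  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Z  = np.array([[1, 0], [0, -1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],   # control = left (more significant) qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def kron(*ms):
    out = np.array([[1.0 + 0j]])
    for m in ms:
        out = np.kron(out, m)
    return out

def teleport(psi, rng):
    """Teleport the 1-qubit state psi from qubit 0 (Z) onto qubit 2 (Y).

    Qubit order, most significant first: 0 = Z, 1 = X, 2 = Y.
    """
    state = np.kron(np.kron(psi, [1, 0]), [1, 0]).astype(complex)  # |psi>|0>|0>
    # Entangle X and Y into a Bell pair
    state = kron(I2, H, I2) @ state
    state = np.kron(I2, CNOT) @ state
    # Bell-basis measurement on Z and X: CNOT(0->1), H on qubit 0, measure
    state = np.kron(CNOT, I2) @ state
    state = kron(H, I2, I2) @ state
    amps = state.reshape(2, 2, 2)
    p_ab = (np.abs(amps) ** 2).sum(axis=2).flatten()  # P(m0, m1), 4 outcomes
    outcome = rng.choice(4, p=p_ab)
    m0, m1 = divmod(outcome, 2)
    target = amps[m0, m1, :] / np.sqrt(p_ab[outcome])  # collapsed state of Y
    # Classical corrections on Y, as in the 1993 protocol
    if m1:
        target = X @ target
    if m0:
        target = Z @ target
    return target

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)       # a random state to teleport
out = teleport(psi, rng)
fidelity = abs(np.vdot(psi, out)) ** 2
print(round(fidelity, 6))        # 1.0 -- Y ends up in Z's original state
```

Note that the simulation, like the real protocol, destroys the state of Z during the measurement: only one copy of the quantum information survives.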
But wait, you say! Particle Z and particle Y are different particles! You haven’t really transported Z—you’ve just turned Y into a facsimile. But according to the rules of quantum physics, a particle’s quantum state defines its identity. Two particles in the same quantum state are indistinguishable. They are, for all intents and purposes, the same particle.
This type of quantum teleportation was first proposed by an international group of physicists in 1993. A particle could be teleported, they concluded, as long as the original is destroyed. (This, presumably, is why there is only one Captain Kirk.) In 1997, researchers in Anton Zeilinger’s group in Innsbruck demonstrated quantum teleportation using photons.
Since then, researchers have increased both the teleportation distance and the size of the particles teleported. Zeilinger’s group demonstrated quantum teleportation using photons in free space across 144 km in the Canary Islands.
In 2004, Rainer Blatt’s group at the University of Innsbruck reported teleporting trapped calcium ions a distance of 5 microns, and in 2009, researchers at the universities of Maryland and Michigan announced that they had successfully teleported ytterbium ions a full meter.
Next week we consider entanglement, superposition, and the technical challenges necessary before scientists can teleport living creatures, starting with a lowly virus and maybe someday culminating in a Star Fleet captain.
Conventional wisdom has it that putting the words “quantum gravity” and “experiment” in the same sentence is like bringing matter into contact with antimatter. All you get is a big explosion; the two just don’t go together. The distinctively quantum features of gravity only show up in extreme settings such as the belly of a black hole or the nascent universe, over distances too small and energies too large to reproduce in any laboratory. Even alien civilizations that command the energy resources of a whole galaxy probably couldn’t do it.
Physicists have never been much for conventional wisdom, though, and the dream of studying quantum gravity is too enthralling to give up. Right now, physicists don’t really know how gravity works—they have quantum theories for every force of nature except this one. And as Einstein showed, gravity is not just any old force, but a reflection of the structure of spacetime on which all else depends. In a quantum theory of gravity, all the principles that govern nature will come together. If physicists can observe some distinctively quantum feature of gravity, they will have glimpsed the underlying unity of the natural world.
Even if they can’t crank up their particle accelerators to the requisite energies, that hasn’t stopped them from devising indirect experiments—ones that don’t try to swallow the whole problem in one gulp, but nibble at it. My award-winning colleague Michael Moyer describes one in Scientific American's February cover story, and lots of others are burbling, too. Rather than matter and antimatter, “quantum gravity” and “experiment” are more like peanut butter and chocolate. They actually go together quite tastily.
An example came out at the American Astronomical Society meeting in Austin earlier this month. Robert Nemiroff of Michigan Technological University presented his team’s study of extremely high-energy, short-wavelength cosmic gamma rays. The idea, which goes back to the late 1990s, is that short-wavelength photons may be more sensitive than long-wavelength ones to the microscopic quantum structure of spacetime, just as a car with small tires rattles with road bumps that a monster truck doesn’t even feel. The effect might be slight, but if the photons travel for billions of years, even the minutest slowdown or speed-up can appreciably change their time of arrival. Nemiroff’s team focused on gamma-ray burst GRB 090510A, observed by the Fermi space telescope. It went off about 7 billion years ago, and photons of short and long wavelength arrived at almost the same time—no more than about 1 millisecond apart. Any speed difference was at most one part in 10²⁰, implying that quantum gravity hardly waylaid these photons at all.
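The quoted bound follows from simple arithmetic: divide the observed 1-millisecond arrival spread by roughly 7 billion years of flight time (the seconds-per-year figure below is a standard constant, not from the article):

```python
# Order-of-magnitude check of the GRB 090510A bound: photons traveling
# ~7 billion years arrived within ~1 millisecond of one another.
YEAR_S = 3.156e7                 # seconds per year (approximate)
travel_time = 7e9 * YEAR_S       # ~2.2e17 seconds in flight
arrival_spread = 1e-3            # <= 1 millisecond difference observed

fractional_speed_diff = arrival_spread / travel_time
print(f"{fractional_speed_diff:.1e}")  # ~4.5e-21, i.e. one part in ~10^20
```

A few parts in 10²¹ is an extraordinarily tight constraint for any dispersion that quantum spacetime might impose on light.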
Theoretical physicists have long debated whether quantum gravity would alter photon speed, and most were not surprised by the negative result. But what’s important is the change of mindset. Experimenters and observers care less about what we should see than what we can see. These are people who love to build stuff. If they can build some gizmo that might bring gravity and quantum mechanics into contact, they’ll do it, whatever the theorists might say. They take an “if you build it, something will come” attitude. Historically, physics has been well-served by going out to look at nature with a minimum of prejudice.
The latest brainstorm is to apply techniques from quantum optics and related disciplines, which manipulate photons of light and other particles in order to build encrypted communications links, develop the components of a quantum computer, and study matter at extremely low temperatures. The tool of this trade is an interferometer, an apparatus that probes the wave nature of particles. It consists of a particle source, a particle detector, and two paths to get from one to the other. Being quantum, a particle goes both ways. That is to say, the wave corresponding to the particle splits in two, travels the distance, and fuses back together again. The relative length of the paths (or anything else that differentiates them) determines whether the waves will mutually reinforce or cancel and therefore what the detector will detect.
At first glance, these setups are the last place you’d go to look for quantum gravity. They are decidedly low-energy experiments, usually conducted on lab benches the size of dining-room tables. There is nary a gamma ray or accelerated particle to be found. But Moyer’s cover story describes how an interferometer can serve as an extremely precise ranging instrument. Any change in the paths’ relative length, as you might expect if spacetime is roiled by quantum fluctuations, will register at the detector.
Last spring, a team of physicists in Vienna led by Çaslav Brukner explored another use of interferometers: to see whether quantum particles truly obey gravity as Einstein conceived it. This isn’t quantum gravity, per se—the particles are quantum, but gravity behaves in a strictly classical way. Nonetheless, it is a fascinating case of how the two theories interact. You might think that the gravity on a single particle is way too feeble to measure, but an interferometer can manage it. You set it up so that the two paths are at different heights and therefore experience a different gravitational potential, which registers at the detector.
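The size of such an effect can be estimated with the standard phase formula for two arms held at different heights, Δφ = m·g·Δh·T/ħ, familiar from neutron interferometry. The numbers below (a neutron-mass particle, arms 1 millimeter apart for 1 millisecond) are illustrative assumptions, not parameters from the Vienna proposal:

```python
HBAR = 1.0546e-34        # J*s, reduced Planck constant
M_NEUTRON = 1.675e-27    # kg
g = 9.81                 # m/s^2

def grav_phase(mass, delta_h, t):
    """Phase difference m*g*dh*t/hbar accumulated between two
    interferometer arms held at heights differing by delta_h for
    time t (the classic gravitational-potential phase shift)."""
    return mass * g * delta_h * t / HBAR

phi = grav_phase(M_NEUTRON, 1e-3, 1e-3)  # 1 mm separation for 1 ms
print(round(phi, 1))  # ~156 radians: many full fringes, easily detectable
```

Even for a single subatomic particle, the accumulated phase runs to dozens of fringes, which is why an interferometer can feel gravity that no scale could weigh.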
The Vienna team proposed sending not just any particle through the interferometer, but one that acts like a miniature clock—marking time by rotating or decaying. General relativity predicts that clocks run slower the deeper they get into a gravitational field, which, in this experiment, would act to wash away the wave nature of the particle altogether. The fading-away of the wave properties would be the unmistakable fingerprint of general relativity and a stepping-stone to quantum gravity. Current interferometers lack the necessary precision to look for this effect, but it is just a matter of time. (Sorry, couldn’t resist.) For more, see the authors’ own blog post and their paper in Nature Communications last fall.
It is also possible that quantum gravity could modify Heisenberg’s famous uncertainty principle. As Sabine Hossenfelder at Backreaction described last Wednesday, gravitational effects may set a minimum length that anything in nature could ever have, which means that no matter how much momentum imprecision you’re willing to accept, a position measurement could never be more precise than the minimum length. Experiments like this one could use tiny mirrors and springboards to pick up that effect.
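One common way to write this modification is the so-called generalized uncertainty principle, shown here in a frequently used (and model-dependent) form, with ℓ_P the Planck length and β a dimensionless parameter of order one:

```latex
\Delta x \;\gtrsim\; \frac{\hbar}{2\,\Delta p} \;+\; \beta\,\frac{\ell_P^{2}\,\Delta p}{\hbar}
```

Minimizing the right-hand side over Δp gives a smallest resolvable length, Δx_min = ℓ_P√(2β): beyond a point, accepting more momentum uncertainty no longer buys better position resolution.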
Still another approach suggested by the ever-inventive Viennese is to define quantum gravitational ideas in concrete rather than abstract terms. Theorists think that quantum fluctuations in spacetime might make cause-effect sequences ambiguous, with the practical consequence of changing the types of correlations physicists observe in the lab. But the Viennese suggest thinking about it the other way round: Physicists observe certain types of correlations in the lab and, from these, draw conclusions about spacetime.
Some such correlations—those that muddle cause and effect—would be inexplicable in ordinary physics. When quantum effects enter into play, “spacetime” loses some of the most basic features we associate with it, such as the notion that objects reside in certain places at certain times. In the Viennese scenario, you lose the ability to tell a story: One thing happened, then another, then another. It becomes a Dadaist jumble.
This approach hasn’t lent itself to a specific experiment yet, but is generally inspired by the experimentalist mindset. In this, it follows a trail blazed by Einstein himself, who developed his theories of relativity by thinking of abstract ideas in a concrete way. Even when experimenters can’t build actual experiments, their feet-on-the-ground mentality provides a fresh look at some of the hardest problems in modern science.
This post is adapted and reprinted from Scientific American; find the original here.
Editor's picks for further reading
FQXi: Journeying through the Quantum Froth
Are cosmic rays revealing the quantum nature of spacetime?
FQXi: Table-Top Tests for Quantum Gravity
In this podcast, discover how scientists are probing quantum gravity using quantum optics.
Perimeter Institute: Lectures on the experimental search for quantum gravity
Watch a series of scientific lectures on experimental probes of quantum gravity.
There is a thin line between a bang and a whimper.
For stars, this line is called the Chandrasekhar Limit, and it is the difference between dying in a blaze of glory and going out in a slow fade to black. For our universe, this line means much more: Only by exceeding it can stars sow the seeds of life throughout the cosmos.
The Chandrasekhar Limit is named for Subrahmanyan Chandrasekhar, one of the great child prodigies. Chandrasekhar graduated with a degree in physics before reaching his twentieth birthday. He was awarded a Government of India scholarship to study at Cambridge, and in the fall of 1930 boarded a ship to travel to England. While aboard the ship—still before reaching his twentieth birthday—he did the bulk of the work for which he would later be awarded a Nobel Prize.
By the 1920s—a decade before Chandrasekhar began his journey to England—astronomers had realized that Sirius B, the white dwarf companion to the bright star Sirius, had an astoundingly high density—more than a million times the density of the sun. An object of this density could only exist if the atoms comprising the star were so tightly compressed that they were no longer individual atoms. Gravitational pressure would compress atoms so much that the star would consist of positively-charged ions surrounded by a sea of electrons.
Prior to the discovery of quantum mechanics, physicists knew of no force capable of supporting any star against such gravitational pressure. Quantum mechanics, though, suggested a new way for a star to hold itself up against the force of gravity. According to the rules of quantum mechanics, no two electrons can be in the exact same state. Inside an extremely dense star like Sirius B, this means that some electrons are forced out of low energy states into higher ones, generating a pressure called electron degeneracy pressure that resists the gravitational force. This makes it possible for a star like Sirius B to achieve such extreme density without collapsing in on itself.
This discovery was made by Ralph Fowler, who would later become Chandrasekhar’s graduate supervisor. But Chandrasekhar realized what Fowler had missed: The high-energy electrons inside the white dwarf would have to be traveling at velocities near the speed of light, invoking a set of bizarre relativistic effects. When Chandrasekhar took these relativistic effects into account, something spectacular happened. He found a firm upper limit for the mass of any body which could be supported by electron degeneracy pressure. Once this limit—the Chandrasekhar limit—was exceeded, the object could no longer resist the force of gravity, and it would begin to collapse.
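In its standard modern form (quoted here for context; the notation is not from the original article), the limiting mass works out to

```latex
M_{\mathrm{Ch}}
  = \frac{\omega_3^{0}\,\sqrt{3\pi}}{2}
    \left(\frac{\hbar c}{G}\right)^{3/2}
    \frac{1}{(\mu_e m_{\mathrm{H}})^{2}}
  \approx 1.44 \left(\frac{2}{\mu_e}\right)^{2} M_\odot
```

where μ_e is the mean molecular weight per electron (about 2 for a typical white dwarf), m_H is the hydrogen mass, and ω₃⁰ ≈ 2.018 is a constant from the Lane-Emden equation; for μ_e = 2 this evaluates to about 1.4 solar masses, the figure quoted below.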
When Chandrasekhar published these results in 1931, he set off a battle with one of the greatest astrophysicists of the era, Sir Arthur Eddington, who believed that the white dwarf state was the eventual fate of every star. At a conference in 1935, Eddington told his audience that Chandrasekhar’s work “was almost a reductio ad absurdum of the relativistic degeneracy formula. Various accidents may intervene to save a star, but I want more protection than that. I think there should be a law of Nature to prevent a star from behaving in this absurd way!”
Chandrasekhar was deeply hurt by Eddington’s reaction, but colleagues can disagree profoundly and still remain friends. Chandrasekhar and Eddington remained friends, went to the Wimbledon tennis tournament together and went for bicycle rides in the English countryside. When Eddington passed away in 1944, Chandrasekhar spoke at his funeral, saying “I believe that anyone who has known Eddington will agree that he was a man of the highest integrity and character. I do not believe, for example, that he ever thought harshly of anyone. That was why it was so easy to disagree with him on scientific matters. You can always be certain he would never misjudge you or think ill of you on that account.”
Vindication would eventually come to Chandrasekhar when he was awarded the Nobel Prize in 1983 for his work. The Chandrasekhar Limit is now accepted to be approximately 1.4 times the mass of the sun; any white dwarf with less than this mass will stay a white dwarf forever, while a star that exceeds this mass is destined to end its life in that most violent of explosions: a supernova. In so doing, the star itself dies but furthers the growth process of the universe—it both generates and distributes the elements on which life depends.
The life of a star is characterized by thermonuclear fusion; hydrogen fuses to helium, helium to carbon, and so on, creating heavier and heavier elements. However, thermonuclear fusion cannot create elements heavier than iron. Only a supernova explosion can create copper, silver, gold, and the “trace elements” that are important for the processes of life.
Lighter elements like carbon, oxygen, and nitrogen are also essential to life, but without supernova explosions, they would remain forever locked up in stars. Being heavier than the hydrogen and helium that comprise most of the initial mass of the stars, they sink to form the central core of the star—just as most of the iron on Earth is locked up in its core. If stars are, as Eddington believed, destined to become white dwarfs, those elements would remain confined to the stellar interior, or at best be delivered in relatively minute quantities to the universe as a whole via stellar winds. Life as we know it requires rocky planets to form, and there simply is no way to get enough rocky material out into the universe unless stars can deliver that material in wholesale quantities. And supernovae do just that.
The Chandrasekhar Limit is therefore not just an upper limit on the mass of an ideal white dwarf, but also a threshold. A star surpassing this threshold no longer hoards its precious cargo of heavy elements. Instead, it delivers them to the universe at large in a supernova that marks its own death but makes it possible for living beings to exist.
Editor's picks for further reading
BBC: Test Tubes and Tantrums: Arthur Stanley Eddington and Subrahmanyan Chandrasekhar
In this radio program, discover the history of one of the nastiest disagreements in astrophysics.
FQXi: Exploding the Supernova Paradigm
In this blog post, Zeeya Merali investigates gaps in our understanding of supernova explosions.
Nobelprize.org: Subramanyan Chandrasekhar – Autobiography
Chandrasekhar’s brief account of his own life and career, written for the Nobel Foundation.
Nothing is not as simple as it seems.
The concept of nothing has fascinated philosophers and scientists throughout history. The search for an ever-deeper understanding of nothing has driven scientific discovery since the age of ancient Greece, and today the pursuit of nothing defines the frontier of modern particle physics. But before we talk about nothing, let’s talk about something: air.
For millennia, philosophers thought that “empty” air was nothing. Aristotle and the ancient Greeks, though, recognized air as a “thing” in its own right. Wind, after all, is nothing but air, yet it can be felt powerfully. Indeed, the Greeks considered air to be one of the basic elements, along with earth, water, and fire; these elements, in turn, were believed to be made of some basic something which they called “ur-matter.” On this view, a true vacuum was impossible. A familiar modern example, sucking on a drinking straw with its end blocked, seems to confirm it: The straw doesn’t fill up with vacuum but instead “implodes,” apparently vindicating the Greek maxim that “Nature abhors a vacuum.”
About two millennia would pass before Galileo and others realized that the implosion is due to the external pressure of the air, not a cosmic law against nothingness. This insight soon led to the invention of the barometer and a remarkable discovery: Air pressure decreases with altitude. The reason is that the atmosphere has a finite height, so the higher you climb, the less air there is above you pressing down. This inspired the thought that above the atmosphere is nothing—or, at least, no air.
By the end of the 17th century, then, when people talked about “nothing,” they were no longer talking about air: They were talking about the void of space. Today, we know that though space is empty of air, it is filled with gravitational forces which guide the planets and order the galaxies. It is also full of electric and magnetic fields that give us sunlight and starlight in the form of electromagnetic waves.
This created great problems for 19th century scientists: Since the electromagnetic waves from the sun and stars were making it all the way to Earth, they must be traveling through something. After all, they knew that sound waves need a medium through which to travel. I speak and air molecules bump into one another until some hit your eardrums, making them vibrate, generating signals that your brain interprets as sound. The absence of air in space leaves the sun silent, yet we can see it.
To resolve this paradox, scientists argued that there must be some medium through which the electromagnetic waves traveled. “Waves in what?” was answered with: “The ether.” And so began one of the greatest wild goose chases in the history of science, as many of the leading lights in the field went in search of this weird ether that was capable of transmitting light at about 300,000 kilometers per second while still allowing the planets to pass through as if there were nothing there at all. The search did not end until Einstein introduced his theory of relativity in 1905, which eliminated the need for the ether. (But that's a story for another day.) The tables had turned on nothing: Aristotle was wrong. Nothing could exist—or so we thought. And then came quantum mechanics.
In the quantum realm of tiny subatomic particles, the more closely you look at nothing, the more things you discover. What looks empty to our gross senses turns out to be effervescing with particles of matter and anti-matter. The apparent void is a medium filled with stuff, a froth of will-o’-the-wisp particles of matter and antimatter.
This new quantum mechanical view of nothing began to emerge in 1947, when Willis Lamb measured the spectrum of hydrogen. The electron in a hydrogen atom cannot move wherever it pleases but instead is restricted to specific paths. This is analogous to climbing a ladder: You cannot end up at arbitrary heights above ground, only those where there are rungs to stand on. Quantum mechanics explains the spacing of the rungs on the atomic ladder and predicts the frequencies of radiation that are emitted or absorbed when an electron switches from one to another. According to the state of the art in 1947, which assumed the hydrogen atom to consist of just an electron, a proton, and an electric field, two of these rungs have identical energy. However, Lamb’s measurements showed that these two rungs differ in energy by about one part in a million. What could be causing this tiny but significant difference?
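The ladder analogy can be made concrete with a few lines of arithmetic. The sketch below uses the standard Bohr formula for hydrogen's energy rungs and Lamb's measured splitting of roughly 1058 MHz (a value taken from the textbook account of the Lamb shift, not from this article) to show that the discrepancy really is about one part in a million.

```python
# The hydrogen "ladder": Bohr energies E_n = -13.6 eV / n^2.
RYDBERG_EV = 13.6057          # hydrogen ground-state binding energy, eV

def bohr_energy(n):
    """Energy of the n-th rung of the hydrogen ladder, in eV."""
    return -RYDBERG_EV / n**2

for n in range(1, 5):
    print(f"n={n}: {bohr_energy(n):+.3f} eV")

# In the 1947 picture (electron + proton + electric field) two of the
# n = 2 rungs are exactly degenerate. Lamb measured a splitting of
# roughly 1058 MHz; converting with E = h*f gives the shift in eV.
PLANCK_EV_S = 4.1357e-15      # Planck constant, eV*s
lamb_shift_ev = PLANCK_EV_S * 1.058e9
fraction = lamb_shift_ev / abs(bohr_energy(2))
print(f"Lamb shift ≈ {lamb_shift_ev:.2e} eV, about {fraction:.1e} of the rung energy")
```

The computed fraction comes out near 1.3 × 10⁻⁶, matching the "one part in a million" quoted above.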
When physicists drew up their simple picture of the atom, they had forgotten something: Nothing. Lamb had become the first person to observe experimentally that the vacuum is not empty, but is instead seething with ephemeral electrons and their anti-matter analogues, positrons. These electrons and positrons disappear almost instantaneously, but in their brief mayfly moment of existence they alter the shape of the atom's electromagnetic field slightly. This momentary interaction with the electron inside the hydrogen atom kicks one of the rungs of the ladder just a bit higher than it would be otherwise.
This is all possible because, in quantum mechanics, energy is not conserved on very short timescales, or for very short distances. Stranger still, the more precisely you attempt to look at something—or at nothing—the more dramatic these energy fluctuations become. Combine that with Einstein’s E = mc², which implies that energy can congeal in material form, and you have a recipe for particles that bubble in and out of existence even in the void. This effect allowed Lamb to literally measure something from nothing.
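The "brief mayfly moment" has a definite scale. As a rough sketch, the energy-time uncertainty relation ΔE·Δt ~ ħ/2 caps how long the vacuum can "lend" the rest energy of an electron-positron pair; plugging in 2mₑc² for the borrowed energy gives the lifetime. (The factor of 2 in the uncertainty bound is a convention; this is an order-of-magnitude estimate, not a precise prediction.)

```python
# How long can a borrowed electron-positron pair exist? The energy-time
# uncertainty relation, dE * dt ~ ħ/2, sets the scale: borrowing the
# pair's rest energy 2*m_e*c^2 limits its lifetime to dt ~ ħ/(2 * 2*m_e*c^2).
hbar = 1.0546e-34      # reduced Planck constant, J*s
m_e  = 9.1094e-31      # electron mass, kg
c    = 2.9979e8        # speed of light, m/s

borrowed_energy = 2 * m_e * c**2          # rest energy of the pair, J
lifetime = hbar / (2 * borrowed_energy)   # seconds

print(f"virtual pair lifetime ~ {lifetime:.1e} s")
```

The answer is a few times 10⁻²² seconds: far too fleeting to see directly, yet long enough for the pairs to nudge the hydrogen atom's field and produce the Lamb shift.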
This suggests that the contents of the vacuum—the “stuff” of nothing—could be organized in different ways at different times in the history of the universe. Think of water molecules: They can roam freely in the liquid or lock tightly to one another in ice crystals. This analogy hints at an intriguing possibility: Could the contents of the quantum vacuum be in a different configuration in today’s cool universe than they were in the first moments after the hot Big Bang?
At creation, the thinking goes, particles had no mass and moved through the vacuum at the speed of light. Around a trillionth of a second after the Big Bang, the universe was cool enough that a mass-giving field called the “Higgs field” condensed in the vacuum, as water condenses from steam.
The Higgs field is believed to disturb the motion of fundamental particles like electrons as they move through it, producing the effect that we call mass. If this is correct, there should be particle manifestations of the Higgs field, known as Higgs bosons, just waiting to be discovered. The Large Hadron Collider (LHC) at CERN is hot on the trail of these particles, but decisive evidence of the Higgs boson—which is very massive and can only be produced in an enormous blast of energy—is still elusive. Scientists working on the LHC expect that they may see the first glimpse of the Higgs by the end of 2012. Whether this is the real deal or whether we are being fooled by some cruel, random throw of Nature’s dice, time will tell.
Aristotle was right: There is no thing that is nothing. Is the Higgs field part of the something? Within a few months we may know the answer.
Editor's picks for further reading
FQXi: Much Ado About Nothing
Ted Jacobson investigates the nature of the cosmic vacuum.
The New York Times: There’s More to Nothing Than We Knew
In this article, Dennis Overbye reviews why physicists believe that something—like our universe—can come from nothing.
World Science Festival: Nothing: The Subtle Science of Emptiness
Journalist John Hockenberry leads Nobel laureate Frank Wilczek, cosmologist John Barrow, and physicists Paul Davies and George Ellis in a discussion of the physics and philosophy of nothing.