Physicists are on the brink of a breakthrough discovery: They may have finally cornered the Higgs boson, the subatomic particle hypothesized to give mass to all the stuff in the universe. But should we really be calling this particle the “Higgs”?
A computer simulation of a detection of the Higgs boson. Or is that the ABEGHHK’tH boson? Credit: David Parker/Photo Researchers, Inc.
Peter Higgs, it turns out, wasn’t the only one to come up with the idea of a new field (the Higgs field) that endows particles with mass. In fact, he wasn’t even the first to publish the theory. That distinction goes to Robert Brout and Francois Englert at the Free University of Brussels, who wrote up the idea in August 1964. Higgs was close on their heels with his own paper in October of the same year. Just a few weeks later, Dick Hagen, Gerald Guralnik, and Tom Kibble published their take on what would come to be known as the Higgs field and Higgs boson.
This wasn’t plagiarism: It was a kind of synchronicity that is the norm in science, says MIT science historian David Kaiser. In fact, independent research groups simultaneously arrive at similar breakthroughs so often that Robert Merton, a sociologist of science, put a name to the phenomenon: multiples. One famous multiple is calculus, which was simultaneously “discovered” by both Isaac Newton and Gottfried Leibniz in the late 17th century. More recently, the accelerating expansion of the universe was observed at nearly the same time by two competing groups of astronomers, both of which were honored with the Nobel Prize in physics in 2011.
Higgs, Brout, Englert and the rest were continuing a tradition that is as old as physics itself. But why is “Higgs” the name that stuck? “Higgs expressed the challenge”—how do we get particles that have mass and still obey the rules of symmetry?—“and the expected solution especially sharply,” says Kaiser. Another recounting pins the name on Ben Lee, a physicist who used “Higgs” as shorthand in a 1972 Fermilab conference program after having had a productive lunch chat with Higgs.
Higgs himself has always been uncomfortable seeing his name ride solo. He prefers to call the particle the “scalar boson” or the “so-called Higgs,” Ian Sample writes in his book “Massive.” Higgs has also advanced the uncommonly inclusive acronym ABEGHHK’tH—that’s Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble and ‘t Hooft—to honor all the scientists who played a part in originating the theory.
Frank Wilczek, a Nobel prize-winning physicist who has named a few particles of his own (anyons and axions—the latter inspired by a laundry detergent), thinks that the alphabet soup solution would be “especially absurd.” Says Wilczek: “History is complicated, and wherever you draw the line there will be somebody just below it!”
If the Higgs discovery is confirmed, though, someone will have to draw that line—and that someone will be the Nobel Prize committee. The discovery is seen as a shoo-in for the physics honor, but the prize can be divided among no more than three laureates. There are at least six scientists with reasonable claims on the Higgs—not to mention the cast-of-thousands teams whose instruments are responsible for the experimental evidence that the Higgs actually exists.
Complicating matters is physicists’ anarchic naming methodology. When astronomers have planets, moons, and asteroids in need of naming, they turn to the International Astronomical Union. Elements get their formal names from the International Union of Pure and Applied Chemistry. Physicists, who have no such official naming body, have historically opted for descriptive names, like “neutrino” (“little neutral one”), or names devoid of any physical meaning at all, like “up,” “down,” and “charm.” As a particle named after a person, the Higgs stands essentially alone among the elementary particles.
So what should we be calling the Higgs? “By now it's so deeply embedded in the literature that changing to another name would be jarring, and might introduce a gratuitous complication in literature searches or eventually even a hurdle to parsing older papers,” says Wilczek. If he had to choose? “A possibly better choice might be ‘zeron,’ to connote that the particle has zero quantum numbers, and in some sense is an ingredient of what we call nothingness.”
“I’d find a fancy-sounding word in ancient Greek, to give it gravitas, and then add ‘on,’” says Kaiser. In the absence of a Greek dictionary, Kaiser nominates “lardon”—a particle that makes things heavy.
Ultimately, it may come down to branding. “In business, it would be considered destructive to take a well-known name and replace it with a long-winded, technical-sounding alternative that no one has heard of,” wrote the editors of Nature in a recent editorial. Indeed, “Higgs” seems to have captured the public imagination—and it makes a much better Twitter hashtag than #ABEGHHK’tH.
Now it’s your turn: If you could rename the Higgs, what would you call it?
Editor's picks for further reading
Facebook: Peter Higgs
No, you can't "friend" him, but you can "like" him.
FQXi: Higgs Almighty
Whatever you call it, please stop calling it the “God particle,” says blogger William Orem.
PHD Comics: Higgs Boson Explained
In this video, particle physicist Daniel Whiteson at CERN explains how the LHC is searching for the Higgs boson.
Imagine describing our universe to an alien from an alternate dimension. Where would you start?
You might reasonably begin by explaining that we live in three dimensions of space and one dimension of time. Space and time are so fundamental to our understanding of the universe that they are woven into nearly every equation in physics. They are the words in which we speak the language of nature—so tried, tested, and true that we don’t even know how to talk about the cosmos without engaging space and time in the conversation.
But what if it turns out that space and time are not the fundamental infrastructure of our cosmos—what if they are themselves products of some deeper physics?
This idea is called emergence. We see it in nature, as when fish school or birds flock. If you were only to study an individual fish or bird, you would never predict how they would come together as a group. Yet each one “knows” simple rules that, when combined, create a wide range of agile and elegant behaviors. Could it be that physicists have been studying flocks all along, not realizing that it’s the birds that are truly fundamental?
“There aren’t many things in quantum gravity that everyone agrees on,” says Eleanor Knox, a philosopher at King’s College London who specializes in the philosophy of physics. “Yet the one thing many people seemed to agree on in quantum gravity was that we were going to have to cope with space and time not being fundamental.”
It sounds radical, but physics has a long and proud history of spearheading exactly this kind of coup. “Historically, whenever we thought something was fundamental, it turns out that it is not,” says Nathan Seiberg, a theoretical physicist at the Institute for Advanced Study. Kepler, for instance, believed that the Platonic solids were the fundamental constituents of the universe. Today we know better. In the 17th century, scientists thought that cold was a substance that could flow from one place to another, chilling your doorstep or the tip of your nose. Now we understand that heat and cold are just another way of talking about the statistical properties of a collection of molecules. Of course, that doesn’t mean that it feels any less real when you burn your tongue on your hot cocoa.
So why are physicists picking on space? Relativity delivered the first strike. “In relativity, space and time are not rigid. They are dynamic,” says Seiberg. Building all of physics on such a malleable infrastructure is akin to constructing your house on a foundation of Jello.
More alarmingly to theorists, our ability to measure features in space is intrinsically limited. A ruler can’t measure distances smaller than the width of its painted markings; the resolution of a microscope is constrained by the wavelength of the light in which it makes images; even scanning tunneling microscopes are limited by the physical size of their probe tips.
Can’t we just build a better microscope? “It’s not because we don’t have the budget to build a powerful enough machine,” explains Seiberg. If we somehow tried to make an infinitely small measuring device, that device would become so dense that it would warp the fabric of space. The conclusion: “Space itself is ambiguous,” says Seiberg. Strike two.
Space also took a hit from an unlikely foe: the hologram. We think of holograms as the dazzling, silvery images on postcards and credit cards: two-dimensional objects that project three-dimensional pictures. More generally, though, a hologram is anything—even an equation—that encodes an extra dimension’s worth of information. It turns out that you can write equations that describe our universe perfectly well using different combinations of spatial dimensions, creating mathematical holograms that are indistinguishable from reality. Like a book that can be translated into many disparate languages without losing a syllable of meaning, our universe seems to tell a story that is independent of the words in which we have always chosen to express it.
Finally, physicists have known for some time that their descriptions of space start to break down when they’re applied to the strange-but-true environments inside black holes and close to the time of the big bang. In such cases, the familiar equations start popping out infinities—nonsense answers that suggest that the equations are missing some essential machinery. “Something else should kick in,” says Seiberg.
But what is that something else? “I don’t think I have an answer to that,” says Seiberg. Knox also leaves the door open to as-yet-unknown possibilities: “Whatever it is that’s fundamental, it’s not the stuff we have a handle on right now.” Moreover, Seiberg adds that though theorists have assembled a strong case that space is emergent, time presents a more difficult problem. “In order to understand emergent time, we need a complete revolution in the way we think about physics.”
Letting go of space and time without ready replacements may seem like a surefire way to plunge into the abyss of abstraction. But it may be only by loosening our grip that we can come to grasp what is truly fundamental.
Editor's picks for further reading
Discover Magazine: Newsflash: Space and Time May Not Exist
If time isn't fundamental, what is it?
FQXi: Breaking the Universe's Speed Limit
John Donoghue investigates the possibility that the speed of light is not a constant.
FQXi: Melting Spacetime
Joanna Karczmarek investigates how space and time could emerge from deeper physics.
Are we living in someone else’s fantasy?
The Chinese philosopher Zhuangzi posed this question more than two thousand years ago when he recalled waking from a dream unsure whether he was a man who dreamed he was a butterfly or a butterfly dreaming that he was a man. Today, with the advent of computers that can simulate cells, cities, and even solar systems, philosophers and scientists are asking this ancient question in a new way: Are we living in a computer simulation?
This question is more than just the premise of "The Matrix." It's a conjecture that lives at the intersection of humanity and technology—and though it might seem like philosophy, it spurs ambitious new questions about what computers are capable of and about the nature of reality itself. As theorists begin to think of our universe as nothing more than a vast collection of information, can we ever truly know whether our reality is as “real” as we think it is?
The philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, posed the latest iteration of this ancient question in a 2003 paper. His "simulation argument" begins with the observation that modern computers have improved at an exponential rate since their invention. If computing power continues to grow at this pace, advanced civilizations will one day be able to build titanic, densely packed supercomputers capable of doing everything from beating the stock market to predicting the weather months or years in advance. “Post-human” programmers might even use these machines to simulate entire civilizations, vast electronic worlds that would put today’s computer games to shame.
What would it take to create this kind of simulation?
When it comes to simulating a person, scientists estimate it might take 10^17 operations per second—that's a one followed by 17 zeroes—to simulate a human brain, based on the number of neurons in the brain and the rate at which those neurons “talk” to each other. Assuming that simulating the sensory events a person experiences—every taste, sound, smell, touch and sight that is coded in our neurons—takes about 100 million bits per second, and that approximately 100 billion humans have lived on Earth to date, Bostrom estimates it might take 10^36 calculations in total to create a simulation of the whole of human history that is indistinguishable from reality.
That’s just to simulate the parts of the universe that humans can sense. What about the microscopic structure of the Earth's interior or the subtle features of distant stars? These little details could be safely omitted until a simulated person needed to observe them. In addition, to save computing power, maybe not every person in a simulation would be fully simulated. Perhaps some of the characters in the simulation would be "zombies or 'shadow-people'—humans simulated at a level sufficient for the fully simulated people to not notice anything suspicious," Bostrom writes in his paper.
So how close are we to achieving this dream (or nightmare)? Today’s most powerful supercomputers are capable of roughly 10 petaflops—that is, 10^16 calculations per second. A planet-sized computer based on current electronics might carry out 10^42 operations per second. Bostrom also notes that quantum physicist Seth Lloyd of MIT has calculated that a 1-kilogram "ultimate laptop" that operates at the known limits of physics might be capable of 5 × 10^50 operations per second. So, the planet-sized computer might be able to simulate all of human history in a millionth of a second; the ultimate laptop, in a few millionths of a billionth of a second.
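Those timescales follow from simple division. Here is a back-of-the-envelope check in Python, using only the figures quoted above (Bostrom's 10^36-operation estimate and the two hypothetical machines):

```python
# Back-of-the-envelope check of the simulation timescales quoted above.
# All figures come from the estimates in the text.

TOTAL_OPS = 1e36          # operations to simulate all of human history

planet_computer = 1e42    # ops/sec: planet-sized computer, current electronics
ultimate_laptop = 5e50    # ops/sec: Seth Lloyd's 1-kg "ultimate laptop"

t_planet = TOTAL_OPS / planet_computer   # seconds for the planet-sized computer
t_laptop = TOTAL_OPS / ultimate_laptop   # seconds for the ultimate laptop

print(f"Planet-sized computer: {t_planet:.0e} seconds")  # 1e-06: a millionth of a second
print(f"Ultimate laptop:       {t_laptop:.0e} seconds")
```

The planet-sized machine needs about a millionth of a second; the ultimate laptop, around 2 × 10^-15 seconds.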
Given that fully simulating every person who has ever lived might only take a tiny fraction of an advanced civilization's resources, Bostrom reasons that the number of computer-generated minds buzzing away inside simulations could vastly outnumber the real minds that have ever lived. If that is true, the odds are that we are simulated, not real. It may even be possible that our simulators are themselves simulated, and their simulators are simulated, and so on. "Reality may thus contain many levels," Bostrom says.
This does not prove that we live in a simulation, Bostrom emphasizes. There are a number of caveats that could stop this bizarre future before it starts. One glum possibility is that civilizations might very well go extinct or collapse—say, by annihilating themselves in a nuclear war—before they can develop supercomputers of such immense power. Another thought is that civilizations simply have no desire to commit the vast resources needed to create supercomputers. Or perhaps advanced civilizations might not indulge in such simulations—maybe they would be ethically opposed to simulating minds and their suffering, or they might prefer to entertain themselves with machines that directly stimulate their brain's pleasure centers. "Personally, I assign less than 50 percent probability to the simulation hypothesis—rather something like in the 20 percent region, perhaps, maybe," Bostrom writes, although he describes this as a gut feeling rather than part of his logical argument.
Unless the simulators decide to make themselves known, there may be no way to prove or disprove the simulation argument. Some have suggested looking for "glitches" in the simulation, but such glitches would be more plausibly explained as hallucinations, visual illusions, fraud or self-deception. Even if errors did pop up, a smart simulator could simply wipe any memory of the anomaly from our simulated brains.
If we are living in a computer simulation, how should we live our lives? "The simulation hypothesis currently does not seem to have any radical implications for how one should live," Bostrom said. Still, "it helps to shed light on, among other things, the prospects of our species."
Also, thinking of the universe as a computer may actually be a helpful approach in science. "You can start thinking about what kind of computer it is, what kind of operations can it do, what kinds of problems can it solve," said theoretical computer scientist Scott Aaronson at MIT. "That's an extraordinarily fruitful way of thinking about the universe that has led to the whole field of quantum computers—devices based on the quantum physics that explains how the fundamental building blocks of the universe behave."
We may never know whether we are living in someone else's fantasy; whether we’re the man or the butterfly. But if we do one day develop supercomputers capable of simulating minds and universes, perhaps our creations will be able to answer the question for us.
ONE night in June 2007, I got to watch astronomer Sandra Faber put the 10-meter Keck II telescope through its paces. She was observing galaxies in a region of the sky called the Extended Groth Strip, in the direction of the constellation Ursa Major. We sat in the cozy confines of the telescope control room, far below the telescope’s perch near the 13,796-foot-high summit of the Mauna Kea volcano in Hawaii.
Around midnight, Faber wrapped up her observations and we stepped out for a few minutes under the night sky. “I take comfort in the fact that it is a beautiful universe, and we belong here and that we fit,” Faber mused. “This is our home.”
Faber, a professor at the University of California, Santa Cruz, was referring to the idea that there is something uncannily perfect about our universe. The laws of physics and the values of physical constants seem, as Goldilocks said, “just right.” If even one of a host of physical properties of the universe had been different, stars, planets, and galaxies would never have formed. Life would have been all but impossible.
Take, for instance, the neutron. Its mass is 1.00137841870 times that of the proton, which is what allows it to decay into a proton, electron and neutrino—a process that determined the relative abundances of hydrogen and helium after the big bang and gave us a universe dominated by hydrogen. If the neutron-to-proton mass ratio were even slightly different, we would be living in a very different universe: one, perhaps, with far too much helium, in which stars would have burned out too quickly for life to evolve, or one in which protons decayed into neutrons rather than the other way around, leaving the universe without atoms. So, in fact, we wouldn’t be living here at all—we wouldn’t exist.
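Why does that eleventh decimal place matter? For a free neutron to decay, the neutron–proton mass difference must exceed the mass of the electron it emits. A quick sanity check in Python (the particle masses here are standard reference values, not figures from the article):

```python
# Check that neutron beta decay (n -> p + e- + antineutrino) is energetically
# allowed. Masses in MeV/c^2; standard reference values, not from the article.

m_proton = 938.272        # proton mass
ratio = 1.00137841870     # neutron-to-proton mass ratio quoted above
m_electron = 0.511        # electron mass

m_neutron = ratio * m_proton
energy_released = m_neutron - m_proton - m_electron  # neutrino mass is negligible

print(f"neutron-proton mass difference: {m_neutron - m_proton:.3f} MeV/c^2")
print(f"energy released in decay:       {energy_released:.3f} MeV")  # positive
```

The mass difference (about 1.293 MeV) exceeds the electron mass (0.511 MeV) by only about 0.78 MeV. Shrink the ratio by roughly a tenth of a percent and the decay would be forbidden, with the drastic consequences described above.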
Examples of such “fine-tuning” abound. Tweak the charge on an electron, for instance, or change the strength of the gravitational force or the strong nuclear force just a smidgen, and the universe would look very different, and likely be lifeless. The challenge for physicists is explaining why such physical parameters are what they are.
This challenge became even tougher in the late 1990s when astronomers discovered dark energy, the little-understood energy thought to be driving the accelerating expansion of our universe. All attempts to use known laws of physics to calculate the expected value of this energy lead to answers that are 10^120 times too high, causing some to label it the worst prediction in physics.
“The great mystery is not why there is dark energy. The great mystery is why there is so little of it,” said Leonard Susskind of Stanford University, at a 2007 meeting of the American Association for the Advancement of Science. “The fact that we are just on the knife edge of existence, [that] if dark energy were very much bigger we wouldn’t be here, that’s the mystery.” Even a slightly larger value of dark energy would have caused spacetime to expand so fast that galaxies wouldn’t have formed.
That night in Hawaii, Faber declared that there were only two possible explanations for fine-tuning. “One is that there is a God and that God made it that way,” she said. But for Faber, an atheist, divine intervention is not the answer.
“The only other approach that makes any sense is to argue that there really is an infinite, or a very big, ensemble of universes out there and we are in one,” she said.
This ensemble would be the multiverse. In a multiverse, the laws of physics and the values of physical parameters like dark energy would be different in each universe, each the outcome of some random pull on the cosmic slot machine. We just happened to luck into a universe that is conducive to life. After all, if our corner of the multiverse were hostile to life, Faber and I wouldn’t be around to ponder these questions under the stars.
This “anthropic principle” infuriates many physicists, for it implies that we cannot really explain our universe from first principles. “It’s an argument that sometimes I find distasteful, from a personal perspective,” says Lawrence Krauss of Arizona State University in Tempe, Arizona, author of A Universe From Nothing. “I’d like to be able to understand why the universe is the way it is, without resorting to this randomness.”
And he’s not the only one who feels this way. Nobel laureate Steven Weinberg of the University of Texas at Austin once told me, “I would, and most physicists would, prefer not to have to rely on anything like the anthropic principle, but actually to be able to calculate things.”
Nonetheless, there is growing and grudging acceptance of the multiverse, especially because it is predicted by a theory that was developed to solve one of the most frustrating fine-tuning problems of all—the flatness of our universe.
Spacetime today is flat, not curved—meaning that two rays of light that start out parallel stay parallel, neither converging nor diverging. This has been confirmed to exquisite precision by measurements of the cosmic microwave background, the radiation left over from the big bang. That means that a cosmological parameter called Omega, which dials in the curvature of spacetime, is very close to one. But for today’s universe to have an Omega anywhere near one, its value just one second after the big bang had to equal one to a precision of about fourteen decimal places. This smacked of fine-tuning.
But in 1979, the physicist Alan Guth, now of MIT, discovered a way to get that value of Omega without fine-tuning. Guth showed that in the instants after the big bang, the universe would have undergone a period of exponential expansion. This sudden expansion, which Guth called “inflation,” would have rendered our observable universe flat regardless of the value of Omega before inflation began.
Imagine starting with a small balloon whose surface is curved and blowing it up some forty orders of magnitude. Any small piece of the balloon’s surface will now look flat. In the inflationary view, that’s what happened to our universe—our local patch of spacetime looks flat regardless of the curvature of spacetime before inflation began.
Some physicists believe that inflation continues today in distant pockets of spacetime, generating one new universe after another, each with different physical properties. Inflation, therefore, walks both sides of the fine-tuning line: It lends credence to the anthropic principle by predicting a multiverse, but it also reminds us that parameters we once thought were fine-tuned, like Omega, can be explained by a more fundamental theory. “The history of physics has had that a lot,” says Krauss. “Certain quantities have seemed inexplicable and fine-tuned, and once we understand them, they don’t seem so fine-tuned. We have to have some historical perspective.”
We’ll gain such perspective only after we have a fundamental theory of everything—or perhaps when we detect signs of other universes. The urge to understand our universe from first principles and not ascribe it to some divine force compels us to seek scientific explanations for what seems to be an incredible stroke of luck.
Editor's picks for further reading
FQXi: The Patchwork Multiverse
Raphael Bousso examines links between string theory, dark energy, and the multiverse.
FQXi: Testing the Multiverse
Hiranya Peiris looks for evidence of other universes in the cosmic microwave background radiation.
Skeptical Inquirer: Anthropic Design: Does the Cosmos Show Evidence of Purpose?
Victor Stenger provides a critical analysis of the "so-called anthropic coincidences."
TED: Why Is Our Universe Fine-Tuned for Life?
In this video, Brian Greene asks why our universe appears so exquisitely tuned for life.