## Who's On First? Relativity, Time, and Quantum Theory

Nov 9

Einstein’s special theory of relativity calls for radical renovation of common-sense ideas about time. Different observers, moving at constant velocity relative to one another, require different notions of time, since their clocks run differently. Yet each such observer can use his “time” to describe what he sees, and every description will give valid results, using the same laws of physics. In short: According to special relativity, there are many quite different but equally valid ways of assigning times to events.

Einstein himself understood the importance of breaking free from the idea that there is an objective, universal “now.” Yet, paradoxically, today’s standard formulation of quantum mechanics makes heavy use of that discredited “now.” Playing with paradoxes is part of a theoretical physicist’s vocation, as well as high-class recreation. Let’s play with this one.

First, some background. Despite special relativity’s freedom in assigning times, for each choice there is a definite ordering of events into earlier and later. In a classic metaphor, time flows like a river through all space, and the flow never reverses.1 Figures 1, 2, and 3 tell the central story.

Figure 1

Figure 2

Figure 3

To organize our thoughts, let us make a definite choice of time; in the jargon, let us fix a frame of reference. Then we can frame the history of the world as shown in Figure 1. Here time runs vertically, while space runs horizontally. Since we’re going to be considering several versions of time, we’ll name this one t1. For convenience in drawing, we are restricting attention to a one-dimensional slice of space—in other words, a line. One-dimensional “spaces” of events sharing the same value of time t1 would appear as horizontal lines (which I haven’t drawn). The meaning of the colored regions and their labels will be elucidated presently.

Observers moving at constant velocity with respect to our frame of reference will need to use their own physically appropriate, different versions of “time,” corresponding to how their clocks run. Figures 2 and 3 display the lines for which two different versions of time, t2 and t3, are constant. t2 is the appropriate measure of time for observers moving at a certain constant velocity toward the right, while t3 is the appropriate measure of time for observers moving at a certain velocity toward the left—that is, in our figures, in the horizontal, “spatial” direction—relative to our reference frame. For observers with higher speeds, the tilt of these lines will be steeper. But the tilt never exceeds 45 degrees, because 45 degrees corresponds to the limiting speed, namely the speed of light.
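The tilt of those constant-time lines follows directly from the Lorentz transformation. Here is a minimal sketch in natural units (c = 1); the sample event below is an illustrative choice of mine, not one taken from the figures:

```python
import math

def boosted_time(t, x, v):
    """Time that an observer moving at velocity v (|v| < 1, in units where
    c = 1) assigns to the event (t, x): the Lorentz transformation
    t' = gamma * (t - v * x), with gamma = 1 / sqrt(1 - v**2)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Events the moving observer calls simultaneous (t' = constant) satisfy
# t = v * x + constant: a line of slope v in our (x, t) diagram.  Since
# |v| < 1, the tilt away from horizontal never reaches 45 degrees.
```

For instance, the event (t, x) = (0.2, -1.0) gets a positive t' from a right-moving observer with v = 0.5 (it comes after the origin) but a negative t' from a left-moving observer with v = -0.5 (it comes before).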

With this background, we are ready to appreciate the distinctions shown in Figure 1. In the center of the diagram is a blue point b representing a specific event. Some events—those that lie in the green future region of space-time—occur at a later time than b, whether we use t1, t2, t3, or any other allowed observer’s measure of time. We say that these events are in b’s causal future (or, if there is no danger of confusion, simply b’s future). What happens at b can affect events in b’s causal future, without upsetting any observer’s sense that a cause—b—must occur before its effect. Closely connected is the fact that signals from b can reach events in b’s future without ever exceeding the speed of light. We call such physically allowed signals “subluminal” signals.

Similarly, we can define b’s causal past, depicted in red. It consists of all events that can affect b. There is a nice symmetry here: If we draw cones emanating from an event a in b’s causal past, we will find b in the upper colored region. An event a is in b’s causal past if and only if b is in a’s causal future.

But many events fall into neither of those regions; they are neither in b’s causal future, nor in b’s causal past. We say that such events are “space-like” with respect to b. The event a, which appears in Figures 2 and 3, is of that kind. According to t2, a occurs after b; but according to t3, a occurs before b. Neither a nor b can send subluminal signals to the other.
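The three-way division can be stated compactly: only the frame-independent comparison of |Δt| with |Δx| matters. A small sketch, again in units where c = 1 (the sample coordinates are illustrative assumptions of mine):

```python
def causal_relation(e, b):
    """Classify event e = (t, x) relative to event b = (t, x), with c = 1.
    If |dt| >= |dx| and dt != 0, a signal traveling at or below light
    speed can connect the two events, so their time-ordering is absolute;
    otherwise they are spacelike separated and either order is possible."""
    dt, dx = e[0] - b[0], e[1] - b[1]
    if abs(dt) >= abs(dx) and dt != 0:
        return "future" if dt > 0 else "past"
    return "spacelike"

b = (0.0, 0.0)
# causal_relation((1.0, 0.5), b)   -> "future"
# causal_relation((-1.0, 0.5), b)  -> "past"
# causal_relation((0.1, 1.0), b)   -> "spacelike"
```

The symmetry noted above holds by construction: e is in b’s causal past exactly when b is in e’s causal future, since swapping the two events flips the sign of dt.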

In a similar way, we can consider the regions that are future, past, or space-like with respect to a. This leads us to a more elaborate division of space-time, illustrated in Figure 4. The orange region contains events in the common (causal) past of both a and b, the purple region their common future, and so forth. This colorful diagram hints at a potentially rich subject, the geometry of causation, that could be developed much further. (Specifically, it could add some spice to high-school geometry and analytical geometry courses, and provide material for independent projects.)

Figure 4

As we’ve seen, if a and b are space-like separated, then either can come before the other, according to different moving observers. So it is natural to ask: If a third event, c, is space-like separated with respect to both a and b, can all possible time-orderings, or “chronologies,” of a, b, c be achieved? The answer, perhaps surprisingly, is No. We can see why in Figures 5 and 6. Right-moving observers, who use up-sloping lines of constant time, similar to the lines of constant t2 in Figure 2, will see b come before both a and c (Figure 5). But c may come either after or before a, depending on how steep the slope is. Similarly, according to left-moving observers (Figure 6), a will always come before b and c, but the order of b and c varies. The bottom line: c never comes first, but other than that all time-orderings are possible.
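One can check this bottom line by brute force: scan over observer velocities and collect the chronologies that actually occur. A sketch with illustrative coordinates of my own choosing (c = 1), arranged in the spirit of the figures so that all three events are mutually spacelike:

```python
def chronology(events, v):
    """Order event names by boosted time t - v * x for an observer with
    velocity v (|v| < 1, c = 1).  The positive factor gamma never changes
    an ordering, so it is dropped."""
    return tuple(sorted(events, key=lambda n: events[n][0] - v * events[n][1]))

# Three mutually spacelike events: every pair has |dt| < |dx|.
events = {"a": (0.0, 2.0), "b": (0.0, 0.0), "c": (0.5, 1.0)}

# Scan velocities (odd hundredths of c, to dodge exact ties).
seen = {chronology(events, v / 100.0) for v in range(-99, 100, 2)}
# Four of the six conceivable orderings occur; in none of them is c first.
```

With these coordinates the scan yields exactly abc, acb, bac, and bca: every ordering is realized except the two that put c first, matching the claim in the text.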

Figure 5

Figure 6

These exercises in special relativity are entertaining in themselves, but there are also serious issues in play. They arise when we combine special relativity with quantum mechanics.

Two distinct kinds of difficulties arise as we attempt to combine those two great theories. They are the difficulties of construction and the difficulties of interpretation.

The difficulties of construction dominated 20th century physics. (One measure of this: By my conservative count, six separate Nobel Prizes, shared by 12 individuals, were awarded primarily for advances on this problem.) The tough issues that arose here, in the construction of relativistic quantum theories, are in some sense technical. Combining special relativity and quantum mechanics leads to quantum field theory, and the equations of quantum field theory are dicey to solve. If you try to solve those equations in a straightforward way, you find nonsensical results—for example, infinitely strong forces. In fact it emerged, after many adventures, that most quantum field theories really don’t make sense! They are mathematically inconsistent. Those that do make sense can only be defined using tricky mathematical procedures. Passing in silence over that epic, we reach the bottom line: After heroic struggles, the difficulties of construction were eventually (mostly) overcome, and today quantum field theory forms the foundation of our immensely successful Standard Model.

The difficulties of interpretation have a different flavor. Closely related to our issues with time-orderings, they arise because labeling events by time plays an absolutely central role in the conventional formulation of quantum mechanics.

The quantum state of the world is represented by its wave function, which is a mathematical object defined on surfaces of constant time. Furthermore, measurements “collapse” the wave function, introducing a drastic, discontinuous change. Suppose, for example, that we decide to use t1 as our time. Then a measurement at t1 = 0 changes the wave function everywhere at all times subsequent to t1 = 0.

But what if we had chosen t2 or t3? The occurrence of that sort of collapse implies a drastic difference between the formal descriptions of quantum mechanics based on different choices of reference frame. If we work with t2, then measurements at b will collapse the wave function seen at a, since b comes before a. For the same reason, measurements at a do not collapse the wave function at b. But if we work with t3, since the time-ordering between a and b is reversed, the situation is just the opposite!

Yet special relativity demands that either t2 or t3 can be used in a valid description of nature. Have we discovered a contradiction?

Not necessarily.

The point is that quantum-mechanical wave functions are tools for describing nature, rather than nature herself. Mathematically, quantum-mechanical wave functions contain a lot of excess (unobservable) baggage and redundancy, so that wave functions that look drastically different can nevertheless give the same results for most, or possibly all, feasible physical observations.

While it falls short of outright contradiction, there remains, it seems fair to say, considerable tension at the interface between quantum mechanics and special relativity. During the long struggle to construct quantum field theories, several physicists speculated that the infinitely strong forces they calculated were surface symptoms of a fundamentally rotten core, whose rottenness was indicated more directly by the difficulties with interpretation. It didn’t work out that way. We have been able to construct theories that are not only consistent but also immensely successful, despite their near-contradictions and excess baggage.

As new technologies for probing the nano-world render possible what were once purely thought experiments, we have a wonderful new opportunity to ask creative questions, confronting the paradoxes of quantum mechanics head on. Maybe we’ll find some surprising answers—that’s what makes paradoxes fun.

1 There are more speculative possibilities: that time exhibits cycles, or branches, or even has several dimensions of its own. In general relativity we let time bend together with space, and in describing the Big Bang and black holes we encounter singularities, where time begins or ends. This is fascinating stuff! But “flat, unidirectional” time is the basis for almost all practical physics, and it already provides rich food for thought, so that’s what I’ll be considering here.

Go Deeper

arXiv: Constraints on Chronologies
Read the author's technical paper on chronologies, written with theoretical particle physicist Alfred Shapere.

FQXi: Cheating the Causal Game
In this article, discover how researchers at the University of Vienna are deconstructing the physics of cause and effect.

Relativity for the Questioning Mind
Explore the fundamentals of relativity in this book by Oberlin College physics professor Dan Styer.

## Thanks, Mom! Finding the Quantum of Ubiquitous Resistance

Jul 4

CERN’s July 4 declaration of victory in the quest to find the Higgs particle (or something very much like it) is a many-splendored triumph. It confirms, as it completes, the Standard Model of fundamental physics. It hints at the splendid new prospect of supersymmetry while debunking rival speculations. Most fundamentally, it reaffirms our scientific faith that nature works according to precise yet humanly comprehensible laws—and, importantly, rewards our moral commitment to testing that faith rigorously.

Inside the tunnel of the Large Hadron Collider, particles speed through a 27-kilometer ring of superconducting magnets. Credit: David Parker/Photo Researchers, Inc.

A few months ago, when the evidence was suggestive but not yet conclusive, I discussed here the nature of the Higgs particle, and what its discovery would mean for the enterprise of physics. Now I will supplement that discussion, focusing on what it took to win the victory.

Physicists had to overcome three challenges to discover the Higgs particle: producing it, detecting it, and proving that they really had produced and detected it.

To put these challenges in context, let me introduce another perspective on what the Higgs particle is: The Higgs particle is The Quantum of Ubiquitous Resistance. I’m referring here to a universe-filling medium that offers resistance to the motion of many elementary particles, thus producing what we commonly think of as their mass.

The Standard Model of physics—our best-yet model of the matter and forces that make our universe—requires, for consistency of its equations, that many of its ingredients be particles with zero mass. These particles should travel at the speed of light in empty space, but in reality, some of them—like quarks, leptons, and W and Z bosons—travel more slowly. What is slowing them down?

Our Standard Model comes equipped with a Standard Reconciliation: Space is never empty! Space is filled with a material that resists the motion of those particles. Over the past decades, physicists have deduced many of the properties of the Ubiquitous Resistance by observing its effects on the forms of matter we can see. They even gave it a name: the Higgs field. But none of the known particles had the right properties to build up the Ubiquitous Resistance. So theorists drew up the specifications for a particle that would do the job. They called it the Higgs particle.

But wishing doesn’t make it so. Only experiments can grant (or deny) theorists’ wishes. With that in mind, let us consider the three challenges facing experimental observation of the Higgs particle.

Producing it

Any physical material, hit hard enough, is bound to break. The smallest possible shard reveals the most basic unit of the material: its “quantum.” For the Ubiquitous Resistance, that quantum is the Higgs particle.

To break off a piece of the Ubiquitous Resistance, though, requires producing disturbances of unprecedented intensity, albeit confined to tiny volumes of space for tiny intervals of time. That is what the Large Hadron Collider (LHC) is all about. By accelerating beams of protons to extremely high energy, and bringing them into collision, the LHC creates “Little Bangs” systematically.

Detecting it

Once you’ve produced a Higgs particle, the next challenge is to detect it. This isn’t as easy as it sounds, as the Higgs rapidly decays into other particles. We can look for those secondary particles, but most of them are useless for detection because they are produced more abundantly by other processes. The Higgs’ tiny signal competes with a cacophony of noise. The most likely mode of Higgs decay, into a bottom quark and its antiparticle, in particular, is diluted by garden-variety strong interaction processes, which produce those particles in droves.

So detection requires cunning.

Some decay processes that we might be able to detect are sketched below. Each has its own advantages and limitations, and each adds information, so experimenters have pursued them all. (For more information on the characters you’ll encounter below—W bosons, Z bosons, and the rest of the particle zoo—this is a good starting point.)

#1: Photon pairs

After a Higgs particle is created, quantum fluctuations convert it into a particle-antiparticle pair, which recombines into two photons.

The observable signal, in this case, is the pair of photons emerging from the decay. From the energy and momentum of the two photons, one can reconstruct the mass of the Higgs particle. This is significant because there are many other ways to make photons in collisions at the LHC that don’t require the production and decay of Higgs particles. The Higgs signal would be swamped, if not for the redeeming feature that randomly produced photons will “add up” to indicate random masses for their hypothetical progenitors, and only by rare accident land on the Higgs particle mass, whatever it happens to be. The signature of the Higgs, then, is an excess of photon pairs in a very narrow mass range. The mass where there’s an excess is fingered as the Higgs particle mass. Since the energy and momentum of photons can be measured accurately, this method gives an excellent measurement of the Higgs particle mass.
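The reconstruction step is simple relativistic kinematics: the inferred mass of the hypothetical progenitor is the invariant mass of the photon pair. A sketch in natural units (energies in GeV, c = 1); the sample momenta are illustrative, not real data:

```python
import math

def diphoton_mass(p1, p2):
    """Invariant mass of a photon pair, each photon given as a four-vector
    (E, px, py, pz) with c = 1.  m^2 = (E1 + E2)^2 - |p1 + p2|^2; for a
    real photon, E equals the magnitude of its momentum."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back 62.5 GeV photons reconstruct a parent of mass 125 GeV
# at rest, while unrelated photon pairs scatter over a broad mass range.
```

A genuine parent particle shows up as a pile-up of pairs at one reconstructed mass; randomly paired photons form a smooth background spread over many masses.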

The main limitation of this technique, besides the unavoidable background “noise,” is the fact that this decay process is quite rare compared to other possibilities.

#2: W boson+ (Higgs -> bottom-antibottom)

Here is one of those other possibilities: In this case, the Higgs particle is produced as a byproduct of the creation of a W boson. The W boson itself decays, but in ways that experimentalists are thoroughly familiar with, and can often identify with confidence. The presence of the W boson, itself a relatively rare occurrence, helps this class of event to stand out above the strong interaction background. Thus the most common Higgs decay, into bottom-antibottom pairs, becomes discernible when you demand an accompanying W.

There are two more possibilities:

#3: Higgs -> WW -> lepton + antilepton + neutrino + antineutrino

#4: Higgs -> ZZ -> 2 leptons + 2 antileptons

In Processes 3 and 4, the observed particles are leptons (l), which is just another way of saying that they might be either electrons or muons, and their antiparticles; the ghostly neutrinos escape detection. The Higgs boson barely interacts with those light particles, but it can communicate with them indirectly, through fluctuations in the W and Z boson fields (a.k.a. “virtual particles”). Process 4 is special, in that it is the only case where the background is so small that individual events, as opposed to enhanced probabilities, can be ascribed with confidence to Higgs particles.

By measuring the rates of all of these processes, one can determine how powerfully the Higgs communicates with many different things: two gluons, two photons, two Z bosons, two W bosons, and bottom-antibottom pairs. Their different rates are logically independent, of course, but theory connects them.

Proving it

This is the final challenge. Finding the Higgs boson depends on assuming that the Standard Model is reliable, so we can work around the “background noise”. Here years of hard bread-and-butter work at earlier accelerators—especially the Large Electron-Positron Collider (LEP), which previously occupied the same CERN tunnel in which the LHC resides today, and the Tevatron at Fermilab, as well as at the LHC itself—pays off big. Over the years, many thousands of quantitative predictions of the Standard Model have been tested and verified. Its record is impeccable; it has earned our trust.

The next step is to search for data that the Standard Model can’t explain, like excesses of the decay products discussed earlier, and compare them against our predictions for yields from a hypothetical Higgs boson. Insofar as these quantitative predictions match the observations, which they do, one can speak of proof.

Future observations may reveal new effects, or small quantitative discrepancies in the effects already observed. (I’ll be surprised if they don’t!) But the original, simplest sketch of what The Quantum of Ubiquitous Resistance could possibly be resembles reality enough to pass muster, at least as its first draft.

Finally, I’d like to reprise the conclusion of my earlier piece, in which I considered what might happen if the hints of the Higgs did not pan out:

And if not?

I’ll be heartbroken. Mother Nature will have shown that Her taste is very different from mine. I don’t doubt that it’s superior, but I’ll have to struggle to understand it.

Thanks, Mom!

## The Higgs Boson Explained

Jun 28

Editor's note: An earlier version of this article originally appeared here on December 15, 2011. We are featuring it again, updated for context, in anticipation of the July 4, 2012 announcement on the latest results from the ATLAS and CMS instruments.

What is all the buzz about the Higgs boson, aka the "God particle"?

The construction of the ATLAS detector at the LHC. ATLAS is one of the detectors involved in the hunt for the Higgs. Credit: Martial Trezzini/epa/Corbis

“Higgs” is Peter Higgs, a professor at Edinburgh, who in 1964 made some interesting suggestions along the lines I’ll discuss below. The name “Higgs particle,” though standard, is not entirely fair, for several reasons: the basic idea has a significant pre-history; what’s original with Higgs has co-claimants; and the modern, mature version of the theory involves many ideas that were not anticipated in 1964. I’ll leave those issues for historians of science and the Swedish Academy to sort out.

God on the other hand deserves full credit, or blame.

Herewith a brief introduction, in question and answer format, for the buzz-curious.

What’s the basic idea?

Suppose that a species of fish evolved to the point that some of them became physicists, and began to ponder how things move. At first the fish-physicists would, by observation and measurement, derive very complicated laws. But eventually a fish-genius would imagine a different, ideal world ruled by much simpler laws of motion—the laws we humans call Newton’s laws. The great new idea would be that motion looks complicated, in the everyday fish-world, because there’s an all-pervasive medium—water!—that complicates how things move.

Modern physics proposes something very similar for our world. We can use much nicer equations if we’re ready to assume that the “space” of our everyday perception is actually a medium whose influence complicates how matter is observed to move.

Are there precedents for such an outrageous dodge?

Yes. In fact it’s a time-honored, successful strategy.

For example: In its basic equations, Newtonian mechanics postulates complete symmetry among the three dimensions of space. Yet in everyday experience there’s a big difference between motion in vertical, as opposed to horizontal, directions. The difference is ascribed to a medium: a pervasive gravitational field.

A much more modern example occurs in quantum chromodynamics (QCD), our fundamental theory of the strong force between quarks and gluons. There we discover that the universe is filled with a medium, the sigma (σ) field, that forms a sort of cosmic molasses for protons and neutrons. The σ field slows protons and neutrons down. Allowing a bit of poetic license, we can say that the σ field gives protons and neutrons mass. Many consequences of the σ field have been calculated and successfully observed, so that to modern physicists it is now every bit as real as Earth’s gravity field. But the σ field exists everywhere and everywhen; it is not tied to Earth.

What’s the new idea, then?

In the theory of the weak force, we need to do a similar trick for less familiar particles, the W and Z bosons. We could have beautiful equations for those particles if their masses were zero; but their masses are observed not to be zero. So we postulate the existence of a new all-pervasive field, the so-called Higgs condensate, which slows them down. This proposal, which here I’ve described only loosely and in words, comes embodied in specific equations and leads to many testable predictions. This proposal has been resoundingly successful.

What is the Higgs particle, conceptually?

Trouble is, no known form of matter has the right properties to make the Higgs condensate. In order to build that medium, we need to add to our inventory of world-ingredients. The simplest, “minimal” implementation introduces exactly one new elementary particle: the Higgs particle.

What is the Higgs particle, specifically?

There’s a quotation I love from Heinrich Hertz, about Maxwell’s equations, that’s relevant here.

To the question: "What is Maxwell’s theory?" I know of no shorter or more definite answer than the following: "Maxwell’s theory is Maxwell’s system of equations."

Similarly, Higgs particles are the entities that obey the equations of Higgs particle theory. Those equations prescribe everything about how Higgs particles move, interact with other particles, and decay—with just one, albeit glaring, exception: The equations do not determine the mass of the Higgs particle. The theory can accommodate a wide range of values for that mass.

What is a Higgs particle, operationally?

A Higgs particle is a highly unstable particle, visible only through its decay products. It has zero electric charge, and—unlike all other known elementary particles—no intrinsic rotation, or “spin.” These null properties reflect the fact that many Higgs particles, uniformly distributed through space, build up the Higgs condensate, which we sense as emptiness or pure vacuum. (Although individual Higgs particles are highly unstable, a uniform distribution of them is stabilized through their mutual interactions. Visible Higgs particles are disturbances above that uniform background.)

As mentioned before, theory does not predict what mass a Higgs particle should have. Masses anywhere from 10 Giga-electron Volts (GeV) to 800 GeV might be accommodated, though problems start to emerge near either extreme. (Physicists commonly use GeV as the unit of mass for elementary particles. One GeV is close to, but slightly more than, the mass of one proton.)

Because Higgs particles are unstable, to study them one must produce them. That requires concentrating lots of energy into a very small space to create enormous energy density. The required concentration of energy is achieved at particle colliders. At the LHC, two counter-rotating beams of high energy protons are made to pass through one another, or cross, at a few points. At each crossing some fraction of the protons, which are moving in opposite directions at very close to the speed of light, collide. The collisions produce fireballs that explode into tens or hundreds of stable or near-stable particles including electrons and positrons, pi mesons, photons, protons and antiprotons, and several other possibilities.

Known physical processes account for the vast majority of this debris. Production and decay of Higgs particles, if they exist, will produce some additional debris. To get evidence for the existence of Higgs particles, therefore, one must identify some distinctive patterns in the observed debris that could result from Higgs particle decays but which are difficult to produce with conventional processes.

Putting it another way: If you’re looking for needles in a haystack, you’d better have a really good grip on what hay can look like—and it helps to look for needles that are hard to mistake!

Several patterns play an important role in the analysis, but I’ll discuss just one—a crucial one—to give a flavor of what’s involved. One process of Higgs particle production and decay is depicted in this sketch:

The sequence of events in the sketch above unfolds reading upwards. Gluons inside the fast-moving protons convert, by quantum fluctuations, into a “virtual” top quark and its antiparticle. The virtual top quark and antiquark swiftly recombine into a Higgs particle. Then the Higgs particle decays by a similar mechanism: quantum fluctuations convert it into a particle-antiparticle pair, which recombine into two photons. At the end of the day, it is those two photons that are observed. (I’m particularly fond of this exotically beautiful quantum process, which I discovered theoretically in 1977.) The point is that more conventional processes—that is, processes that don’t involve Higgs particles—only rarely produce two energetic photons. Thus the calculated contribution from Higgs particles, should they exist, can be discerned above the background.

What did we know about the Higgs before July 4, 2012?

Prior to the July 4 announcement, we already knew that a very large range of potential mass-values had been ruled out. Only a small window, between 115 and 127 GeV, remained viable.

On the other hand, an excess of events, above expectations from known processes, had been observed in the two-photon channel mentioned above and (less clearly) in several others. The excesses are compatible with, and could be explained by, the existence of Higgs particles with mass close to 125 GeV.

The observed excess might also be compatible with a statistical fluctuation in the background processes—e.g., an improbable run of normal processes leading to photon pairs, comparable to rolling four consecutive sixes at dice.
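For scale, the dice comparison is easy to make quantitative:

```python
# Probability of rolling four consecutive sixes with a fair die.
p = (1.0 / 6.0) ** 4   # = 1/1296, about 0.00077

# A background fluctuation of comparable improbability would occur,
# on average, about once in 1,300 tries.
```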

What will it mean if we find the Higgs?

First of all, it will be a dazzling triumph for theoretical physics. Physicists will have used intricate equations and difficult calculations to predict not only the mere existence of the Higgs particle, but also (given its mass) its rate of production in the complex, extreme conditions of ultra high energy proton-proton collisions. Those equations will also have accurately rendered the relative rates at which the Higgs particle decays in different ways. Yet the most challenging task of all may be computing the much larger, competing background “noise” from known processes, in order to make the Higgs’ “signal” stand out. Virtually every aspect of our current understanding of fundamental physics comes into play, and gets a stringent workout, in crafting these predictions.

The animating spirit of research in fundamental physics, captured in the maxim “Today’s sensation is tomorrow’s calibration,” will not rest in that triumph, however. A Higgs particle at mass 125 GeV would portend a new level of fundamental understanding and discovery. Let me explain why.

Within our current theories of the fundamental interactions, embodied in the so-called Standard Model, the Higgs particle mass might, as previously mentioned, have any value within a wide range. Yet there are good reasons to suspect that despite its many virtues, the Standard Model is incomplete. Notably, its equations postulate four different forces (strong, weak, electromagnetic and gravitational) and six different materials they act on. It would be prettier to have a more coherent, unified theory. And in fact there are beautiful, concrete proposals for unified field theories, within which we have just one force and just one kind of material. But to make the unified theory work quantitatively, in detail, we need to expand the equations of the Standard Model so that they integrate a concept called supersymmetry.

Supersymmetry has many aspects and ramifications, but two are most relevant here. First, supersymmetry (for experts: more specifically, focus point supersymmetry) predicts that the Higgs particle mass should lie in the range 120-130 GeV. Finding Higgs particles with mass in that range would give strong circumstantial evidence both for supersymmetry and for the unification that supersymmetry enables.

Second, supersymmetry predicts the existence of many additional new fundamental particles, besides the Higgs particle, that should be accessible to the LHC. So if supersymmetry is right, the LHC will have many more years of brilliant discovery in front of it.

And if not?

I’ll be heartbroken. Mother Nature will have shown that Her taste is very different from mine. I don’t doubt that it’s superior, but I’ll have to struggle to understand it.

## After a Golden Age

Jun 6

Are physicists victims of their own success? They have strived to find the fundamental laws of matter, and in recent years they’ve done it. The so-called Standard Model of physics provides us with a thorough census of the subatomic particles that combine to make everything we see, and its equations define a complete mathematical explanation of how they behave. The golden age, it seems, has come and gone.

Or has it?

A millennium from today, historians will look back at the twentieth century primarily as the age of a rich flowering of science. Within a few decades molecular biology unveiled the body and soul of the genetic code, cosmology reconstructed the history of the universe, and geophysics disclosed a home planet more dynamic than ever previously imagined. Yet the biggest revolution of all, from which all those others drew, was ironically the smallest: the conquest of microphysics.

Three icons of twentieth century science. Clockwise: The double helix structure of DNA, revealed by analysis of its x-ray diffraction pattern (credit: Richard Wheeler via Wikimedia); anisotropies in the microwave background radiation, providing a picture of the very early universe (credit: WMAP Science Team, NASA); motion of the continents, proved and then mapped by analysis of magnetic field reversals at mid-ocean ridges. In each case, the enabling sensitive instruments and technologies depend upon profound understanding of the properties of matter, based on microphysics (credit: EMAG2)

History does not come with time-stamps affixed, but two epochal experiments roughly bracket the High Golden Age of microphysics. In 1911, Ernest Rutherford decoded atoms experimentally, revealing that each has a tiny nucleus containing all its positive charge and almost all its mass. That nucleus is surrounded by electrons, which are bound to it by electric forces. Just two years later, Niels Bohr introduced strange new ideas about the laws of motion in the microworld. His breakthrough matured into quantum mechanics.

Quantum mechanics broke the code of the microworld, but it took decades to master its text. Finally, in the “November Revolution” of 1974, two separate experimental groups announced the discovery of a striking set of new particles, the charmonium system. Their discoveries provided a brilliant confirmation of, and a fertile proving ground for, the collection of theoretical ideas we now call the Standard Model. The charm quarks (and their antiquarks) that make the charmonium particles rounded out the theory of the weak interaction, and the forces that bind them were just right for the theory of the strong interaction. Those theories tamed nuclear physics, and together with electromagnetism and gravity they complete the description of matter.

The Standard Model provides, we believe (after very thorough, rigorous, quantitative testing!), a complete mathematical explanation of how subatomic particles combine to make atoms, atoms to make molecules, and molecules to make materials, and how all this stuff interacts with light and radiation. Its equations are comprehensive yet economical, symmetrical but spiced with interesting detail, austere yet strangely beautiful. The Standard Model provides a complete, secure foundation for astrophysics, materials science, chemistry, and physical biology. Good stuff!

The Standard Model marks the ultimate triumph of reductionism. As Isaac Newton put it, we analyze matter by finding complete and simple laws governing the behavior of its elementary components, and then use those laws to synthesize the properties of macroscopic objects.

Triumph on that scale has a dark side: It’s a tough act to follow. By the late 1980s, articles and books with titles like “The End of Physics” began to appear. At the same time, “Theory of Everything” hyperbole erupted.

Neither reaction, however unseemly, was entirely baseless. The achievements of this golden age did mark the end of a certain special—and especially wonderful—kind of physics. After plumbing the bottom of ordinary matter (that is, physical material that’s reasonably accessible and usefully stable), where do you go? As physicists deciphered the atom, they revolutionized chemistry and enabled microelectronics; as they deciphered the nucleus, they revolutionized not only astrophysics and physical cosmology, but also bomb technology and medicine. There is no realistic prospect that the sort of frontier physics explored at the Large Hadron Collider, as esoteric and expensive as it is marvelous, will yield practical fruit. (This is not to say that the indirect value of this work, which serves as “the moral equivalent of war” for many talented, enthusiastic, creative young seekers, will not repay the money invested in it. It will, handsomely.) Its application in the natural world is likely to be restricted to the extremely early universe, and (maybe) a few super-extreme astrophysical situations, like Hawking’s black hole explosions.

But lamenting the passing of a golden age, or professing to reanimate it, are exercises in nostalgia. A healthier attitude, and an attitude that is truer to the unselfconscious exploratory spirit of the golden age itself, is to engage with its legacy of unanswered challenges and new opportunities. What a legacy it is, and what opportunities there are!

For the Standard Model, despite its practical success in describing ordinary matter, leaves many loose ends and unanswered questions. One of its ingredients, the Higgs particle, has not yet been observed directly. That embarrassment may soon be remedied, but other flaws run deeper. Its equations remain lopsided in peculiar ways. They beg to be embedded in a larger, still more symmetric theory. There are, in fact, excellent ideas for advancing toward such unification. Those ideas suggest new lines of experimental investigation, notably the search for proton decay and for supersymmetric particles. The other interactions, and indeed quantum mechanics itself, have not yet been organically united with gravity. String theory might help with those problems, but it’s clear that crucial ideas still await discovery.

We’d also like to understand why the laws of microphysics appear so nearly unchanged if we run time backwards. The only known explanation predicts the existence of a remarkable new class of particles called axions. These wraithlike cousins of photons, more elusive even than neutrinos, plausibly provide the astronomical dark matter. And if axions don’t—what does?

These and other unanswered challenges amply refute the notion that physicists are, in any meaningful sense, close to having a “Theory of Everything” (or that we’ve reached “The End of Physics”).

Yet the biggest challenges, I think, are of a different kind. The art of using our comprehension of microphysics is an open-ended invitation to creativity. Music-making doesn’t end when you’ve learned how your instrument works—it begins.

Can we engineer quantum computers, and through them fashion truly alien forms of intelligence? Can we tune in to the messages the universe itself broadcasts in gravity waves, in neutrinos, and in axions? Can we understand the human mind, molecule by molecule, and systematically improve it? To ask these questions is to discover, in the ripeness of one golden age, the seeds of new ones.

## Beautiful Losers

Dec

29

Are beauty and truth two sides of the same coin? It is charming to believe so. As Nobel Prize laureate Paul Dirac, who helped lay the mathematical groundwork for quantum mechanics, put it:

It seems that if one is working from the point of view of getting beauty in one's equations, and if one has really a sound insight, one is on a sure line of progress. If there is not complete agreement between the results of one's work and experiment, one should not allow oneself to be too discouraged, because the discrepancy may well be due to minor features that are not properly taken into account and that will get cleared up with further developments of the theory.

The poet John Keats expressed it more concisely:

Beauty is truth, truth beauty – that is all
Ye know on earth, and all ye need to know.

But, in science, does a beautiful hypothesis necessarily lead to deep truth about nature?

Several famous success stories suggest that it does, at least in physics:

James Clerk Maxwell arrived at his celebrated system of equations for electromagnetism by codifying what was thought to be known experimentally about electricity and magnetism, noting a mathematical inconsistency, and fixing it. In doing so, he moved from truth to beauty. The Maxwell equations of 1861, which survive intact as a foundation of today's physics, are renowned for their beauty. The normally sober Heinrich Hertz, whose experimental work to test Maxwell's theory gave birth to radio and kickstarted modern telecommunications, was moved to rhapsodize:

One cannot escape the feeling that these mathematical formulae have an independent existence and an intelligence of their own, that they are wiser than we are, wiser even than their discoverers, that we get more out of them than was originally put into them.

Albert Einstein, by contrast, arrived at his equations for gravity—the general theory of relativity—with minimal guidance from experiment. Instead he looked for beautiful equations. After years of struggle, in 1915 he found them. At first, and for decades afterwards, few testable predictions distinguished Einstein's new theory of gravity from Newton's venerable one. Now there are many such tests, and it is amply clear that Einstein moved from beauty to truth.

Yet even in physics, the record is more mixed than is commonly known. Despite Keats and Dirac, beauty's seductions don't always give birth to truth. There have been fascinating theories that are both gorgeous and wrong: Beautiful Losers.

Like surgeons, physicists bury their failures. But the most beautiful of the Beautiful Losers deserve a better fate than oblivion, and here they'll receive it. I've written brief accounts of three Beautiful Losers: Plato's Geometry of Elements, Kepler's Harmonic Spheres, and Kelvin's Vortex Atoms.

Plato’s Geometry of Elements: Plato believed that he could describe the Universe using five simple shapes. These shapes, called the Platonic solids, did not originate with Plato. In fact, they go back thousands of years before Plato; you can find stone models (perhaps dice?) of each of the Platonic solids in the Ashmolean Museum at Oxford dating to around 2000 BC. But Plato made these solids central to a vision of the physical world that links ideal to real, and microcosm to macrocosm in an original, and truly remarkable, style. Read more

Kepler’s Harmonic Spheres: Like Plato, the German astronomer Johannes Kepler believed that five Platonic solids provided an essential blueprint for our universe. Six planets were known to Kepler, and he believed that they were carried around on nested globes that he called the celestial spheres. Kepler reasoned that five solids could correspond to six planets, if the solids—or more precisely, their bounding surfaces—marked the spaces between planetary spheres. He described this elegant construction in his Mysterium Cosmographicum in 1596. Read more

Kelvin's Vortex Atoms: A tornado is just air in motion, but its ominous funnel gives an impression of autonomous existence. A tornado seems to be an object; its pattern of flux possesses an impressive degree of permanence. The Great Red Spot of Jupiter is a tornado writ large, and it has retained its size and shape for at least three hundred years. The powerful notion of vortices in fluids abstracts the mathematical essence of such objects, and led William Thomson, the 19th century physicist whose work earned him the title Lord Kelvin, to ask: Could atoms themselves be vortices in an ether that pervades space? Read more

It's wonderful, and comforting, that each of my Beautiful Losers, though wrong, was in its own way fruitful. Today more than ever physicists working at the frontiers of knowledge are inspired by beauty. In the alien realms of the very large, the very small, and the extremely complex, experiments can be difficult to perform and everyday experience offers little guidance. Beauty is almost all we've got!

## Beautiful Losers: Plato's Geometry of Elements

Dec

29

This essay is part of the series Beautiful Losers.

Plato believed that he could describe the Universe using five simple shapes. These shapes, called the Platonic solids, did not originate with Plato. In fact, they go back thousands of years before Plato; you can find stone models (perhaps dice?) of each of the Platonic solids in the Ashmolean Museum at Oxford dating to around 2000 BC, as pictured below. But Plato made these solids central to a vision of the physical world that links ideal to real, and microcosm to macrocosm in an original, and truly remarkable, style.

AN1927.2727-31 Neolithic Carved Sandstone Balls, Copyright Ashmolean Museum, University of Oxford.

Let me explain, first, what the Platonic solids are. To begin, consider something simpler: regular polygons. Regular polygons, by definition, are two-dimensional shapes bounded by sides of equal length, each making the same angles with its neighbors. Equilateral triangles, squares, regular pentagons, and so on are all regular polygons. Platonic solids are the three-dimensional analog of regular polygons, and prove to be far more interesting. Platonic solids are bounded by regular polygons, all of the same size and shape. One can prove mathematically that there are exactly five Platonic solids. Here they are:

The Platonic solids and their proposed identification with fundamental world-elements

The tetrahedron has four triangular faces, the cube six square faces, the octahedron eight triangular faces, the dodecahedron twelve pentagonal faces, and the icosahedron twenty triangular faces. Plato proposed that four of these solids built the Four Elements: sharp-pointed tetrahedra give the sting of Fire, smooth-sliding octahedra give easily-parted Air, droplety icosahedra give Water, and lumpish, packable cubes give Earth. The dodecahedron, at last, is the shape of the Universe as a whole. Later Aristotle emended Plato's system, suggesting that dodecahedra provide a fifth essence—the space-filling Ether.
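The claim that exactly five such solids exist can be checked by brute force. Here is a minimal sketch (my illustration, not anything from Plato or Euclid), using only the vertex-angle condition: q regular p-gons can meet at a convex corner only if their angles total less than 360 degrees, which simplifies to (p - 2)(q - 2) < 4.

```python
# Enumerate candidate pairs (p, q): q regular p-gons meeting at each vertex.
# A convex corner requires the q face angles to total less than 360 degrees:
#   q * (p - 2) * 180 / p < 360, which simplifies to (p - 2) * (q - 2) < 4.
solids = []
for p in range(3, 10):        # faces are regular p-gons
    for q in range(3, 10):    # q faces meet at each vertex
        if (p - 2) * (q - 2) < 4:
            solids.append((p, q))

names = {(3, 3): "tetrahedron", (4, 3): "cube", (3, 4): "octahedron",
         (5, 3): "dodecahedron", (3, 5): "icosahedron"}
for p, q in solids:
    print(f"{q} {p}-gons at each vertex: {names[(p, q)]}")
```

Only five pairs survive the inequality, and they are precisely the five solids listed above.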

Plato's ideas lent dignity and grandeur to the study of geometry, and greatly stimulated its development. The thirteenth and final book of Euclid's Elements, the grand synthesis of Greek geometry that is the founding text of axiomatic mathematics, culminates with the construction of the five Platonic solids, and a proof that they exhaust the possibilities. Scholars speculate that Euclid planned the Elements with that climax in mind from the start.

From a modern scientific perspective, of course, Plato's mapping from mathematical ideals to physical reality looks hopelessly wrong. The four (or five) ancient “elements” are not simple substances, nor are they usable building blocks for constructing the material world. Today's rich and successful analysis of matter involves entirely different concepts. And yet...

In its general approach, and its ambition, Plato's utterly mistaken theory anticipated the spirit of modern theoretical physics. His program of describing the material world by analyzing (“reducing”) it to a few atomic substances, each with simple properties, existing in great numbers of identical copies, coincides with modern understanding.

Deeper still penetrates his insight that symmetry defines structure. Plato sensed enormous potential in the fact that asking for perfect symmetry leads one to discover a small number of possible structures. On that foundation, helped by a few clues from experience, the outlandish synthesis his philosophy called for, realizing the World as Ideas, might be achievable. And clues were there to be found: the near-coincidence between the number of perfect solids (five) and the number of suspected elements (four); suggestions of how observed qualities might reflect underlying shapes (e.g., the sting of fire from the sharp points of tetrahedra). One must also admire the boldness of genius in seeing an apparent defect in the theory—five solids for four elements—as an opportunity for crowning creation, either with the Universe as a whole (Plato) or with space itself (Aristotle).

Modern physicists, when seeking equations to describe the unfamiliar laws of the microcosm, must make guesses based on fragmentary information. Optimistically—and lacking constructive alternatives—they have turned, as Plato did, to symmetry as their guide. Symmetry of equations is perhaps a less familiar idea than symmetry of shapes, but there is nothing obscure or mystical about it. We say an equation, like a shape, displays symmetry when it allows changes that make no change. So for instance the equation

X = Y

has a nice symmetry to it, because exchanging X for Y changes it into this:

Y = X

and this transformed equation expresses exactly the same content as the original. On the other hand X = Y + 2, say, turns into Y = X + 2, which expresses something else entirely. As this baby example demonstrates, symmetric equations can be rare and special, even when the symmetry involves quite simple transformations.
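The "same content" criterion can even be made mechanical. In this toy sketch (my illustration, not part of the essay), two equations have the same content if they hold for exactly the same values of X and Y, checked over a grid of sample points:

```python
# Toy "symmetry test": an equation f(x, y) = 0 has swap symmetry if the
# swapped equation holds for exactly the same (x, y) values as the original.
def same_content(f, g, samples):
    return all((f(x, y) == 0) == (g(x, y) == 0) for x, y in samples)

samples = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]

# X = Y, written as X - Y = 0, and its swapped form Y = X:
eq = lambda x, y: x - y
eq_swapped = lambda x, y: y - x
print(same_content(eq, eq_swapped, samples))        # True: symmetric

# X = Y + 2 versus its swapped form Y = X + 2:
eq2 = lambda x, y: x - y - 2
eq2_swapped = lambda x, y: y - x - 2
print(same_content(eq2, eq2_swapped, samples))      # False: not symmetric
```

The first pair passes the test; the second fails, exactly as the prose argues.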

The equations of interest for physics are considerably richer, of course, and the "changes that make no change" we hope they allow are much more extensive and elaborate. But the central inspiration remains, as it was for Plato, the hope that symmetry defines a few interesting structures, and that Nature chooses one (or all!) of those most beautiful possibilities.

Plato's Beautiful Loser was, in hindsight, a product of premature, and immature, ambition. He tried to leap directly from beautiful mathematics, some imaginative numerology, and primitive, cherry-picked observations to a Theory of Everything; in this, his ambition was premature. Plato also failed to draw out specific consequences from his ideas, or to test them critically. He sketched an inspiring world-model, but was content to "declare victory" without engaging any serious battles; in this, his ambition was immature. The mature and challenging form of scientific ambition, which aspires to understand specific features of the world in detail and with precision, emerged only centuries later.

## Beautiful Losers: Kepler's Harmonic Spheres

Dec

29

This essay is part of the series Beautiful Losers.

Like Plato, the German astronomer Johannes Kepler believed that the five Platonic solids provided an essential blueprint for our universe. Six planets were known to Kepler, and he believed that they were carried around on nested globes that he called the celestial spheres. Kepler reasoned that five solids could correspond to six planets, if the solids—or more precisely, their bounding surfaces—marked the spaces between planetary spheres. He described this elegant construction in his Mysterium Cosmographicum in 1596.

Kepler proposed that Mercury's sphere supports a circumscribed octahedron, which is inscribed within Venus's sphere. Then come the icosahedron, dodecahedron, tetrahedron, and cube, filling respectively the Venus-Earth, Earth-Mars, Mars-Jupiter, and Jupiter-Saturn gaps. This revelation of cosmic order was, for Kepler, rapturous:

I wanted to become a theologian; for a long time I was unhappy. Now, behold, God is praised by my work even in astronomy.

It immediately suggests a plan of construction that human artists can mimic—as Kepler did himself—in worthy, gorgeous models:

A model of Kepler's solar system, on display at the Technical Museum, Vienna.

Photo by Sam_Wise. Source.

Though equally (that is, completely) wrong, Kepler's conception reaches a higher level, scientifically, than Plato's speculations, for it makes concrete numerical predictions about the relative sizes of planetary orbits, which can be compared to their observed values. The agreement, while not precise, was close enough to convince Kepler he might be on the right track. Encouraged, he set out, courageously, to prove it.
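Kepler's numerical predictions are easy to reconstruct. In the sketch below (my reconstruction; the circumscribed-to-inscribed sphere ratios are standard solid geometry, and the orbital radii are modern semi-major axes in astronomical units—neither appears in the essay), each gap between adjacent orbits should match the ratio for the solid nested there:

```python
import math

# Ratio of circumscribed to inscribed sphere radius for each Platonic solid
# (standard geometry; dual solids share the same ratio).
ratio = {
    "octahedron": math.sqrt(3.0),
    "icosahedron": 1.2584,    # same as its dual, the dodecahedron
    "dodecahedron": 1.2584,
    "tetrahedron": 3.0,       # self-dual
    "cube": math.sqrt(3.0),
}

# Modern semi-major axes, in astronomical units.
au = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.000,
      "Mars": 1.524, "Jupiter": 5.203, "Saturn": 9.537}

# Kepler's nesting: the named solid separates each pair of planetary spheres.
nesting = [("Mercury", "Venus", "octahedron"),
           ("Venus", "Earth", "icosahedron"),
           ("Earth", "Mars", "dodecahedron"),
           ("Mars", "Jupiter", "tetrahedron"),
           ("Jupiter", "Saturn", "cube")]

for inner, outer, solid in nesting:
    predicted = ratio[solid]
    observed = au[outer] / au[inner]
    print(f"{inner}-{outer} ({solid}): predicted {predicted:.2f}, observed {observed:.2f}")
```

Rough agreement everywhere, exact agreement nowhere: close enough, as the essay says, to convince Kepler he might be on the right track.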

Thus Kepler's discovery set him on his storied career in astronomy. As his work developed, however, problems with his original model emerged. Late in the 16th century, the astronomer Tycho Brahe was making exquisitely accurate observations of the motions of the stars and planets. As Kepler strove to do justice to Tycho's work, he discovered that the orbit of Mars is not circular, but follows an ellipse. This and other discoveries fatally undermined Kepler's beautiful system of celestial spheres. (Eventually even the numerology collapsed, with the discovery of Uranus in 1781, though Kepler was spared that ignominy.) Through the arduous, devoted labor his vision inspired he found other regularities among the orbits of the planets—his famous three laws of planetary motion—whose accuracy could not be doubted. In the end he'd arrived at a different universe than the one he first envisioned, and hoped for. He reported back:

I write the book. Whether it is to be read by the people of the present or of the future makes no difference: let it await its reader for a hundred years, if God himself has stood ready for six thousand years for one to study him.

Kepler's hoped-for reader emerged not quite a hundred years later: Isaac Newton. To me, the first illustration of Newton's Principia (1687) worthily transmits the most consequential thought-experiment ever. With a few cartoonish strokes, it both presages a new, universal theory of gravity and embodies a new concept of scientific beauty: dynamic beauty.

Newton's dynamical thought-experiment for the universality of gravitation. Via Wikimedia Commons

Imagine standing atop a tall mountain on a spherical earth, throwing stones horizontally, harder and harder. To keep things clear, remove from thought the damping influence of the atmosphere. At first it is clear what will happen, based on everyday experience: The stones will travel further and further before landing. Eventually, when the initial velocity becomes large enough, a stone will pass over the horizon; then its landing point will circuit the planet. Visualizing the developing situation, as in Newton's diagram, it is easy to imagine the progress of trajectories leading to a circle (duck!). In this way we begin to see how the same force that pulls bodies to Earth might also support orbital motion. We see that orbiting is a process of constantly falling, but toward a (relatively) moving target.

I like to think that the images in this diagram reveal the deep inspiration, pre-mathematical and even pre-verbal, behind young Newton's program of research (parallel to how young Kepler's harmonic spheres inspired his). It contains the germ of universal gravitation: With (imaginary) taller mountains as launching pads, we fill the sky with possible orbits. Might the Moon occupy one of them? And if Earth's Moon, why not Jupiter's moons, or the Sun's planets? And this question too begs for an answer: Throw harder still—what are the shapes of the resulting trajectories? The answer, which Newton's mathematics allowed him to derive, is that they make more and more eccentric ellipses—the very shape that Kepler had used to fit planetary orbits!
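Newton's thought-experiment can be put in numbers. A minimal sketch (mine, using standard textbook values for surface gravity and Earth's radius, neither taken from the essay): "constantly falling" around a circle means the centripetal acceleration v²/R must equal g, which fixes the throwing speed needed to circuit the planet.

```python
import math

# Circular "grazing" orbit at the Earth's surface: the inward pull is g,
# so we need centripetal acceleration v**2 / R = g.
g = 9.81        # m/s^2, surface gravity (standard textbook value)
R = 6.371e6     # m, Earth's mean radius (standard textbook value)

v_orbit = math.sqrt(g * R)            # speed at which the stone never lands
period = 2 * math.pi * R / v_orbit    # time for one circuit of the planet

print(f"orbital speed ~ {v_orbit / 1000:.1f} km/s")   # ~7.9 km/s
print(f"period ~ {period / 60:.0f} minutes")          # ~84 minutes
```

No mortal arm will throw a stone at 7.9 km/s, which is why the experiment stayed a thought-experiment for nearly three centuries.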

In Newton's dynamical approach, the beauty of planetary motion is not embodied in any one orbit, or in the particular set of orbits realized in the Solar System. Rather it is in the totality of possible orbits, which also contains trajectories of falling bodies. Putting it another way: The deepest beauty lies in the equations themselves, not in their particular solutions. Classical physics, initiated by Newton's brilliantly successful celestial mechanics, suggests that it is misguided to expect, as Kepler and Plato did, to find ideal symmetry embodied in any individual physical object, be it the Solar System or an elemental atom. Astronomers in recent years have identified dozens of extrasolar planetary systems, and found that they come in a wide variety of shapes and sizes. And yet...

Physical requirements can privilege, among the infinity of possible solutions to beautiful dynamical equations, special ones—often especially beautiful ones. Consider crystals: They are of course quite real and tangible natural objects; they can be grown in controlled, reproducible conditions; and their form is often highly symmetric. Kepler himself wrote a monograph featuring the six-fold symmetry of snow crystals.

We discover the same thing, spectacularly, in the quantum theory of atoms. An electron interacting with a proton obeys the same species of force law as a planet orbiting the Sun. Schrödinger's equation, no less than Newton's, allows an enormous infinity of complicated solutions. (In fact, much more so!) But if we focus on the solutions with the lowest energies—the solutions that coldish hydrogen atoms will settle into, after radiating—we pick out a special few. And those special solutions exhibit rich and intricate symmetry. They fulfill, as they transcend, the visions of Plato, Kepler, and Newton.
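Those "special few" solutions can be counted. The sketch below (my illustration, using the standard textbook formula E_n = -13.6 eV / n², which is assumed here rather than derived in the essay) lists the lowest hydrogen levels and how many distinct states share each energy:

```python
# Hydrogen's lowest-energy levels, from the standard textbook formula
# E_n = -13.6 eV / n**2 (a standard result assumed here, not from the essay).
# Level n contains n**2 distinct states, labeled by l = 0..n-1 and m = -l..l;
# their shapes carry the rich and intricate symmetry of the special solutions.
RYDBERG_EV = 13.6

for n in (1, 2, 3):
    energy = -RYDBERG_EV / n**2
    states = sum(2 * l + 1 for l in range(n))   # equals n**2
    print(f"n={n}: E = {energy:+.3f} eV, {states} state(s)")
```

The degenerate states within each level are exactly the symmetric patterns pictured in images like the one below.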

The quantum mechanical wave function for a typical stationary state of the hydrogen atom. Image from http://vqm.uni-graz.at, used with permission of Bernd Thaller.

## Beautiful Losers: Kelvin's Vortex Atoms

Dec

29

This essay is part of the series Beautiful Losers.

A tornado is just air in motion, but its ominous funnel gives an impression of autonomous existence. A tornado seems to be an object; its pattern of flux possesses an impressive degree of permanence. The Great Red Spot of Jupiter is a tornado writ large, and it has retained its size and shape for at least three hundred years. The powerful notion of vortices in fluids abstracts the mathematical essence of such objects, and led William Thomson, the 19th century physicist whose work earned him the title Lord Kelvin, to ask: Could atoms themselves be vortices in an ether that pervades space?

Kelvin's idea was inspired by the work of Hermann Helmholtz, who first realized that the core of a vortex—analogous to the eye of a hurricane—is a line-like filament that can become tangled up with other filaments in a knotted loop that cannot come undone. Helmholtz also demonstrated that vortices exert forces on one another, and those forces take a form reminiscent of the magnetic forces between wires carrying electric currents.

To Thomson, these results seemed wonderfully suggestive. At the time, evidence from chemistry and the theory of gases had persuaded most physicists that matter was indeed composed of atoms. But there was no physical model indicating how a few types of atoms, each existing in very large numbers of identical copies—as required by chemistry—could possibly arise.

In seemingly unrelated work, physicists were discovering that space-filling entities are an essential tool in Nature's workshop. Today we accept those entities—known as electric and magnetic fields—on their own terms, as fundamental; but Thomson and his contemporaries believed them to be manifestations of an underlying fluid: an updated version of Aristotle's Aether.

Thomson's bold ambition, and instinct for unity, led him to propose a synthesis: the theory of vortex atoms. The Ethereal fluid, being so fundamental, should be capable of supporting stable vortices, he reasoned. Those vortices, according to Helmholtz's theorems, would fall into distinct species corresponding to different types of knots. Multiple knots might aggregate into a variety of quasi-stable "molecules." All this remarkably fits the heart's desire, in a theory of atoms: Naturally stable building-blocks, whose possibilities for combination seem sufficiently rich to do justice to chemistry.

Thomson himself, a restless intellect, soon moved on to other ideas, but his friend and colleague Peter Guthrie Tait, enthralled by the vortex atom theory, set to work. Thus inspired, he did pioneering work on the theory of knots, producing a systematic classification of knots with up to 10 crossings.

A table of knots. The 'Unknot' was thought to represent hydrogen; to its right, the knot thought to represent carbon. By Jkasd (Own work, Public domain), via Wikimedia Commons

Alas, this beautiful and mathematically fruitful synthesis is, as a physical theory of atoms, a Beautiful Loser. Its failure was caused not so much by internal contradictions—it was too vague and flexible for that!—as by a certain sterility. Above all, it was put out of business by more successful competitors. Eventually the mechanical Ether was discredited by Einstein's relativity, and the triumphant Maxwell equations for electric and magnetic fields do not support vortices. The modern, successful quantum theory of atoms is based on entirely different ideas. And yet...

It's easy to understand the appeal of vortex atoms, not only as fascinating mathematics, but as potential elements for world-building. When we turn from understanding the natural world to designing micro-worlds on our own, we might come to treasure their virtues. Vortices can have an impressive degree of stability; they can be knotted into topologically distinct forms, which are also quasi-stable; and their interactions are complex and intricate, yet reproducible.

Those attractive features can be embodied in artificial “atoms” specifically designed to be building blocks for quantum engineering. For quantum theory, though it made the vortex theory of natural atoms obsolete, provides us with a variety of far more reliable, and far more perfect, aethers than the old Aetherial fantasies. Classical fluids, whether they are real liquids or speculative substrates, are inherently imperfect. Any motion in them will stir up little waves that carry away energy, and eventually dissipate the flow. Quantum fluids, such as superfluid helium and a variety of superconductors, by contrast, support flows that, in theory, will persist unchanged forever. And in practice, too—that’s why we call them “super”! The deep point is that in quantum mechanics energy comes packaged in discrete lumps (quanta). If you operate at low temperatures, where there’s very little energy available, it can become impossible to stir up the little waves that bedevil classical fluids. In quantum fluids, vortices really are forever.

There is lots of room for creativity in designing and constructing artificial aethers. Many materials become perfect (quantum) fluids at low temperature.

By choosing the right media, we can tailor our fluids to have useful properties. Physicists and engineers have become quite adept at designing useful fluids, such as the liquid crystals that enable LCD monitors and displays. In those examples the fluids have internal structure, which can be manipulated electrically to change their appearance. So far most of the effort has gone into classical fluids, but physicists are beginning to awaken to some promising new possibilities offered by quantum fluids. Though the details can be quite different—as I said, there’s lots of room for creativity here—the basic inspiration, to make fluids that we can manipulate externally to make them do something useful, is the same.

Designer quantum fluids can offer us a variety of vortex atoms, and the opportunity to design new chemistries that accomplish something we want done. Perhaps the most intriguing possibility is to embody, in real materials, the so-far theoretical concept of anyons. Anyons are particles that interact in a special, peculiarly quantum-mechanical way. Anyons don’t exert any forces upon one another, but when you wind one anyon around another, you make interesting, predictable changes in the wave function that describes your system. Quantum computers are, in principle, nothing but machines that process wave functions. (Since wave functions can simulate a tape, or more generally a collection of tapes, that encode data, operations on wave functions can be massively parallel operations on data.) On paper, at least, theorists have proposed ways whereby one might orchestrate the motion of anyons to construct a general-purpose quantum computer. The future will tell whether this beautiful idea blossoms into reality, or proves another seductive Beautiful Loser.
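The "predictable changes in the wave function" have a very simple form in the abelian case. The following toy sketch (my illustration only; not a real quantum-computing library, and the statistical angle chosen is arbitrary) shows the rule: winding one anyon around another multiplies the amplitude by a fixed phase, leaving its magnitude untouched.

```python
import cmath

# Toy model of *abelian* anyons (illustrative only; theta = pi/4 is an
# arbitrary choice). Each full winding of one anyon around another
# multiplies the wave function by the fixed phase factor exp(1j * theta).
theta = cmath.pi / 4   # statistical angle: 0 for bosons, pi for fermions

def braid(amplitude, windings):
    """Phase picked up after `windings` full windings."""
    return amplitude * cmath.exp(1j * theta * windings)

psi = 1.0 + 0.0j               # initial two-anyon amplitude
psi = braid(psi, windings=2)   # wind one anyon around the other twice
print(abs(psi))                # magnitude is unchanged
print(cmath.phase(psi))        # phase has advanced by 2 * theta
```

General-purpose topological quantum computing actually requires the richer, non-abelian anyons, whose braidings act as matrices on a multi-state wave function rather than as simple phases; the abelian case above is just the simplest instance of the braiding idea.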

In topological quantum computing, information is processed by braiding anyons. Image courtesy of the University of Glasgow.

## Maybe Higgs: What the LHC Might or Might Not Have Seen

Dec

15

Tuesday’s big announcement from CERN was that something important might or might not have been discovered at the Large Hadron Collider (LHC). That something, if it’s anything, could well be the long-awaited Higgs particle, also called the God particle.

“Higgs” is Peter Higgs, a professor at Edinburgh, who in 1964 made some interesting suggestions along the lines I’ll discuss below. The name “Higgs particle,” though standard, is not entirely fair, for several reasons: the basic idea has a significant pre-history; what’s original with Higgs has co-claimants; and the modern, mature version of the theory involves many ideas that were not anticipated in 1964. I’ll leave those issues for historians of science and the Swedish Academy to sort out.

God on the other hand deserves full credit, or blame.

Herewith a brief introduction, in question and answer format, for the buzz-curious.

What’s the basic idea?

Suppose that a species of fish evolved to the point that some of them became physicists, and began to ponder how things move. At first the fish-physicists would, by observation and measurement, derive very complicated laws. But eventually a fish-genius would imagine a different, ideal world ruled by much simpler laws of motion–the laws we humans call Newton’s laws. The great new idea would be that motion looks complicated, in the everyday fish-world, because there’s an all-pervasive medium–water!–that complicates how things move.

Modern physics proposes something very similar for our world. We can use much nicer equations if we’re ready to assume that the “space” of our everyday perception is actually a medium whose influence complicates how matter is observed to move.

Are there precedents for such an outrageous dodge?

Yes. In fact it’s a time-honored, successful strategy.

For example: In its basic equations, Newtonian mechanics postulates complete symmetry among the three dimensions of space. Yet in everyday experience there’s a big difference between motion in vertical, as opposed to horizontal, directions. The difference is ascribed to a medium: a pervasive gravitational field.

A much more modern example occurs in quantum chromodynamics (QCD), our fundamental theory of the strong force between quarks and gluons. There we discover that the universe is filled with a medium, the sigma (σ) field, that forms a sort of cosmic molasses for protons and neutrons. The σ field slows protons and neutrons down. Allowing a bit of poetic license, we can say that the σ field gives protons and neutrons mass. Many consequences of the σ field have been calculated and successfully observed, so that to modern physicists it is now every bit as real as Earth’s gravity field. But the σ field exists everywhere and everywhen; it is not tied to Earth.

What’s the new idea, then?

In the theory of the weak force, we need to do a similar trick for less familiar particles, the W and Z bosons. We could have beautiful equations for those particles if their masses were zero; but their masses are observed not to be zero. So we postulate the existence of a new all-pervasive field, the so-called Higgs condensate, which slows them down. This proposal, which I’ve described here only loosely and in words, comes embodied in specific equations, leads to many testable predictions, and has been resoundingly successful.

What is the Higgs particle, conceptually?

Trouble is, no known form of matter has the right properties to make the Higgs condensate. In order to build that medium, we need to add to our inventory of world-ingredients. The simplest, “minimal” implementation introduces exactly one new elementary particle: the Higgs particle.

What is the Higgs particle, specifically?

There’s a quotation I love from Heinrich Hertz, about Maxwell’s equations, that’s relevant here.

To the question: "What is Maxwell’s theory?" I know of no shorter or more definite answer than the following: "Maxwell’s theory is Maxwell’s system of equations."

Similarly, Higgs particles are the entities that obey the equations of Higgs particle theory. Those equations prescribe everything about how Higgs particles move, interact with other particles, and decay—with just one, albeit glaring, exception: The equations do not determine the mass of the Higgs particle. The theory can accommodate a wide range of values for that mass.

What is a Higgs particle, operationally?

A Higgs particle is a highly unstable particle, visible only through its decay products. It has zero electric charge, and—unlike all other known elementary particles—no intrinsic rotation, or “spin.” These null properties reflect the fact that many Higgs particles, uniformly distributed through space, build up the Higgs condensate, which we sense as emptiness or pure vacuum. (Although individual Higgs particles are highly unstable, a uniform distribution of them is stabilized through their mutual interactions. Visible Higgs particles are disturbances above that uniform background.)

As mentioned before, theory does not predict what mass a Higgs particle should have. Masses anywhere from 10 Giga-electron Volts (GeV) to 800 GeV might be accommodated, though problems start to emerge near either extreme. (Physicists commonly use GeV as the unit of mass for elementary particles. One GeV is close to, but slightly more than, the mass of one proton.)
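To make the GeV unit concrete, here is a small illustrative calculation (mine, not part of the original discussion) that converts it to kilograms via E = mc²:

```python
# Illustrative unit conversion: expressing the GeV mass unit in kilograms.
GEV_IN_JOULES = 1.602176634e-10  # 1 GeV = 10^9 eV; 1 eV = 1.602...e-19 J
C = 2.99792458e8                 # speed of light, in m/s

def gev_to_kg(mass_gev):
    """Mass given in GeV/c^2, expressed in kilograms (m = E / c^2)."""
    return mass_gev * GEV_IN_JOULES / C**2

proton_mass_kg = 1.67262192e-27  # CODATA value for the proton mass

print(gev_to_kg(1.0))                   # ~1.78e-27 kg
print(gev_to_kg(1.0) / proton_mass_kg)  # ~1.07: slightly more than one proton
```

The ratio in the last line is the "close to, but slightly more than" of the parenthetical above.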

Because Higgs particles are unstable, to study them one must produce them. That requires concentrating lots of energy into a very small space to create enormous energy density. The required concentration of energy is achieved at particle colliders. At the LHC, two counter-rotating beams of high energy protons are made to pass through one another, or cross, at a few points. At each crossing some fraction of the protons, which are moving in opposite directions at very close to the speed of light, collide. The collisions produce fireballs that explode into tens or hundreds of stable or near-stable particles including electrons and positrons, pi mesons, photons, protons and antiprotons, and several other possibilities.

Known physical processes account for the vast majority of this debris. Production and decay of Higgs particles, if they exist, will produce some additional debris. To get evidence for the existence of Higgs particles, therefore, one must identify distinctive patterns in the observed debris that could result from Higgs particle decays but are difficult to produce through conventional processes.

Putting it another way: If you’re looking for needles in a haystack, you’d better have a really good grip on what hay can look like—and it helps to look for needles that are hard to mistake!

Several patterns play an important role in the analysis, but I’ll discuss just one—a crucial one—to give a flavor of what’s involved. One process of Higgs particle production and decay is depicted in this sketch:

The sequence of events in the sketch above unfolds reading upwards. Gluons inside the fast-moving protons convert, by quantum fluctuations, into a “virtual” top quark and its antiparticle. The virtual top quark and antiquark swiftly recombine into a Higgs particle. Then the Higgs particle decays by a similar mechanism: quantum fluctuations convert it into a particle-antiparticle pair, which recombine into two photons. At the end of the day, it is those two photons that are observed. (I’m particularly fond of this exotically beautiful quantum process, which I discovered theoretically in 1977.) The point is that conventional processes, i.e. processes that don’t involve Higgs particles, only rarely produce two energetic photons. Thus the calculated contribution from Higgs particles, should they exist, can be discerned above the background.

So, does it exist?

Short answer: We still don’t know for sure. There’s been dramatic progress on the question, however.

A very large range of potential mass-values has been ruled out. Only a small window in the range between 115 and 127 GeV remains viable.

On the other hand, an excess of events, above expectations from known processes, has been observed in the two-photon channel mentioned above and (less clearly) in several others. The excesses are compatible with, and could be explained by, the existence of Higgs particles with mass close to 125 GeV.

The observed excess might also be compatible with a statistical fluctuation in the background processes—e.g., an improbable run of normal processes leading to photon pairs, comparable to rolling four consecutive sixes at dice. With more data a true signal will grow more rapidly than any plausible fluctuation, so the ambiguity in interpretation will disappear. If the LHC continues to function brilliantly, as it has so far, we should have a definitive answer within the next few months.
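The statistics behind that last point can be sketched in a few lines of Python. The rates below are invented purely for illustration; the point is the scaling: a real signal grows linearly with the amount of data, while typical background fluctuations grow only as its square root, so the conventional rough significance estimate S/√B grows as the square root of the data collected.

```python
import math

def significance(signal_rate, background_rate, data_units):
    """Rough S/sqrt(B) significance estimate after 'data_units' of running.

    Expected signal S scales linearly with data; the typical statistical
    fluctuation of the background scales like sqrt(B).
    """
    s = signal_rate * data_units
    b = background_rate * data_units
    return s / math.sqrt(b)

# Made-up rates, for illustration only: quadrupling the data
# doubles the significance, so a true signal eventually wins.
for n in (1, 4, 16):
    print(n, significance(10, 1000, n))

# The odds of the "four consecutive sixes" mentioned above:
print((1 / 6) ** 4)  # about 7.7e-4
```

This is why the ambiguity is guaranteed to resolve itself: no plausible fluctuation can keep pace with a genuine signal as the data accumulate.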

What will it mean, if the hints pan out?

First of all, it will be a dazzling triumph for theoretical physics. Physicists will have used intricate equations and difficult calculations to predict not only the mere existence of the Higgs particle, but also (given its mass) its rate of production in the complex, extreme conditions of ultra-high-energy proton-proton collisions. Those equations will also have accurately rendered the relative rates at which the Higgs particle decays in different ways. Yet the most challenging task of all may be computing the much larger, competing background “noise” from known processes, so that the Higgs “signal” can be made to stand out against it. Virtually every aspect of our current understanding of fundamental physics comes into play, and gets a stringent workout, in crafting these predictions.

The animating spirit of research in fundamental physics, captured in the maxim “Today’s sensation is tomorrow’s calibration,” will not rest in that triumph, however. A Higgs particle at mass 125 GeV would portend a new level of fundamental understanding and discovery. Let me explain why.

Within our current theories of the fundamental interactions, embodied in the so-called Standard Model, the Higgs particle mass might, as previously mentioned, have any value within a wide range. Yet there are good reasons to suspect that despite its many virtues, the Standard Model is incomplete. Notably, its equations postulate four different forces (strong, weak, electromagnetic and gravitational) and six different materials they act on. It would be prettier to have a more coherent, unified theory. And in fact there are beautiful, concrete proposals for unified field theories, within which we have just one force and just one kind of material. But to make the unified theory work quantitatively, in detail, we need to expand the equations of the Standard Model so that they integrate a concept called supersymmetry.

Supersymmetry has many aspects and ramifications, but two are most relevant here. First, supersymmetry (for experts: more specifically, focus point supersymmetry) predicts that the Higgs particle mass should lie in the range 120-130 GeV. Finding Higgs particles with mass in that range would give strong circumstantial evidence both for supersymmetry and for the unification that supersymmetry enables.

Second, supersymmetry predicts the existence of many additional new fundamental particles, besides the Higgs particle, that should be accessible to the LHC. So if supersymmetry is right, the LHC will have many more years of brilliant discovery in front of it.

And if not?

I’ll be heartbroken. Mother Nature will have shown that Her taste is very different from mine. I don’t doubt that it’s superior, but I’ll have to struggle to understand it.

## Hail and Farewell, Grand Colliders

Sep

30

Physicists, to their credit, are notoriously unsentimental and future-oriented. "Today's sensation is tomorrow's calibration" describes their modus operandi.

Nevertheless! Today America's flagship collider, the Tevatron at Fermilab in Illinois, will cease operation, and Europe's Large Hadron Collider (LHC) at CERN, near Geneva, will lead the exploration of the deep microcosmos. That changing of the guard inspires reflection, and merits celebration too.

Some historical perspective will highlight the special role of colliders in fundamental physics. The goal of experimental work in fundamental physics, crudely speaking, is to find out what the smallest, most basic building-blocks of the material world are, and how they behave (which, if you think about it, effectively defines what they are). In the early days of science, optical microscopes revealed the existence of tiny creatures and the cellular structure of life in general. But light cannot resolve structures much smaller than its wavelength, and the wavelength of ordinary light is tens of thousands of times larger than the size of atoms.

X-rays, discovered much later, get closer to atomic scales. Rosalind Franklin's x-ray pictures of DNA crystals enabled Crick and Watson to decipher DNA's molecular structure. At this point, a simple question may suggest itself: Why was any deciphering necessary—can't you just look at the darned picture? The answer is profound, and central to our story. It's easy to take for granted a most fortunate and unusual feature of ordinary light, namely that lenses can bend it and (when suitably arranged) automatically form images of illuminated objects. Our very own eyes do that trick, which is what makes it so easy to overlook! But there are no good lenses for x-rays. Instead of images, when we scatter x-rays off matter we get patterns of greater or lesser brightness called diffraction patterns. Then we've got to use our brains to make models, using everything else we know about x-rays and matter, for what could have caused the observed diffraction pattern.

To probe structures much smaller than atoms we must get well beyond x-rays, using more extreme forms of illumination. Energetic particles are the tools for this job; and the smaller the structures we aspire to "see," the higher the energies we need. Physicists study what emerges when those projectiles impact on matter—or, in the jargon, scatter on targets. Then, like Crick and Watson, they make models of what could be responsible. (Actually, nowadays theorists usually provide myriad models in advance, and then experimentalists disprove all but one of them!) In the early days of nuclear physics, particles emitted in natural radioactivity (especially "alpha particles," later identified as helium nuclei) were the workhorse probes; later, cosmic rays—high-energy particles raining down from space—took the lead, despite their obvious inconvenience and unreliability. Those techniques led to some tantalizing discoveries, but their limitations were crippling.

Further progress required that experimentalists wean themselves from natural sources. They had to learn how to pump up the energy of particles, collect them, and guide them to targets. A long series of brilliant innovations led to the modern collider. One innovation in particular is so unlikely-sounding, yet so crucial, that it deserves special mention. According to the theory of relativity, particles moving close to the speed of light are flattened in the direction of motion, but retain their size in the transverse direction, so that they appear as narrow pancakes. For our purposes, that's great—it allows the probes to be sharply localized, so they can take high-resolution pictures. It's so advantageous that physicists double down on it. Rather than impacting energetic particles on a stationary target, at a modern collider highly energetic particles moving in one direction impact other highly energetic particles moving in the opposite direction. At the Tevatron protons collide with antiprotons; at the LHC it's protons on protons. To make such collisions happen, though, is no mean feat, because the targets are comparatively few, and each is very small indeed. It takes powerful, intricately patterned electric and magnetic control fields, and ultra-fast monitoring and feedback, to bring tight counter-circulating beams to the same place at the same time.
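To put a number on that flattening: a particle's length along its direction of motion contracts by its Lorentz factor γ = E/mc². The beam energy used below (980 GeV per Tevatron proton) is my own assumption, not a figure from the post.

```python
# Back-of-envelope sketch of relativistic flattening for a collider proton.
# Assumed beam energy: 980 GeV per proton (my assumption for the Tevatron).
PROTON_MASS_GEV = 0.938  # proton rest mass, in GeV/c^2

def lorentz_gamma(beam_energy_gev, rest_mass_gev=PROTON_MASS_GEV):
    """Lorentz factor gamma = E / (m c^2); lengths contract by 1/gamma."""
    return beam_energy_gev / rest_mass_gev

gamma = lorentz_gamma(980.0)
print(gamma)  # ~1045: the proton appears flattened about a thousandfold
```

So the "narrow pancake" is no exaggeration: the probe is roughly a thousand times thinner, along its flight path, than a proton at rest.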

For this and many other reasons modern colliders are fantastic engineering projects. They employ instruments and ideas more complex and much more varied than are involved, for example, in space exploration. They are big, and expensive. The main Tevatron ring, where the beams circulate, is almost four miles around, and the various pieces of the project cost about $1 billion altogether. The LHC is about five times as big, and five times as expensive.

These great colliders are, I think, monuments to our dynamic scientific civilization. They are our pyramids; but they are better motivated and much better engineered than the originals. There's been some progress in five thousand years!

A bittersweet corollary of dynamism, however, is that once-great things eventually become passé. With the coming of the LHC, which makes more and more energetic collisions, the Tevatron, its glory days gone, is ready for retirement.

What did the Tevatron teach us? I think most physicists would agree that its single most spectacular achievement was the discovery of the top quark, in 1995. The top quark is the next-to-last piece in the wildly successful Standard Model of fundamental physics. That set of ideas provides a reasonably compact census of the building blocks of matter, and precise, beautiful equations for their observed interactions; but it is less informative when it comes to their masses. The mere existence of the top quark was a firm prediction of the Standard Model since at least 1977, but theory gave no firm prediction for its mass. In fact the large value of that mass—about 185 times the mass of a proton, and more than 40 times the mass of the next-heaviest quark (bottom)—came as a shock to most. Together with the large mass comes an extraordinarily short lifetime, estimated at 5 × 10⁻²⁵ seconds. Its large mass makes the top quark difficult to produce, and its short lifetime makes it challenging to detect, so the discovery was a tremendous technical achievement.
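A back-of-envelope check, using the lifetime quoted above (the proton-radius value is my own), shows why the top quark can only be observed through its decay products: even at nearly the speed of light, and ignoring time dilation, it travels far less than one proton radius before decaying.

```python
# Rough estimate of how far a top quark travels before it decays.
# Lifetime from the text; proton radius is my assumed reference scale.
C = 2.99792458e8         # speed of light, in m/s
TOP_LIFETIME_S = 5e-25   # estimated top-quark lifetime, in seconds

decay_length_m = C * TOP_LIFETIME_S
proton_radius_m = 0.84e-15  # approximate proton charge radius, in meters

print(decay_length_m)                    # ~1.5e-16 m
print(decay_length_m / proton_radius_m)  # ~0.18: under one proton radius
```

No detector can resolve such a distance directly; all we ever see is the spray of particles the top quark leaves behind.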

How can a single elementary particle be so heavy? We still don't know the answer to that question, or even whether it's a sensible question to ask. (A better question, I think, is why the other quarks are so light; but that's a story for another time.) In any case, the striking divergence among masses of otherwise very similar particles—i.e., different kinds of quarks—brings us face to face with our ignorance.

Pending deeper understanding, we can already draw important inferences from the large top-quark mass. Masses of quarks, within the Standard Model, reflect the strength of their interaction, or coupling, with the mass-giving Higgs field. The large top-quark mass implies quite a strong coupling. That coupling is in effect a powerful new force that must be taken into account in constructing more encompassing models of physics. Its ultimate significance is presently unclear, but it makes the idea of supersymmetry—a most interesting and attractive hypothesis on other grounds—work more smoothly; that may be the direction it is pointing us toward.

Several other pretty discoveries were made at the Tevatron, but I think its other most important result, besides the top quark discovery, was to confirm, in many demanding quantitative tests, the correctness of the core theories of the Standard Model. These triumphs of a beautiful, economical theory put many gratuitous speculations to rest, and demonstrated Nature's good taste. As a practical matter, this result provides a firm platform upon which we can stand, as we reach toward still more beautiful, unified, and encompassing understanding.

What's next?

The last, still missing piece of the Standard Model is the so-called Higgs particle. Just as it predicted the top quark, theory firmly predicts the existence of the Higgs particle, but not the value of its mass. The Tevatron was able to constrain that mass to a fairly narrow range (between about 122 and 160 proton masses), but ran out of time before reaching a conclusive result. The LHC will get to the finish line, very likely, within the next year or so. A Higgs particle in that mass range would be yet another favorable omen for supersymmetry. Unless Nature is a shameless tease, we'll see supersymmetry itself—that is, some of the new particles supersymmetry predicts—discovered at the LHC, though that might take longer. Should those profound discoveries occur, as I hope and expect, they will bring our understanding of Nature's foundational principles to a new, higher level. We will build upon the Tevatron's achievements even as we transcend them.