After this crowning triumph it seems appropriate to reflect on the big picture. What does the Standard Model teach us? What does it mean?

To answer that, I’ve adopted the List of Ten format pioneered by God and copied by Letterman and the “For Dummies” series. Here follow ten big lessons from the Standard Model organized into four categories: epistemology, natural philosophy, emergent simplicity, and unfinished business.

**Epistemology**

*1. Reductionism Works*: The premise of reductionism is that you can understand how things work by breaking them down into constituent parts with simple properties and interactions and then building back up. As a strategy for understanding the physical world, it’s been brilliantly successful! Matter as we know it, in all its richness, can be represented as vast numbers of identical copies of a few ingredients whose properties and interactions we can describe quite fully and accurately. It appears that we *have* achieved, following the reductionist program, a foundation in fundamental physical law for all applications of physics to chemistry, biology, materials science, engineering in general, astrophysics, and major aspects of cosmology.

*2. The Surface Appearance of the Physical World is Quite Different from its Deep Structure*: In general, quantum theory presents a picture of the world that appears quite different from the everyday world of experience. There is still considerable work to be done, I think, to show convincingly how observed macroscopic “classical” behavior follows from the underlying quantum equations, with their indeterminacy and discreteness, even at the level of mundane low-energy physics. The specifics of the Standard Model make this strangeness even stranger. We find that the familiar, effective building blocks of low-energy physics, i.e. protons, neutrons, electrons, and photons, are themselves complicated objects when expressed in terms of more truly fundamental entities. Protons and neutrons are complex bound states of quarks, antiquarks, and gluons. Quarks, antiquarks, and gluons obey simple mathematical equations, while protons and neutrons don’t—but protons and neutrons are what we get to see. Even “structureless” electrons arise, in the basic equations, from mixtures of massless particles. It is those more basic entities, not the emergent electron, whose properties are ideally simple.

**Natural Philosophy**

*3. Relativity (Poincaré Symmetry), Quantum Mechanics, and Local (Gauge) Symmetry Rule*: It might have been the case that the basic recipe for nature would be like the basic recipe for a human being (i.e., the human genome) or the software that runs a complex program like Word—a long list of instructions, containing many logically independent modules and accidental details. That is *not* what we find. Instead, there are three powerful principles that can serve as axioms, allowing us to formulate the Standard Model deductively. Two of these guiding principles, special relativity and quantum mechanics, date from the early twentieth century. The third, gauge symmetry, only came to full fruition in the 1970s, though its roots go back decades further. Gauge symmetry is (even) more abstract and mathematical than relativity and quantum mechanics, and perhaps for that reason it is less well known to the general public. I will not be able to remedy that situation here, though I’ll be attempting it in a forthcoming book (“The Beautiful Question”). Let me just advertise, as an illustration of its power, that gauge symmetry is what allowed David Gross and me to predict the existence of gluons and their detailed properties prior to their observation.

The Standard Model is best understood not as a list of particles, but as a realization of principles. That nature’s basic operating system can be so understood, in detail, marks a profound revolution in natural philosophy.

*4. The Distinction between “Matter” and “Light” is Superficial*: Like light, the building blocks of matter are essentially massless, and they can be created and destroyed. Conversely, as its wavelength shortens, “light” as we ordinarily sense it is on a continuum with the sorts of gamma rays that leave tracks at the LHC, which are manifestly particles.

*5. “Empty Space” is a Substance*: What we perceive to be “empty space” is actually filled with pervasive condensates and also exhibits spontaneous activity. Both the condensates, and the fluctuating activity (“virtual particles,” vacuum polarization) radically alter the qualitative properties of particles moving through space.

*6. Nature Loves Transformations*: Superficially, the Standard Model seems to contain dozens of independent ingredients—for instance, quarks with different colors and flavors and gluons galore. But many of these particles are related to one another by symmetry and can physically transform into one another. Thus the unwieldy account using dozens of degrees of freedom becomes the story of a much smaller number of underlying ur-substances and their transformations.

**Emergent Simplicity**

*7. The Behavior of Matter at High Energy Simplifies and Reveals Its Deep Structure*: Flows of particles we can observe at accelerators follow patterns we can calculate using fundamental equations directly. The observed flows are found to be organized into narrow jets: If we turn down the resolution and lump observed particles moving in the same direction into units, adding their energy and momenta, then those units obey the equations of quarks and gluons. In that very strong sense, they *are* quarks and gluons.

It is like an impressionist painting where you have to blur the resolution to see the shapes.
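The lumping step described above can be put into code. This is only a toy sketch with invented momenta, not a real jet algorithm: we add the energies and momenta of a few nearly collinear massless particles and inspect the resulting unit.

```python
import math

# Toy illustration of "lumping" particles into a jet; momenta are invented.
# Each particle is taken to be massless, so its energy equals |p|.
def energy(px, py, pz):
    return math.sqrt(px * px + py * py + pz * pz)  # units: GeV, c = 1

# Three particles moving in nearly the same direction:
momenta = [(30.0, 1.0, 0.0), (20.0, -0.5, 0.3), (10.0, 0.2, -0.4)]
four_vectors = [(energy(*p),) + p for p in momenta]

# Lump them into one unit by adding energies and momenta componentwise.
E, px, py, pz = (sum(component) for component in zip(*four_vectors))

# The unit's invariant mass is tiny compared to its energy, so at coarse
# resolution it acts like a single near-massless particle.
jet_mass = math.sqrt(E * E - px * px - py * py - pz * pz)
print(jet_mass / E)  # a few percent
```

The ratio of the lumped unit’s invariant mass to its energy is a few percent here: blur the resolution, and the spray of particles looks like one near-massless object.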

Because the fundamental equations themselves are much easier to work with than the complicated “solutions” we find embodied in everyday (low-energy) matter, it is profoundly correct to say that the behavior we see at the Large Hadron Collider is simpler than the behavior we study in an undergraduate chemistry laboratory.

*8. The Early Universe is Open to Rational Reconstruction*: At the extremely high energy density of the Big Bang, the equations for matter simplify dramatically, as we just discussed. This allows us to make serious, scientifically grounded models for what happened in the very early Universe and to draw observable conclusions from them. Moreover, by reproducing those extreme conditions on a small scale at accelerators, we can check our equations.

**Unfinished Business**

*9. We’ve Got Vexing Family Problems*: When the muon—a more massive, unstable version of the electron—was discovered in 1936, I.I. Rabi famously asked, “Who ordered that?” Since then, the discovery of the tau lepton (1975) has given the electron a yet heavier, more unstable brother, and the up and down quarks have also sprouted triplets. When we expand the Standard Model to describe the properties of these unexpected kin, its elegant core grows an oversized appendix, uncomfortably like the “long list of instructions” we were happy to avoid. This is where the Standard Model, otherwise remarkably beautiful and tight, gets sloppy.

There is one bright spot. An adequate account of it would require a separate post, but to make a long story short, we can fix the worst of our family problems by postulating the existence of a new particle, the *axion*. Axions are predicted to be Higgs-like particles, but much lighter and much more weakly interacting. If they exist at all, they will contribute a large fraction of the astronomical “dark matter.” Difficult but promising experiments to search for axions are in progress.

*10. Unification Looks Good, and Suggests Supersymmetry*: The Standard Model contains three (mathematically) similar but distinct forces and neatly accommodates a fourth—gravity, in the form of general relativity. It also contains, even after we count particles that transform into one another by symmetry as the same and ignore the family triplication, five distinct ur-particles. We would like to build a more economical description, without so many independent parts. And in fact we can transcend the Standard Model, by pushing its ideas further. If the gauge symmetries that lead to the different forces are all part of a larger, better hidden but more encompassing symmetry, then the different forces are not truly independent, but instead are merely different aspects of a single basic force. Remarkably, the different ur-particles then turn out to be different aspects of a single basic particle, too.

Unification dynamics can work as a quantitative idea and explain the relative strength of the different forces (including gravity). But for this to work in detail, we need to augment the Standard Model. The most convincing idea in that direction is supersymmetry. Supersymmetry, too, deserves a post of its own, and here I will only mention its most immediate, testable consequence—namely, that for every particle we currently know there must be a heavier superpartner with different spin (and of course mass) from its conventional mate, but sharing other properties like electric and strong color charges. Hopes are high that the next round of LHC experiments, at higher energy, will uncover some of the new particles supersymmetry requires.

**Bonus Item, rounding out our baker’s ten:**

The Standard Model, as we currently understand it, does not account for the astronomical dark matter. On the other hand, that dark matter appears to be composed of some kind of relic gas of particles, surviving from the earliest moments of the Big Bang. Axions are a good candidate to be that particle, as are possible stable superpartners.

**Go Deeper**

*Editor’s picks for further reading*

CERN: The Standard Model

Discover the Standard Model in this brief primer from CERN.

Science: Does Dark Matter Consist of Weird Particles Called Axions?

In this video archived from a 2013 live chat, Frank Wilczek and Gianpaolo Carosi, spokesperson for the Axion Dark Matter Experiment team, discuss axions and how physicists hope to detect them.

Einstein himself understood the importance of breaking free from the idea that there is an objective, universal “now.” Yet, paradoxically, today’s standard formulation of quantum mechanics makes heavy use of that discredited “now.” Playing with paradoxes is part of a theoretical physicist’s vocation, as well as high-class recreation. Let’s play with this one.

First, some background. Despite special relativity’s freedom in assigning times, for each choice there is a definite ordering of events into earlier and later. In a classic metaphor, time flows like a river through all space, and the flow never reverses.^{1} Figures 1, 2, and 3 tell the central story.

To organize our thoughts, let us make a definite choice of time; in the jargon, let us fix a frame of reference. Then we can frame the history of the world as shown in Figure 1. Here time runs vertically, while space runs horizontally. Since we’re going to be considering several versions of time, we’ll name this one t_{1}. For convenience in drawing, we are restricting attention to a one-dimensional slice of space—in other words, a line. One-dimensional “spaces” of events sharing the same value of time t_{1} would appear as horizontal lines (which I haven’t drawn). The meaning of the colored regions and their labels will be elucidated presently.

Observers moving at constant velocity with respect to our frame of reference will need to use their own physically appropriate, different versions of “time,” corresponding to how their clocks run. Figures 2 and 3 display the lines along which two such versions of time, t_{2} and t_{3}, are constant. t_{2} is the appropriate measure of time for observers moving at a certain constant velocity toward the right, while t_{3} is the appropriate measure for observers moving at a certain velocity toward the left—“right” and “left” here meaning the horizontal, “spatial” direction of our figures—relative to our reference frame. For observers with higher speeds, the tilt of these lines is steeper. But the tilt never exceeds 45 degrees, because 45 degrees corresponds to the limiting speed, namely the speed of light.
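In formulas (this is the standard special-relativity transformation, not anything specific to these figures): working in units where the speed of light is 1, an observer moving with velocity v assigns to the event (t_{1}, x) the time

```latex
t_2 \;=\; \gamma\,\bigl(t_1 - v\,x\bigr),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2}} .
```

The lines of constant t_{2} are therefore the lines t_{1} = v x + constant. Their slope is v, and since |v| < 1, the tilt away from the horizontal never reaches 45 degrees.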

With this background, we are ready to appreciate the distinctions shown in Figure 1. In the center of the diagram is a blue point b representing a specific event. Some events—those that lie in the green future region of space-time—occur at a later time than b, whether we use t_{1}, t_{2}, t_{3}, or any other allowed observer’s measure of time. We say that these events are in b’s causal future (or, if there is no danger of confusion, simply b’s future). What happens at b can affect events in b’s causal future, without upsetting any observer’s sense that a cause—b—must occur before its effect. Closely connected is the fact that signals from b can reach events in b’s future without ever exceeding the speed of light. We call such physically allowed signals “subluminal” signals.

Similarly, we can define b’s causal past, depicted in red. It consists of all events that can affect b. There is a nice symmetry here: If we draw cones emanating from an event a in b’s causal past, we will find b in the upper colored region. An event a is in b’s causal past, if and only if b is in a’s causal future.

But many events fall into neither of those regions; they are neither in b’s causal future, nor in b’s causal past. We say that such events are “space-like” with respect to b. The event a, which appears in Figures 2 and 3, is of that kind. According to t_{2}, a occurs after b; but according to t_{3}, a occurs before b. Neither a nor b can send subluminal signals to the other.

In a similar way, we can consider the regions that are future, past, or space-like with respect to a. This leads us to a more elaborate division of space-time, illustrated in Figure 4. The orange region contains events in the common (causal) past of both a and b, the purple region their common future, and so forth. This colorful diagram hints at a potentially rich subject, the geometry of causation, that could be developed much further. (Specifically, it could add some spice to high-school geometry and analytical geometry courses, and provide material for independent projects.)

As we’ve seen, if a and b are space-like separated, then either can come before the other, according to different moving observers. So it is natural to ask: If a third event, c, is space-like separated with respect to both a and b, can all possible time-orderings, or “chronologies,” of a, b, c be achieved? The answer, perhaps surprisingly, is No. We can see why in Figures 5 and 6. Right-moving observers, who use up-sloping lines of constant time, similar to the lines of constant t_{2} in Figure 2, will see b come before both a and c (Figure 5). But c may come either after or before a, depending on how steep the slope is. Similarly, according to left-moving observers (Figure 6), a will always come before b and c, but the order of b and c varies. The bottom line: c never comes first, but other than that all time-orderings are possible.
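We can check this bookkeeping numerically. The sketch below uses invented coordinates for a, b, and c (in units where c = 1), chosen to be mutually space-like and to mimic the qualitative arrangement of the figures; it sweeps over observer velocities and records which chronologies actually occur.

```python
import math

# Invented event coordinates (t, x), in units where c = 1, chosen so that all
# three pairs are space-like separated, mimicking the arrangement of the figures.
events = {"a": (0.0, -2.0), "b": (0.0, 0.0), "c": (0.5, -1.0)}

def boosted_time(t, x, v):
    """Time assigned to event (t, x) by an observer moving at velocity v, |v| < 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

def spacelike(p, q):
    """True if no subluminal signal can connect the two events."""
    (t1, x1), (t2, x2) = p, q
    return abs(t2 - t1) < abs(x2 - x1)

assert all(spacelike(events[p], events[q]) for p, q in [("a", "b"), ("a", "c"), ("b", "c")])

# Sweep over observer velocities and record every chronology that occurs.
orderings = set()
for i in range(-999, 1000):
    v = i / 1000.0
    orderings.add(tuple(sorted(events, key=lambda name: boosted_time(*events[name], v))))
```

For these coordinates the sweep turns up exactly four chronologies—(a, c, b), (a, b, c), (b, a, c), and (b, c, a)—and c never comes first, just as described for Figures 5 and 6.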

These exercises in special relativity are entertaining in themselves, but there are also serious issues in play. They arise when we combine special relativity with quantum mechanics.

Two distinct kinds of difficulties arise as we attempt to combine those two great theories. They are the difficulties of construction and the difficulties of interpretation.

The difficulties of construction dominated 20^{th} century physics. (One measure of this: By my conservative count six separate Nobel Prizes, shared by 12 individuals, were awarded primarily for advances on this problem.) The tough issues that arose here, in the construction of relativistic quantum theories, are in some sense technical. Combining special relativity and quantum mechanics leads to quantum field theory, and the equations of quantum field theory are dicey to solve. If you try to solve those equations in a straightforward way, you find nonsensical results—for example, infinitely strong forces. In fact it emerged, after many adventures, that most quantum field theories really don’t make sense! They are mathematically inconsistent. Those that do make sense can only be defined using tricky mathematical procedures. Passing in silence over that epic, we reach the bottom line: After heroic struggles, the difficulties of construction were eventually (mostly) overcome, and today quantum field theory forms the foundation of our immensely successful Standard Model.

The difficulties of interpretation have a different flavor. Closely related to our issues with time-orderings, they arise because labeling events by time plays an absolutely central role in the conventional formulation of quantum mechanics.

The quantum state of the world is represented by its wave function, which is a mathematical object defined on surfaces of constant time. Furthermore, measurements “collapse” the wave function, introducing a drastic, discontinuous change. Suppose, for example, that we decide to use t_{1} as our time. Then a measurement at t_{1} = 0 changes the wave function everywhere at all times subsequent to t_{1} = 0.

But what if we had chosen t_{2} or t_{3}? The occurrence of that sort of collapse implies a drastic difference between the formal descriptions of quantum mechanics based on different choices of reference frame. If we work with t_{2}, then measurements at b will collapse the wave function seen at a, since b comes before a. For the same reason, measurements at a do not collapse the wave function at b. But if we work with t_{3}, since the time-ordering between a and b is reversed, the situation is just the opposite!

Yet special relativity demands that either t_{2} or t_{3} can be used in a valid description of nature. Have we discovered a contradiction?

Not necessarily.

The point is that quantum-mechanical wave functions are tools for describing nature, rather than nature herself. Mathematically, quantum-mechanical wave functions contain a lot of excess (unobservable) baggage and redundancy, so that wave functions that look drastically different can nevertheless give the same results for most, or possibly all, feasible physical observations.

While it falls short of outright contradiction, there remains, it seems fair to say, considerable tension at the interface between quantum mechanics and special relativity. During the long struggle to construct quantum field theories, several physicists speculated that the infinitely strong forces they calculated were surface symptoms of a fundamentally rotten core, whose rottenness was indicated more directly by the difficulties with interpretation. It didn’t work out that way. We have been able to construct theories that are not only consistent but also immensely successful, despite their near-contradictions and excess baggage.

As new technologies for probing the nano-world render possible what were once purely thought experiments, we have a wonderful new opportunity to ask creative questions, confronting the paradoxes of quantum mechanics head on. Maybe we’ll find some surprising answers—that’s what makes paradoxes fun.

^{1} There are more speculative possibilities: that time exhibits cycles, or branches, or even has several dimensions of its own. In general relativity we let time bend together with space, and in describing the Big Bang and black holes we encounter singularities, where time begins or ends. This is fascinating stuff! But “flat, unidirectional” time is the basis for almost all practical physics, and it already provides rich food for thought, so that’s what I’ll be considering here.

**Go Deeper**

*Editor’s picks for further reading*

arXiv: Constraints on Chronologies

Read the author’s technical paper on chronologies, written with theoretical particle physicist Alfred Shapere.

FQXi: Cheating the Causal Game

In this article, discover how researchers at the University of Vienna are deconstructing the physics of cause and effect.

Relativity for the Questioning Mind

Explore the fundamentals of relativity in this book by Oberlin College physics professor Dan Styer.

A few months ago, when the evidence was suggestive but not yet conclusive, I discussed here the nature of the Higgs particle, and what its discovery would mean for the enterprise of physics. Now I will supplement that discussion, focusing on the discovery itself.

Physicists had to overcome three challenges to discover the Higgs particle: producing it, detecting it, and proving that they really had produced and detected it.

To put these challenges in context, let me introduce another perspective on what the Higgs particle is: The Higgs particle is *The Quantum of Ubiquitous Resistance*. I’m referring here to a universe-filling medium that offers resistance to the motion of many elementary particles, thus producing what we commonly think of as their mass.

The Standard Model of physics—our best-yet model of the matter and forces that make our universe—requires, for consistency of its equations, that many of its ingredients be particles with zero mass. These particles should travel at the speed of light in empty space, but in reality some of them—like quarks, leptons, and W and Z bosons—travel more slowly. What is slowing them down?

Our Standard Model comes equipped with a Standard Reconciliation: Space is never empty! Space is filled with a material that resists the motion of those particles. Over the past decades, physicists have deduced many of the properties of the Ubiquitous Resistance by observing its effects on the forms of matter we can see. They even gave it a name: the Higgs field. But none of the known particles had the right properties to build up the Ubiquitous Resistance. So theorists drew up the specifications for a particle that would do the job. They called it the Higgs particle.

But wishing doesn’t make it so. Only experiments can grant (or deny) theorists’ wishes. With that in mind, let us consider the three challenges facing experimental observation of the Higgs particle.

**Producing it**

Any physical material, hit hard enough, is bound to break. The smallest possible shard reveals the most basic unit of the material: its “quantum.” For the Ubiquitous Resistance, that quantum is the Higgs particle.

To break off a piece of the Ubiquitous Resistance, though, requires producing disturbances of unprecedented intensity, albeit confined to tiny volumes of space for tiny intervals of time. That is what the Large Hadron Collider (LHC) is all about. By accelerating beams of protons to extremely high energy, and bringing them into collision, the LHC creates “Little Bangs” systematically.

**Detecting it**

Once you’ve produced a Higgs particle, the next challenge is to detect it. This isn’t as easy as it sounds, as the Higgs rapidly decays into other particles. We can look for those secondary particles, but most of them are useless for detection, because they are also produced, far more abundantly, by other processes. The Higgs’ tiny signal competes with a cacophony of noise. In particular, the most likely mode of Higgs decay, into a bottom quark and its antiparticle, is diluted by garden-variety strong-interaction processes, which produce those particles in droves.

So detection requires cunning.

Some decay processes that we might be able to detect are sketched below. Each has its own advantages and limitations, and each adds information, so experimenters have pursued them all. (For more information on the characters you’ll encounter below—W bosons, Z bosons, and the rest of the particle zoo—this is a good starting point.)

#1: Photon pairs

After a Higgs particle is created, quantum fluctuations convert it into a particle-antiparticle pair, which recombines into two photons.

The observable signal, in this case, is the pair of photons emerging from the decay. From the energy and momentum of the two photons, one can reconstruct the mass of the Higgs particle. This is significant because there are many other ways to make photons in collisions at the LHC that don’t require the production and decay of Higgs particles. The Higgs signal would be swamped, if not for the redeeming feature that randomly produced photons will “add up” to indicate random masses for their hypothetical progenitors, and only by rare accident land on the Higgs particle mass, whatever it happens to be. The signature of the Higgs, then, is an excess of photon pairs in a very narrow mass range. The mass where there’s an excess is fingered as the Higgs particle mass. Since the energy and momentum of photons can be measured accurately, this method gives an excellent measurement of the Higgs particle mass.
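The reconstruction itself is just special-relativity arithmetic: the parent’s mass squared is the total energy squared minus the total momentum squared. Here is a minimal sketch with made-up numbers, taking a 125 GeV parent for illustration (units: GeV, c = 1):

```python
import math

# Reconstruct a parent particle's mass from its decay photons.
# Each photon is (E, px, py, pz); for photons E = |p|. Units: GeV, c = 1.
def invariant_mass(photons):
    E, px, py, pz = (sum(component) for component in zip(*photons))
    return math.sqrt(E * E - px * px - py * py - pz * pz)

# A hypothetical parent of mass 125 GeV, decaying at rest into two
# back-to-back 62.5 GeV photons:
gamma1 = (62.5, 62.5, 0.0, 0.0)
gamma2 = (62.5, -62.5, 0.0, 0.0)
print(invariant_mass([gamma1, gamma2]))  # → 125.0

# Two unrelated photons, by contrast, "add up" to some other, random mass:
gamma3 = (50.0, 0.0, 50.0, 0.0)
print(invariant_mass([gamma1, gamma3]))  # ≈ 79.1, not 125
```

The second pair illustrates why the background is manageable: randomly paired photons scatter their reconstructed masses over a broad range, while genuine Higgs decays pile up at one value.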

The main limitation of this technique, besides the unavoidable background “noise,” is the fact that this decay process is quite rare compared to other possibilities.

#2: W boson + (Higgs -> bottom-antibottom)

Here is one of those other possibilities: In this case, the Higgs particle is produced as a byproduct of the creation of a W boson. The W boson itself decays, but in ways that experimentalists are thoroughly familiar with, and can often identify with confidence. The presence of the W boson, itself a relatively rare occurrence, helps this class of event to stand out above the strong interaction background. Thus the most common Higgs decay, into bottom-antibottom pairs, becomes discernible when you demand an accompanying W.

There are two more possibilities:

#3: Higgs -> WW -> lepton + antilepton + neutrino + antineutrino

#4: Higgs -> ZZ -> 2 leptons + 2 antileptons

In Processes 3 and 4, the observed particles are leptons—which is just another way of saying that they might be either electrons or muons—together with their antiparticles; the ghostly neutrinos escape detection. The Higgs boson barely interacts with those light particles, but it can communicate with them indirectly, through fluctuations in the W and Z boson fields (a.k.a. “virtual particles”). Process 4 is special, in that it is the only case where the background is so small that individual events, as opposed to enhanced probabilities, can be ascribed with confidence to Higgs particles.

By measuring the rates of all of these processes, one can determine how powerfully the Higgs communicates with many different things: two gluons, two photons, two Z bosons, two W bosons, and bottom-antibottom pairs. Their different rates are logically independent, of course, but theory connects them.

**Proving it**

This is the final challenge. Finding the Higgs boson depends on assuming that the Standard Model is reliable, so that we can work around the “background noise.” Here years of hard bread-and-butter work at earlier accelerators—especially the Large Electron-Positron Collider (LEP), which previously occupied the same CERN tunnel in which the LHC resides today, and the Tevatron at Fermilab, as well as at the LHC itself—pay off big. Over the years, many thousands of quantitative predictions of the Standard Model have been tested and verified. Its record is impeccable; it has earned our trust.

The next step is to search for data that the Standard Model can’t explain, like excesses of the decay products discussed earlier, and compare them against our predictions for yields from a hypothetical Higgs boson. Insofar as these quantitative predictions match the observations, which they do, one can speak of proof.

Future observations may reveal new effects, or small quantitative discrepancies in the effects already observed. (I’ll be surprised if they don’t!) But the original, simplest sketch of what The Quantum of Ubiquitous Resistance could possibly be resembles reality enough to pass muster, at least as its first draft.

Finally, I’d like to reprise the conclusion of my earlier piece, in which I considered what might happen if the hints of the Higgs did *not* pan out:

And if not? I’ll be heartbroken. Mother Nature will have shown that Her taste is very different from mine. I don’t doubt that it’s superior, but I’ll have to struggle to understand it.

Thanks, Mom!

What is all the buzz about the Higgs boson, aka the “God particle”?

“Higgs” is Peter Higgs, a professor at Edinburgh, who in 1964 made some interesting suggestions along the lines I’ll discuss below. The name “Higgs particle,” though standard, is not entirely fair, for several reasons: the basic idea has a significant pre-history; what’s original with Higgs has co-claimants; and the modern, mature version of the theory involves many ideas that were not anticipated in 1964. I’ll leave those issues for historians of science and the Swedish Academy to sort out.

God on the other hand deserves full credit, or blame.

Herewith a brief introduction, in question and answer format, for the buzz-curious.

**What’s the basic idea?**

Suppose that a species of fish evolved to the point that some of them became physicists, and began to ponder how things move. At first the fish-physicists would, by observation and measurement, derive very complicated laws. But eventually a fish-genius would imagine a different, ideal world ruled by much simpler laws of motion—the laws we humans call Newton’s laws. The great new idea would be that motion looks complicated, in the everyday fish-world, because there’s an all-pervasive medium—water!—that complicates how things move.

Modern physics proposes something very similar for our world. We can use much nicer equations if we’re ready to assume that the “space” of our everyday perception is actually a medium whose influence complicates how matter is observed to move.

**Are there precedents for such an outrageous dodge?**

Yes. In fact it’s a time-honored, successful strategy.

For example: In its basic equations, Newtonian mechanics postulates complete symmetry among the three dimensions of space. Yet in everyday experience there’s a big difference between motion in vertical, as opposed to horizontal, directions. The difference is ascribed to a medium: a pervasive gravitational field.

A much more modern example occurs in quantum chromodynamics (QCD), our fundamental theory of the strong force between quarks and gluons. There we discover that the universe is filled with a medium, the sigma (σ) field, that forms a sort of cosmic molasses for protons and neutrons. The σ field slows protons and neutrons down. Allowing a bit of poetic license, we can say that the σ field gives protons and neutrons mass. Many consequences of the σ field have been calculated and successfully observed, so that to modern physicists it is now every bit as real as Earth’s gravity field. But the σ field exists everywhere and everywhen; it is not tied to Earth.

**What’s the new idea, then?**

In the theory of the weak force, we need to do a similar trick for less familiar particles, the W and Z bosons. We could have beautiful equations for those particles if their masses were zero; but their masses are observed not to be zero. So we postulate the existence of a new all-pervasive field, the so-called Higgs condensate, which slows them down. This proposal, which here I’ve described only loosely and in words, comes embodied in specific equations and leads to many testable predictions. This proposal has been resoundingly successful.

**What is the Higgs particle, conceptually?**

Trouble is, no known form of matter has the right properties to make the Higgs condensate. In order to build that medium, we need to add to our inventory of world-ingredients. The simplest, “minimal” implementation introduces exactly one new elementary particle: the Higgs particle.

**What is the Higgs particle, specifically?**

There’s a quotation I love from Heinrich Hertz, about Maxwell’s equations, that’s relevant here.

To the question: “What is Maxwell’s theory?” I know of no shorter or more definite answer than the following: “Maxwell’s theory is Maxwell’s system of equations.”

Similarly, Higgs particles are the entities that obey the equations of Higgs particle theory. Those equations prescribe everything about how Higgs particles move, interact with other particles, and decay—with just one, albeit glaring, exception: The equations do not determine the mass of the Higgs particle. The theory can accommodate a wide range of values for that mass.

**What is a Higgs particle, operationally?**

A Higgs particle is a highly unstable particle, visible only through its decay products. It has zero electric charge, and—unlike all other known elementary particles—no intrinsic rotation, or “spin.” These null properties reflect the fact that many Higgs particles, uniformly distributed through space, build up the Higgs condensate, which we sense as emptiness or pure vacuum. (Although individual Higgs particles are highly unstable, a uniform distribution of them is stabilized through their mutual interactions. Visible Higgs particles are disturbances above that uniform background.)

As mentioned before, theory does not predict what mass a Higgs particle should have. Masses anywhere from 10 Giga-electron Volts (GeV) to 800 GeV might be accommodated, though problems start to emerge near either extreme. (Physicists commonly use GeV as the unit of mass for elementary particles. One GeV is close to, but slightly more than, the mass of one proton.)
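For the curious, the unit conversion is easy to check. The sketch below is my illustration; the conversion factor and proton mass are standard reference values, not drawn from the text above.

```python
# Convert particle masses between GeV (natural units) and kilograms.
# Conversion factor: 1 GeV/c^2 ~= 1.78266e-27 kg (standard reference value).
GEV_TO_KG = 1.78266e-27

def gev_to_kg(mass_gev):
    return mass_gev * GEV_TO_KG

def kg_to_gev(mass_kg):
    return mass_kg / GEV_TO_KG

proton_kg = 1.67262e-27                  # proton mass in kilograms
print(round(kg_to_gev(proton_kg), 3))    # -> 0.938: one GeV is slightly more than a proton
print(gev_to_kg(125.0))                  # a 125 GeV particle in kilograms, ~2.2e-25 kg
```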

Because Higgs particles are unstable, to study them one must produce them. That requires concentrating lots of energy into a very small space to create enormous energy density. The required concentration of energy is achieved at particle colliders. At the LHC, two counter-rotating beams of high energy protons are made to pass through one another, or cross, at a few points. At each crossing some fraction of the protons, which are moving in opposite directions at very close to the speed of light, collide. The collisions produce fireballs that explode into tens or hundreds of stable or near-stable particles including electrons and positrons, pi mesons, photons, protons and antiprotons, and several other possibilities.

Known physical processes account for the vast majority of this debris. Production and decay of Higgs particles, if they exist, will produce some additional debris. To get evidence for the existence of Higgs particles, therefore, one must identify some distinctive patterns in the observed debris that could result from Higgs particle decays but which are difficult to produce with conventional processes.

Putting it another way: If you’re looking for needles in a haystack, you’d better have a really good grip on what hay can look like—and it helps to look for needles that are hard to mistake!

Several patterns play an important role in the analysis, but I’ll discuss just one—a crucial one—to give a flavor of what’s involved. One process of Higgs particle production and decay is depicted in this sketch:

The sequence of events in the sketch above unfolds reading upwards. Gluons inside the fast-moving protons convert, by quantum fluctuations, into a “virtual” top quark and its antiparticle. The virtual top quark and antiquark swiftly recombine into a Higgs particle. Then the Higgs particle decays by a similar mechanism: quantum fluctuations convert it into a particle-antiparticle pair, which recombine into two photons. At the end of the day, it is those two photons that are observed. (I’m particularly fond of this exotically beautiful quantum process, which I discovered theoretically in 1977.) The point is that conventional processes, i.e. processes that don’t involve Higgs particles, rarely produce two energetic photons. Thus the calculated contribution from Higgs particles, should they exist, can be discerned above the background.

**What did we know about the Higgs before July 4, 2012?**

Prior to the July 4 announcement, we already knew that a very large range of potential mass-values had been ruled out. Only a small window, between 115 and 127 GeV, remained viable.

On the other hand, an excess of events, above expectations from known processes, had been observed in the two-photon channel mentioned above and (less clearly) in several others. The excesses are compatible with, and could be explained by, the existence of Higgs particles with mass close to 125 GeV.

The observed excess might also be compatible with a statistical fluctuation in the background processes—e.g., an improbable run of normal processes leading to photon pairs, comparable to rolling four consecutive sixes at dice.
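The dice analogy can be made quantitative. Translating that probability into the “sigma” language particle physicists use for excesses over background is my own gloss, not part of the announcement:

```python
from statistics import NormalDist

# Chance of rolling four consecutive sixes with a fair die:
p = (1.0 / 6.0) ** 4
print(p)  # 1/1296, about 7.7e-4

# The same tail probability expressed as a one-sided Gaussian "sigma" level:
sigma = NormalDist().inv_cdf(1.0 - p)
print(round(sigma, 1))  # roughly 3.2 -- suggestive, but well short of the 5-sigma discovery standard
```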

**What will it mean if we find the Higgs?**

First of all, it will be a dazzling triumph for theoretical physics. Physicists will have used intricate equations and difficult calculations to predict not only the mere existence of the Higgs particle, but also (given its mass) its rate of production in the complex, extreme conditions of ultra high energy proton-proton collisions. Those equations will also have accurately rendered the relative rates at which the Higgs particle decays in different ways. Yet the most challenging task of all may be computing the much larger, competing background “noise” from known processes, so that the Higgs “signal” can be made to stand out against it. Virtually every aspect of our current understanding of fundamental physics comes into play, and gets a stringent workout, in crafting these predictions.

The animating spirit of research in fundamental physics, captured in the maxim “Today’s sensation is tomorrow’s calibration,” will not rest in that triumph, however. A Higgs particle at mass 125 GeV would portend a new level of fundamental understanding and discovery. Let me explain why.

Within our current theories of the fundamental interactions, embodied in the so-called Standard Model, the Higgs particle mass might, as previously mentioned, have any value within a wide range. Yet there are good reasons to suspect that despite its many virtues, the Standard Model is incomplete. Notably, its equations postulate four different forces (strong, weak, electromagnetic and gravitational) and six different materials they act on. It would be prettier to have a more coherent, unified theory. And in fact there are beautiful, concrete proposals for unified field theories, within which we have just one force and just one kind of material. But to make the unified theory work quantitatively, in detail, we need to expand the equations of the Standard Model so that they integrate a concept called supersymmetry.

Supersymmetry has many aspects and ramifications, but two are most relevant here. First, supersymmetry (for experts: more specifically, focus point supersymmetry) predicts that the Higgs particle mass should lie in the range 120-130 GeV. Finding Higgs particles with mass in that range would give strong circumstantial evidence both for supersymmetry and for the unification that supersymmetry enables.

Second, supersymmetry predicts the existence of many additional new fundamental particles, besides the Higgs particle, that should be accessible to the LHC. So if supersymmetry is right, the LHC will have many more years of brilliant discovery in front of it.

**And if not?**

I’ll be heartbroken. Mother Nature will have shown that Her taste is very different from mine. I don’t doubt that it’s superior, but I’ll have to struggle to understand it.

Or has it?

A millennium from today, historians will look back at the twentieth century primarily as the age of a rich flowering of science. Within a few decades molecular biology unveiled the body and soul of the genetic code, cosmology reconstructed the history of the universe, and geophysics disclosed a home planet more dynamic than ever previously imagined. Yet the biggest revolution of all, from which all those others drew, was ironically the smallest: the conquest of microphysics.

History does not come with time-stamps affixed, but two epochal experiments roughly bracket the High Golden Age of microphysics. In 1911, Ernest Rutherford decoded atoms experimentally, revealing that each has a tiny nucleus containing all its positive charge and almost all its mass. That nucleus is surrounded by electrons, which are bound to it by electric forces. Two years later, Niels Bohr introduced strange new ideas about the laws of motion in the microworld. His breakthrough matured into quantum mechanics.

Quantum mechanics broke the code of the microworld, but it took decades to master its text. Finally, in the “November Revolution” of 1974, two separate experimental groups announced the discovery of a striking set of new particles, the charmonium system. Their discoveries provided a brilliant confirmation of, and a fertile proving ground for, the collection of theoretical ideas we now call the Standard Model. The charm quarks (and their antiquarks) that make the charmonium particles rounded out the theory of the weak interaction, and the forces that bind them were just right for the theory of the strong interaction. Those theories tamed nuclear physics, and together with electromagnetism and gravity they complete the description of matter.

The Standard Model provides, we believe (after very thorough, rigorous, quantitative testing!), a complete mathematical explanation of how subatomic particles combine to make atoms, atoms to make molecules, and molecules to make materials, and how all this stuff interacts with light and radiation. Its equations are comprehensive yet economical, symmetrical but spiced with interesting detail, austere yet strangely beautiful. The Standard Model provides a complete, secure foundation for astrophysics, materials science, chemistry, and physical biology. Good stuff!

The Standard Model marks the ultimate triumph of reductionism. As Isaac Newton put it, we *analyze* matter by finding complete and simple laws governing the behavior of its elementary components, and then use those laws to *synthesize* the properties of macroscopic objects.

Triumph on that scale has a dark side: It’s a tough act to follow. By the late 1980s, articles and books with titles like “The End of Physics” began to appear. At the same time, “Theory of Everything” hyperbole erupted.

Neither reaction, however unseemly, was entirely baseless. The achievements of this golden age did mark the end of a certain special—and especially wonderful—kind of physics. After plumbing the bottom of ordinary matter (that is, physical material that’s reasonably accessible and usefully stable), where do you go? As physicists deciphered the atom, they revolutionized chemistry and enabled microelectronics; as they deciphered the nucleus, they revolutionized not only astrophysics and physical cosmology, but also bomb technology and medicine. There is no realistic prospect that the sort of frontier physics explored at the Large Hadron Collider, as esoteric and expensive as it is marvelous, will yield practical fruit. (This is not to say that the *indirect* value of this work, which serves as “the moral equivalent of war” for many talented, enthusiastic, creative young seekers, will not repay the money invested in it. It will, handsomely.) Its application in the natural world is likely to be restricted to the extremely early universe, and (maybe) a few super-extreme astrophysical situations, like Hawking’s black hole explosions.

But lamenting the passing of a golden age, or professing to reanimate it, are exercises in nostalgia. A healthier attitude, and an attitude that is truer to the unselfconscious exploratory spirit of the golden age itself, is to engage with its legacy of unanswered challenges and new opportunities. What a legacy it is, and what opportunities there are!

For the Standard Model, despite its practical success in describing ordinary matter, leaves many loose ends and unanswered questions. One of its ingredients, the Higgs particle, has not yet been observed directly. That embarrassment may soon be remedied, but other flaws run deeper. Its equations remain lopsided in peculiar ways. They beg to be embedded in a larger, still more symmetric theory. There are, in fact, excellent ideas for advancing toward such unification. Those ideas suggest new lines of experimental investigation, notably the search for proton decay and for supersymmetric particles. The other interactions, and indeed quantum mechanics itself, have not yet been organically united with gravity. String theory might help with those problems, but it’s clear that crucial ideas still await discovery.

We’d also like to understand why the laws of microphysics appear so nearly unchanged if we run time backwards. The only known explanation predicts the existence of a remarkable new class of particles called axions. These wraithlike cousins of photons, more elusive even than neutrinos, plausibly provide the astronomical dark matter. And if axions don’t—what does?

These and other unanswered challenges amply refute the notion that physicists are, in any *meaningful* sense, close to having a “Theory of Everything” (or that we’ve reached “The End of Physics”).

Yet the biggest challenges, I think, are of a different kind. The art of using our comprehension of microphysics is an open-ended invitation to creativity. Music-making doesn’t end when you’ve learned how your instrument works—it begins.

Can we engineer quantum computers, and through them fashion truly alien forms of intelligence? Can we tune in to the messages the universe itself broadcasts in gravity waves, in neutrinos, and in axions? Can we understand the human mind, molecule by molecule, and systematically improve it? To ask these questions is to discover, in the ripeness of one golden age, the seeds of new ones.

It seems that if one is working from the point of view of getting beauty in one’s equations, and if one has really a sound insight, one is on a sure line of progress. If there is not complete agreement between the results of one’s work and experiment, one should not allow oneself to be too discouraged, because the discrepancy may well be due to minor features that are not properly taken into account and that will get cleared up with further developments of the theory.

The poet John Keats expressed it more concisely:

Beauty is truth, truth beauty – that is all

Ye know on earth, and all ye need to know.

But, in science, does a beautiful hypothesis necessarily lead to deep truth about nature?

Several famous success stories suggest that it does, at least in physics:

James Clerk Maxwell arrived at his celebrated system of equations for electromagnetism by codifying what was thought to be known experimentally about electricity and magnetism, noting a mathematical inconsistency, and fixing it. In doing so, he moved from truth to beauty. The Maxwell equations of 1861, which survive intact as a foundation of today’s physics, are renowned for their beauty. The normally sober Heinrich Hertz, whose experimental work to test Maxwell’s theory gave birth to radio and kickstarted modern telecommunications, was moved to rhapsodize:

*One cannot escape the feeling that these mathematical formulae have an independent existence and an intelligence of their own, that they are wiser than we are, wiser even than their discoverers, that we get more out of them than was originally put into them.*

Albert Einstein, by contrast, arrived at his equations for gravity—the general theory of relativity—with minimal guidance from experiment. Instead he looked for beautiful equations. After years of struggle, in 1915 he found them. At first, and for decades afterwards, few testable predictions distinguished Einstein’s new theory of gravity from Newton’s venerable one. Now there are many such tests, and it is amply clear that Einstein moved from beauty to truth.

Yet even in physics, the record is more mixed than is commonly known. Despite Keats and Dirac, beauty’s seductions don’t always give birth to truth. There have been fascinating theories that are both gorgeous and wrong: Beautiful Losers.

Like surgeons, physicists bury their failures. But the most beautiful of the Beautiful Losers deserve a better fate than oblivion, and here they’ll receive it. I’ve written brief accounts of three Beautiful Losers: Plato’s Geometry of Elements, Kepler’s Harmonic Spheres, and Kelvin’s Vortex Atoms.

**Plato’s Geometry of Elements**: Plato believed that he could describe the Universe using five simple shapes. These shapes, called the Platonic solids, did not originate with Plato. In fact, they go back thousands of years before Plato; you can find stone models (perhaps dice?) of each of the Platonic solids in the Ashmolean Museum at Oxford dating to around 2000 BC. But Plato made these solids central to a vision of the physical world that links ideal to real, and microcosm to macrocosm in an original, and truly remarkable, style. **Read more**

**Kepler’s Harmonic Spheres**: Like Plato, the German astronomer Johannes Kepler believed that five Platonic solids provided an essential blueprint for our universe. Six planets were known to Kepler, and he believed that they were carried around on nested globes that he called the celestial spheres. Kepler reasoned that five solids could correspond to six planets, if the solids—or more precisely, their bounding surfaces—marked the spaces between planetary spheres. He described this elegant construction in his *Mysterium Cosmographicum* in 1596. **Read more**

**Kelvin’s Vortex Atoms**: A tornado is just air in motion, but its ominous funnel gives an impression of autonomous existence. A tornado seems to be an object; its pattern of flux possesses an impressive degree of permanence. The Great Red Spot of Jupiter is a tornado writ large, and it has retained its size and shape for at least three hundred years. The powerful notion of vortices in fluids abstracts the mathematical essence of such objects, and led William Thomson, the 19th century physicist whose work earned him the title Lord Kelvin, to ask: Could atoms themselves be vortices in an ether that pervades space? **Read more**

It’s wonderful, and comforting, that each of my Beautiful Losers, though wrong, was in its own way fruitful. Today more than ever physicists working at the frontiers of knowledge are inspired by beauty. In the alien realms of the very large, the very small, and the extremely complex, experiments can be difficult to perform and everyday experience offers little guidance. Beauty is almost all we’ve got!

Plato believed that he could describe the Universe using five simple shapes. These shapes, called the Platonic solids, did not originate with Plato. In fact, they go back thousands of years before Plato; you can find stone models (perhaps dice?) of each of the Platonic solids in the Ashmolean Museum at Oxford dating to around 2000 BC, as pictured below. But Plato made these solids central to a vision of the physical world that links ideal to real, and microcosm to macrocosm in an original, and truly remarkable, style.

Let me explain, first, what the Platonic solids are. To begin, consider something simpler: regular polygons. Regular polygons, by definition, are two-dimensional shapes bounded by sides of equal length, each making the same angles with its neighbors. Equilateral triangles, squares, regular pentagons, and so on are all regular polygons. Platonic solids are the three-dimensional analog of regular polygons, and prove to be far more interesting. Platonic solids are bounded by regular polygons, all of the same size and shape. One can prove mathematically that there are exactly five Platonic solids. Here they are:

The tetrahedron has four triangular faces, the cube six square faces, the octahedron eight triangular faces, the dodecahedron twelve pentagonal faces, and the icosahedron twenty triangular faces. Plato proposed that four of these solids built the Four Elements: sharp-pointed tetrahedra give the sting of Fire, smooth-sliding octahedra give easily-parted Air, droplety icosahedra give Water, and lumpish, packable cubes give Earth. The dodecahedron, at last, is the shape of the Universe as a whole. Later Aristotle emended Plato’s system, suggesting that dodecahedra provide a fifth essence—the space-filling Ether.
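The proof that exactly five such solids exist is itself a gem: the corner angles of the regular polygons meeting at any vertex must total less than 360 degrees. A few lines of Python (my illustration, using the standard bookkeeping of p-sided faces with q meeting at each vertex) enumerate the possibilities:

```python
# A Platonic solid has regular p-gon faces, q of them meeting at each vertex.
# The interior angle of a regular p-gon is (p - 2) * 180 / p degrees, and the
# q angles at a vertex must sum to less than 360 degrees, which reduces to
#   (p - 2) * (q - 2) < 4.
# For p >= 6 (or q >= 6) the product is already at least 4, so a small search suffices.
solids = [(p, q) for p in range(3, 10) for q in range(3, 10)
          if (p - 2) * (q - 2) < 4]
print(solids)  # [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]
# tetrahedron, octahedron, icosahedron, cube, dodecahedron -- exactly five.
```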

Plato’s ideas lent dignity and grandeur to the study of geometry, and greatly stimulated its development. The thirteenth and final book of Euclid’s *Elements*, the grand synthesis of Greek geometry that is the founding text of axiomatic mathematics, culminates with the construction of the five Platonic solids, and a proof that they exhaust the possibilities. Scholars speculate that Euclid planned the *Elements* with that climax in mind from the start.

From a modern scientific perspective, of course, Plato’s mapping from mathematical ideals to physical reality looks hopelessly wrong. The four (or five) ancient “elements” are not simple substances, nor are they usable building blocks for constructing the material world. Today’s rich and successful analysis of matter involves entirely different concepts. And yet…

In its general approach, and its ambition, Plato’s utterly mistaken theory anticipated the spirit of modern theoretical physics. His program of describing the material world by analyzing (“reducing”) it to a few atomic substances, each with simple properties, existing in great numbers of identical copies, coincides with modern understanding.

Deeper still penetrates his insight that *symmetry defines structure*. Plato sensed enormous potential in the fact that asking for perfect symmetry leads one to discover a small number of possible structures. On that foundation, together with a few clues from experience, the outlandish synthesis his philosophy called for, realizing the World as Ideas, might actually be achievable. And clues were there to be found: Near-coincidence between the number of perfect solids (five) and the number of suspected elements (four); suggestions of how observed qualities might reflect underlying shapes (e.g., the sting of fire from the sharp points of tetrahedra). One must also admire the boldness of genius in seeing an apparent defect in the theory—five solids for four elements—as an opportunity for crowning creation, either with the Universe as a whole (Plato) or with space itself (Aristotle).

Modern physicists, when seeking equations to describe the unfamiliar laws of the microcosm, must make guesses based on fragmentary information. Optimistically—and lacking constructive alternatives—they have turned, as Plato did, to symmetry as their guide. Symmetry of equations is perhaps a less familiar idea than symmetry of shapes, but there is nothing obscure or mystical about it. We say an equation, like a shape, displays symmetry when it allows *changes that make no change*. So for instance the equation

**X = Y**

has a nice symmetry to it, because exchanging **X** for **Y** changes it into this:

**Y = X**

and this transformed equation expresses exactly the same content as the original. On the other hand **X = Y + 2**, say, turns into **Y = X + 2**, which expresses something else entirely. As this baby example demonstrates, symmetric equations can be rare and special, even when the symmetry involves quite simple transformations.
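The baby example can even be checked mechanically. A small sketch, testing candidate relations over sample values (the helper name is my own):

```python
# A symmetry of an equation is a "change that makes no change".
# Here: does swapping X and Y leave the relation's truth value intact?
def swap_symmetric(relation, samples=range(-5, 6)):
    return all(relation(x, y) == relation(y, x)
               for x in samples for y in samples)

print(swap_symmetric(lambda x, y: x == y))      # True:  X = Y survives the swap
print(swap_symmetric(lambda x, y: x == y + 2))  # False: X = Y + 2 does not
```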

The equations of interest for physics are considerably richer, of course, and the “changes that make no change” we hope they allow are much more extensive and elaborate. But the central inspiration remains, as it was for Plato, the hope that symmetry defines a few interesting structures, and that Nature chooses one (or all!) of those most beautiful possibilities.

Plato’s Beautiful Loser was, in hindsight, a product of premature, and immature, ambition. He tried to leap directly from beautiful mathematics, some imaginative numerology, and primitive, cherry-picked observations to a Theory of Everything. In this his ambition was premature. Also, Plato failed to draw out specific consequences from his ideas, or to test them critically. He sketched an inspiring world-model, but was content to “declare victory” without engaging any serious battles. The mature and challenging form of scientific ambition, which aspires to understand specific features of the world in detail and with precision, emerged only centuries later.

Like Plato, the German astronomer Johannes Kepler believed that the five Platonic solids provided an essential blueprint for our universe. Six planets were known to Kepler, and he believed that they were carried around on nested globes that he called the celestial spheres. Kepler reasoned that five solids could correspond to six planets, if the solids—or more precisely, their bounding surfaces—marked the spaces between planetary spheres. He described this elegant construction in his *Mysterium Cosmographicum* in 1596.

Kepler proposed that Mercury’s sphere supports a circumscribed octahedron, which is inscribed within Venus’s sphere. Then we have icosahedron, dodecahedron, tetrahedron, cube interpolating respectively Venus-Earth, Earth-Mars, Mars-Jupiter and at last Jupiter-Saturn. This revelation of cosmic order was, for Kepler, rapturous:

*I wanted to become a theologian; for a long time I was unhappy. Now, behold, God is praised by my work even in astronomy.*

It immediately suggests a plan of construction that human artists can mimic—as Kepler did himself—in worthy, gorgeous models:

Though equally (that is, completely) wrong, Kepler’s conception reaches a higher level, scientifically, than Plato’s speculations, for it makes concrete numerical predictions about the relative sizes of planetary orbits, which can be compared to their observed values. The agreement, while not precise, was close enough to convince Kepler he might be on the right track. Encouraged, he set out, courageously, to prove it.
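One can redo Kepler’s comparison in a few lines. The sphere-radius ratios follow from standard solid geometry; the orbital radii below are modern semi-major axes in astronomical units, not the values Kepler had, so the exercise is my reconstruction rather than his arithmetic:

```python
import math

# Ratio of circumscribed to inscribed sphere radius for each Platonic solid.
# Dual solids (cube/octahedron, dodecahedron/icosahedron) share the same ratio.
ratio = {
    "octahedron":   math.sqrt(3),                          # Mercury -> Venus
    "icosahedron":  math.sqrt(3) * math.tan(math.pi / 5),  # Venus   -> Earth
    "dodecahedron": math.sqrt(3) * math.tan(math.pi / 5),  # Earth   -> Mars
    "tetrahedron":  3.0,                                   # Mars    -> Jupiter
    "cube":         math.sqrt(3),                          # Jupiter -> Saturn
}

# Modern semi-major axes in astronomical units.
au = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.000,
      "Mars": 1.524, "Jupiter": 5.203, "Saturn": 9.537}

pairs = [("octahedron", "Mercury", "Venus"),
         ("icosahedron", "Venus", "Earth"),
         ("dodecahedron", "Earth", "Mars"),
         ("tetrahedron", "Mars", "Jupiter"),
         ("cube", "Jupiter", "Saturn")]

for solid, inner, outer in pairs:
    predicted = ratio[solid]
    observed = au[outer] / au[inner]
    print(f"{solid:12s} predicted {predicted:.3f}  observed {observed:.3f}")
```

The predicted and observed ratios agree to within ten or twenty percent in each case: not precise, but close enough to tantalize.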

Thus Kepler’s discovery set him on his storied career in astronomy. As his work developed, however, problems with his original model emerged. Late in the 16th century, the astronomer Tycho Brahe was making exquisitely accurate observations of the motions of the stars and planets. As Kepler strove to do justice to Tycho’s work, he discovered that the orbit of Mars is not circular, but follows an ellipse. This and other discoveries fatally undermined Kepler’s beautiful system of celestial spheres. (Eventually even the numerology collapsed, with the discovery of Uranus in 1781, though Kepler was spared that ignominy.) Through the arduous, devoted labor his vision inspired he found other regularities among the orbits of the planets—his famous three laws of planetary motion—whose accuracy could not be doubted. In the end he’d arrived at a different universe than the one he first envisioned, and hoped for. He reported back:

*I write the book. Whether it is to be read by the people of the present or of the future makes no difference: let it await its reader for a hundred years, if God himself has stood ready for six thousand years for one to study him.*

Kepler’s hoped-for reader emerged not quite a hundred years later: Isaac Newton. To me, the first illustration of Newton’s *Principia* (1687) worthily transmits the most consequential thought-experiment ever. With a few cartoonish strokes, it both presages a new, universal theory of gravity and embodies a new concept of scientific beauty: *dynamic* beauty.

Imagine standing atop a tall mountain on a spherical earth, throwing stones horizontally, harder and harder. To keep things clear, remove from thought the damping influence of the atmosphere. At first it is clear what will happen, based on everyday experience: The stones will travel further and further before landing. Eventually, when the initial velocity becomes large enough, the stones will pass over the horizon; beyond a critical speed, a stone never lands at all, but circles the planet. Visualizing the developing situation, as in Newton’s diagram, we can easily imagine the progression of trajectories leading to a circle (duck!). In this way we begin to see how the same force that pulls bodies to Earth might also support orbital motion. We see that orbiting is a process of constantly falling, but toward a (relatively) moving target.

I like to think that the images in this diagram reveal the deep inspiration, pre-mathematical and even pre-verbal, behind young Newton’s program of research (parallel to how young Kepler’s harmonic spheres inspired his). It contains the germ of universal gravitation: With (imaginary) taller mountains as launching pads, we fill the sky with possible orbits. Might the Moon occupy one of them? And if Earth’s Moon, why not Jupiter’s moons, or the Sun’s planets? And this question too begs for an answer: Throw harder still—what are the shapes of the resulting trajectories? The answer, which Newton’s mathematics enabled him to derive, is that they make more and more eccentric ellipses—the very shape that Kepler had used to fit planetary orbits!
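Newton’s thought experiment is easy to replay numerically. The sketch below is my own: dimensionless units with GM = 1 and planet radius 1, a “mountaintop” at radius 1.02, and a simple symplectic Euler integrator for the inverse-square force.

```python
import math

# Newton's cannonball in dimensionless units: GM = 1, planet radius 1,
# launch point (the "mountaintop") at radius 1.02. The circular-orbit
# speed at the launch radius is about 0.99.
def range_of(speed, dt=1e-3, steps=100_000):
    """Downrange angle (radians) at which a horizontally launched stone
    lands, or math.inf if it never comes down: it is in orbit."""
    x, y = 0.0, 1.02
    vx, vy = speed, 0.0
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= x / r3 * dt           # inverse-square pull toward the center
        vy -= y / r3 * dt
        x += vx * dt
        y += vy * dt
        if x * x + y * y < 1.0:     # fell back to the surface
            return math.atan2(x, y)
    return math.inf

for v in (0.2, 0.5, 0.8, 1.0):
    print(v, range_of(v))
```

Throw harder and the landing point recedes; past the critical speed the stone never lands at all.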

In Newton’s dynamical approach, the beauty of planetary motion is not embodied in any one orbit, or in the particular set of orbits realized in the Solar System. Rather it is in the totality of possible orbits, which also contains trajectories of falling bodies. Putting it another way: The deepest beauty lies in the *equations themselves*, not in their *particular solutions*. Classical physics, initiated by Newton’s brilliantly successful celestial mechanics, suggests that it is misguided to expect, as Kepler and Plato did, to find ideal symmetry embodied in any individual physical object, be it the Solar System or an elemental atom. Astronomers in recent years have identified dozens of extrasolar planetary systems, and found that they come in a wide variety of shapes and sizes. And yet…

Physical requirements can privilege, among the infinity of possible solutions to beautiful dynamical equations, special ones—often especially beautiful ones. Consider crystals: They are of course quite real and tangible natural objects; they can be grown in controlled, reproducible conditions; and their form is often highly symmetric. Kepler himself wrote a monograph featuring the six-fold symmetry of snow crystals.

We discover the same thing, spectacularly, in the quantum theory of atoms. An electron interacting with a proton obeys the same species of force law as a planet orbiting the Sun. Schrödinger’s equation, no less than Newton’s, allows an enormous infinity of complicated solutions. (In fact, much more so!) But if we focus on the solutions with the lowest energies—the solutions that coldish hydrogen atoms will settle into, after radiating—we pick out a special few. And those special solutions exhibit rich and intricate symmetry. They fulfill, as they transcend, the visions of Plato, Kepler, *and* Newton.
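Those special low-energy solutions can at least be tabulated. Here is a minimal sketch of the textbook hydrogen spectrum; the 13.6 eV scale and the n-squared degeneracy count are standard results, not derived in this essay:

```python
# Bohr/Schrodinger energy levels of hydrogen and their degeneracies.
RYDBERG_EV = 13.6  # approximate ionization energy of hydrogen, in eV

for n in range(1, 5):
    energy = -RYDBERG_EV / n**2   # energy of level n
    degeneracy = n**2             # distinct (l, m) states at level n, ignoring spin
    print(n, round(energy, 2), degeneracy)
```

The pattern of degeneracies, one state, then four, then nine, reflects a hidden symmetry of the Coulomb problem that goes beyond ordinary rotations.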

A tornado is just air in motion, but its ominous funnel gives an impression of autonomous existence. A tornado seems to be an object; its pattern of flux possesses an impressive degree of permanence. The Great Red Spot of Jupiter is a tornado writ large, and it has retained its size and shape for at least three hundred years. The powerful notion of vortices in fluids abstracts the mathematical essence of such objects, and led William Thomson, the 19th century physicist whose work earned him the title Lord Kelvin, to ask: Could atoms themselves be vortices in an ether that pervades space?

Kelvin’s idea was inspired by the work of Hermann Helmholtz, who first realized that the core of a vortex—analogous to the eye of a hurricane—is a line-like filament that can become tangled up with other filaments in a knotted loop that cannot come undone. Helmholtz also demonstrated that vortices exert forces on one another, and those forces take a form reminiscent of the magnetic forces between wires carrying electric currents.

To Thomson, these results seemed wonderfully suggestive. At the time, evidence from chemistry and the theory of gases had persuaded most physicists that matter was indeed composed of atoms. But there was no physical model indicating how a few types of atoms, each existing in very large numbers of identical copies—as required by chemistry—could possibly arise.

In seemingly unrelated work, physicists were discovering that space-filling entities are an essential tool in Nature’s workshop. Today we accept those entities—known as electric and magnetic fields—on their own terms, as fundamental; but Thomson and his contemporaries believed them to be manifestations of an underlying fluid: an updated version of Aristotle’s Aether.

Thomson’s bold ambition, and instinct for unity, led him to propose a synthesis: the theory of vortex atoms. The Ethereal fluid, being so fundamental, should be capable of supporting stable vortices, he reasoned. Those vortices, according to Helmholtz’s theorems, would fall into distinct species corresponding to different types of knots. Multiple knots might aggregate into a variety of quasi-stable “molecules.” All this fits, remarkably, the heart’s desire in a theory of atoms: naturally stable building blocks whose possibilities for combination seem rich enough to do justice to chemistry.

Thomson himself, a restless intellect, moved on to gush forth other ideas, but his friend and colleague Peter Guthrie Tait, enthralled by the vortex atom theory, set to work. Thus inspired, he did pioneering work on the theory of knots, producing a systematic classification of knots with up to 10 crossings.

Alas, this beautiful and mathematically fruitful synthesis is, as a physical theory of atoms, a Beautiful Loser. Its failure was due not so much to internal contradictions—it was too vague and flexible for that!—as to a certain sterility. Above all, it was put out of business by more successful competitors. Eventually the mechanical Ether was discredited by Einstein’s relativity, and the triumphant Maxwell equations for electric and magnetic fields do not support vortices. The modern, successful quantum theory of atoms is based on entirely different ideas. And yet…

It’s easy to understand the appeal of vortex atoms, not only as fascinating mathematics, but as potential elements for world-building. When we turn from *understanding* the natural world to *designing* micro-worlds on our own, we might come to treasure their virtues. Vortices can have an impressive degree of stability; they can be knotted into topologically distinct forms, which are also quasi-stable; and their interactions are complex and intricate, yet reproducible.

Those attractive features can be embodied in artificial “atoms” specifically designed to be building blocks for quantum engineering. For quantum theory, though it made the vortex theory of natural atoms obsolete, provides us with a variety of far more reliable, and far more perfect, aethers than the old Aetherial fantasies. Classical fluids, whether they are real liquids or speculative substrates, are inherently imperfect. Any motion in them will stir up little waves that carry away energy, and eventually dissipate the flow. Quantum fluids, such as superfluid helium and a variety of superconductors, on the contrary, support flows that, in theory, will persist unchanged forever. And in practice, too—that’s why we call them “super”! The deep point is that in quantum mechanics energy comes packaged in discrete lumps (quanta). If you operate at low temperatures, where there’s very little energy available, it can become impossible to stir up those little waves that bedevil classical fluids at all. In quantum fluids, vortices really are forever.

There is lots of room for creativity in designing and constructing artificial aethers. Many materials become perfect (quantum) fluids at low temperature.

By choosing the right media, we can tailor our fluids to have useful properties. Physicists and engineers have become quite adept at designing useful fluids, such as the liquid crystals behind the LCD screens of computer monitors and televisions. In those examples the fluids have internal structure, which can be manipulated electrically to change their appearance. So far most of the effort has gone into classical fluids, but physicists are beginning to awaken to some promising new possibilities offered by quantum fluids. Though the details can be quite different—as I said, there’s lots of room for creativity here—the basic inspiration, to make fluids that we can manipulate externally to make them do something useful, is the same.

Designer quantum fluids can offer us a variety of vortex atoms, and the opportunity to design new chemistries that accomplish something we want done. Perhaps the most intriguing possibility is to embody, in real materials, the so-far theoretical concept of anyons. Anyons are particles that interact in a special, peculiarly quantum-mechanical way. Anyons don’t exert any forces upon one another, but when you wind one anyon around another, you make interesting, predictable changes in the wave function that describes your system. Quantum computers are, in principle, nothing but machines that process wave functions. (Since wave functions can simulate a tape, or more generally a collection of tapes, that encode data, operations on wave functions can be massively parallel operations on data.) On paper, at least, theorists have proposed ways whereby one might orchestrate the motion of anyons to construct a general-purpose quantum computer. The future will tell whether this beautiful idea blossoms into reality, or proves another seductive Beautiful Loser.
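For the simplest ("abelian") anyons, the winding rule can be stated in one line of code. This is an illustrative toy, not a model of any particular material: each full winding of one anyon around another multiplies the wave function by exp(2iθ), where θ is the exchange phase (0 for bosons, π for fermions, and anything in between for anyons).

```python
import cmath

def winding_phase(theta: float, windings: int) -> complex:
    """Phase acquired by the two-particle wave function when one abelian
    anyon winds around another. One full winding equals two exchanges,
    so the amplitude is multiplied by exp(2i * theta) per winding."""
    return cmath.exp(2j * theta * windings)

print(winding_phase(0.0, 3))           # bosons: winding does nothing
print(winding_phase(cmath.pi, 3))      # fermions: full windings also give +1
print(winding_phase(cmath.pi / 2, 1))  # "semions": one winding flips the sign
```

Only for intermediate values of θ does winding leave a nontrivial yet perfectly predictable imprint on the wave function, and that is precisely the property a topological quantum computer would orchestrate.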

“Higgs” is Peter Higgs, a professor at Edinburgh, who in 1964 made some interesting suggestions along the lines I’ll discuss below. The name “Higgs particle,” though standard, is not entirely fair, for several reasons: the basic idea has a significant pre-history; what’s original with Higgs has co-claimants; and the modern, mature version of the theory involves many ideas that were not anticipated in 1964. I’ll leave those issues for historians of science and the Swedish Academy to sort out.

God on the other hand deserves full credit, or blame.

Herewith a brief introduction, in question and answer format, for the buzz-curious.

**What’s the basic idea?**

Suppose that a species of fish evolved to the point that some of them became physicists, and began to ponder how things move. At first the fish-physicists would, by observation and measurement, derive very complicated laws. But eventually a fish-genius would imagine a different, ideal world ruled by much simpler laws of motion—the laws we humans call Newton’s laws. The great new idea would be that motion looks complicated, in the everyday fish-world, because there’s an all-pervasive medium—water!—that complicates how things move.

Modern physics proposes something very similar for our world. We can use much nicer equations if we’re ready to assume that the “space” of our everyday perception is actually a medium whose influence complicates how matter is observed to move.

**Are there precedents for such an outrageous dodge?**

Yes. In fact it’s a time-honored, successful strategy.

For example: In its basic equations, Newtonian mechanics postulates complete symmetry among the three dimensions of space. Yet in everyday experience there’s a big difference between motion in vertical, as opposed to horizontal, directions. The difference is ascribed to a medium: a pervasive gravitational field.

A much more modern example occurs in quantum chromodynamics (QCD), our fundamental theory of the strong force between quarks and gluons. There we discover that the universe is filled with a medium, the sigma (σ) field, that forms a sort of cosmic molasses for protons and neutrons. The σ field slows protons and neutrons down. Allowing a bit of poetic license, we can say that the σ field gives protons and neutrons mass. Many consequences of the σ field have been calculated and successfully observed, so that to modern physicists it is now every bit as real as Earth’s gravity field. But the σ field exists everywhere and everywhen; it is not tied to Earth.

**What’s the new idea, then?**

In the theory of the weak force, we need to do a similar trick for less familiar particles, the W and Z bosons. We could have beautiful equations for those particles if their masses were zero; but their masses are observed not to be zero. So we postulate the existence of a new all-pervasive field, the so-called Higgs condensate, which slows them down. This proposal, which here I’ve described only loosely and in words, comes embodied in specific equations and leads to many testable predictions. It has been resoundingly successful.

**What is the Higgs particle, conceptually?**

Trouble is, no known form of matter has the right properties to make the Higgs condensate. In order to build that medium, we need to add to our inventory of world-ingredients. The simplest, “minimal” implementation introduces exactly one new elementary particle: the Higgs particle.

**What is the Higgs particle, specifically?**

There’s a quotation I love from Heinrich Hertz, about Maxwell’s equations, that’s relevant here.

To the question: “What is Maxwell’s theory?” I know of no shorter or more definite answer than the following: “Maxwell’s theory is Maxwell’s system of equations.”

Similarly, Higgs particles are the entities that obey the equations of Higgs particle theory. Those equations prescribe everything about how Higgs particles move, interact with other particles, and decay—with just one, albeit glaring, exception: The equations do not determine the mass of the Higgs particle. The theory can accommodate a wide range of values for that mass.

**What is a Higgs particle, operationally?**

A Higgs particle is a highly unstable particle, visible only through its decay products. It has zero electric charge, and—unlike all other known elementary particles—no intrinsic rotation, or “spin.” These null properties reflect the fact that many Higgs particles, uniformly distributed through space, build up the Higgs condensate, which we sense as emptiness or pure vacuum. (Although individual Higgs particles are highly unstable, a uniform distribution of them is stabilized through their mutual interactions. Visible Higgs particles are disturbances above that uniform background.)

As mentioned before, theory does not predict what mass a Higgs particle should have. Masses anywhere from about 10 gigaelectronvolts (GeV) to 800 GeV might be accommodated, though problems start to emerge near either extreme. (Physicists commonly use GeV as the unit of mass for elementary particles. One GeV is close to, but slightly more than, the mass of one proton.)
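The parenthetical remark about units can be checked with a quick computation. The numerical constants below are standard reference values, quoted to a few digits:

```python
GEV_TO_KG = 1.78266e-27    # mass equivalent of 1 GeV, in kilograms
PROTON_MASS_GEV = 0.93827  # proton mass, in GeV

def gev_to_proton_masses(mass_gev: float) -> float:
    """Express a mass given in GeV as a multiple of the proton mass."""
    return mass_gev / PROTON_MASS_GEV

print(f"1 GeV   = {gev_to_proton_masses(1.0):.3f} proton masses")    # ~1.066
print(f"125 GeV = {gev_to_proton_masses(125.0):.1f} proton masses")  # ~133
print(f"125 GeV = {125.0 * GEV_TO_KG:.2e} kg")
```

So a particle at the upper end of the allowed window would outweigh a uranium atom, yet still be hopelessly invisible on any ordinary scale.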

Because Higgs particles are unstable, to study them one must produce them. That requires concentrating lots of energy into a very small space to create enormous energy density. The required concentration of energy is achieved at particle colliders. At the LHC, two counter-rotating beams of high energy protons are made to pass through one another, or cross, at a few points. At each crossing some fraction of the protons, which are moving in opposite directions at very close to the speed of light, collide. The collisions produce fireballs that explode into tens or hundreds of stable or near-stable particles including electrons and positrons, pi mesons, photons, protons and antiprotons, and several other possibilities.

Known physical processes account for the vast majority of this debris. Production and decay of Higgs particles, if they exist, will produce some additional debris. To get evidence for the existence of Higgs particles, therefore, one must identify some distinctive patterns in the observed debris that could result from Higgs particle decays but which are difficult to produce with conventional processes.

Putting it another way: If you’re looking for needles in a haystack, you’d better have a really good grip on what hay can look like—and it helps to look for needles that are hard to mistake!

Several patterns play an important role in the analysis, but I’ll discuss just one—a crucial one—to give a flavor of what’s involved. One process of Higgs particle production and decay is depicted in this sketch:

The sequence of events in the sketch above unfolds reading upwards. Gluons inside the fast-moving protons convert, by quantum fluctuations, into a “virtual” top quark and its antiparticle. The virtual top quark and antiquark swiftly recombine into a Higgs particle. Then the Higgs particle decays by a similar mechanism: quantum fluctuations convert it into a particle-antiparticle pair, which recombine into two photons. At the end of the day, it is those two photons that are observed. (I’m particularly fond of this exotically beautiful quantum process, which I discovered theoretically in 1977.) The point is that more conventional processes—that is, processes that don’t involve Higgs particles—only rarely produce two energetic photons. Thus the calculated contribution from Higgs particles, should they exist, can be discerned above the background.

**So, does it exist?**

Short answer: We still don’t know for sure. There’s been dramatic progress on the question, however.

A very large range of potential mass values has been ruled out. Only a small window, between 115 and 127 GeV, remains viable.

On the other hand, an excess of events, above expectations from known processes, has been observed in the two-photon channel mentioned above and (less clearly) in several others. The excesses are compatible with, and could be explained by, the existence of Higgs particles with mass close to 125 GeV.

The observed excess might also be compatible with a statistical fluctuation in the background processes—e.g., an improbable run of normal processes leading to photon pairs, comparable to rolling four consecutive sixes at dice. With more data a true signal will grow more rapidly than any plausible fluctuation, so the ambiguity in interpretation will disappear. If the LHC continues to function brilliantly, as it has so far, we should have a definitive answer within the next few months.
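The reason a genuine signal must eventually win can be sketched with a naive counting argument: signal events accumulate in proportion to the data collected, while the typical statistical fluctuation of the background grows only as its square root. The rates below are purely illustrative, not actual LHC numbers:

```python
import math

def significance(signal_rate: float, background_rate: float,
                 data_size: float) -> float:
    """Naive counting significance S / sqrt(B): the signal S grows
    linearly with the amount of data, while the background's typical
    fluctuation grows only as sqrt(B)."""
    signal = signal_rate * data_size
    background = background_rate * data_size
    return signal / math.sqrt(background)

# Quadrupling the data doubles the significance of a true signal:
for data in (1, 4, 16):
    print(data, round(significance(2.0, 100.0, data), 2))
```

A statistical fluke enjoys no such steady growth, which is why more running time resolves the ambiguity.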

**What will it mean, if the hints pan out?**

First of all, it will be a dazzling triumph for theoretical physics. Physicists will have used intricate equations and difficult calculations to predict not only the mere existence of the Higgs particle, but also (given its mass) its rate of production in the complex, extreme conditions of ultra-high-energy proton-proton collisions. Those equations will also have accurately rendered the relative rates at which the Higgs particle decays in different ways. Yet the most challenging task of all may be computing the much larger, competing background “noise” from known processes, so that the Higgs “signal” can be distinguished against it. Virtually every aspect of our current understanding of fundamental physics comes into play, and gets a stringent workout, in crafting these predictions.

The animating spirit of research in fundamental physics, captured in the maxim “Today’s sensation is tomorrow’s calibration,” will not rest in that triumph, however. A Higgs particle at mass 125 GeV would portend a new level of fundamental understanding and discovery. Let me explain why.

Within our current theories of the fundamental interactions, embodied in the so-called Standard Model, the Higgs particle mass might, as previously mentioned, have any value within a wide range. Yet there are good reasons to suspect that despite its many virtues, the Standard Model is incomplete. Notably, its equations postulate four different forces (strong, weak, electromagnetic, and gravitational) and six different materials they act on. It would be prettier to have a more coherent, unified theory. And in fact there are beautiful, concrete proposals for unified field theories, within which we have just one force and just one kind of material. But to make the unified theory work quantitatively, in detail, we need to expand the equations of the Standard Model so that they incorporate a concept called supersymmetry.

Supersymmetry has many aspects and ramifications, but two are most relevant here. First, supersymmetry (for experts: more specifically, focus point supersymmetry) predicts that the Higgs particle mass should lie in the range 120-130 GeV. Finding Higgs particles with mass in that range would give strong circumstantial evidence both for supersymmetry and for the unification that supersymmetry enables.

Second, supersymmetry predicts the existence of many additional new fundamental particles, besides the Higgs particle, that should be accessible to the LHC. So if supersymmetry is right, the LHC will have many more years of brilliant discovery in front of it.

**And if not?**

I’ll be heartbroken. Mother Nature will have shown that Her taste is very different from mine. I don’t doubt that it’s superior, but I’ll have to struggle to understand it.
