This essay is part of the series Beautiful Losers.
A tornado is just air in motion, but its ominous funnel gives an impression of autonomous existence. A tornado seems to be an object; its pattern of flux possesses an impressive degree of permanence. The Great Red Spot of Jupiter is a tornado writ large, and it has retained its size and shape for at least three hundred years. The powerful notion of vortices in fluids abstracts the mathematical essence of such objects, and led William Thomson, the 19th-century physicist whose work earned him the title Lord Kelvin, to ask: Could atoms themselves be vortices in an ether that pervades space?
Kelvin's idea was inspired by the work of Hermann Helmholtz, who first realized that the core of a vortex—analogous to the eye of a hurricane—is a line-like filament that can become tangled up with other filaments in a knotted loop that cannot come undone. Helmholtz also demonstrated that vortices exert forces on one another, and those forces take a form reminiscent of the magnetic forces between wires carrying electric currents.
To Thomson, these results seemed wonderfully suggestive. At the time, evidence from chemistry and the theory of gases had persuaded most physicists that matter was indeed composed of atoms. But there was no physical model indicating how a few types of atoms, each existing in very large numbers of identical copies—as required by chemistry—could possibly arise.
In seemingly unrelated work, physicists were discovering that space-filling entities are an essential tool in Nature's workshop. Today we accept those entities—known as electric and magnetic fields—on their own terms, as fundamental; but Thomson and his contemporaries believed them to be manifestations of an underlying fluid: an updated version of Aristotle's Aether.
Thomson's bold ambition, and instinct for unity, led him to propose a synthesis: the theory of vortex atoms. The Ethereal fluid, being so fundamental, should be capable of supporting stable vortices, he reasoned. Those vortices, according to Helmholtz's theorems, would fall into distinct species corresponding to different types of knots. Multiple knots might aggregate into a variety of quasi-stable "molecules." All this fits, remarkably, the heart's desire in a theory of atoms: naturally stable building blocks whose possibilities for combination seem rich enough to do justice to chemistry.
Thomson himself, a restless intellect, soon moved on to other ideas, but his friend and colleague Peter Guthrie Tait, enthralled by the vortex atom theory, set to work on its mathematics. Tait did pioneering work on the theory of knots, producing a systematic classification of knots with up to 10 crossings.
A table of knots. The 'Unknot' was thought to represent hydrogen; to its right, the knot thought to represent carbon. By Jkasd (Own work, Public domain), via Wikimedia Commons
Alas, this beautiful and mathematically fruitful synthesis is, as a physical theory of atoms, a Beautiful Loser. Its failure was due not so much to internal contradictions—it was too vague and flexible for that!—as to a certain sterility. Above all, it was put out of business by more successful competitors. Eventually the mechanical Ether was discredited by Einstein's relativity, and the triumphant Maxwell equations for electric and magnetic fields do not support vortices. The modern, successful quantum theory of atoms is based on entirely different ideas. And yet...
It's easy to understand the appeal of vortex atoms, not only as fascinating mathematics, but as potential elements for world-building. When we turn from understanding the natural world to designing micro-worlds of our own, we might come to treasure their virtues. Vortices can have an impressive degree of stability; they can be knotted into topologically distinct forms, which are also quasi-stable; and their interactions are complex and intricate, yet reproducible.
Those attractive features can be embodied in artificial “atoms” specifically designed to be building blocks for quantum engineering. For quantum theory, though it made the vortex theory of natural atoms obsolete, provides us with a variety of far more reliable, and far more perfect, aethers than the old Aetherial fantasies. Classical fluids, whether they are real liquids or speculative substrates, are inherently imperfect. Any motion in them will stir up little waves that carry away energy, and eventually dissipate the flow. Quantum fluids, such as superfluid helium and a variety of superconductors, by contrast, support flows that, in theory, will persist unchanged forever. And in practice, too—that’s why we call them “super”! The deep point is that in quantum mechanics energy comes packaged in discrete lumps (quanta). If you operate at low temperatures, where there’s very little energy available, it can become impossible to stir up the little waves that bedevil classical fluids. In quantum fluids, vortices really are forever.
There is lots of room for creativity in designing and constructing artificial aethers. Many materials become perfect (quantum) fluids at low temperature.
By choosing the right media, we can tailor our fluids to have useful properties. Physicists and engineers have become quite adept at designing useful fluids, such as the liquid crystals that enable LCD computer monitors and televisions. In those examples the fluids have internal structure, which can be manipulated electrically to change their appearance. So far most of the effort has gone into classical fluids, but physicists are beginning to awaken to some promising new possibilities offered by quantum fluids. Though the details can be quite different—as I said, there’s lots of room for creativity here—the basic inspiration, to make fluids that we can manipulate externally to make them do something useful, is the same.
Designer quantum fluids can offer us a variety of vortex atoms, and the opportunity to design new chemistries that accomplish something we want done. Perhaps the most intriguing possibility is to embody, in real materials, the so-far theoretical concept of anyons. Anyons are particles that interact in a special, peculiarly quantum-mechanical way. Anyons don’t exert any forces upon one another, but when you wind one anyon around another, you make interesting, predictable changes in the wave function that describes your system. Quantum computers are, in principle, nothing but machines that process wave functions. (Since wave functions can simulate a tape, or more generally a collection of tapes, that encode data, operations on wave functions can be massively parallel operations on data.) On paper, at least, theorists have proposed ways whereby one might orchestrate the motion of anyons to construct a general-purpose quantum computer. The future will tell whether this beautiful idea blossoms into reality, or proves another seductive Beautiful Loser.
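For readers who like rules spelled out concretely, here is a minimal toy sketch of the braiding behavior described above, limited to the simplest ("abelian") kind of anyon; the phase parameter and function names are inventions for illustration, not the properties of any real material or library:

```python
import cmath
import math

# Toy model of *abelian* anyon statistics: exchanging two identical
# anyons multiplies the system's wave function by a fixed phase
# factor exp(i*theta), and winding one anyon fully around another
# is equivalent to two exchanges.

def exchange_phase(theta):
    """Phase factor picked up when two anyons swap places once."""
    return cmath.exp(1j * theta)

def winding_phase(theta, loops=1):
    """Phase factor from winding one anyon around another `loops` times."""
    return cmath.exp(2j * theta * loops)

# theta = 0 reproduces bosons, theta = pi reproduces fermions;
# anything in between is an "anyon" -- hence the name.
for name, theta in [("boson", 0.0), ("fermion", math.pi), ("anyon", math.pi / 4)]:
    print(f"{name:8s} exchange: {exchange_phase(theta):.2f}   "
          f"full loop: {winding_phase(theta):.2f}")
```

Running the sketch shows the point: a full loop changes nothing for bosons or fermions, but for anyons it leaves a lasting, predictable imprint on the wave function—precisely the kind of change a topological quantum computer would orchestrate.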
In topological quantum computing, information is processed by braiding anyons. Image courtesy of the University of Glasgow.
To celebrate Thanksgiving, we've asked some of our contributors and friends to tell us what physics they are most thankful for. We hope you'll join the conversation by sharing your own thoughts in the comments section. On Thanksgiving Day, we'll have more thoughts from physicists, plus some of our favorite reader comments, on our Twitter feed, @novaphysics. Just look for the #thanksphysics hashtag. Wishing a safe, happy, and inspiring holiday to all!
Frank Wilczek: I'm thankful that the world gives us puzzles we can solve, but not too easily.
James Stein: I’m thankful that physics both intrigues the intellect and is a major driver of technological improvement. While this is true of science in general, energy is the ultimate coin of the universe, and physics is the means by which we discover how it is produced, how it is transformed, and how we can use it to better our lives.
Delia Schwartz-Perlov: I'm grateful to be living at this moment in history, when dark energy hasn’t been dominating our universe for too long, and we are therefore still able to see our magnificent universe. Creatures living billions of years in the future will not be able to see all of this.
Jim Gates: Physics is the only piece of magic I've ever seen. I'm grateful for real magic.
Sean Carroll: I'm thankful for the arrow of time, pointing from the past to the future. Without that, every moment would look the same.
Clifford Johnson: I'm thankful for the "Hoyle resonance" of carbon 12. It is an excited state of carbon that allows it to be produced in stars from helium collisions. Hoyle realized that this is the only way that the carbon we are all made of could be produced, and so reasoned the fact that he (a human, made of 18% carbon) was around to puzzle over the problem was a prediction of the existence of this state. The resonance was later discovered by nuclear physicists, with exactly the properties he said it should have.
James Stein: I’m thankful for Michael Faraday’s discovery of the principle of electromagnetic induction. It made it possible to use electricity to advance the human condition, and I think it is the single most productive discovery in the history of physics.
Delia Schwartz-Perlov: I’m eight months pregnant, so I am grateful for the physics that enables ultrasound. It is pretty amazing and exciting to see what’s going on in there.
Edward Farhi: As the physicist Ron Johnson once said, I'm grateful to quantum mechanics for an interesting life.
Why is quantum mechanics like cricket?
Because for me, no matter how many times the rules are explained, I can’t seem to get my head around what the game is actually about.
Is quantum theory a system of equations? A description of the behavior of invisible particles? A philosophy for the post-post-modern age?
And how strange is it that we even have to ask? Unlike other scientific theories, quantum physics is so slippery that its formalism—the equations that add up to a mathematical representation of what we humans call reality—is divorced from its physical interpretation. Sure, we can solve the Schrödinger equation for the case of a particle stuck in a box, but what is that telling us about how the natural world really works?
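For the record, the box problem does have a crisp textbook solution: for a particle of mass m confined to a one-dimensional box of width L, the allowed energies are

$$E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \ldots$$

The formalism is exact and unambiguous; it is only the question of what the particle is doing between measurements that the equations leave eerily open.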
This isn’t a question you’d even think to ask about classical mechanics. Remember Newton’s Second Law, the one relating force to mass and acceleration? Its formalism is F=ma, and its interpretation is pretty simple: If you want to know the net force acting on an object, just multiply its mass by its acceleration. A one-ton car accelerating at two meters per second squared is feeling a net force of 2,000 newtons; that’s all there is to it.
That’s F=ma. But what about the Schrödinger equation, iħ ∂ψ/∂t = Ĥψ? There is no comparably plain, one-sentence reading of it.
“Quantum mechanics needs an explanation worse than other theories do because others always had a physical picture that guided the formulation of the mathematics,” explains John Cramer, a physicist at the University of Washington who also happens to be the author of his own interpretation of quantum mechanics—more on that later. Newton had his (possibly apocryphal) apples, his inclined planes, his cannonballs. Werner Heisenberg, one of the “fathers” of quantum mechanics, by contrast, had some elegant mathematics, a vision more akin to numerology than to a picture of the physical world, in Cramer's view.
"The Copenhagen interpretation is like a religious text," says MIT physicist Max Tegmark. "It leaves a lot open to interpretation."
Yet Heisenberg, like his colleague Niels Bohr, felt that quantum mechanics needed no further interpretation. This view, which is now known as the Copenhagen interpretation, holds that there is no “objective reality” lurking beneath the formalism. If the equations say that I have a 50% chance of measuring a particle in a certain state—say, spin up—and then I go ahead and measure it in that state, what more is there to say? To guess at what the particle was doing before I made the measurement would be worse than speculation; nothing can be said about the particle except in the context of a measurement. “Reality” is no more and no less than what our instruments and senses reveal it to be. The Copenhagen interpretation may give you a headache, but according to Anton Zeilinger, the University of Vienna physicist most famous for his teleportation experiments, "It works, is useful to understand our experiments, and makes no unnecessary assumptions."
Still, many physicists find this notion unsatisfying. “Quantum mechanics is full of strange things that cry out for an interpretation,” says Cramer. There’s the problem of “spooky action at a distance,” the apparent connection between “entangled” particles that seems to violate the finite speed of light; and there’s Einstein’s famous discomfort with the idea that no reality exists outside of our own perceptions. As Einstein put it: “Do you really think the moon isn't there if you aren't looking at it?"
There’s also a niggling problem with exactly what defines “looking at it”—or, in quantum-speak, what defines a “measurement.” If we truly cannot say anything definite about a particle until after we’ve measured its state, then the act of measuring it must be pretty special. But why? What happens in that moment? Physicists often talk about it as the “collapse of the wavefunction”—that is, the moment when all of the possible particle states represented in the probability equation called the wavefunction collapse into a single, measured state. The instantaneous collapse of an entity that wasn’t physically real to start with is weird in itself. But physicist Steven Weinberg pointed to another weak link in this interpretation in a 2005 article in Physics Today: “The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe.”
If not Copenhagen, then what? Let’s take a quick tour of a handful of the (many!) competing interpretations of quantum mechanics.
- Copenhagen interpretation: This is the interpretation we’ve just met, and the one you’ll see in most physics books—though even Heisenberg and Bohr didn’t always agree on the particulars. To put it in terms of our cricket analogy, let’s say that you’re following a cricket match on your cell phone. Actually let’s make it a baseball game because, as I've already confessed, I don't understand cricket. So you’re using one of those apps that updates the box score every time you press “refresh,” but you can’t actually see the game in progress. According to the Copenhagen interpretation, there is no game—just the results you get when you ping the server. So it’s no use talking about whether the batter is getting into the pitcher’s head, or the appearance of the rally squirrel, or even the trajectory the ball takes on its way into the first baseman’s glove. The box score is real; the game isn’t.
- Consistent histories: The Copenhagen interpretation applies to a situation in which an observer (the baseball fan) makes a measurement (checks the score) on some external system. But what happens when the observer is himself part of the system—say, the shortstop? That's the problem that a special breed of physicists called quantum cosmologists encounter when they attempt to study the entire universe as a single quantum system. The Copenhagen interpretation falls short in this case, but the consistent histories interpretation, developed in the 1980s and early 1990s, does away with external "observers" and "measurements"—they are treated as part of one big system.
- Many worlds: We talked earlier about the problem of the collapsing wavefunction. But what if the wavefunction never actually collapses? What if every possibility it represents really does happen in its own universe? With every measurement, each universe branches off into countless others, each of which in turn branches into ever more universes. The many worlds interpretation was first proposed in the 1950s by the young physicist Hugh Everett, and though it never gained much traction at the time, its star is now ascending: In the film Parallel Worlds, Parallel Lives, Tegmark called the many worlds interpretation “one of the most important discoveries of all time in science,” and he and his colleagues recently posited that Everett's parallel universes might be congruent with the parallel universes proposed by cosmologists. Of course, plenty of physicists can't stomach the idea of a multiplicity of fundamentally unobservable universes. Yet—back to baseball for a moment—there is something appealing about an interpretation that insists upon the existence of a universe in which the baseball rolls squarely into Buckner’s glove; an interpretation that guarantees that every heartbreaker in our universe is shadowed by a heroic comeback in another; an interpretation in which the Red Sox and the Yankees win, year after year after year.
- Transactional interpretation: The transactional interpretation might solve some of quantum theory’s biggest quandaries, if you can get your head around the idea of a wave with negative energy that travels back in time. The transactional interpretation was first proposed in the 1980s by John Cramer, and suggests that the wavefunction includes not just one but two probability waves—the familiar one that travels forward in time, plus an exotic twin that travels backward. When they meet, they exchange a “handshake” across space-time, says Cramer; at other points, they cancel each other out completely, removing any telltale traces of the journey backward in time.
So, is there any way to know which interpretation is right or wrong? "Unless you can catch an interpretation deviating from the mathematics, you can't rule it out," says Cramer. And though some experiments could maybe, possibly tip the scales in favor of one interpretation or another, there is no consensus that any of the contenders above have been favored or nixed by experiment. Perhaps, some physicists argue, the pursuit of an interpretation is a flawed endeavor. "There is no logical necessity of a realistic worldview to always be obtainable," wrote Christopher Fuchs and Asher Peres in a Physics Today opinion piece titled, transparently, "Quantum Theory Needs No 'Interpretation'." "If the world is such that we can never identify a reality independent of our experimental activity, then we must be prepared for that, too." Perhaps the interpretation problem isn't a problem of quantum physics at all, but a problem of human beings.
In 1878—before Einstein was born, before quantum mechanics, before we knew that our galaxy was one among many—a well-known physicist named Philipp von Jolly told young Max Planck, a student aspiring to a career in physics, “In this field, almost everything is already discovered, and all that remains is to fill a few unimportant holes.”
Little did von Jolly realize how seriously he had underestimated the depth and quantity of those “unimportant holes,” and he certainly had no idea that Planck was to play a vital role in helping to fill them. Fortunately for us, Planck was not turned off by von Jolly’s remark, and replied that he was not so much interested in discovering new things as in understanding what was known. This might sound unusual, as most scientists are driven by two intertwined motives: the desire to understand and the urge to discover. Discovery and understanding go hand in hand; together they move science forward, and as science moves forward, the quality of our lives improves. Planck’s career was ultimately characterized by the discovery of something truly new, something which would lead to a deeper understanding of one of the great questions in all of science: how the universe enables life to exist.
Chemistry tells us that the smallest amount of water is a water molecule, and any container of water consists of a staggering number of identical water molecules. To resolve a stubborn problem in the theory of how hot objects radiate energy, Planck wondered: What if energy worked the same way? What if there were a smallest unit of energy, just as there is a smallest unit of water? The idea that energy could be expressed in discrete units, or “quantized,” was fundamental to the development of quantum theory. Indeed, you might say that Planck put the “quanta” in quantum mechanics.
So what is this smallest unit of energy? Planck hypothesized the existence of a constant, now known as Planck’s constant, or h, which links a wave or particle’s frequency with its total energy. Today, we know that
h = 6.6262 × 10⁻³⁴ joule·seconds
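The constant enters through Planck's relation between a quantum's energy and its frequency, E = hf. A quick calculation (the frequency here is a rounded, typical value for green light) shows just how small these lumps of energy are:

$$E = hf \approx (6.626 \times 10^{-34}\ \mathrm{J\cdot s}) \times (5.6 \times 10^{14}\ \mathrm{Hz}) \approx 3.7 \times 10^{-19}\ \mathrm{J},$$

which is why the graininess of light goes entirely unnoticed in everyday life.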
Planck’s constant has had profound ramifications in three important areas: our technology, our understanding of reality, and our understanding of life itself. Of the universal constants—the cosmic numbers which define our Universe—the speed of light gets all the publicity (partially because of its starring role in Einstein’s iconic equation E = mc²), but Planck’s constant is every bit as important. On the technology front, the quantum mechanics built upon Planck’s constant enabled the transistors, integrated circuits, and chips that have revolutionized our lives.
More fundamentally, the discovery of Planck’s constant advanced the realization that, when we probe the deepest levels of the structure of matter, we are no longer looking at “things” in the conventional meaning of the word. A “thing”—like a moving car—has a definite location and velocity; a car may be 30 miles south of Los Angeles heading east at 40 miles per hour. The concepts of location, velocity, and even existence itself blur at the atomic and subatomic level. Electrons do not exist in the sense that cars do; they are, bizarrely, everywhere at once, but much more likely to be in some places than in others. Reconciling the probabilistic subatomic world with the macroscopic everyday world is one of the great unsolved problems in physics—a not-so-unimportant hole that even von Jolly would have recognized as such.
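Heisenberg's uncertainty principle makes this blurring quantitative, and Planck's constant sets its scale: the uncertainties in a particle's position and momentum must satisfy

$$\Delta x \, \Delta p \geq \frac{\hbar}{2}, \qquad \hbar = \frac{h}{2\pi}.$$

Because h is so tiny, the trade-off is utterly negligible for cars and unavoidable for electrons.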
Finally, Planck’s constant tells us how the universe is numerically fine-tuned to permit life to exist. Carl Sagan, one of the great popularizers of science, was fond of saying that “We are all star stuff”—the chemicals which form our bodies are produced in the explosions of supernovas. The fundamental nuclear reaction that starts a star down the long road toward becoming a supernova is the fusion of four hydrogen atoms into a single atom of helium. In the process, approximately 0.7% of the mass is converted to energy via E = mc². That’s not much, but there is so much hydrogen in the Sun that it has been radiating enough energy to warm our planet for more than four billion years—even from a distance of 93,000,000 miles—and will continue to do so for another five billion years.
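The arithmetic behind that 0.7% fits on the back of an envelope (masses rounded; using bare protons for the hydrogen is a slight simplification):

$$\frac{\Delta m}{4 m_p} = \frac{4 \times 1.673 \times 10^{-27}\,\mathrm{kg} \;-\; 6.645 \times 10^{-27}\,\mathrm{kg}}{4 \times 1.673 \times 10^{-27}\,\mathrm{kg}} \approx 0.007,$$

and the corresponding energy release per helium nucleus formed is $E = \Delta m \, c^2 \approx 4.7 \times 10^{-29}\,\mathrm{kg} \times (3 \times 10^8\,\mathrm{m/s})^2 \approx 4.2 \times 10^{-12}$ joules—about 26 MeV.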
This 0.7% is known as the efficiency of hydrogen fusion, and our understanding of it is one of the consequences of Planck’s investigations. It requires a great deal of heat to enable hydrogen to fuse to helium, and the hydrogen atoms in the sun are moving at different speeds, much like cars on a freeway move at different speeds. The slower-moving hydrogen atoms just bounce off each other; they are, in effect, too cold to fuse. Higher speeds, though, mean higher temperatures, and there is a small fraction of hydrogen atoms moving at sufficiently high speeds to fuse to helium.
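Statistical mechanics makes that "small fraction" precise: the proportion of atoms with kinetic energy E well above the typical thermal energy kT falls off roughly like the Boltzmann factor

$$f(E) \sim e^{-E/kT},$$

so only an exponentially rare, fast-moving minority of the Sun's hydrogen is in a position to fuse at any given moment.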
The 0.7% efficiency of hydrogen fusion is what is sometimes referred to as a “Goldilocks number.” Like the porridge that Goldilocks eventually ate, which was neither too hot nor too cold, but just right, the 0.7% efficiency of hydrogen fusion is “just right” to permit the emergence of life as we know it. The process of hydrogen fusion is an intricate high-speed, high-temperature ballet. The first step of this reaction produces deuterium, an isotope of hydrogen whose nucleus consists of one proton and one neutron. In this process, two protons slam into one another, causing one of the protons to shed its electrical charge and metamorphose into a neutron. If the efficiency of hydrogen fusion were as low as 0.6%, the neutron and proton would not bond to each other to form a deuterium nucleus. In this case, we’d still have stars—huge glowing balls of hydrogen—but no star stuff would ever form because the porridge would be too cold to create helium, the first step on the road to creating the elements necessary for life.
On the other hand, if hydrogen fusion had an efficiency of 0.8%, it would be much too easy for helium to form. The hydrogen in the stars would become helium so quickly that there wouldn’t be much hydrogen left to form the molecule most essential for life—water. Star stuff would be produced, but without water life as we know it would not exist. Maybe something else would take the place of water, and maybe life could evolve—but not ours.
Planck’s quantization of energy was an essential step on the road to the theory of quantum mechanics, which is critical to our understanding of stellar evolution. Science hasn’t filled in all the pieces of the puzzle of how life actually evolved, but quantum mechanics did begin to answer the question of how the pieces got there in the first place, and probably even Philipp von Jolly would recognize that as an important hole in our knowledge of the universe that desperately needed to be filled. But perhaps the greater lesson is this: The very moment when it feels like “almost everything is already discovered” may be the moment that the universe is about to yield up its biggest surprises—if you’re not afraid to dig into a few holes.
Physicists, to their credit, are notoriously unsentimental and future-oriented. "Today's sensation is tomorrow's calibration" describes their modus operandi.
Nevertheless! Today America's flagship collider, the Tevatron at Fermilab in Illinois, will cease operation, and Europe's Large Hadron Collider (LHC) at CERN, near Geneva, will lead the exploration of the deep microcosmos. That changing of the guard inspires reflection, and merits celebration too.
Some historical perspective will highlight the special role of colliders in fundamental physics. The goal of experimental work in fundamental physics, crudely speaking, is to find out what the smallest, most basic building-blocks of the material world are, and how they behave (which, if you think about it, effectively defines what they are). In the early days of science, optical microscopes revealed the existence of tiny creatures and the cellular structure of life in general. But light cannot resolve structures much smaller than its wavelength, and the wavelength of visible light—a few hundred nanometers—is thousands of times larger than the size of an atom.
X-rays, discovered much later, get closer to atomic scales. Rosalind Franklin's x-ray pictures of DNA crystals enabled Crick and Watson to decipher DNA's molecular structure. At this point, a simple question may suggest itself: Why was any deciphering necessary—can't you just look at the darned picture? The answer is profound, and central to our story. It's easy to take for granted a most fortunate and unusual feature of ordinary light, namely that lenses can bend it and (when suitably arranged) automatically form images of illuminated objects. Our very own eyes do that trick, which is what makes it so easy to overlook! But there are no good lenses for x-rays. Instead of images, when we scatter x-rays off matter we get patterns of greater or lesser brightness called diffraction patterns. Then we've got to use our brains to make models, using everything else we know about x-rays and matter, for what could have caused the observed diffraction pattern.
To probe structures much smaller than atoms we must get well beyond x-rays, using more extreme forms of illumination. Energetic particles are the tools for this job; and the smaller the structures we aspire to "see," the higher the energies we need. Physicists study what emerges when those projectiles impact on matter—or, in the jargon, scatter on targets. Then, like Crick and Watson, they make models of what could be responsible. (Actually, nowadays theorists usually provide myriad models in advance, and then experimentalists disprove all but one of them!) In the early days of nuclear physics, particles emitted in natural radioactivity (especially "alpha particles," later identified as helium nuclei) were the workhorse probes; later cosmic rays—high-energy particles raining down from space—despite their obvious inconvenience and unreliability, took the lead. Those techniques led to some tantalizing discoveries, but their limitations were crippling.
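The quantitative rule behind "higher energies, smaller structures" is the de Broglie relation between a particle's momentum and its effective wavelength. For highly energetic (ultra-relativistic) particles it reduces, to a good approximation, to

$$\lambda \approx \frac{hc}{E},$$

so a proton accelerated to 1 TeV, for instance, can probe distances of roughly $10^{-18}$ meters—about a thousand times smaller than the proton itself.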
Further progress required that experimentalists wean themselves from natural sources. They had to learn how to pump up the energy of particles, collect them, and guide them to targets. A long series of brilliant innovations led to the modern collider. One innovation in particular is so unlikely-sounding, yet so crucial, that it deserves special mention. According to the theory of relativity, particles moving close to the speed of light are flattened in the direction of motion, but retain their size in the transverse direction, so that they appear as narrow pancakes. For our purposes, that's great—it allows the probes to be sharply localized, so they can take high-resolution pictures. It's so advantageous that physicists double down on it. Rather than impacting energetic particles on a stationary target, at a modern collider highly energetic particles moving in one direction impact other highly energetic particles moving in the opposite direction. At the Tevatron protons collide with antiprotons; at the LHC it's protons on protons. To make such collisions happen, though, is no mean feat, because the targets are comparatively few, and each is very small indeed. It takes powerful, intricately patterned electric and magnetic control fields, and ultra-fast monitoring and feedback, to bring tight counter-circulating beams to the same place at the same time.
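How flat do those pancakes get? The contraction factor is the relativistic gamma factor, which for a particle of rest energy $mc^2$ raised to total energy $E$ is simply

$$\gamma = \frac{E}{mc^2},$$

so a proton at Tevatron energies (about 1 TeV, against a rest energy of roughly 0.94 GeV) is flattened by a factor of about a thousand.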
For this and many other reasons modern colliders are fantastic engineering projects. They employ instruments and ideas more complex and much more varied than are involved, for example, in space exploration. They are big, and expensive. The main Tevatron ring, where the beams circulate, is almost four miles around, and the various pieces of the project cost about $1 billion altogether. The LHC is about five times as big, and five times as expensive.
These great colliders are, I think, monuments to our dynamic scientific civilization. They are our pyramids; but they are better motivated and much better engineered than the originals. There's been some progress in five thousand years!
A bittersweet corollary of dynamism, however, is that once-great things eventually become passé. With the coming of the LHC, which delivers both more collisions and more energetic ones, the Tevatron, its glory days gone, is ready for retirement.
What did the Tevatron teach us? I think most physicists would agree that its single most spectacular achievement was the discovery of the top quark, in 1995. The top quark is the next-to-last piece in the wildly successful Standard Model of fundamental physics. That set of ideas provides a reasonably compact census of the building blocks of matter, and precise, beautiful equations for their observed interactions; but it is less informative when it comes to their masses. The mere existence of the top quark had been a firm prediction of the Standard Model since at least 1977, but theory gave no firm prediction for its mass. In fact the large value of that mass—about 185 times the mass of a proton, and more than 40 times the mass of the next-heaviest quark (bottom)—came as a shock to most. Together with the large mass comes an extraordinarily short lifetime, estimated at 5 × 10⁻²⁵ seconds. Its large mass makes the top quark difficult to produce, and its short lifetime makes it challenging to detect, so the discovery was a tremendous technical achievement.
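That lifetime estimate follows from the quantum relation between a particle's decay width $\Gamma$ and its lifetime. Using the top quark's expected width of roughly 1.4 GeV (a standard theoretical figure; take the precise value as illustrative):

$$\tau = \frac{\hbar}{\Gamma} \approx \frac{6.6 \times 10^{-25}\ \mathrm{GeV \cdot s}}{1.4\ \mathrm{GeV}} \approx 5 \times 10^{-25}\ \mathrm{s},$$

in agreement with the figure quoted above.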
How can a single elementary particle be so heavy? We still don't know the answer to that question, or even whether it's a sensible question to ask. (A better question, I think, is why the other quarks are so light; but that's a story for another time.) In any case, the striking divergence among masses of otherwise very similar particles—i.e., different kinds of quarks—brings us face to face with our ignorance.
Pending deeper understanding, we can already draw important inferences from the large top-quark mass. Masses of quarks, within the Standard Model, reflect the strength of their interaction, or coupling, with the mass-giving Higgs field. The large top-quark mass implies quite a strong coupling. That coupling is in effect a powerful new force that must be taken into account in constructing more encompassing models of physics. Its ultimate significance is presently unclear, but it makes the idea of supersymmetry—a most interesting and attractive hypothesis on other grounds—work more smoothly, so perhaps that is the direction in which it is pointing us.
Several other pretty discoveries were made at the Tevatron, but I think its other most important result, besides the top quark discovery, was to confirm, in many demanding quantitative tests, the correctness of the core theories of the Standard Model. These triumphs of a beautiful, economical theory put many gratuitous speculations to rest, and demonstrated Nature's good taste. As a practical matter, this result provides a firm platform upon which we can stand, as we reach toward still more beautiful, unified, and encompassing understanding.
The last, still missing piece of the Standard Model is the so-called Higgs particle. Just as it predicted the top quark, theory firmly predicts the existence of the Higgs particle, but not the value of its mass. The Tevatron was able to constrain that mass to a fairly narrow range (between about 122 and 160 proton masses), but ran out of time before reaching a conclusive result. The LHC will get to the finish line, very likely, within the next year or so. A Higgs particle in that mass range would be yet another favorable omen for supersymmetry. Unless Nature is a shameless tease, we'll see supersymmetry itself—that is, some of the new particles supersymmetry predicts—discovered at the LHC, though that might take longer. Should those profound discoveries occur, as I hope and expect, they will bring our understanding of Nature's foundational principles to a new, higher level. We will build upon the Tevatron's achievements even as we transcend them.