Yet some physicists are already beginning to theorize about what might lie beyond quantum computers. You might think that this is a little premature, but I disagree. Think of it this way: From the 1950s through the 1970s, the intellectual ingredients for quantum computing were already in place, yet no one broached the idea. It was as if people were afraid to take the known laws of quantum physics and see what they implied about computation. So, now that we know about quantum computing, it’s natural not to want to repeat that mistake! And in any case, I’ll let you in on a secret: Many of us care about quantum computing less for its (real but modest) applications than because it defies our preconceptions about the ultimate limits of computation. And from that standpoint, it’s hard to avoid asking whether quantum computers are “the end of the line.”

Now, I’m emphatically not asking a philosophical question about whether a computer could be conscious, or “truly know why” it gave the answer it gave, or anything like that. I’m restricting my attention to math problems with definite right answers: e.g., what are the prime factors of a given number? And the question I care about is this: Is there any such problem that *couldn’t* be solved efficiently by a quantum computer, but *could* be solved efficiently by some other computer allowed by the laws of physics?

Here I’d better explain that, when computer scientists say “efficiently,” they mean something very specific: that is, that the amount of time and memory required for the computation grows like the size of the task raised to some fixed power, rather than exponentially. For example, if you want to use a classical computer to find out whether an n-digit number is prime or composite—though not what its prime factors are!—the difficulty of the task grows only like n cubed; this is a problem classical computers can handle efficiently. If that’s too technical, feel free to substitute the everyday meaning of the word “efficiently”! Basically, we want to know which problems computers can solve not only in principle, but in practice, in an amount of time that won’t quickly blow up in our faces and become longer than the age of the universe. We don’t care about the exact speed, e.g., whether a computer can do a trillion steps or “merely” a billion steps per second. What we care about is the *scaling behavior*: How does the number of steps grow as the number to be factored, the molecule to be simulated, or whatever gets bigger and bigger? Scaling behavior is where we see profound differences between today’s computers and quantum computers; it’s the whole reason why anyone wants to build quantum computers in the first place. So, could there be a physical device whose scaling behavior is better than quantum computers’?
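To make "scaling behavior" concrete, here is a back-of-the-envelope sketch in Python. (The n^3 and 2^n step counts are illustrative stand-ins, not the exact costs of any particular algorithm.)

```python
# Compare polynomial (n^3) vs. exponential (2^n) step counts,
# assuming a machine that performs 10^12 steps per second.
# The exact speed barely matters next to the growth rates.
STEPS_PER_SECOND = 10**12

def seconds(steps):
    """Wall-clock time to execute the given number of steps."""
    return steps / STEPS_PER_SECOND

for n in [50, 100, 200]:
    print(f"n={n}: n^3 takes {seconds(n**3):.1e} s, "
          f"2^n takes {seconds(2**n):.1e} s")
```

For n = 200, the polynomial algorithm finishes in a microsecond-scale blink, while the exponential one needs around 10^48 seconds, vastly longer than the age of the universe (about 4 × 10^17 seconds).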

**The Simulation Machine**

A quantum computer, as normally envisioned, would be a very specific kind of quantum system: one built up out of “qubits,” or quantum bits, which exist in “superpositions” of the “0” and “1” states. It’s not immediately obvious that a machine based on qubits could simulate other kinds of quantum-mechanical systems, for example, systems involving particles (like electrons and photons) that can move around in real space. And if there are systems that are hard to simulate on standard, qubit-based quantum computers, then those systems themselves could be thought of as more powerful kinds of quantum computers, which solve at least one problem—the problem of *simulating themselves*—faster than is otherwise possible.

So maybe Nature could allow more powerful kinds of quantum computers than the “usual” qubit-based kind? Strong evidence that the answer is “no” comes from work by Richard Feynman in the 1980s, and by Seth Lloyd and many others starting in the 1990s. They showed how to take a wide range of realistic quantum systems and simulate them using nothing but qubits. Thus, just as today’s scientists no longer need wind tunnels, astrolabes, and other analog computers to simulate classical physics, but instead represent airflow, planetary motions, or whatever else they want as zeroes and ones in their digital computers, so too it looks likely that a single device, a quantum computer, would in the future be able to simulate all of quantum chemistry and atomic physics efficiently.

So far, we’ve been talking about computers that can simulate “standard,” non-relativistic quantum mechanics. If we want to bring special relativity into the picture, we need quantum field theory—the framework for modern particle physics, as studied at colliders like the LHC—which presents a slew of new difficulties. First, many quantum field theories aren’t even rigorously defined: It’s not clear what we should program our quantum computer to simulate. Also, in most quantum field theories, even a vacuum is a complicated object, like an ocean surface filled with currents and waves. In some sense, this complexity is a remnant of processes that took place in the moments after the Big Bang, and it’s not obvious that a quantum computer could efficiently simulate the dynamics of the early universe in order to reproduce that complexity. So, is it possible that a “quantum field theory computer” could solve certain problems more efficiently than a garden-variety quantum computer? If nothing else, then at least the problem of simulating quantum field theory?

While we don’t yet have full answers to these questions, over the past 15 years we’ve accumulated strong evidence that qubit quantum computers are up to the task of simulating quantum field theory. First, Michael Freedman, Alexei Kitaev, and Zhenghan Wang showed how to simulate a “toy” class of quantum field theories, called topological quantum field theories (TQFTs), efficiently using a standard quantum computer. These theories, which involve only two spatial dimensions instead of the usual three, are called “topological” because in some sense, the only thing that matters in them is the global topology of space. (Interestingly, along with Michael Larsen, these authors also proved the converse: TQFTs can efficiently simulate everything that a standard quantum computer can do.)

Then, a few years ago, Stephen Jordan, Keith Lee, and John Preskill gave the first detailed, efficient simulation of a “realistic” quantum field theory using a standard quantum computer. (Here, “realistic” means they can simulate a universe containing a specific kind of particle called scalar particles. Hey, it’s a start.) Notably, Jordan and his colleagues solve the problem of creating the complicated vacuum state using an algorithm called “adiabatic state preparation” that, in some sense, mimics the cooling the universe itself underwent shortly after the Big Bang. They haven’t yet extended their work to the full Standard Model of particle physics, but the difficulties in doing so are probably surmountable.

So, if we’re looking for areas of physics that a quantum computer would have trouble simulating, we’re left with just one: quantum gravity. As you might have heard, quantum gravity has been the white whale of theoretical physicists for almost a century. While there are deep ideas about it (most famously, string theory), no one really knows yet how to combine quantum mechanics with Einstein’s general theory of relativity, leaving us free to project our hopes onto quantum gravity—including, if we like, the hope of computational powers beyond those of quantum computers!

**Boot Up Your Time Machine**

But is there anything that could support such a hope? Well, quantum gravity might force us to reckon with breakdowns of causality itself, if *closed timelike curves* (i.e., time machines to the past) are possible. A time machine is *definitely* the sort of thing that might let us tackle problems too hard even for a quantum computer, as David Deutsch, John Watrous and I have pointed out. To see why, consider the “Shakespeare paradox,” in which you go back in time and dictate Shakespeare’s plays to him, to save Shakespeare the trouble of writing them. Unlike with the better-known “grandfather paradox,” in which you go back in time and kill your grandfather, here there’s no logical contradiction. The only “paradox,” if you like, is one of “computational effort”: somehow Shakespeare’s plays pop into existence without anyone going to the trouble to write them!

Using similar arguments, it’s possible to show that, if closed timelike curves exist, then under fairly mild assumptions, one could “force” Nature to solve hard combinatorial problems, just to keep the universe’s history consistent (i.e., to prevent things like the grandfather paradox from arising). Notably, the problems you could solve that way include the *NP-complete problems*: a class that includes hundreds of problems of practical importance (airline scheduling, chip design, etc.), and that’s believed to scale exponentially in time even for quantum computers.

Of course, it’s also possible that quantum gravity will simply tell us that closed timelike curves can’t exist—and maybe the computational superpowers they would give us if they *did* exist are evidence that they must be forbidden!

**Simulating Quantum Gravity**

Going even further out on a limb, the famous mathematical physicist Roger Penrose has speculated that quantum gravity is literally impossible to simulate using either an ordinary computer or a quantum computer, even with unlimited time and memory at your disposal. That would put simulating quantum gravity into a class of problems studied by the logicians Alan Turing and Kurt Gödel in the 1930s, which includes problems way harder than even the NP-complete problems—like determining whether a given computer program will ever stop running (the “halting problem”). Penrose further speculates that the human brain is sensitive to quantum gravity effects, and that this gives humans the ability to solve problems that are fundamentally unsolvable by computers. However, virtually no other expert in the relevant fields agrees with the arguments that lead Penrose to this provocative position.

What’s more, there are recent developments in quantum gravity that seem to support the opposite conclusion: that is, they hint that a standard quantum computer could efficiently simulate even quantum-gravitational processes, like the formation and evaporation of black holes. Most notably, the AdS/CFT correspondence, which emerged from string theory, posits a “duality” between two extremely different-looking kinds of theories. On one side of the duality is AdS (Anti de Sitter): a theory of quantum gravity for a hypothetical universe that has a negative cosmological constant, effectively causing the whole universe to be surrounded by a reflecting boundary. On the other side is a CFT (Conformal Field Theory): an “ordinary” quantum field theory, without gravity, that lives only on the *boundary* of the AdS space. The AdS/CFT correspondence, for which there’s now overwhelming evidence (though not yet a proof), says that any question about what happens in the AdS space can be translated into an “equivalent” question about the CFT, and vice versa.

This suggests that, if we wanted to simulate quantum gravity phenomena in AdS space, we might be able to do so by first translating to the CFT side, then simulating the CFT on our quantum computer, and finally translating the results back to AdS. The key point here is that, since the CFT doesn’t involve gravity, the difficulties of simulating it on a quantum computer are “merely” the relatively prosaic difficulties of simulating quantum field theory on a quantum computer. More broadly, the lesson of AdS/CFT is that, even if a quantum gravity theory seems “wild”—even if it involves nonlocality, wormholes, and other exotica—there might be a dual description of the theory that’s more “tame,” and that’s more amenable to simulation by a quantum computer. (For this to work, the translation between the AdS and CFT descriptions also needs to be computationally efficient—and it’s possible that there are situations where it isn’t.)

**The Black Hole Problem**

So, is there any other hope for doing something in Nature that a quantum computer couldn’t efficiently simulate? Let’s circle back from the abstruse reaches of string theory to some much older ideas about how to speed up computation. For example, wouldn’t it be great if you could program your computer to do the first step of a computation in one second, the second step in half a second, the third step in a quarter second, the fourth step in an eighth second, and so on—halving the amount of time with each additional step? If so, then much like in Zeno’s paradox, your computer would have completed infinitely many steps in a mere two seconds!
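The arithmetic behind this "Zeno machine" is just a geometric series: 1 + 1/2 + 1/4 + … = 2 seconds in total, no matter how many steps you take. A quick sanity check of the partial sums:

```python
# Sum the Zeno computer's step times: 1, 1/2, 1/4, 1/8, ...
# The running total creeps toward 2 seconds but never passes it.
total = 0.0
step_time = 1.0
for step in range(60):
    total += step_time
    step_time /= 2.0
print(total)  # approaches 2.0
```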

Or, what if you could leave your computer on Earth, working on some incredibly hard calculation, then board a spaceship, accelerate to close to the speed of light, then decelerate and return to Earth? If you did this, then Einstein’s special theory of relativity firmly predicts that, depending on just how close you got to the speed of light, millions or even trillions of years would have elapsed in Earth’s frame of reference. Presumably, civilization would have collapsed and all your friends would be long dead. But if, hypothetically, you could find your computer in the ruins and it was still running, then you could learn the answer to your hard problem!
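Special relativity makes this quantitative: a traveler at speed v ages more slowly than Earth by the Lorentz factor γ = 1/√(1 − v²/c²). A minimal sketch (the speeds are chosen purely for illustration):

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for a traveler moving at v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# One year of ship time corresponds to roughly gamma years on Earth.
for beta in [0.9, 0.999, 0.999999999999]:
    print(f"v = {beta}c  ->  gamma = {lorentz_gamma(beta):.4g}")
```

To make millions or trillions of years pass on Earth during a human-scale voyage, γ has to be pushed to a million or beyond, which is exactly where the energy cost discussed in the text explodes.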

We’re now faced with a puzzle: What goes wrong if you try to accelerate computation using these sorts of tricks? The key factor is *energy*. Even in real life, there are hobbyists who “overclock” their computers, or run them faster than the recommended speed; for example, they might run a 1000 MHz chip at 2000 MHz. But the well-known danger in doing this is that your microchip might overheat and melt! Indeed, it’s precisely because of the danger of overheating that your computer has a fan. Now, the faster you run your computer, the more cooling you need—that’s why many supercomputers are cooled using liquid nitrogen. But cooling takes energy. So, is there some fundamental limit here? It turns out that there is. Suppose you wanted to cool your computer so completely that it could perform about 10^{43} operations per second—that is, about one operation per Planck time (where a Planck time, ~10^{-43} seconds, is the smallest measurable unit of time in quantum gravity). To run your computer that fast, you’d need so much energy concentrated in so small a space that, according to general relativity, your computer would collapse into a black hole!
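The numbers here are easy to check. One operation per Planck time gives about 10^43 operations per second, and a rough uncertainty-principle estimate (E ≈ ħ/τ, with τ the Planck time) shows that each operation would involve roughly the Planck energy, about 2 × 10^9 joules:

```python
HBAR = 1.055e-34        # reduced Planck constant, J*s
PLANCK_TIME = 5.39e-44  # seconds

ops_per_second = 1.0 / PLANCK_TIME   # ~1.9e43 operations per second
energy_per_op = HBAR / PLANCK_TIME   # ~2e9 J: roughly the Planck energy

print(f"{ops_per_second:.1e} ops/s, {energy_per_op:.1e} J per operation")
```

Two billion joules per operation, packed into a Planck-sized volume, is the sort of energy density that general relativity says must collapse into a black hole.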

And the story is similar for the “relativity computer.” There, the more you want to speed up your computer, the closer you have to accelerate your spaceship to the speed of light. But the more you accelerate the spaceship, the more energy you need, with the energy diverging to infinity as your speed approaches that of light. At some point, your spaceship will become so energetic that it, too, will collapse into a black hole.

Now, how do we know that collapse into a black hole is inevitable—that there’s no clever way to avoid it? The calculation combines Newton’s gravitational constant G with Planck’s constant h, the central constant of quantum mechanics. That means one is doing a quantum gravity calculation! I’ll end by letting you savor the irony: Even as some people hope that a quantum theory of gravity might let us surpass the known limits of quantum computers, quantum gravity might play just the opposite role, *enforcing* those limits.

**Go Deeper**

*Editor’s picks for further reading*

Computer History Museum

Explore 2000 years of computer history at the web site of this unique Mountain View, California museum.

Quantum Computing Since Democritus

Described by its author as “a candidate for the weirdest book ever to be published by Cambridge University Press,” Scott Aaronson’s book is a romp through the past and present of math, physics, and computer science.

The New York Times: Quantum Computing Promises New Insights, Not Just Supermachines

In this essay, Scott Aaronson argues that the popular conception of quantum computers “misses the most important part of the story,” and that the greatest payoff may be a deeper understanding of quantum mechanics.

I’m talking, of course, about the search for the one theory of everything, a theory of physics that works in all circumstances no matter how extreme, is motivated by observations, and can be expressed in a few elegant axioms. While some theorists devote their careers to finding the one, others believe that this ideal may be fundamentally unattainable. So we are left with the big question: Is the hope for the one theory of everything realistic, or should we be satisfied to settle down and grow alongside the theories we have?

We are currently living with a beautiful theory, a theory that *almost* has it all: the Standard Model. It is the most precise theory in human history. The Standard Model can make predictions that match experiments to one part in 10 billion. That is like measuring the width of the United States to the accuracy of a human hair. The Standard Model explains the Sun’s glow, the inner workings of computers, and every atom that makes up our bodies. This theory is in our hands, it’s reliable, and we’re pretty happy…but it isn’t perfect.

There is one huge, glaring omission in the Standard Model: It doesn’t explain gravity. Of the four forces in the universe—electromagnetism, the strong force (holding nuclei together), the weak force (governing radioactive decay), and gravity—gravity is the black sheep. Not only is it the runt of the forces, with a strength around a trillion trillion trillion times weaker than the typical strength of the other forces, but our current theory of gravity is completely separate from, and at odds with, the Standard Model. Until we reconcile the two, humanity’s understanding of the universe will be incomplete.

Some scientists believe that this reconciliation is just around the corner. Theoretical physicist Garrett Lisi, for one, thinks that extensions of his model, which aims to unify general relativity and the Standard Model within a single framework, can “reproduc[e] all known fields and dynamics through pure geometry.”

“The theory currently evolving from this observation is wonderfully complex and gives me hope that we might be getting close to the full picture,” says Lisi.

Others, like Dartmouth physicist Marcelo Gleiser, argue that we are stuck with at least partial ignorance. “As long as we can’t measure all there is in the natural world—and the point is that we simply can’t—we can’t have a theory of everything. As a consequence, any theory that we may have that purports to explain ‘all’ that we know of the world is also necessarily incomplete.”

Indeed, the more closely we examine the universe, the more levels of complexity we find. Will observing the world more deeply finally lead us to a theory of everything? Or will we be perpetually pulling layers from an infinite onion—a prospect that would make innumerable physicists cry?

If it is the latter, then we must be content with what physicists call an “effective field theory.” The idea is that one should describe the world with the same degree of complexity one wishes to understand. The smaller the details you aim to study, the greater the complexity you should expect from your theory. It is like looking at the sun. If we peer at it just for a moment, it seems to be a smooth, bright, glowing sphere (see on the left, below). This is an “effective theory” of the sun. But if you zoom in (on the right), there is more going on: solar flares, sunspots, and streams of hot plasma shooting into space.

In the same way, if you peer at a collision between two particles (say, electrons), then the effective theory would describe this as a simple bounce off each other (see on the left, below), but when you zoom in (on the right), there is more complexity: the electrons exchange other particles, causing them to repel each other and “bounce.”

By looking closer, you realize your perfect, simple picture of what you may have thought was “the one” correct description is just an approximation of something more complicated and complete.

Indeed, every time we find a theory that seems to “have it all,” a closer look reveals gaps and errors in the theory. Yet from the time of Archimedes, who fathered the idea that we could describe all of nature from just a few axioms, to the modern era of the Standard Model, physicists have kept searching for the one, refusing to settle for “good enough” when flaws and omissions in their theories were revealed.

It is natural to wonder, then, is the Standard Model just an effective theory, the latest in a long line of close-but-not-quite ideas? The consensus among physicists is a resounding “yes,” leaving us with questions: Is there an infinite number of layers, or, if we look closely enough, will we find one final onion core at the center? And how does gravity fit into the onion?

There is reason to believe there may be a “core” to the onion. While historically physicists have peeled back the layers of the onion by “zooming in” on ever-smaller size scales, Heisenberg’s uncertainty principle may limit the ultimate resolution at which we can observe the universe. If you go small enough, particles no longer have definite positions; you can’t tell exactly where they are, and looking closer would not improve the resolution. The size at which this occurs is called the “Planck scale.”
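The Planck scale is what you get by combining the three constants in play (ħ from quantum mechanics, G from gravity, and the speed of light c) into a length: l_P = √(ħG/c³). A quick computation:

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J*s
G = 6.674e-11     # Newton's gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)
print(f"{planck_length:.2e} m")  # ~1.6e-35 meters
```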

At the tiny size where we lose particle resolution, gravity, once the runt of the forces, may intensify to a strength similar to that of the forces in the Standard Model. Gravity would no longer be the weak outcast, giving hope that gravity may “fit in” with the Standard Model, producing a theory of everything that unites quantum and gravitational phenomena.

Unfortunately, current experiments are far from being able to confirm such a unification. Physicists’ most powerful experiment for examining tiny-distance physics is the Large Hadron Collider (LHC), with its incredible smashing ability that can peel away layers of the onion. But to explore the physics of the Planck scale, we would need a machine more than a million billion times more powerful.

Despite experimental limitations, some of the greatest scientists have searched for a theory of everything. Einstein spent the last decades of his life looking for one; unfortunately, he died before he could find it. Stephen Hawking also searched for the theory of everything before having a change of heart. “Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind,” he has said. Richard Feynman, often considered “the best mind since Einstein,” once said, “If it turns out there is a simple ultimate law which explains everything, so be it—that would be very nice to discover. If it turns out it’s like an onion with millions of layers… then that’s the way it is.”

So, while searching for a theory of everything is exciting, we may be well advised to take time to appreciate what we already have.

**Go Deeper**

*Author’s suggestions for further reading*

American Museum of Natural History: Isaac Asimov Memorial Debate: Theory of Everything

A panel of acclaimed physicists, including Lee Smolin, Brian Greene, and Janna Levin, debates whether it is possible to explain the universe with a single, unifying theory.

Godel and the End of Physics

In this lecture, Stephen Hawking asks whether it is possible to find a complete set of laws of nature.

The Island of Knowledge: The Limits of Science and the Search for Meaning

In his forthcoming book, Marcelo Gleiser asks if there are fundamental limits to how much science can explain.

NOVA: A Theory of Everything

In this essay, Brian Greene explores how string theory could unite quantum mechanics and general relativity.


Physicists have never been much for conventional wisdom, though, and the dream of studying quantum gravity is too enthralling to give up. Right now, physicists don’t really know how gravity works—they have quantum theories for every force of nature except this one. And as Einstein showed, gravity is not just any old force, but a reflection of the structure of spacetime on which all else depends. In a quantum theory of gravity, all the principles that govern nature will come together. If physicists can observe some distinctively quantum feature of gravity, they will have glimpsed the underlying unity of the natural world.

Even if they can’t crank up their particle accelerators to the requisite energies, that hasn’t stopped them from devising indirect experiments—ones that don’t try to swallow the whole problem in one gulp, but nibble at it. My award-winning colleague Michael Moyer describes one in *Scientific American*‘s February cover story, and lots of others are burbling, too. Far from annihilating each other like matter and antimatter, “quantum gravity” and “experiment” are more like peanut butter and chocolate: they actually go together quite tastily.

An example came out at the American Astronomical Society meeting in Austin earlier this month. Robert Nemiroff of Michigan Technological University presented his team’s study of extremely high-energy, short-wavelength cosmic gamma rays. The idea, which goes back to the late 1990s, is that short-wavelength photons may be more sensitive than long-wavelength ones to the microscopic quantum structure of spacetime, just as a car with small tires rattles with road bumps that a monster truck doesn’t even feel. The effect might be slight, but if the photons travel for billions of years, even the minutest slowdown or speed-up can appreciably change their time of arrival. Nemiroff’s team focused on gamma-ray burst GRB 090510A, observed by the Fermi space telescope. It went off about 7 billion years ago, and photons of short and long wavelength arrived at almost the same time—no more than about 1 millisecond apart. Any speed difference was at most one part in 10^{20}, implying that quantum gravity hardly waylaid these photons at all.
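The "one part in 10^20" figure follows from simple division: the photons flew for about 7 billion years, yet arrived within about a millisecond of each other, so any fractional speed difference is bounded by the ratio of those two times. (The numbers below are rounded from the figures in the text.)

```python
SECONDS_PER_YEAR = 3.156e7

travel_time = 7e9 * SECONDS_PER_YEAR  # ~7 billion years, in seconds
arrival_spread = 1e-3                 # observed spread: about 1 millisecond

fractional_speed_diff = arrival_spread / travel_time
print(f"{fractional_speed_diff:.1e}")  # ~5e-21, under one part in 10^20
```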

Theoretical physicists have long debated whether quantum gravity would alter photon speed, and most were not surprised by the negative result. But what’s important is the change of mindset. Experimenters and observers care less about what we *should* see than what we *can* see. These are people who love to build stuff. If they can build some gizmo that might bring gravity and quantum mechanics into contact, they’ll do it, whatever the theorists might say. They take an “if you build it, something will come” attitude. Historically, physics has been well-served by going out to look at nature with a minimum of prejudice.

The latest brainstorm is to apply techniques from quantum optics and related disciplines, which manipulate photons of light and other particles in order to build encrypted communications links, develop the components of a quantum computer, and study matter at extremely low temperatures. The tool of this trade is an interferometer, an apparatus that probes the wave nature of particles. It consists of a particle source, a particle detector, and two paths to get from one to the other. Being quantum, a particle goes both ways. That is to say, the wave corresponding to the particle splits in two, travels the distance, and fuses back together again. The relative length of the paths (or anything else that differentiates them) determines whether the waves will mutually reinforce or cancel and therefore what the detector will detect.
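The relationship between path difference and detector signal is standard two-path interference: the intensity varies as cos²(π ΔL/λ), so the detector swings between bright and dark as the paths fall in and out of step. A minimal model (the helium-neon wavelength is just an example):

```python
import math

def detector_intensity(delta_L, wavelength):
    """Relative detector intensity for two-path interference
    with path-length difference delta_L."""
    return math.cos(math.pi * delta_L / wavelength) ** 2

wl = 633e-9  # helium-neon laser wavelength, in meters, for illustration
print(detector_intensity(0.0, wl))      # equal paths: fully bright (1.0)
print(detector_intensity(wl / 2, wl))   # half-wave offset: dark (~0.0)
```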

At first glance, these setups are the last place you’d go to look for quantum gravity. They are decidedly low-energy experiments, usually conducted on lab benches the size of dining-room tables. There is nary a gamma ray or accelerated particle to be found. But Moyer’s cover story describes how an interferometer can serve as an extremely precise ranging instrument. Any change in the paths’ relative length, as you might expect if spacetime is roiled by quantum fluctuations, will register at the detector.

Last spring, a team of physicists in Vienna led by Çaslav Brukner explored another use of interferometers: to see whether quantum particles truly obey gravity as Einstein conceived it. This isn’t quantum gravity, per se—the particles are quantum, but gravity behaves in a strictly classical way. Nonetheless, it is a fascinating case of how the two theories interact. You might think that the gravity on a single particle is way too feeble to measure, but an interferometer can manage it. You set it up so that the two paths are at different heights and therefore experience a different gravitational potential, which registers at the detector.

The Vienna team proposed sending not just any particle through the interferometer, but one that acts like a miniature clock—marking time by rotating or decaying. General relativity predicts that clocks run slower the deeper they get into a gravitational field, which, in this experiment, would act to wash away the wave nature of the particle altogether. The fading-away of the wave properties would be the unmistakable fingerprint of general relativity and a stepping-stone to quantum gravity. Current interferometers lack the necessary precision to look for this effect, but it is just a matter of time. (Sorry, couldn’t resist.) For more, see the authors’ own blog post and their paper in Nature Communications last fall.
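The size of the effect being chased here can be estimated from the standard general-relativistic formula for clocks near Earth's surface: two clocks separated in height by h differ in rate by roughly gh/c². A sketch:

```python
g = 9.81     # Earth's surface gravity, m/s^2
c = 2.998e8  # speed of light, m/s

def fractional_rate_shift(height_m):
    """Approximate GR clock-rate difference between two heights
    near Earth's surface: g*h/c^2."""
    return g * height_m / c**2

print(f"{fractional_rate_shift(1.0):.1e} per meter of height")
```

About one part in 10^16 per meter of height, which is why the effect is so hard to resolve in a table-top interferometer.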

It is also possible that quantum gravity could modify Heisenberg’s famous uncertainty principle. As Sabine Hossenfelder at Backreaction described last Wednesday, gravitational effects may set a minimum length that anything in nature could ever have, which means that no matter how much momentum imprecision you’re willing to accept, a position measurement could never be more precise than the minimum length. Experiments like this one could use tiny mirrors and springboards to pick up that effect.

Still another approach suggested by the ever-inventive Viennese is to define quantum gravitational ideas in concrete rather than abstract terms. Theorists think that quantum fluctuations in spacetime might make cause-effect sequences ambiguous, with the practical consequence of changing the types of correlations physicists observe in the lab. But the Viennese suggest thinking about it the other way round: Physicists observe certain types of correlations in the lab and, from these, draw conclusions about spacetime.

Some such correlations—those that muddle cause and effect—would be inexplicable in ordinary physics. When quantum effects enter into play, “spacetime” loses some of the most basic features we associate with it, such as the notion that objects reside in certain places at certain times. In the Viennese scenario, you lose the ability to tell a story: One thing happened, then another, then another. It becomes a Dadaist jumble.

This approach hasn’t lent itself to a specific experiment yet, but is generally inspired by the experimentalist mindset. In this, it follows a trail blazed by Einstein himself, who developed his theories of relativity by thinking of abstract ideas in a concrete way. Even when experimenters can’t build actual experiments, their feet-on-the-ground mentality provides a fresh look at some of the hardest problems in modern science.

*This post is adapted and reprinted from Scientific American; find the original here.*

**Go Deeper**

*Editor’s picks for further reading*

FQXi: Journeying through the Quantum Froth

Are cosmic rays revealing the quantum nature of spacetime?

FQXi: Table-Top Tests for Quantum Gravity

In this podcast, discover how scientists are probing quantum gravity using quantum optics.

Perimeter Institute: Lectures on the experimental search for quantum gravity

Watch a series of scientific lectures on experimental probes of quantum gravity.