Quantum Physics

24 Jun

Can Quantum Computing Reveal the True Meaning of Quantum Mechanics?

Quantum mechanics says not merely that the world is probabilistic, but that it uses rules of probability that no science fiction writer would have had the imagination to invent. These rules involve complex numbers, called “amplitudes,” rather than just probabilities (which are real numbers between 0 and 1). As long as a physical object isn’t interacting with anything else, its state is a huge wave of these amplitudes, one for every configuration that the system could be found in upon measuring it. Left to itself, the wave of amplitudes evolves in a linear, deterministic way. But when you measure the object, you see some definite configuration, with a probability equal to the squared absolute value of its amplitude. The interaction with the measuring device “collapses” the object to whichever configuration you saw.
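For concreteness, here is a toy Python sketch of the squared-amplitude rule (the amplitude values are made up purely for illustration):

```python
# Toy illustration: a quantum state over 4 configurations is a list of
# complex amplitudes whose squared absolute values sum to 1.
amps = [0.5 + 0.0j, 0.0 + 0.5j, -0.5 + 0.0j, 0.0 - 0.5j]

# Probability of seeing each configuration = |amplitude|^2.
probs = [abs(a) ** 2 for a in amps]

print(probs)       # each configuration is equally likely here
print(sum(probs))  # a valid state's probabilities sum to 1
```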

Those, more or less, are the alien laws that explain everything from hydrogen atoms to lasers and transistors, and from which no hint of an experimental deviation has ever been found, from the 1920s until today. But could this really be how the universe operates? Is the “bedrock layer of reality” a giant wave of complex numbers encoding potentialities—until someone looks? And what do we mean by “looking,” anyway?

Could quantum computing help reveal what the laws of quantum mechanics really mean? Adapted from an image by Flickr user Politropix under a Creative Commons license.

There are different interpretive camps within quantum mechanics, which have squabbled with each other for generations, even though, by design, they all lead to the same predictions for any experiment that anyone can imagine doing. One interpretation is Many Worlds, which says that the different possible configurations of a system (when far enough apart) are literally parallel universes, with the “weight” of each universe given by its amplitude. In this view, the whole concept of measurement—and of the amplitude waves collapsing on measurement—is a sort of illusion, playing no fundamental role in physics. All that ever happens is linear evolution of the entire universe’s amplitude wave—including a part that describes the atoms of your body, which (the math then demands) “splits” into parallel copies whenever you think you’re making a measurement. Each copy would perceive only itself and not the others. While this might surprise people, Many Worlds is seen by many (certainly by its proponents, who are growing in number) as the conservative option: the one that adds the least to the bare math.

A second interpretation is Bohmian mechanics, which agrees with Many Worlds about the reality of the giant amplitude wave, but supplements it with a “true” configuration that a physical system is “really” in, regardless of whether or not anyone measures it. The amplitude wave pushes around the “true” configuration in a way that precisely matches the predictions of quantum mechanics. A third option is Niels Bohr’s original “Copenhagen Interpretation,” which says—but in many more words!—that the amplitude wave is just something in your head, a tool you use to make predictions. In this view, “reality” doesn’t even exist prior to your making a measurement of it—and if you don’t understand that, well, that just proves how mired you are in outdated classical ways of thinking, and how stubbornly you insist on asking illegitimate questions.

But wait: if these interpretations (and others that I omitted) all lead to the same predictions, then how could we ever decide which one is right? More pointedly, does it even mean anything for one to be right and the others wrong, or are these just different flavors of optional verbal seasoning on the same mathematical meat? In his recent quantum mechanics textbook, the great physicist Steven Weinberg reviews the interpretive options, ultimately finding all of them wanting. He ends with the hope that new developments in physics will give us better options. But what could those new developments be?

In the last few decades, the biggest new thing in quantum mechanics has been the field of quantum computing and information. The goal here, you might say, is to “put the giant amplitude wave to work”: rather than obsessing over its true nature, simply exploit it to do calculations faster than is possible classically, or to help with other information-processing tasks (like communication and encryption). The key insight behind quantum computing was articulated by Richard Feynman in 1982: to write down the state of n interacting particles, each of which could be in either of two states, quantum mechanics says you need 2^n amplitudes, one for every possible configuration of all n of the particles. Chemists and physicists have known for decades that this can make quantum systems prohibitively difficult to simulate on a classical computer, since 2^n grows so rapidly as a function of n.
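To see the explosion concretely, here is a small Python sketch (the particle counts are arbitrary illustrative choices):

```python
import itertools

# The state of n two-level particles assigns one complex amplitude to every
# length-n configuration of 0s and 1s -- 2**n amplitudes in all.
def num_amplitudes(n):
    return 2 ** n

# Enumerate the configurations a 3-particle state must cover:
configs = list(itertools.product([0, 1], repeat=3))
print(len(configs))  # 8 = 2**3 amplitudes needed

for n in (10, 50, 300):
    # 50 particles already need about 10**15 amplitudes;
    # 300 exceed the number of atoms in the visible universe.
    print(n, num_amplitudes(n))
```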

But if so, then why not build computers that would themselves take advantage of giant amplitude waves? If nothing else, such computers could be useful for simulating quantum physics! What’s more, in 1994, Peter Shor discovered that such a machine would be useful for more than physical simulations: it could also be used to factor large numbers efficiently, and thereby break most of the cryptography currently used on the Internet. Genuinely useful quantum computers are still a ways away, but experimentalists have made dramatic progress, and have already demonstrated many of the basic building blocks.

I should add that, for my money, the biggest application of quantum computers will be neither simulation nor codebreaking, but simply proving that this is possible at all! If you like, a useful quantum computer would be the most dramatic demonstration imaginable that our world really does need to be described by a gigantic amplitude wave, that there’s no way around that, no simpler classical reality behind the scenes. It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.

But if quantum computers provide a new regime in which to probe quantum mechanics, that raises an even broader question: could the field of quantum computing somehow clear up the generations-old debate about the interpretation of quantum mechanics? Indeed, could it do that even before useful quantum computers are built?

At one level, the answer seems like an obvious “no.” Quantum computing could be seen as “merely” a proposed application of quantum mechanics as that theory has existed in physics books for generations. So, to whatever extent all the interpretations make the same predictions, they also agree with each other about what a quantum computer would do. In particular, if quantum computers are built, you shouldn’t expect any of the interpretive camps I listed before to concede that its ideas were wrong. (More likely, each camp will claim its ideas were vindicated!)

At another level, however, quantum computing makes certain aspects of quantum mechanics more salient—for example, the fact that it takes 2n amplitudes to describe n particles—and so might make some interpretations seem more natural than others. Indeed that prospect, more than any application, is why quantum computing was invented in the first place. David Deutsch, who’s considered one of the two founders of quantum computing (along with Feynman), is a diehard proponent of the Many Worlds interpretation, and saw quantum computing as a way to convince the world (at least, this world!) of the truth of Many Worlds. Here’s how Deutsch put it in his 1997 book “The Fabric of Reality”:

Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?

As you might imagine, not all researchers agree that a quantum computer would be “psychological evidence” for Many Worlds, or even that the two things have much to do with each other. Yes, some researchers reply, a quantum computer would take exponential resources to simulate classically (using any known algorithm), but all the interpretations agree about that. And more pointedly: thinking of the branches of a quantum computation as parallel universes might lead you to imagine that a quantum computer could solve hard problems in an instant, by simply “trying each possible solution in a different universe.” That is, indeed, how most popular articles explain quantum computing, but it’s also wrong!

The issue is this: suppose you’re facing some arbitrary problem—like, say, the Traveling Salesman problem, of finding the shortest path that visits a collection of cities—that’s hard because of a combinatorial explosion of possible solutions. It’s easy to program your quantum computer to assign every possible solution an equal amplitude. At some point, however, you need to make a measurement, which returns a single answer. And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!

For this reason, the only hope for a quantum-computing advantage comes from interference: the key aspect of amplitudes that has no classical counterpart, and indeed, that taught physicists that the world has to be described with amplitudes in the first place. Interference is customarily illustrated by the double-slit experiment, in which we shoot a photon at a screen with two slits in it, and then observe where the photon lands on a second screen behind it. What we find is that there are certain “dark patches” on the second screen where the photon never appears—and yet, if we close one of the slits, then the photon can appear in those patches. In other words, decreasing the number of ways for the photon to get somewhere can increase the probability that it gets there! According to quantum mechanics, the reason is that the amplitude for the photon to land somewhere can receive a positive contribution from the first slit, and a negative contribution from the second. In that case, if both slits are open, then the two contributions cancel each other out, and the photon never appears there at all. (Because the probability is the amplitude squared, both negative and positive amplitudes correspond to positive probabilities.)
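The arithmetic of that cancellation fits in a few lines of Python (the amplitude values are made up for illustration, not computed from any actual slit geometry):

```python
# Toy double-slit arithmetic. At a "dark patch," the amplitude contributed
# via slit 1 is positive and the contribution via slit 2 is negative.
slit1 = 0.5
slit2 = -0.5

both_open = abs(slit1 + slit2) ** 2  # amplitudes add, THEN we square
print(both_open)                      # 0.0 -- the photon never lands here

one_open = abs(slit1) ** 2            # close slit 2: only one contribution
print(one_open)                       # 0.25 -- now the photon can appear
```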

Likewise, when designing algorithms for quantum computers, the goal is always to choreograph things so that, for each wrong answer, some of the contributions to its amplitude are positive and others are negative, so on average they cancel out, leaving an amplitude close to zero. Meanwhile, the contributions to the right answer’s amplitude should reinforce each other (being, say, all positive, or all negative). If you can arrange this, then when you measure, you’ll see the right answer with high probability.
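A minimal sketch of this choreography is the textbook single-qubit trick of applying the Hadamard operation twice; this toy example is illustrative, not any particular algorithm:

```python
import math

# Apply the Hadamard operation to one qubit twice. The two paths to |1>
# carry opposite signs and cancel; the two paths to |0> reinforce.
s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]

def apply(M, state):
    return [M[0][0] * state[0] + M[0][1] * state[1],
            M[1][0] * state[0] + M[1][1] * state[1]]

state = [1.0, 0.0]       # start in |0>
state = apply(H, state)  # equal amplitudes: [0.707..., 0.707...]
state = apply(H, state)  # interference: back to |0>

probs = [abs(a) ** 2 for a in state]
print(probs)             # ~[1.0, 0.0]: the |1> paths cancelled exactly
```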

It was precisely by orchestrating such a clever interference pattern that Peter Shor managed to devise his quantum algorithm for factoring large numbers. To do so, Shor had to exploit extremely specific properties of the factoring problem: it was not just a matter of “trying each possible divisor in a different parallel universe.” In fact, an important 1994 theorem of Bennett, Bernstein, Brassard, and Vazirani shows that what you might call the “naïve parallel-universe approach” never yields an exponential speed improvement. The naïve approach can reveal solutions in only the square root of the number of steps that a classical computer would need, an important phenomenon called the Grover speedup. But that square-root advantage turns out to be the limit: if you want to do better, then like Shor, you need to find something special about your problem that lets interference reveal its answer.
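To make the square-root speedup concrete, here is a bare-bones classical simulation of Grover's search under its standard textbook formulation (the search-space size and marked item are arbitrary choices for illustration):

```python
import math

N = 64       # search space size
marked = 42  # the item the oracle recognizes

state = [1 / math.sqrt(N)] * N  # equal superposition over N items

# ~sqrt(N) iterations suffice, versus ~N/2 classical queries on average.
iterations = int(math.floor(math.pi / 4 * math.sqrt(N)))
for _ in range(iterations):
    state[marked] = -state[marked]   # oracle: flip the marked amplitude
    mean = sum(state) / N            # diffusion: reflect about the mean
    state = [2 * mean - a for a in state]

print(iterations)               # 6
print(abs(state[marked]) ** 2)  # probability near 1 of measuring item 42
```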

What are the implications of these facts for Deutsch’s argument that only Many Worlds can explain how a quantum computer works? At the least, we should say that the “exponential cornucopia of parallel universes” almost always hides from us, revealing itself only in very special interference experiments where all the “universes” collaborate, rather than any one of them shouting above the rest. But one could go even further. One could say: To whatever extent the parallel universes do collaborate in a huge interference pattern to reveal (say) the factors of a number, to that extent they never had separate identities as “parallel universes” at all—even according to the Many Worlds interpretation! Rather, they were just one interfering, quantum-mechanical mush. And from a certain perspective, all the quantum computer did was to linearly transform the way in which we measured that mush, as if we were rotating it to see it from a more revealing angle. Conversely, whenever the branches do act like parallel universes, Many Worlds itself tells us that we only observe one of them—so from a strict empirical standpoint, we could treat the others (if we liked) as unrealized hypotheticals. That, at least, is the sort of reply a modern Copenhagenist might give, if she wanted to answer Deutsch’s argument on its own terms.

There are other aspects of quantum information that seem more “Copenhagen-like” than “Many-Worlds-like”—or at least, for which thinking about “parallel universes” too naïvely could lead us astray. So for example, suppose Alice sends n quantum-mechanical bits (or qubits) to Bob, and Bob then measures the qubits in any way he likes. How many classical bits can Alice transmit to Bob that way? If you remember that n qubits require 2^n amplitudes to describe, you might conjecture that Alice could achieve an incredible information compression—“storing one bit in each parallel universe.” But alas, an important result called Holevo’s Theorem says that, because of the severe limitations on what Bob learns when he measures the qubits, such compression is impossible. In fact, by sending n qubits to Bob, Alice can reliably communicate only n bits (or 2n bits, if Alice and Bob shared quantum correlations in advance), essentially no better than if she’d sent the bits classically. So for this task, you might say, the amplitude wave acts more like “something in our heads” (as the Copenhagenists always said) than like “something out there in reality” (as the Many-Worlders say).

But the Many-Worlders don’t need to take this lying down. They could respond, for example, by pointing to other, more specialized communication problems that Alice and Bob can provably solve using exponentially fewer qubits than classical bits. Here’s one example of such a problem, drawing on a 1999 theorem of Ran Raz and a 2010 theorem of Boaz Klartag and Oded Regev: Alice knows a vector in a high-dimensional space, while Bob knows two orthogonal subspaces. Promised that the vector lies in one of the two subspaces, can Bob figure out which one holds it? Quantumly, Alice can encode the components of her vector as amplitudes—in effect, squeezing n numbers into exponentially fewer qubits. And crucially, after receiving those qubits, Bob can measure them in a way that doesn’t reveal everything about Alice’s vector, but does reveal which subspace it lies in, which is the one thing Bob wanted to know.
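A toy Python sketch of the encoding idea, with made-up numbers and a 4-dimensional space standing in for the genuinely high-dimensional one (this illustrates only the amplitude encoding, not the full Raz protocol):

```python
# Alice's unit vector lives in a 4-dimensional space; Bob knows two
# orthogonal subspaces, here spanned by coordinates {0, 1} and {2, 3}.
# Alice encodes the vector's components as the amplitudes of 2 qubits.
v = [0.6, 0.8, 0.0, 0.0]  # a unit vector lying entirely in subspace A

prob_A = sum(abs(x) ** 2 for x in v[:2])  # chance Bob's measurement says "A"
prob_B = sum(abs(x) ** 2 for x in v[2:])  # chance it says "B"

# ~1.0 and 0.0: Bob learns the subspace with certainty, without
# learning the individual components 0.6 and 0.8 themselves.
print(prob_A, prob_B)
```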

So, do the Many Worlds become “real” for these special problems, but retreat back to being artifacts of the math for ordinary information transmission?

To my mind, one of the wisest replies came from the mathematician and quantum information theorist Boris Tsirelson, who said: “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” In other words, this is a new ontological category, one that our pre-quantum intuitions simply don’t have a good slot for. From this perspective, the contribution of quantum computing is to delineate for which tasks the giant amplitude wave acts “real and Many-Worldish,” and for which other tasks it acts “formal and Copenhagenish.” Quantum computing can give both sides plenty of fresh ammunition, without handing an obvious victory to either.

So then, is there any interpretation that flat-out doesn’t fare well under the lens of quantum computing? While some of my colleagues will strongly disagree, I’d put forward Bohmian mechanics as a candidate. Recall that David Bohm’s vision was of real particles, occupying definite positions in ordinary three-dimensional space, but which are jostled around by a giant amplitude wave in a way that perfectly reproduces the predictions of quantum mechanics. A key selling point of Bohm’s interpretation is that it restores the determinism of classical physics: all the uncertainty of measurement, we can say in his picture, arises from lack of knowledge of the initial conditions. I’d describe Bohm’s picture as striking and elegant—as long as we’re only talking about one or two particles at a time.

But what happens if we try to apply Bohmian mechanics to a quantum computer—say, one that’s running Shor’s algorithm to factor a 10,000-digit number, using hundreds of thousands of particles? We can do that, but if we do, talking about the particles’ “real locations” will add spectacularly little insight. The amplitude wave, you might say, will be “doing all the real work,” with the “true” particle positions bouncing around like comically-irrelevant fluff. Nor, for that matter, will the bouncing be completely deterministic. The reason for this is technical: it has to do with the fact that, while particles’ positions in space are continuous, the 0’s and 1’s in a computer memory (which we might encode, for example, by the spins of the particles) are discrete. And one can prove that, if we want to reproduce the predictions of quantum mechanics for discrete systems, then we need to inject randomness at many times, rather than only at the beginning of the universe.

But it gets worse. In 2005, I proved a theorem that says that, in any theory like Bohmian mechanics, if you wanted to calculate the entire trajectory of the “real” particles, you’d need to solve problems that are thought to be intractable even for quantum computers. One such problem is the so-called collision problem, where you’re given a cryptographic hash function (a function that maps a long message to a short “hash value”) and asked to find any two messages with the same hash. In 2002, I proved that, at least if you use the “naïve parallel-universe” approach, any quantum algorithm for the collision problem requires at least ~H^(1/5) steps, where H is the number of possible hash values. (This lower bound was subsequently improved to ~H^(1/3) by Yaoyun Shi, exactly matching an upper bound of Brassard, Høyer, and Tapp.) By contrast, if (with godlike superpower) you could somehow see the whole histories of Bohmian particles, you could solve the collision problem almost instantly.
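To make the collision problem itself concrete, here is a purely classical “birthday” search in Python, using a toy stand-in hash (not a cryptographic one, and not any quantum algorithm): random messages are evaluated until two share a hash value, which classically takes roughly sqrt(H) evaluations on average.

```python
import random

H = 10_000  # number of possible hash values

def toy_hash(message):
    # Deterministic toy hash into H buckets; purely illustrative.
    return hash(message) % H

random.seed(0)
seen = {}     # hash value -> first message seen with that hash
queries = 0
while True:
    msg = str(random.getrandbits(64))
    queries += 1
    h = toy_hash(msg)
    if h in seen and seen[h] != msg:
        collision = (seen[h], msg)  # two distinct messages, same hash
        break
    seen[h] = msg

print(toy_hash(collision[0]) == toy_hash(collision[1]))  # True
print(queries)  # typically on the order of sqrt(10_000) = 100 queries
```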

What makes this interesting is that, if you ask to see the locations of Bohmian particles at any one time, you won’t find anything that you couldn’t have easily calculated with a standard, garden-variety quantum computer. It’s only when you ask for the particles’ locations at multiple times—a question that Bohmian mechanics answers, but that ordinary quantum mechanics rejects as meaningless—that you’re able to see multiple messages with the same hash, and thereby solve the collision problem.

My conclusion is that, if you believe in the reality of Bohmian trajectories, you believe that Nature does even more computational work than a quantum computer could efficiently simulate—but then it hides the fruits of its labor where no one can ever observe it. Now, this sits uneasily with a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. (Admittedly, some people would probably argue that the Many Worlds interpretation violates my “aftershave principle” even more flagrantly than Bohmian mechanics does! But that depends, in part, on what we count as “observation”: just our observations, or also the observations of any parallel-universe doppelgängers?)

Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing. In the end, asking how quantum computing affects the interpretation of quantum mechanics is sort of like asking how classical computing affects the debate about whether the mind is a machine. In both cases, there was a range of philosophical positions that people defended before a technology came along, and most of those positions still have articulate defenders after the technology. So, by that standard, the technology can’t be said to have “resolved” much! Yet the technology is so striking that even the idea of it—let alone the thing itself—can shift the terms of the debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived. This might, more generally, be the main way technology affects philosophy.

Go Deeper
Editor’s picks for further reading

Nature News: Quantum Physics: What is really real?
Science writer Zeeya Merali discusses new experiments designed to rule out, or confirm, different interpretations of quantum mechanics.

The Nature of Reality: Debating the Meaning of Quantum Mechanics
An introduction to some of the leading interpretations of quantum mechanics.

Also by Scott Aaronson

The Nature of Reality: Is there Anything Beyond Quantum Computing?
Are quantum computers the ultimate limit of computing, or are there devices that could tackle problems too hard for quantum computers?

Quantum Computing Since Democritus
Described by its author as “a candidate for the weirdest book ever to be published by Cambridge University Press,” Scott Aaronson’s book is a journey through the past and present of math, physics, and computer science.

Shtetl-Optimized
Scott Aaronson has been blogging about quantum computing for almost ten years. Read his latest thoughts, and archived posts, here.

Why Philosophers Should Care About Computational Complexity
In this essay, discover connections between computational complexity theory and philosophical questions like the nature of mathematical knowledge, the problem of logical omniscience, the foundations of quantum mechanics, and more.



Scott Aaronson

    Scott Aaronson is an Associate Professor of Electrical Engineering and Computer Science at MIT. Prior to joining MIT, he received his PhD in computer science from UC Berkeley, and did postdocs at the Institute for Advanced Study, Princeton, and the University of Waterloo. His research focuses on the capabilities and limits of quantum computers, and more generally on computational complexity theory and its relationship to physics. His first book, "Quantum Computing Since Democritus," was recently published by Cambridge University Press. Aaronson has written about quantum computing for Scientific American and the New York Times, and writes a popular blog (http://www.scottaaronson.com/blog). He's received the National Science Foundation's Alan T. Waterman Award, the United States PECASE Award, and MIT's Junior Bose Award for Excellence in Teaching.

    • pcusertoo

      Whew…

      • Mongo

        Yeah. Whew, pant, pant. I need some background.

    • Peter Morgan

OK, but the binary values of classical digital computing are in fact carefully controlled analogue computing: making sure the EM field in some region is either one thing or another, only very rarely in between (despite noise levels that are also carefully controlled, but are not exploited as part of the computation). The comparison of “n interacting particles each of which could be in either of two states”, in contrast to “quantum mechanics says you need 2^n amplitudes, one for every possible configuration of all n of the particles”, is misleading in that wider context. Stochastic classical field theory is rather closer to quantum field theory than classical particle physics is to quantum mechanics. Is it proven that quantum computing is more effective in principle than all possible uses of stochastic classical algorithms?

    • chris

      Great article, not an incredible amount of hype or metaphysics. But needs some equations!

    • Luca Turin

      Brilliant, inspired writing. Best nontechnical article on the subject ever.

      • Jack610

        Assuming your definition of non-technical is “not an expert in quantum physics but read about it a lot”.

        But yes, good writing, good article.

    • bruce golland

How many angels can sit on a pin? How many absurd explanations are there for quantum mechanics? Same stupidity, different centuries.

      • Dubiakw

        Just FYI. The “angel on pins” problem was never a real one. The medieval scholastics were not idiots. It was posed as a problem in logic, a homework assignment, so to speak, that was known to be absurd.

        In a like manner, the ancient Greek philosophers posed the question of what the results of thinking of humanity and nature as different things would be. That humans were not themselves part of nature was considered ridiculous and logical errors resulted, teaching students about “reductio ad absurdum”. But like the error crossed out on a blackboard, people seem to remember the error better than the truth and even today “natural” is absurdly considered to be something humans rise above or exist outside of. Humans are part of nature, and seen in this light, drugs are as much a product of nature as plant materials or dirt.

        Your facile attitude of dismissal simply shows how lazy you really are.

    • Bradley Branham
    • Napfény Nap

Yes, this article is one of the best on the subject. OK. I like it.

    • andrewp111

I think the biggest impact of quantum computing will be previously undiscovered phenomena. This always happens in science. You do the experiments and discover something that was never anticipated before, and not predicted from the equations beforehand. New phenomena then lead to new insights and new theories, and all this talk of how to interpret quantum mechanics will become ancient and outdated drivel.

      • Dubiakw

        Scientism can be a disease. The belief that problems will be solved by science alone is akin to religious teleology. It’s like saying that a flashlight can make things that are invisible by nature, visible. And the “ancient and outdated” ever becomes new again, just as pre-Socratic Pyrrhonism is now called “mindfulness”.

        As Confucius said, it is difficult to find a black cat in a dark room, especially if there is no cat.

        • chris

          Well… good scientists know that certain problems will be solved by science (What happens when I throw a ball into the air)… bad scientists claim all problems can be solved by science, right now. The line between hubris is quite thin in the Physics-Journalism sphere. On the other hand, you need egotistic scientists to attack the big problems and find those leaps that bring light and understanding to the world.

        • DavidAppell

          Well, scientism has brought huge advancements in civilization, whereas religion has not. So it has that going for it.

      • chris

        😉 I kinda agree, the biggest success of Quantum Computing will be to prove that Theory is in some way real. We’ve gotten quite ahead of ourselves with interpretation and metaphysics recently. It will be nice when a lot of the hype turns into drivel.

    • BR

      “And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!”

      But there is no way to generate a ‘TRULY RANDOM’ number without recourse to quantum phenomena. Classical algorithms can only generate ‘PSEUDO RANDOM’ numbers with finite periods, no matter how large, not indefinitely large, as quantum phenomena can. Put another way, ‘quantum noise’ is purely white, and its spectrum runs to infinity as far as we know. There is no classical phenomenon known which reflects this noise, a key difference between physics and mysticism.

    • DireMouse

      Eternity

      William Blake, 1757 – 1827

      He who binds to himself a joy
      Does the winged life destroy
      He who kisses the joy as it flies
      Lives in eternity’s sunrise

      • Kristianna Thomas

Do your arms hurt from hoisting the bust of William Blake to the height of piety? Yes, William Blake was a great poet and philosopher; but Odin he is not.
        Poor Odin dead and gone
        left me here to sing this song
        pretty little girl with the red dress on
        Poor Odin dead and gone

    • Bridget Bengtson

      PEACE finally

    • Giulio

      Dear Distinguished Professor,
very interesting analysis and great study! It would be a great honor if you could find time to answer a question of mine.
You wrote about Occam’s Razor applied to the reality of Bohmian trajectories, but why not admit that something like the AdS/CFT correspondence could happen, where an almost intractable problem in one side/theory becomes much easier in a dual description?
      Thank you very much.

      • Michael Benes

        Because banana mash makes monkey hash you illiterate twit.-The Prof

        • Giulio

you can’t debate, so you resort to insults: that shows you’ve lost and have nothing to reply

          • Dubiakw

            That would be the standard response. However, I prefer to think of insults as tags of a sort that mark the idiots so others will be forewarned.

    • rrtucci

      Can quantum computing reveal the true meaning of Christmas?

      • youtroll

        Can stupid, offensive comments be banned and/or their IP made public?

        • rrtucci

          Sorry, my levity is no match to your towering intellect.

          • levity

Oh sorry, I’m stupid and I didn’t understand your levity. BTW is there any meaning in the Ramadan (Arabic: رمضان)? Is there any stupid, fanatic idiot Muslim here? LOL levity, yes you’re right LOL W Charlie Hebdo!!!

          • made_in_italy

Well, of course, if you call yourself a friend of Lubos, an arrogant racist whose career in physics was rightly and fittingly cut short, you can only write this nonsense, obviously… your friend is the smart one, right? Ah… and this would be your blog? https://qbnets.wordpress.com/2015/06/22/how-the-conservatives-saved-physics-by-lubos-motl/ LOL maybe you meant to suggest to your friend that his book can be used as toilet paper LOL

    • Kristianna Thomas

With all of the stuff about quantum stuff, the very heart of it lies in the belief that we are the center of life immortal. We are not the center of the universe; we are the only show in town. Nothing exists outside the domain of our existence, and nothing existed before the beginning of time, April 16, 3:00 am, 13,800,000 billion years ago. A grand explosion occurred from the Great Nada, an ethereal vapor that existed from time immortal. Since the nebulous void existed for eternity, it had enough energy to create only one of itself: our universe. Does life exist outside our universe? No, nada exists outside our universe. For nada exists outside the Great Nada. If one were to travel to the edge of the universe, one would instantly turn into nada. I call this the Nada Paradox; like if you went to the edge of the world, one would fall off and hit one’s head on the tiny void of Nada. If other universes do exist, they would be created by the Grand Photocopier of time, fed by endless reams of photo paper stacked on endless rolls of toilet paper (each sheet being a perfect copy of our universe). This is a theory that is safe for most septic systems and is biodegradable. So endless rolls of the universe being fed by the infinite copy machine of life, making endless copies of ourselves, is highly probable, and mathematically attainable. For as a wise man once said, “you get more bounce to the ounce”.

      • DavidAppell

        If you want to provide a theory of quantum mechanics without observables, please, go right ahead. No one has found one yet, despite lots of very smart people looking.

    • Jonnan

      In the meantime an anthropomorphization of quantum physics sits giggling in the corner whispering “Wait till they figure out *both* interpretations are true”.