
Computing a Universe Simulation
Season 4 Episode 42 | 12m 4s
Is it possible to simulate the entire universe on a computer smaller than the universe?
Physics seems to be telling us that it's possible to simulate the entire universe on a computer smaller than the universe.
If we go along with this crazy notion, how powerful would that computer need to be?
And how long would it take?
Believe it or not, we can figure it out.
Look, I'm not saying the universe is a simulation.
I mean, it might be.
I'm just not saying it.
And perhaps, it doesn't make any difference.
Even if this is the prime, the original, physical universe, rather than somewhere deep in the simulation nest, we can still think of our universe's underlying mechanics as computation.
Imagine a universe in which the most elementary components are stripped of all properties besides some binary notion of existence or nonexistence, as if the tiniest chunks of space time, chunks of quantum field, or elements in the abstract space of quantum mechanical states could be either full or empty.
These elements interact with their neighbors by a simple set of rules, leading to oscillations, elementary particles, atoms, and ultimately to all of the emergent laws of physics, physical structure, and the universe itself.
I just described the cellular automaton hypothesis.
In this picture, the universe is a multi-dimensional version of Conway's Game of Life.
Such a universe could be reasonably thought of as a computation, cells stripped of all properties until they are indistinguishable from pure information.
And together, they form a sort of computer whose sole task is to compute its own evolution.
This may not be how our reality works, but it's an idea that many physicists take seriously.
And even if we aren't emergent patterns of a cellular automaton, we can think of any physical reality as a computation, so long as its underlying mechanics are rules-based evolution over time.
That includes most formulations of quantum mechanics and proposals for theories of everything.
We'll come back to the question of whether the universe is a computer, and we'll look at cellular automata and pancomputationalism and, in general, the idea of digital physics and an informational universe.
But today, let's answer a simple question.
If the universe is a computer, how good a computer is it?
And an even more fun question-- could you build a computer inside this universe to simulate this universe?
In answering this, we'll also answer the recent challenge question.
And I encourage you to watch that episode before you watch this one.
The power of a computer can be crudely broken down into two things.
How much information can it store?
And how quickly can it perform computations on that information?
The laws of physics prescribe fundamental limits on both.
The first one, the memory capacity of the universe, is a topic we've looked at.
It's defined by the Bekenstein bound, which tells us the maximum information that can be stored in a volume of space is proportional to the surface area of that volume.
Specifically, it's the number of tiny Planck areas you can fit over that surface area divided by 4.
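Written out as a formula, and glossing over a factor of ln 2 between nats and bits just as the episode does, the bound on the information I that fits inside a closed surface of area A is roughly:

    I \lesssim \frac{A}{4\,\ell_P^2}, \qquad \ell_P^2 = \frac{\hbar G}{c^3} \approx 2.6 \times 10^{-70}\ \mathrm{m}^2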
It was in studying black holes that Jacob Bekenstein realized that they must contain the maximum possible amount of hidden information, the maximum possible entropy.
If you fill a region of the universe with information equal to its Bekenstein bound, it'll immediately become a black hole.
We saw in our episode on the information content of the universe that the maximum information content, the Bekenstein bound, of the observable universe is around 10 to the power of 120 bits based on its surface area.
At the same time, the actual information content in matter and radiation is probably more like 10 to the power of 90 bits, roughly corresponding to the number of particles of matter and radiation.
Bizarrely, the Bekenstein bound suggests that we could hold all of the information in the observable universe within a storage device smaller than the observable universe, which brings us to the first part of the challenge question.
Assuming you can build a computer that stores information at the Bekenstein bound, essentially, your memory device is the event horizon of a black hole.
How large would that black hole need to be to store all of the information about all of the particles in the universe?
We'll figure out the case for just matter and for matter and radiation.
So there are something like 10 to the power of 80 hydrogen atoms in the universe.
If we instead count all the elementary particles with mass, we might get 10 to the 81 particles.
But let's just go with a nice, round order of magnitude-- 10 to the 80 bits assuming 1 bit per particle.
Again, the Bekenstein bound is the event horizon surface area in Planck areas with an extra factor of a quarter that Stephen Hawking figured out.
So our Bekenstein bound hard drive needs to be roughly 4 by 10 to the power of 80 Planck areas, corresponding to about 100 billion square meters.
That corresponds to a spherical black hole with a radius around 100 kilometers.
So informationally speaking, you could store the entire observable universe of non-radiation particles on the surface area of a black hole the size of Switzerland.
The radius I gave is the Schwarzschild radius, the radius of the event horizon of a non-rotating neutral black hole, again, like Switzerland.
It's directly proportional to its mass.
The mass of a 100 kilometer radius black hole would be 30 times that of our sun.
It's a hard drive the size of a picturesque European nation with the mass of the heaviest stars in the universe and with the storage capacity to register every atom in the universe, not exactly portable, but you'd never need to defrag again.
If you want to include photons, neutrinos, dark matter, et cetera, and not just atoms, you need to scale up the surface area by a factor of 10 billion and the radius by a factor of 100,000.
You'd need a black hole a few million times the mass of the sun and 10 million kilometers in radius.
That's pretty close to the size of the supermassive black hole in the center of the Milky Way.
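As a rough check, here's a minimal Python sketch, not anything from the episode itself, that reproduces these hard-drive numbers; the bit counts and the bits-equals-Planck-areas-over-4 rule come from the episode, the constants are standard values, and the function name bekenstein_drive is just an illustrative choice.

    import math

    hbar = 1.054571817e-34         # reduced Planck constant, J*s
    G = 6.67430e-11                # gravitational constant, m^3 kg^-1 s^-2
    c = 2.99792458e8               # speed of light, m/s
    M_SUN = 1.989e30               # solar mass, kg
    PLANCK_AREA = hbar * G / c**3  # ~2.6e-70 m^2

    def bekenstein_drive(bits):
        """Black hole whose horizon stores `bits` at the Bekenstein bound."""
        area = 4 * bits * PLANCK_AREA             # bits ~ horizon area in Planck areas / 4
        radius = math.sqrt(area / (4 * math.pi))  # sphere: A = 4*pi*r^2
        mass = radius * c**2 / (2 * G)            # Schwarzschild radius: r = 2GM/c^2
        return radius, mass

    for label, bits in [("matter only", 1e80), ("matter + radiation", 1e90)]:
        r, m = bekenstein_drive(bits)
        print(f"{label}: radius ~ {r / 1e3:.3g} km, mass ~ {m / M_SUN:.3g} solar masses")

With these values it prints roughly 90 kilometers and 30 solar masses for the matter-only case, and roughly 10 million kilometers and 3 million solar masses once radiation, neutrinos, and dark matter are included, matching the figures above.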
Now those are some big masses from our puny human perspective.
But remember, we're storing all of the information in the universe on just one of these black holes.
That is quite the zip file.
Storing all of the information in the universe is one thing, but a real computer must compute.
Every instant in time must progress to the next instant.
Every quantum state must be processed into the following state.
There's a fundamental limit to the speed with which quantum states can change.
The Margolus-Levitin theorem tells us the maximum rate at which the quantum states of a system can shift into completely independent quantum states, or orthogonal states in quantum jargon-- a rate proportional to the average energy of the system.
And that makes some intuitive sense.
The more energy in the system, the quicker its quantum states can evolve.
This maximum rate of state changes is also the maximum speed of logical operations for any computer.
For example, a simple quantum system would be a group of electrons with spins pointing up or down, corresponding to a single bit of information each.
Flipping the spin of an electron is a change to an orthogonal state, but it can also be thought of as a single operation, in this case, the NOT operation.
This sort of quantum spin array is exactly the system used in most quantum computers.
And so the Margolus-Levitin theorem also gives us the speed limit of operations in quantum computing, in fact, for any computing, but only quantum computing can expect to get close to this limit.
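Stated as a formula, with E the average energy of the system above its ground state and ħ the reduced Planck constant, the maximum rate of orthogonal state changes, and hence of logical operations, is:

    \nu_{\max} = \frac{2E}{\pi\hbar}

This is the rate behind every estimate that follows.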
Using this theorem, we can also figure out the computational capacity of the entire universe.
This follows a paper by Seth Lloyd that I pointed you to in the challenge question.
For the energy of the system, we use the mass of the observable universe, around 10 to the power of 52 kilograms, and then apply good old e equals mc squared to get energy.
We get that, if every single particle in the universe were used to make a computation, it should process 5 by 10 to the power of 102 logical operations per second, not bad.
That's why the universe can dial up its graphics settings so high.
The universe has been around for 13.8 billion years or 4 by 10 to the power of 17 seconds.
So it could have performed around 10 to the power of 120 operations in that time.
And that's actually independent of the number of particles or degrees of freedom the universe is using to do that computation.
The number is based on its energy content, and it has to spread that computational power over its particles.
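Here's a minimal sketch of that arithmetic in Python, assuming the episode's round figure of 10 to the power of 52 kilograms and standard constants; the function name max_ops_per_second is just illustrative.

    import math

    hbar = 1.054571817e-34     # reduced Planck constant, J*s
    c = 2.99792458e8           # speed of light, m/s
    SECONDS_PER_YEAR = 3.156e7

    def max_ops_per_second(energy_joules):
        """Margolus-Levitin bound on logical operations per second."""
        return 2 * energy_joules / (math.pi * hbar)

    mass_universe = 1e52               # kg, the episode's figure for the observable universe
    energy = mass_universe * c**2      # E = mc^2
    rate = max_ops_per_second(energy)  # ~5e102 operations per second
    age = 13.8e9 * SECONDS_PER_YEAR    # ~4e17 seconds since the big bang
    print(f"rate ~ {rate:.1e} ops/s, total ~ {rate * age:.1e} ops so far")

It comes out to about 5 by 10 to the power of 102 operations per second and a few times 10 to the power of 120 operations in total, the order of magnitude quoted above.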
On the other hand, our Switzerland-sized black hole computer is a little slower.
Instead of using the mass of the universe to figure out the computation speed, we only have 30 solar masses.
It can do 3 by 10 to the power of 82 logical ops per second, a factor of 10 to the power of 20 slower than the computational speed of the whole universe.
Assuming the universe is computing its own evolution at maximum speed, our black hole computer would take a factor of 10 to the power of 20 longer or 10 to the power of 30 years to simulate the universe to the present day.
Protons will have started to decay by the time we simulate last Monday.
Our supermassive black hole scale computer does a bit better, taking only 10 to the power of 25 years.
I don't know-- maybe if we turn off anti-aliasing?
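The same arithmetic applied to the two black-hole computers looks like this; the masses and the 10-to-the-120-operation target come from the episode, and the rest is just a sketch under those assumptions.

    import math

    hbar = 1.054571817e-34     # reduced Planck constant, J*s
    c = 2.99792458e8           # speed of light, m/s
    M_SUN = 1.989e30           # solar mass, kg
    SECONDS_PER_YEAR = 3.156e7

    target_ops = 1e120         # everything the universe could have computed so far
    for label, mass in [("30-solar-mass drive", 30 * M_SUN),
                        ("supermassive drive", 3e6 * M_SUN)]:
        rate = 2 * mass * c**2 / (math.pi * hbar)  # Margolus-Levitin limit, ops per second
        years = target_ops / rate / SECONDS_PER_YEAR
        print(f"{label}: {rate:.1e} ops/s, ~{years:.0e} years to run 1e120 ops")

That reproduces the 3 by 10 to the power of 82 operations per second, 10 to the power of 30 years, and 10 to the power of 25 years quoted above.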
Again, this assumes that we needed all of those 10 to the power of 120 ops.
We can instead estimate the computational history of the universe by assuming that all entropy generated over the history of the universe comes from its internal computation.
The Landauer limit gives the entropy cost for computation.
And approximating very crudely, it's 1 bit of entropy per operation.
Matter and radiation combined have an entropy of 10 to the power of 90 bits.
So maybe they processed 10 to the 90 ops since the big bang.
Our small black hole computer could do that in a year and our big one in five minutes.
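Redoing that division under the entropy-based assumption, as a quick sketch using the rates from the sketch above:

    SECONDS_PER_YEAR = 3.156e7
    needed_ops = 1e90                      # one operation per bit of entropy, per the crude Landauer argument
    small_rate, big_rate = 3.2e82, 3.2e87  # ops/s for the two black hole computers (from above)
    print(f"small: ~{needed_ops / small_rate / SECONDS_PER_YEAR:.1f} years")
    print(f"big:   ~{needed_ops / big_rate:.0f} seconds, about five minutes")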
Unfortunately, you can only read out the simulation results in Hawking radiation as those black holes evaporate, which will take 10 to the power of 70 years minimum-- hell of an input-output bottleneck.
But hey, you could simulate 10 to the power of 70 universes in that time.
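For the readout bottleneck, a standard estimate, not spelled out in the episode, for how long a non-rotating black hole takes to evaporate is t ≈ 5120πG²M³/(ħc⁴); plugging in the 30-solar-mass hard drive gives a few times 10 to the power of 71 years, and the supermassive one takes about 10 to the power of 15 times longer still, so 'at least 10 to the power of 70 years' is comfortably safe. A quick sketch, assuming that formula:

    import math

    hbar = 1.054571817e-34     # reduced Planck constant, J*s
    c = 2.99792458e8           # speed of light, m/s
    G = 6.67430e-11            # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30           # solar mass, kg
    SECONDS_PER_YEAR = 3.156e7

    def evaporation_years(mass_kg):
        """Hawking evaporation time, t ~ 5120*pi*G^2*M^3 / (hbar*c^4), in years."""
        return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / SECONDS_PER_YEAR

    print(f"30-solar-mass drive evaporates in ~{evaporation_years(30 * M_SUN):.0e} years")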
OK, so wildly different results depending on your assumptions-- and by the way, getting the results of your simulation out will take forever, nearly literally.
Some of you gave answers close to these by various ingenious methods.
We chose six correct enough responses at random listed below.
If you see your name, you are a lucky winner of a space-time t-shirt.
Shoot us an email at pbsspacetime@gmail.com with your name, address, US t-shirt size-- small, medium, extra extra large, whatever-- and let us know which tee you'd like.
For me, the big takeaway from this exercise isn't so much the numbers, which are enormous and hard to get our heads around.
It's, first, that our universe could in principle be simulated on a computer that fits inside the universe.
I suppose that has implications for the simulation hypothesis.
But I remain dubious.
It's not clear that programming and then reading out from an event horizon computer is even possible.
What blows my mind even more is that we can use our current incomplete understanding of physics, in particular very standard ideas on general relativity and quantum mechanics, to figure out the computational properties of our universe.
This will be important as quantum computers develop.
And these insights may also lead to real paradigm shifts, perhaps ultimately revealing the fundamentally informational nature of space time.