Imaging scientist Katie Bouman helped construct the first ever photo of a black hole, but she didn’t expect this kind of excitement — or attention.
The Event Horizon Telescope collaboration released the image on Wednesday, and since then Bouman has been swamped with phone calls, text messages and emails. Although she was far from the only scientist to work on this image (more than 200 researchers around the world contributed to the project), she has become a symbol for women’s achievement in computer science and astronomy. Political figures like Rep. Alexandria Ocasio-Cortez have encouraged Bouman to “take your rightful seat in history.”
But what exactly did the 29-year-old Bouman do to capture an image of the supermassive black hole at the center of the M87 galaxy, located 55 million light years away?
“No telescope actually takes a picture,” Bouman told the PBS NewsHour in an interview. Instead, all of the disparate data collected by the planet-sized telescope back in 2017 needed to be processed and translated into an image.
On Wednesday after the image was released, Bouman explained to NewsHour how she crafted an algorithm to incorporate it all.
The conversation has been edited for length and clarity.
How did you get involved in the Event Horizon Telescope project?
I did my PhD in computer vision at the Computer Science and Artificial Intelligence Laboratory at MIT, working on analyzing images and understanding images. I just love images. [In 2013], I heard about this meeting, and decided to tag along to hear Shep Doeleman [project director of the Event Horizon Telescope] and a couple of other people with the Event Horizon Telescope group.
I sat in on that meeting for like, two hours, and I understood almost nothing Shep said. I hardly knew what a black hole was, but I remember thinking at the end of the meeting that I really wanted to work on this project.
They were interested in getting someone to start working on imaging, because at the time they were still trying to get the instrument together. They hadn’t really gotten into what they were going to do once they got the data.
Wait, rewind. What do you mean, getting the instrument together?
In order to see a black hole, you need an Earth-sized telescope.
That black hole is so tiny from Earth, it’s about the same as if you were trying to see an orange on the surface of the moon.
[Note: M87’s black hole is actually huge, roughly 24 billion miles across, or about the size of our solar system. But at a distance of 55 million light years, it appears vanishingly small from Earth.]
The law of diffraction says that if you know the resolution you need to achieve and the wavelength you are observing at, then you can figure out how big your telescope needs to be.
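To make that concrete, here is a short Python sketch of the diffraction calculation. The 1.3 mm observing wavelength matches what the Event Horizon Telescope used; the roughly 25-microarcsecond target resolution is an illustrative figure for this example, not a number quoted in the interview.

```python
# Rayleigh criterion for a telescope's diffraction limit:
#   resolution (radians) ≈ 1.22 * wavelength / diameter
# Rearranged to find the dish size a given resolution demands:
#   diameter ≈ 1.22 * wavelength / resolution

wavelength_m = 1.3e-3                  # EHT observes at ~1.3 mm radio wavelength
theta_uas = 25                         # target resolution in microarcseconds (illustrative)
theta_rad = theta_uas / 1e6 / 206_265  # 1 arcsecond = 1/206,265 of a radian

diameter_m = 1.22 * wavelength_m / theta_rad
print(f"Required dish diameter: {diameter_m / 1000:.0f} km")
# On the order of 13,000 km, comparable to Earth's ~12,742 km diameter.
```

The answer comes out on the order of Earth’s own diameter, which is why the collaboration needed a planet-sized virtual dish rather than any single physical one.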
We needed an Earth-sized telescope, and obviously we couldn’t build an Earth-sized telescope dish. Instead, we took eight different telescopes from all around the world that were built for other purposes, and we joined them together to act as one dish.
That’s what the Event Horizon Telescope is.
Once this telescope was put together, what did you do?
We had telescopes and observers in Hawaii and Chile and Mexico and Spain, and they all had to have good weather at the same time, down to the picosecond [a trillionth of a second]. I observed from Mexico, 15,000 feet above sea level.
No telescope actually takes a picture. What happens is, the light from the black hole travels 55 million light years and then every dish collects a single stream of the light that it sees at the same time.
That’s recorded onto these hard drives. We can’t send that data over the internet because it’s way too big — [the hard drives are] sent on airplanes to a central location, where they’re computationally processed.
But it’s incomplete.
The process of imaging is taking the incomplete information that we get from a couple of places on our virtual telescope, and trying to fill in all the missing information to get the picture an actual Earth-sized telescope would have produced.
That is a hard problem.
How do you combine the information and get an image you trust?
There’s an infinite number of possible images that could have been created from the sparse measurements we took. The goal of imaging is to find the image that not only matches the data we measured, but is also the most likely one.
We have to impose some information about what the image should look like in order to recover that image. Some stuff that we impose is natural and easy — we know that light is positive. You can’t have negative light.
Other things we might impose would be how smooth the image is. You wouldn’t expect an image of a black hole to look like the white noise you get when you pull a cable out of your television.
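Those two priors, positivity and smoothness, can be illustrated with a toy reconstruction. The sketch below is not the EHT pipeline (whose algorithms are far more sophisticated); it is a minimal 1-D example, with all numbers invented, of recovering a signal from far fewer measurements than pixels by penalizing roughness and clipping out negative light.

```python
import numpy as np

# Toy regularized imaging: recover x from sparse measurements y = A @ x_true
# by minimizing ||A x - y||^2 + lam * ||diff(x)||^2 (a smoothness prior),
# with positivity enforced by clipping after each gradient step.

rng = np.random.default_rng(0)
n = 32                                    # number of "pixels"
x_true = np.zeros(n)
x_true[10:22] = 1.0                       # a simple bright patch

m = 12                                    # far fewer measurements than pixels
A = rng.normal(size=(m, n))               # stand-in measurement operator
y = A @ x_true

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]  # finite-difference (roughness) operator
lam = 0.5                                 # smoothness weight
step = 1e-3                               # gradient step size

x = np.zeros(n)
for _ in range(5000):
    grad = 2 * A.T @ (A @ x - y) + 2 * lam * D.T @ (D @ x)
    x = np.clip(x - step * grad, 0, None)  # positivity: you can't have negative light

print("data misfit:", np.linalg.norm(A @ x - y))
```

Without the priors, any of infinitely many signals would fit the 12 measurements; the smoothness penalty and the positivity constraint are what pick out one plausible answer.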
We really don’t want to accidentally tell our imaging algorithms, for example, “Oh, what is likely is this ring shape,” because then we just recover that ring back, and we’ve learned nothing.
To avoid shared bias, we split ourselves into four different teams that had different focuses and different kinds of algorithms. We worked separately for a month, not talking to each other about anything.
Then after one month we all gathered together in Cambridge, Massachusetts, and we put all the images up on a screen at one time. I think that was the most amazing moment, because even though each of the other images had different underlying assumptions and looked different, this ring appeared in all of the images.
The ring was always the same size, and it was brighter in the south. That was huge.
That was in late July of last year. Since then, we’ve spent months trying to break our images by training our algorithms on synthetic data. [In other words, the teams tried to mislead their algorithms with fake data that portrayed a flat disk with no hole in the center. They then applied the actual information collected by the Event Horizon Telescope to those new, misled algorithms.]
Even when we then applied those algorithms to the real data, we still got the ring in the end. You would have to bend over crazy backwards to not get this ring.
In the end, what was shown today was from three different pipelines, three different methods that we trained on the synthetic data. We got an image from each of those, and we blurred and averaged them together so they were all consistent.
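That blur-and-average step can also be sketched in miniature. In this hypothetical 1-D example, three stand-in “pipeline” signals are smoothed to one common resolution and then averaged; the shapes and noise levels are invented and do not come from the real data.

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Blur a 1-D signal by convolving with a normalized Gaussian kernel."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

x = np.linspace(0, 1, 200)
# Three stand-in "pipeline" outputs: the same bright feature plus different noise.
pipelines = [
    np.exp(-(x - 0.5)**2 / 0.005)
    + 0.05 * np.random.default_rng(seed).normal(size=200)
    for seed in range(3)
]

# Blur every image to the same (lowest) effective resolution, then average,
# so the final result only keeps structure all pipelines agree on.
common_sigma = 4
final = np.mean([gaussian_blur_1d(p, common_sigma) for p in pipelines], axis=0)
```

Smoothing to a shared resolution before averaging keeps the combined image conservative: features that only one pipeline produced tend to wash out, while structure common to all three survives.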
[As of April 10, at 4 p.m. ET] I haven’t called my family yet, I haven’t shown them the image. My lips have been sealed for a year, so I’m excited to get the chance to talk to them. I didn’t expect…I’ve been getting messages like crazy all day. I’m excited that people are so excited.
But you know, this was a team effort. I don’t know why I’m getting so much press myself…lots of people processing those petabytes of data, that’s what made it possible.
So many people from the imaging team really should be acknowledged — Andrew Chael, Kazunori Akiyama, Michael Johnson and José Gómez.
I brought the computer science mindset, but the project brought in people from so many different areas.
That’s what made it possible, no one person did this.