Imaging a Black Hole: How Software Created a Planet-sized Telescope

Published in Coder stories

May 14, 2020

14 min.

Author
Matthew Hutson

Science and technology writer

Black holes are singular objects in our universe, pinprick entities that pierce the fabric of spacetime. Typically formed by collapsed stars, they consist of an appropriately named singularity, a dimensionless, infinitely dense mass, from whose gravitational pull not even light can escape. Once a beam of light passes within a certain radius, called the event horizon, its fate is sealed. By definition, we can’t see a black hole, but it’s been theorized that the spherical swirl of light orbiting the event horizon can present a detectable ring, as rays escape the turbulence of gas spiraling inward. If we could somehow photograph such a halo, we might learn a great deal about the physics of relativity and high-energy matter.

On April 10, 2019, the world was treated to such an image. A consortium of more than 200 scientists from around the world released a glowing ring depicting M87*, a supermassive black hole at the center of the galaxy Messier 87. Supermassive black holes, formed by unknown means, sit at the center of nearly all large galaxies. This one bears 6.5 billion times the mass of our sun and the ring’s diameter is about three times that of Pluto’s orbit, on average. But its size belies the difficulty of capturing its visage. M87* is 55 million light years away. Imaging it has been likened to photographing an orange on the moon, or the edge of a coin across the United States.

Our planet does not contain a telescope large enough for such a task. “Ideally speaking, we’d turn the entire Earth into one big mirror [for gathering light],” says Jonathan Weintroub, an electrical engineer at the Harvard-Smithsonian Center for Astrophysics, “but we can’t really afford the real estate.” So researchers relied on a technique called very long baseline interferometry (VLBI). They point telescopes on several continents at the same target, then integrate the results, weaving the observations together with software to create the equivalent of a planet-sized instrument—the Event Horizon Telescope (EHT). Though they had ideas of what to expect when targeting M87*, many on the EHT team were still taken aback by the resulting image. “It was kind of a ‘Wow, that really worked’ [moment],” says Geoff Crew, a research scientist at MIT Haystack Observatory, in Westford, Massachusetts. “There is a sort of gee-whizz factor in the technology.”

Catching bits

On four clear nights in April 2017, seven radio telescope sites—one each in Arizona, Mexico, and Spain, and two each in Chile and Hawaii—pointed their dishes at M87*. (Some of these sites are themselves arrays of multiple dishes.) Large parabolic dishes up to 30 meters across caught radio waves with wavelengths around 1.3mm, reflecting them onto tiny wire antennas cooled in a vacuum to 4 degrees above absolute zero. The focused energy flowed as voltage signals through wires to analog-to-digital converters, which transformed them into bits, and then to what is known as the digital backend, or DBE.

The purpose of the DBE is to capture and record as many bits as possible in real time. “The digital backend is the first piece of code that touches the signal from the black hole, pretty much,” says Laura Vertatschitsch, an electrical engineer who helped develop the EHT’s DBE as a postdoctoral researcher at the Harvard-Smithsonian Center for Astrophysics. At its core is a pizza-box-sized piece of hardware called the R2DBE, based on an open-source device created by a group started at Berkeley called CASPER.

The R2DBE’s job is to quickly format the incoming data and parcel it out to a set of hard drives. “It’s a kind of computing that’s relatively simple, algorithmically speaking,” Weintroub says, “but is incredibly high performance.” Sitting on its circuit board is a chip called a field-programmable gate array, or FPGA. “Field programmable gate arrays are sort of the poor man’s ASIC,” he continues, referring to application-specific integrated circuits. “They allow you to customize logic on a chip without having to commit to a very expensive fabrication run of purely custom chips.”

An FPGA contains millions of logic primitives—gates and registers for manipulating and storing bits. The algorithms they compute might be simple, but optimizing their performance is not. It’s like managing a city’s traffic lights, and its layout, too. You want a signal to get from one point to another in time for something else to happen to it, and you want many signals to do this simultaneously within the span of each clock cycle. “The field programmable gate array takes parallelism to extremes,” Weintroub says. “And that’s how you get the performance. You have literally millions of things happening. And they all happen on a single clock edge. And the key is how you connect them all together, and in practice, it’s a very difficult problem.”

FPGA programmers use software to help them choreograph the chip’s components. The EHT scientists program it using a language called VHDL, which is compiled into a bitstream by Vivado, a software tool provided by the chip’s manufacturer, Xilinx. On top of the VHDL, they use MATLAB and Simulink software. Instead of writing VHDL firmware code directly, they visually move around blocks of functions and connect them together. Then they hit a button and out pops the FPGA bitstream.

But it doesn’t happen immediately. Compiling takes many hours, and you usually have to wait overnight. What’s more, finding bugs once it’s compiled is almost impossible, because there are no print statements. You’re dealing with real-time signals on a wire. “It shifts your energy to tons and tons of tests in simulation,” Vertatschitsch says. “It’s a really different regime, to thinking, ‘How do I make sure I didn’t put any bugs into that code, because it’s just so costly?’”

Data to disk

The next step in the digital backend is recording the data. Music files are typically recorded at 44,100 samples per second. Movies are generally 24 frames per second. Each telescope in the EHT array recorded 16 billion samples per second. How do you commit 32 gigabits—about a DVD’s worth of data—to disk every second? You use lots of disks. The EHT used Mark 6 data recorders, developed at Haystack and available commercially from Conduant Corporation. Each site used two recorders, which each wrote to 32 hard drives, for 64 disks in parallel.
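A back-of-the-envelope check makes the scale concrete. The 2-bit samples below are an assumption, but they are what make 16 billion samples per second work out to 32 gigabits:

```python
# Rough check of the recording rates quoted above, assuming 2-bit samples.
SAMPLES_PER_SECOND = 16e9   # 16 billion samples per second, per site
BITS_PER_SAMPLE = 2         # assumed 2-bit quantization
DISKS_PER_SITE = 64         # two Mark 6 recorders x 32 drives each

bits_per_second = SAMPLES_PER_SECOND * BITS_PER_SAMPLE          # 32 Gbit/s
gigabytes_per_second = bits_per_second / 8 / 1e9                # ~4 GB/s, about a DVD per second
mb_per_second_per_disk = bits_per_second / 8 / 1e6 / DISKS_PER_SITE

print(f"{bits_per_second / 1e9:.0f} Gbit/s total")
print(f"{gigabytes_per_second:.1f} GB/s to disk")
print(f"{mb_per_second_per_disk:.1f} MB/s per drive")
```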

In early tests, the drives frequently failed. The sites are on mountaintops, where the atmosphere is thinner and scatters less of the incoming light, but the thinner air interferes with the aerodynamics of the write head. “When a hard drive fails, you’re hosed,” Vertatschitsch says. “That’s your experiment, you know? Our data is super-precious.” Eventually they ordered sealed, helium-filled commercial hard drives. These drives never failed during the observation.

The humans were not so resistant to thin air. According to Vertatschitsch, “If you are the developer or the engineer that has to be up there and figure out why your code isn’t working… the human body does not debug code well at 15,000 feet. It’s just impossible. So, it became so much more important to have a really good user interface, even if the user was just going to be you. Things have to be simple. You have to automate everything. You really have to spend the time up front doing that, because it’s like extreme coding, right? Go to the top of a mountain and debug. Like, get out of here, man. That’s insane.”

Over the four nights of observation, the sites together collected about five petabytes of data. Uploading the data would have taken too long, so researchers FedExed the drives to Haystack and the Max Planck Institute for Radio Astronomy, in Bonn, Germany, for the next stage of processing. The exception was the drives from the South Pole Telescope (which couldn’t see M87* in the northern sky, but collected data for calibration and observations of other sources); those had to wait six months for the southern-hemisphere winter to end before they could be flown out.

Connecting the dots

Making an image of M87* is not like taking a normal photograph. Light was not collected on a sheet of film or on an array of sensors as in a digital camera. Each receiver collects only a one-dimensional stream of information. The two-dimensional image results from combining pairs of telescopes, the way we can localize sounds by comparing the volume and timing of audio entering our two ears. Once the drives arrived at Haystack and Max Planck, data from distant sites were paired up, or correlated.
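To get a feel for why pairing works, here is a toy sketch in Python, far removed from the real pipeline: the same weak signal is hidden in two noisy streams, and cross-correlating them recovers the offset at which they line up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example, not EHT code: the same weak signal is buried in stronger,
# independent noise at two "sites", offset by 250 samples.
n, true_offset = 4096, 250
common = rng.normal(size=n + true_offset)
site_a = common[:n] + 2.0 * rng.normal(size=n)
site_b = common[true_offset:true_offset + n] + 2.0 * rng.normal(size=n)

# Cross-correlate the two streams; the peak marks the relative delay
# at which the hidden common signal lines up.
lags = np.arange(-(n - 1), n)
xcorr = np.correlate(site_a - site_a.mean(), site_b - site_b.mean(), mode="full")
print("estimated offset:", lags[np.argmax(xcorr)])   # recovers the 250-sample offset
```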

Unlike the signal from a music radio station, most of the information at this point is noise, created by the instruments. “We’re working in a regime where all you hear is hiss,” Haystack’s Crew says. To extract the faint signal, called fringe, they use computers to try to line up the data streams from pairs of sites, looking for common signatures. The workhorse here is open-source software called DiFX, for Distributed FX, where F refers to Fourier transform and X refers to cross-multiplication. Before DiFX, data was traditionally recorded on tape and then correlated with special hardware. But about 15 years ago, Adam Deller, then a graduate student working at the Australian Long Baseline Array, was trying to finish his thesis when the correlator broke. So he began writing DiFX, which has since been widely expanded and adopted. Haystack and Max Planck each used Linux machines to coordinate DiFX on supercomputing clusters: Haystack used 60 nodes with 1,200 cores, and Max Planck used 68 nodes with 1,380 cores. The nodes communicate using Open MPI, an implementation of the Message Passing Interface.
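Stripped to its essentials, the FX recipe can be sketched in a few lines. This is only an illustration of the principle, not DiFX itself; the segment length, delay, and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def fx_correlate(x, y, nchan=256):
    """Toy FX correlation: FFT short segments of each stream (F),
    cross-multiply one spectrum by the conjugate of the other (X),
    and average the products over many segments."""
    nseg = min(len(x), len(y)) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for i in range(nseg):
        X = np.fft.fft(x[i * nchan:(i + 1) * nchan])
        Y = np.fft.fft(y[i * nchan:(i + 1) * nchan])
        acc += X * np.conj(Y)
    return acc / nseg                              # averaged cross-power spectrum

# Two noisy streams sharing a weak common signal, one delayed by 4 samples.
common = rng.normal(size=1_000_000)
stream_a = common[:-4] + 2.0 * rng.normal(size=len(common) - 4)
stream_b = common[4:] + 2.0 * rng.normal(size=len(common) - 4)

spectrum = fx_correlate(stream_a, stream_b)
lag_profile = np.abs(np.fft.ifft(spectrum))        # back to the lag (delay) domain
print("fringe peak at lag:", int(np.argmax(lag_profile)))   # the 4-sample delay
```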

Correlation is more than just lining up data streams. The streams must also be adjusted to account for things such as the sites’ locations and the Earth’s rotation. Lindy Blackburn, a radio astronomer at the Harvard-Smithsonian Center for Astrophysics, notes a couple of logistical challenges with VLBI. First, all the sites have to be working simultaneously, and they all need good weather. (In terms of clear skies, “2017 was a kind of miracle,” says Kazunori Akiyama, an astrophysicist at Haystack.) Second, the signal at each site is so weak that you can’t always tell right away if there’s a problem. “You might not know if what you did worked until months later, when these signals are correlated,” Blackburn says. “It’s a sigh of relief when you actually realize that there are correlations.”

Something in the air

Because most of the data on disk is random background noise from the instruments and environment, extracting the signal with correlation reduces the data 1,000-fold. But it’s still not clean enough to start making an image. The next step is calibration, together with a signal-extraction step called fringe-fitting. Blackburn says a main aim is to correct for turbulence in the atmosphere above each telescope. Light travels at a constant speed through a vacuum, but its speed changes in a medium like air. By comparing the signals from multiple antennas over a period of time, software can build models of the randomly changing atmospheric delay over each site and correct for it.
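The sketch below illustrates the idea in miniature; it stands in for neither HOPS nor CASA, and the random-walk phase model and numbers are invented. Solving for the slowly wandering phase over short intervals and rotating it out lets the signal average up coherently.

```python
import numpy as np

rng = np.random.default_rng(2)

# A constant "true" signal whose phase wanders because of a simulated
# atmospheric delay, plus measurement noise.
n, interval = 6000, 100
atmos_phase = np.cumsum(0.05 * rng.normal(size=n))     # slow random walk in phase
vis = np.exp(1j * atmos_phase) + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Averaging the raw samples smears the signal because the phase keeps drifting.
print("no correction:   |mean| =", round(abs(vis.mean()), 2))

# Solve for the phase on short intervals and rotate it out before averaging.
corrected = []
for i in range(0, n, interval):
    chunk = vis[i:i + interval]
    phase_solution = np.angle(chunk.mean())            # per-interval phase estimate
    corrected.append(chunk * np.exp(-1j * phase_solution))
print("phase-corrected: |mean| =", round(abs(np.concatenate(corrected).mean()), 2))
# The corrected average recovers nearly the full signal amplitude of 1.
```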

The classic calibration software is called AIPS, for Astronomical Image Processing System, created by the National Radio Astronomy Observatory. It was written 40 years ago, in Fortran, and is hard to maintain, says Chi-kwan Chan, an astronomer at the University of Arizona, but it was used by EHT because it’s a well-known standard. They also used two other packages. One is called HOPS, for Haystack Observatory Processing System, and was developed for astronomy and geodesy—the use of radio telescopes to measure movement not of celestial bodies but of the telescopes themselves, indicating changes in the Earth’s crust. The newest package is CASA, for Common Astronomy Software Applications.

Chan says the EHT team has made contributions even to the software it doesn’t maintain. EHT is the first time VLBI has been done at this scale—with this many heterogeneous sites and this much data. So some of the assumptions built into the standard software break down. For instance, the sites all have different equipment, and at some of them the signal-to-noise ratio is more uniform than at others. So the team sends bug reports to upstream developers and works with them to fix the code or relax the assumptions. “This is trying to push the boundary of the software,” Chan says.

Calibration is not a big enough job to need supercomputers, as correlation does, but it is too big for a workstation, so the team used the cloud. “Cloud computing is the sweet spot for analysis like fringe fitting,” Chan says. With calibration, the data is reduced a further 10,000-fold.

Put a ring on it

Finally, the imaging teams received the correlated and calibrated data. At this point no one was sure whether they’d see the “shadow” of the black hole—a dark area in the middle of a bright ring—or just a bright disk, or something unexpected, or nothing at all. Everyone had their own ideas. Because the computational analysis requires making many decisions—the data are compatible with an infinite number of final images, some more probable than others—the scientists took several steps to limit how much expectations could bias the outcome. One step was to create four independent teams and not let them share their progress during a seven-week processing period. Once they saw that they had obtained similar images—very close to the one now familiar to us—they rejoined forces to combine the best ideas, but still proceeded with three different software packages to ensure that the results were not affected by software bugs.

The oldest is DIFMAP. It relies on CLEAN, a technique created in the 1970s, when computers were slow. As a result, it’s computationally cheap, but requires lots of human expertise and interaction. “It’s a very primitive way to reconstruct sparse images,” says Akiyama, who helped create a new package specifically for the EHT, called SMILI. SMILI uses a more mathematically flexible technique called RML, for regularized maximum likelihood. Meanwhile, Andrew Chael, an astrophysicist now at Princeton, created another package based on RML, called eht-imaging. Akiyama and Chael both describe the relationship between SMILI and eht-imaging as a friendly competition.
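In miniature, the RML idea can be sketched as fitting an image to incomplete Fourier samples while a regularizer keeps the solution plausible. The example below is a toy, far simpler than SMILI or eht-imaging, with an invented 1-D scene and a simple smoothness regularizer standing in for the real choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# A 1-D "image", a forward model that samples only some Fourier components
# (sparse coverage), and an objective balancing data fidelity against a regularizer.
npix = 32
x_true = np.zeros(npix)
x_true[10:14] = 1.0                                      # two made-up "sources"
x_true[20:22] = 0.5

freqs = np.concatenate(([0], rng.choice(np.arange(1, npix), size=11, replace=False)))
F = np.exp(-2j * np.pi * np.outer(freqs, np.arange(npix)) / npix)   # sparse "uv" sampling
data = F @ x_true + 0.05 * (rng.normal(size=len(freqs)) + 1j * rng.normal(size=len(freqs)))

def objective(x, lam=0.1):
    chi2 = np.sum(np.abs(F @ x - data) ** 2)             # how well the image fits the data
    smoothness = np.sum(np.diff(x) ** 2)                 # one simple choice of regularizer
    return chi2 + lam * smoothness

result = minimize(objective, np.full(npix, 0.1), method="L-BFGS-B",
                  bounds=[(0, None)] * npix)             # brightness can't be negative
print(np.round(result.x, 2))   # a rough, smoothed take on the true structure
```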

In developing SMILI, Akiyama says he was in contact with medical imaging experts. In MRI and CT, just as in VLBI, software needs to reconstruct the most likely image from ambiguous data. All of these methods rely to some degree on assumptions. If you have some idea of what you’re looking at, it can help you see what’s really there. “The interferometric imaging process is kind of like detective work,” Akiyama says. “We are doing this in a mathematical way based on our knowledge of astronomical images.”

Still, users of each of the three EHT imaging pipelines didn’t just assume a single set of parameters—for things like expected brightness and image smoothness. Instead, each explored a wide variety. When you get an MRI, your doctor doesn’t show you a range of possible images, but that’s what the EHT team did in their published papers. “That is actually quite new in the history of radio astronomy,” Akiyama says. And to the team’s relief, all of them looked relatively similar, making the findings more robust.

By the time they had combined their results into one image, the calibrated data had been reduced by another factor of 1,000. Of the whole data analysis pipeline, “you could think of it as a progressive data compression,” Crew says. From petabytes to bytes. The image, though smooth, contains only about 64 pixels’ worth of independent information.

For the most part, the imaging algorithms could be run on laptops; the exception was the parameter surveys, in which images were constructed thousands of times with slightly different settings—those were done in the cloud. Each survey took about a day on 200 cores.
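Organizationally, a survey is just a grid of reconstructions run in parallel. A minimal sketch, with invented parameter names and a placeholder in place of the actual imaging step, might look like this:

```python
import itertools
from multiprocessing import Pool

# A grid of imaging settings to sweep. The names and values are invented
# for illustration; the real surveys swept pipeline-specific parameters.
grid = {
    "regularizer_weight": [0.01, 0.1, 1.0],
    "prior_size_uas": [30, 40, 50],
    "data_weighting": ["uniform", "natural"],
}

def reconstruct(params):
    # Placeholder for one full imaging run with these settings; in a real
    # survey this is where an RML or CLEAN reconstruction would happen.
    dummy_score = params["regularizer_weight"] * params["prior_size_uas"]
    return params, dummy_score

combos = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]

if __name__ == "__main__":
    with Pool(processes=4) as pool:       # the real surveys ran on ~200 cores
        for params, score in pool.map(reconstruct, combos):
            print(params, score)
```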

Images also relied on physics simulations of black holes. These were used in three ways. First, simulations helped test the three software packages. A simulation can produce a model object such as a black hole accretion disk, the light it emits, and what its reconstructed image should look like. If imaging software can take the (simulated) emitted light and reconstruct an image similar to that in the simulation, it will likely handle real data accurately. (They also tested the software on a hypothetical giant snowman in the sky.) Second, simulations can help constrain the parameter space that’s explored by the imaging pipelines. And third, once images are produced, simulations can help interpret them, letting scientists deduce things such as M87*’s mass from the size of the ring.

The simulation software Chan used has three steps. First, it simulates how plasma circles around a black hole, interacting with magnetic fields and curved spacetime. This is called general relativistic magnetohydrodynamics. But gravity also curves light, so he adds general relativistic ray tracing. Third, he turns the movies generated by the simulation into data comparable to what the EHT observes. The first two steps use C, and the last uses C++ and Python. Chael, for his simulations, uses a package called KORAL, written in C. All simulations are run on supercomputers.

Akiyama knew the calibrated data would be sent to the imaging team at 5pm on June 5, 2018. He’d prepared his imaging script and couldn’t sleep the night before. Within 10 minutes of getting the email on June 5, he had an image. It had a ring whose size was consistent with theoretical predictions. “I was so, so excited,” he says. However, he couldn’t share the image with anyone around him doing imaging, lest he bias them. Even within his team, people were working independently. For a few days, he worried he’d be the only one to get a ring. “The funny thing is I also couldn’t sleep that night,” he says. Full disclosure to all EHT teams would have to wait several weeks, and a public announcement would have to wait nearly a year.

Doing donuts

The image of M87* fostered collaboration, both before and after its creation, like few other scientific artefacts. Nearly all the software used at all stages is open source, and much of it is on GitHub. A lot of the software predates the EHT, and the project made use of existing telescopes—if the team had had to build the dishes, the operation would not have been possible.

The researchers learned some lessons about software development. When Chael began coding eht-imaging, he was the only one using it. “I’m in my office pushing changes, and for a while it was fine, because when a bug happened, it would only affect me,” he says. “But then at a certain point, a bunch of other people started using the code, and I started getting angry emails. So learning to develop tests and to be really rigorous in testing the code before I pushed it was really important for me. That was a transition that I had to undergo.”

Crew came to appreciate, even more than he already did, the importance of documentation. He was the software architect for the ALMA array, in Chile, which is the most important site in the EHT. ALMA has 66 dishes and does correlation on-site in real time using “school-bus-sized calculators,” he says. But when bringing ALMA online, he couldn’t get fringe. He tried everything before letting the problem sit for about eight months, then discovered a quirk in DiFX. Fed a table of data on Earth’s rotation, it used only the first five entries, not necessarily the five you wanted, so in March it was using the parameters for January. That bug, or feature, was not well documented. But “it was a very simple thing to fix,” Crew says, “and the fringe just popped right out, and it was just spectacular. And that’s the kind of wow factor where it’s like, you go from nothing’s working to wow. The M87 image was kind of the same thing. There was an awful lot of stuff that had to work.”

“Astronomy is one of the fields where there’s not a lot of money for engineering and development, and so there’s a very active open-source community sharing code and sharing the burden of developing good electronics,” Vertatschitsch says. She calls it a “really cool collaborative atmosphere.” And it’s for a larger cause. “The goal is to speed up the time to science,” she says. “That was some of the most fun engineering I’ve ever gotten to do.”

The collaboration is now open to anyone who wants to create an image of M87* at home. Not only is the software open source, but so is much of the data. And the researchers have packed the pipelines into Docker containers, so the analysis is reproducible. Chan says that, compared to other fields, astronomy is still quite behind in addressing the reproducibility crisis. “I think EHT is probably setting a good model.”

When it comes to software development for radio astronomy, “it’s pretty exotic stuff,” Crew says. “You don’t get rich doing it. And your day is full of a different kind of frustration than you have with the rest of the commercial environment. Nature poses us puzzles, and we have to stand on our heads and write code to do peculiar things to unravel these puzzles. And at the end of the day the reward is figuring out something about nature.”

He goes on: “It’s an exciting time to be alive. It really is.” That we can create images of black holes millions of light years away? “Well, that’s only a small piece. The fact that we can release an image and people all over the world come up with creative memes using it within hours.” For instance, Homer Simpson taking a bite out of M87* instead of a donut—“I mean, that’s just mind-boggling to me.”

This article is part of Behind the Code, the publication for developers, by developers. Discover more articles and videos by visiting Behind the Code!


Illustration by Blok
