Researchers from MIT's Computer Science and Artificial Intelligence Laboratory, the Harvard-Smithsonian Center for Astrophysics, and the MIT Haystack Observatory have developed a new algorithm that could help astronomers produce the first image of a black hole.
An artist's drawing of a black hole named Cygnus X-1, which formed when a large star caved in. This black hole pulls matter from the blue star beside it. Image: M. Weiss/NASA/CXC
The algorithm would stitch together data collected by radio telescopes scattered around the globe, under the auspices of an international collaboration called the Event Horizon Telescope. The project seeks, in effect, to turn the entire planet into one large radio telescope dish.
"Radio wavelengths come with a lot of advantages," says Katie Bouman, an MIT graduate student in electrical engineering and computer science, who led the development of the new algorithm. "Just like how radio frequencies will go through walls, they pierce through galactic dust. We would never be able to see into the center of our galaxy in visible wavelengths because there's too much stuff in between."
But because of their long wavelengths, radio waves also require large antenna dishes. The largest single radio-telescope dish in the world has a diameter of 1,000 feet, yet an image it produced of the moon, for example, would be blurrier than one seen through an ordinary backyard optical telescope.
"A black hole is very, very far away and very compact," Bouman says. "[Taking a picture of the black hole in the center of the Milky Way galaxy is] equivalent to taking an image of a grapefruit on the moon, but with a radio telescope. To image something this small means that we would need a telescope with a 10,000-kilometer diameter, which is not practical, because the diameter of the Earth is not even 13,000 kilometers."
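The dish size quoted above can be sanity-checked with the diffraction limit. This is a back-of-the-envelope sketch, assuming a 1.3-millimeter observing wavelength (typical for millimeter-wave VLBI, though not stated in the article) and a 12-centimeter grapefruit:

```python
# Rough diffraction-limit check of the ~10,000 km figure.
# Assumed values (not from the article): 1.3 mm wavelength, 12 cm grapefruit.
wavelength = 1.3e-3        # observing wavelength, meters
grapefruit = 0.12          # assumed grapefruit diameter, meters
moon_distance = 3.84e8     # Earth-moon distance, meters

theta = grapefruit / moon_distance   # target angular size, radians
dish = 1.22 * wavelength / theta     # Rayleigh-criterion dish diameter, meters

print(f"required dish diameter: {dish / 1e3:.0f} km")
```

The result lands in the thousands of kilometers, the same order of magnitude as the 10,000-kilometer figure Bouman cites.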
The Event Horizon Telescope project's solution is to coordinate measurements performed by radio telescopes at widely divergent locations. Currently, six observatories have signed up to join the project, with more likely to follow.
Even twice that many telescopes, however, would provide fairly sparse data as they approximate a 10,000-kilometer-wide antenna. Filling in the missing data is the purpose of algorithms like Bouman's. She will present her new algorithm, called CHIRP (Continuous High-resolution Image Reconstruction using Patch priors), at the Computer Vision and Pattern Recognition conference in June. She is joined on the conference paper by her advisor, Bill Freeman, a professor of electrical engineering and computer science, and by colleagues at MIT's Haystack Observatory and the Harvard-Smithsonian Center for Astrophysics, including Sheperd Doeleman, director of the Event Horizon Telescope project.
The Event Horizon Telescope uses a technique called interferometry, which combines the signals detected by pairs of telescopes so that the signals interfere with each other. Indeed, CHIRP could be applied to any imaging system that uses radio interferometry.
Usually, an astronomical signal reaches any two telescopes at slightly different times. Accounting for that difference is essential to extracting visual information from the signal. But the Earth's atmosphere can also slow radio waves down, exaggerating the differences in arrival time and throwing off the calculation on which interferometric imaging depends.
Bouman adopted a clever algebraic solution to this problem: if the measurements from three telescopes are multiplied, the extra delays caused by atmospheric noise cancel each other out. This does mean that each new measurement requires data from three telescopes rather than just two, but the increase in precision makes up for the loss of information.
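The cancellation can be seen directly in the algebra: each baseline's measurement picks up the difference of its two stations' phase errors, and around a closed triangle those differences sum to zero. A minimal numerical sketch (the specific visibility values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# True (noise-free) complex visibilities for baselines 1-2, 2-3, 3-1
# (arbitrary illustrative values).
V12, V23, V31 = np.exp(1j * np.array([0.4, -1.1, 0.7]))

# Unknown per-station atmospheric phase delays.
phi1, phi2, phi3 = rng.uniform(-np.pi, np.pi, size=3)

# Each observed baseline acquires the difference of its stations' errors.
O12 = V12 * np.exp(1j * (phi1 - phi2))
O23 = V23 * np.exp(1j * (phi2 - phi3))
O31 = V31 * np.exp(1j * (phi3 - phi1))

# In the triple product, the station errors sum to zero and cancel:
# (phi1 - phi2) + (phi2 - phi3) + (phi3 - phi1) = 0
print(np.isclose(O12 * O23 * O31, V12 * V23 * V31))  # True
```

Whatever the individual station delays are, the product of the three measurements is unaffected by them, which is why tripling up the telescopes recovers a usable quantity.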
Even with atmospheric noise filtered out, the measurements from just a handful of telescopes scattered around the globe are quite sparse; any number of possible images could fit the data equally well. So the next step is to assemble an image that both fits the data and meets certain expectations about what images look like. Here, too, Bouman and her colleagues made contributions.
The algorithm conventionally used to make sense of astronomical interferometric data assumes that an image is a collection of individual points of light, and it tries to find the points whose brightness and location best correspond to the data. Then the algorithm blurs together bright points near each other, to try to restore some continuity to the astronomical image.
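In spirit, this point-fitting-then-blurring procedure resembles greedy deconvolution. A toy 1-D sketch under assumed sizes and a Gaussian "beam" (this is an illustration of the general idea, not the production algorithm):

```python
import numpy as np

n = 64

# True sky: two point sources (hypothetical positions and brightnesses).
sky = np.zeros(n)
sky[15], sky[40] = 1.0, 0.7

# "Dirty beam": the blur each point source acquires in the observed image.
x = np.arange(n)
beam = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)

def blur_with_beam(img):
    # Circular convolution with the beam re-centered at index 0.
    return np.real(np.fft.ifft(np.fft.fft(img) * np.fft.fft(np.roll(beam, -n // 2))))

dirty = blur_with_beam(sky)

# Greedy point fitting: repeatedly take the brightest residual point,
# record a fraction of it, and subtract that point's beam response.
residual = dirty.copy()
points = np.zeros(n)
gain = 0.2
for _ in range(200):
    k = np.argmax(residual)
    amp = gain * residual[k]
    points[k] += amp
    residual -= amp * np.roll(np.roll(beam, -n // 2), k)
    if residual.max() < 1e-3:
        break

# Re-blur the recovered points slightly to restore continuity.
kernel = np.exp(-0.5 * np.arange(-3, 4) ** 2)
restored = np.convolve(points, kernel, mode="same")

print(np.argmax(points))  # 15, the brightest source's location
```

The recovered point list reproduces the source positions, and the final blur step is what gives the reconstruction its continuous appearance.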
To produce a more reliable image, CHIRP uses a model that is slightly more complex than individual points but still mathematically tractable. The model can be thought of as a rubber sheet covered with regularly spaced cones whose heights vary but whose bases all have the same diameter.
Fitting the model to the interferometric data is a matter of adjusting the heights of the cones, which can be zero for long stretches, corresponding to a flat sheet. Translating the model into a visual image is like draping plastic wrap over it: the plastic will be pulled tight between nearby peaks, but it will slope down the sides of the cones adjacent to flat regions. The altitude of the plastic wrap corresponds to the brightness of the image. Because that altitude varies continuously, the model preserves the image's natural continuity.
Of course, Bouman's cones are a mathematical abstraction, and the plastic wrap is a virtual "envelope" whose altitude is calculated computationally. In fact, mathematical objects known as splines, which curve smoothly like parabolas, mostly worked better than cones. But the basic idea was the same.
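One way to picture the cone-and-envelope idea is a 1-D sketch (a deliberate simplification of my own, not CHIRP's actual 2-D parameterization): evenly spaced triangular "cones" of equal base width whose heights are the adjustable parameters, with image brightness taken as an upper envelope over them. Here the envelope is approximated by a simple pointwise maximum rather than a true taut drape:

```python
import numpy as np

# 1-D illustration of the model: regularly spaced cones, equal base width,
# variable heights. All values below are made up for demonstration.
x = np.linspace(0.0, 10.0, 501)       # image coordinate
centers = np.arange(0.5, 10.0, 0.5)   # regular cone spacing
half_base = 0.75                      # every cone shares the same base radius

heights = np.zeros_like(centers)
heights[8:12] = [0.3, 1.0, 0.8, 0.2]  # a few nonzero cones; the rest are flat

def envelope(x, centers, heights, half_base):
    # Each cone contributes a triangular profile; brightness at x is the
    # tallest contribution, i.e. the height of the "plastic wrap" there.
    bumps = heights[:, None] * np.clip(
        1.0 - np.abs(x[None, :] - centers[:, None]) / half_base, 0.0, None
    )
    return bumps.max(axis=0)

brightness = envelope(x, centers, heights, half_base)
```

Because the envelope varies continuously with position, stretches of zero-height cones yield flat, dark regions while clustered tall cones yield smooth bright features, mirroring the continuity property described above.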
Finally, Bouman used a machine-learning algorithm to identify visual patterns that tend to recur in 64-pixel patches of real-world images, and she used those features to further refine her algorithm's image reconstructions. In separate experiments, she extracted patches from astronomical images and from snapshots of terrestrial scenes, but the choice of training data had little effect on the final reconstructions.
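A rough sketch of the patch-prior idea (the Gaussian model and synthetic training data here are my own illustration; Bouman's method learns a more sophisticated prior): fit a statistical model to small patches of training images, then use it to score how "image-like" a candidate patch is, favoring reconstructions made of plausible patches.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training images: random smooth fields stand in for real data.
def smooth_image(n=32):
    img = rng.normal(size=(n, n))
    return (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3.0

def patches(img, p=8):
    n = img.shape[0]
    return np.array([
        img[i:i + p, j:j + p].ravel()
        for i in range(0, n - p + 1, p)
        for j in range(0, n - p + 1, p)
    ])

train = np.vstack([patches(smooth_image()) for _ in range(50)])

# Fit a simple Gaussian patch prior (mean + regularized covariance).
mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False) + 1e-3 * np.eye(train.shape[1])
cov_inv = np.linalg.inv(cov)

def patch_score(patch):
    # Negative log-likelihood up to a constant: lower = more "image-like".
    d = patch.ravel() - mu
    return float(d @ cov_inv @ d)

smooth_patch = patches(smooth_image())[0]  # patch resembling training data
noisy_patch = rng.normal(size=64)          # pure white noise
print(patch_score(smooth_patch) < patch_score(noisy_patch))
```

A reconstruction algorithm can fold such a score into its objective, so that among all images consistent with the sparse measurements, it prefers those whose patches look like patches of real images.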
Bouman prepared a large database of synthetic astronomical images and the measurements they would yield at different telescopes, given random variations in atmospheric noise, thermal noise from the telescopes themselves, and other types of noise. Her algorithm was frequently better than its predecessors at reconstructing the original image from the measurements, and it tended to handle noise better. She has also made her test data available online for other researchers to use.
With the Event Horizon Telescope project, “there is a large gap between the needed high recovery quality and the little data available,” says Yoav Schechner, Professor of electrical engineering at Israel’s Technion, who was not part of the work. “This research aims to overcome this gap in several ways: careful modeling of the sensing process, cutting-edge derivation of a prior-image model, and a tool to help future researchers test new methods.”
"Suppose you want a high-resolution video of a baseball," he explains. "The nature of ballistic trajectory is prior knowledge about a ball's trajectory. In essence, the prior knowledge constrains the sought unknowns. Hence, the exact state of the ball in space-time can be well determined using sparsely captured data."
“The authors of this paper use a highly advanced approach to learn prior knowledge,” he says. “The application of this prior-model approach to event-horizon images is not trivial. The authors took on major effort and risk. They mathematically merge into a single optimization formulation a very different, complex sensing process and a learning-based image-prior model.”