Scientists from Caltech and the University of Southern California (USC) have reported the first application of quantum computing to a physics problem. Using quantum-compatible machine learning techniques, they developed a method of extracting a rare Higgs boson signal from copious amounts of noisy data.
Discovered in 2012 at the Large Hadron Collider, the Higgs boson is the particle proposed to endow elementary particles with mass. Unlike its conventional counterparts, the new quantum machine learning technique has been shown to perform well even with very small datasets.
Although physics is at the heart of quantum computing, until now quantum computing methods had not been used to solve problems of interest to physicists. In this study, the scientists succeeded in extracting meaningful information about Higgs particles by programming a quantum annealer (a quantum computer capable of running only optimization tasks) to sort through particle-measurement data containing errors. Maria Spiropulu, the Shang-Yi Ch’en Professor of Physics at Caltech, conceived the project and collaborated with Daniel Lidar, pioneer of the quantum machine learning methodology, Viterbi Professor of Engineering at USC, and a Distinguished Moore Scholar in Caltech’s divisions of Physics, Mathematics and Astronomy and of Engineering and Applied Science.
The quantum program seeks patterns within a dataset to tell meaningful data from junk. The researchers anticipate it will be useful for problems beyond high-energy physics as well. Details of the program, along with comparisons to existing methods, were reported in a paper published in the journal Nature on October 19, 2017.
A popular computing technique for classifying data is the neural network, which excels at extracting obscure patterns within a dataset. The patterns identified by neural networks, however, are difficult to interpret, because the classification process does not reveal how they were discovered. Techniques that lead to better interpretability are often more error-prone and less effective.
“Some people in high-energy physics are getting ahead of themselves about neural nets, but neural nets aren’t easily interpretable to a physicist,” said Joshua Job, a physics graduate student at USC, co-author of the paper, and guest student at Caltech. The new quantum program is “a simple machine learning model that achieves a result comparable to more complicated models without losing robustness or interpretability,” said Job.
With earlier methods, classification accuracy depends heavily on the size and quality of the training set, a manually sorted subset of the data. That is problematic for high-energy physics research, which revolves around rare events buried in immense amounts of noise. “The Large Hadron Collider generates a huge number of events, and the particle physicists have to look at small packets of data to figure out which are interesting,” said Job. The new quantum program “is simpler, takes very little training data, and could even be faster. We obtained that by including the excited states,” said Spiropulu.
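The Nature paper frames the classification task as combining many simple “weak classifiers” into one strong classifier by choosing binary on/off weights that minimize a quadratic cost, a form a quantum annealer can encode. The sketch below is a minimal, hypothetical illustration of that kind of encoding (the data, dimensions, and the regularization constant are invented for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is an event, each column holds the
# +1/-1 output of one weak classifier; y is the true event label.
n_events, n_weak = 200, 8
c = rng.choice([-1.0, 1.0], size=(n_events, n_weak))
y = rng.choice([-1.0, 1.0], size=n_events)

lam = 0.1  # regularization penalizing the number of classifiers switched on

# Quadratic Unconstrained Binary Optimization (QUBO) for binary weights
# w_i in {0, 1}: minimize || c @ w - y ||^2 + lam * sum(w).
Q = c.T @ c                       # quadratic couplings between classifiers
linear = -2.0 * c.T @ y + lam     # linear terms, folded into the diagonal
np.fill_diagonal(Q, np.diag(Q) + linear)

def qubo_energy(w, Q):
    """Energy of a binary weight vector (w_i^2 = w_i absorbs the linear part)."""
    return float(w @ Q @ w)

# Brute-force the small toy problem; on real problems the quantum annealer
# performs this minimization step.
best = min(
    (np.array(list(np.binary_repr(k, n_weak)), dtype=float)
     for k in range(2 ** n_weak)),
    key=lambda w: qubo_energy(w, Q),
)
print("selected weak classifiers:", np.nonzero(best)[0])
```

Because the output is simply a subset of named weak classifiers, the resulting model stays interpretable in the sense Job describes: a physicist can see exactly which physical variables drive the decision.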
In a quantum system, excited states carry excess energy, which introduces errors into the output. “Surprisingly, it was actually advantageous to use the excited states, the suboptimal solutions,” said Lidar.
“Why exactly that’s the case, we can only speculate. But one reason might be that the real problem we have to solve is not precisely representable on the quantum annealer. Because of that, suboptimal solutions might be closer to the truth,” said Lidar.
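One way to picture Lidar’s point: instead of trusting only the single lowest-energy (ground-state) solution, one can keep several low-energy “excited” solutions and average the classifiers they define, which can be more robust when the quadratic encoding is an imperfect stand-in for the real objective. A hypothetical sketch of that averaging step, again on invented toy data:

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(1)

# Toy weak-classifier outputs and labels (hypothetical, for illustration).
n_events, n_weak = 100, 6
c = rng.choice([-1.0, 1.0], size=(n_events, n_weak))
y = rng.choice([-1.0, 1.0], size=n_events)

# Quadratic cost for binary weights, with linear terms on the diagonal.
Q = c.T @ c
np.fill_diagonal(Q, np.diag(Q) - 2.0 * c.T @ y)

def energy(w):
    return float(w @ Q @ w)

# Enumerate every binary weight vector and sort by energy; an annealer
# returns samples concentrated near the bottom of this spectrum.
states = [np.array(bits, dtype=float) for bits in product([0, 1], repeat=n_weak)]
states.sort(key=energy)

# Average the ground state with a few excited states into a soft ensemble.
k = 5
w_avg = np.mean(states[:k], axis=0)
scores = c @ w_avg                          # continuous score per event
accuracy = np.mean(np.sign(scores + 1e-12) == y)
print(f"ensemble accuracy on training events: {accuracy:.2f}")
```

The averaged weights are no longer strictly 0 or 1, so the ensemble behaves like a soft vote among near-optimal classifier subsets rather than a single hard selection.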
Framing the problem so that a quantum annealer could process it was a considerable challenge, one successfully overcome by Alex Mott (PhD ’15), a former graduate student in Spiropulu’s group at Caltech who is now at DeepMind. “Programming quantum computers is fundamentally different from programming classical computers. It’s like coding bits directly. The entire problem has to be encoded at once, and then it runs just once as programmed,” explained Mott.
Despite these advances, the researchers do not claim that quantum annealers are superior. Current annealers are simply “not big enough to even encode physics problems difficult enough to demonstrate any advantage,” said Spiropulu.
“It’s because we’re comparing a thousand qubits—quantum bits of information—to a billion transistors,” said Jean-Roch Vlimant, a postdoctoral scholar in high-energy physics at Caltech. “The complexity of simulated annealing will explode at some point, and we hope that quantum annealing will also offer speedup,” said Vlimant.
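Simulated annealing, the classical counterpart Vlimant mentions, explores the same binary weight space by occasionally accepting uphill moves at a “temperature” that is gradually lowered. A generic sketch of the idea on a small random quadratic cost (this is a textbook-style illustration, not the paper’s implementation):

```python
import math
import random

import numpy as np

rng = np.random.default_rng(2)
random.seed(2)

# Small random symmetric QUBO matrix standing in for the
# classifier-selection problem.
n = 10
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2.0

def energy(w):
    return float(w @ Q @ w)

def simulated_annealing(Q, steps=5000, t_start=2.0, t_end=0.01):
    n = Q.shape[0]
    w = np.zeros(n)
    e = energy(w)
    best_w, best_e = w.copy(), e
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n)
        w[i] = 1.0 - w[i]              # propose flipping one bit
        e_new = energy(w)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                  # accept the move (sometimes uphill)
            if e < best_e:
                best_w, best_e = w.copy(), e
        else:
            w[i] = 1.0 - w[i]          # reject: flip the bit back
    return best_w, best_e

w, e = simulated_annealing(Q)
print("best energy found:", round(e, 3))
```

Each sweep touches one bit at a time, so the cost of a run grows with both problem size and the number of steps needed to cool reliably; this is the scaling Vlimant expects to “explode” for hard instances.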
The scientists are actively seeking further applications for the new quantum-annealing classification technique. “We were able to demonstrate a very similar result in a completely different application domain by applying the same methodology to a problem in computational biology,” said Lidar. “There is another project on particle-tracking improvements using such methods, and we’re looking for new ways to examine charged particles,” said Vlimant.
“The result of this work is a physics-based approach to machine learning that could benefit a broad spectrum of science and other applications,” said Spiropulu. “There is a lot of exciting work and discoveries to be made in this emergent cross-disciplinary arena of science and technology,” she added.
The study was supported by the United States Department of Energy, Office of High Energy Physics (Research Technology, Computational HEP); Fermi National Accelerator Laboratory; and the National Science Foundation. The research was also supported by the AT&T Foundry Innovation Centers through INQNET (INtelligent Quantum NEtworks and Technologies), a program for advancing quantum technologies.