One way computers “think” is by scrutinizing correlations within large data sets. An international team of researchers has demonstrated that quantum computers can perform such analysis faster than classical computers, and for a broader array of data types than was previously anticipated.
The team's “quantum linear system algorithm” was described in the journal Physical Review Letters on February 2, 2018. In the years to come, the algorithm could help tackle problems as diverse as social networks, commodities pricing, and chemical structures.
The previous quantum algorithm of this kind applied to a very specific type of problem. We need an upgrade if we want to achieve a quantum speedup for other data.
Zhikuan Zhao, Author
This is precisely what Zhao has presented, in collaboration with Anupam Prakash from the Centre for Quantum Technologies, National University of Singapore, and Leonard Wossnig, then at ETH Zurich and the University of Oxford. Zhao is a PhD student at the Singapore University of Technology and Design.
The first quantum linear system algorithm was put forward in 2009 by a different team of researchers. That algorithm kick-started research into quantum forms of machine learning, a branch of artificial intelligence.
A linear system algorithm works on a large matrix of data. For instance, a trader may be trying to predict the future price of goods. The matrix might capture historical data about price changes over time, along with data about features that could have influenced those prices (for example, currency exchange rates). The algorithm then calculates how strongly each feature is correlated with another by “inverting” the matrix. That information can then be used to extrapolate into the future.
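The classical counterpart of this procedure can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the team's algorithm: the feature matrix, the prices, and the choice of a least-squares solve are all assumptions made for the example.

```python
import numpy as np

# Hypothetical matrix: each row is a past trading day, each column a
# feature that may have influenced a commodity's price (e.g. two
# currency exchange rates).
A = np.array([[1.10, 0.85],
              [1.12, 0.80],
              [1.08, 0.90],
              [1.15, 0.78]])

# Observed prices on those days (also made up).
b = np.array([101.0, 103.5, 99.0, 105.2])

# Solving the linear system A x = b (here in the least-squares sense)
# effectively "inverts" the matrix to find how strongly each feature
# relates to the price.
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# Extrapolate: predict the price for a new set of feature values.
new_features = np.array([1.11, 0.82])
predicted_price = new_features @ x
```

The quantum algorithms discussed in the article aim to accomplish the equivalent of this solve step far faster once the matrix becomes very large.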
“There is a lot of computation involved in analyzing the matrix. When it gets beyond say 10,000 by 10,000 entries, it becomes hard for classical computers,” explained Zhao. This is because the number of computational steps grows rapidly with the number of elements in the matrix: doubling the matrix's size makes the computation eight times longer.
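That eight-fold jump follows from the roughly cubic cost of dense classical solvers. A quick sketch, using n³ as a simplified cost model (constants and lower-order terms are omitted):

```python
# Dense classical linear solvers take on the order of n**3 steps for an
# n-by-n matrix, so doubling n multiplies the work by 2**3 = 8.
def solve_steps(n: int) -> int:
    """Rough step count for a dense n x n solve (illustrative model only)."""
    return n ** 3

ratio = solve_steps(2000) // solve_steps(1000)  # doubling the size -> 8x the work
```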
The algorithm developed in 2009 could cope better with larger matrices, but only when the data in them is “sparse.” In such cases there are limited relationships among the elements, which is often not true of real-world data.
Zhao, Prakash, and Wossnig have put forward a new algorithm that is faster than both the classical and the previous quantum versions, without restrictions on the kind of data it runs on.
As a rough guide, for a 10,000-by-10,000 matrix the classical algorithm would take on the order of a trillion computational steps, the first quantum algorithm tens of thousands of steps, and the new quantum algorithm just hundreds of steps. The algorithm relies on a technique known as quantum singular value estimation.
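The gap between those step counts can be reproduced with back-of-envelope cost models. These formulas are illustrative assumptions chosen to match the article's orders of magnitude (n³ for the dense classical solve, roughly linear in n for the 2009 algorithm on dense data, polylogarithmic in n for the new one); the real complexities carry condition-number and precision factors omitted here.

```python
import math

n = 10_000  # matrix dimension from the article's example

# Illustrative cost models (constants and condition-number factors omitted):
classical_steps = n ** 3                    # dense classical solve: ~a trillion steps
first_quantum_steps = n                     # 2009 algorithm on dense data: ~tens of thousands
new_quantum_steps = int(math.log2(n)) ** 2  # new algorithm: polylogarithmic, ~hundreds
```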
The previous quantum linear system algorithm has been demonstrated a few times in proof-of-concept runs on small-scale quantum computers. Zhao and his colleagues hope to work with an experimental group on a proof-of-concept demonstration of their algorithm as well. They also want to carry out a full analysis of the effort needed to implement the algorithm, examining the likely overhead costs.
Demonstrating a genuine quantum advantage over classical algorithms will require bigger quantum computers. Zhao anticipates that “We're maybe looking at three to five years in the future when we can actually use the hardware built by the experimentalists to do meaningful quantum computation with application in artificial intelligence.”