
Squeezing Light to Develop Highly Error-Tolerant Quantum Computers

A theoretical approach to quantum computing that is 10 billion times more tolerant of errors than previous models has been proposed by researchers from Hokkaido University and Kyoto University.

A new theoretical model, which involves squeezing light by just the right amount to accurately transmit information using subatomic particles, is bringing us closer to a new era of computing. (Credit: Hokkaido University)

The technology brings us closer to building quantum computers that harness the distinctive characteristics of subatomic particles to transfer, process, and store extremely large amounts of complex information.

Quantum computing could overcome challenges in processing huge amounts of information, for instance modeling complex chemical processes, far more effectively and rapidly than modern computers.

Existing computers store data by encoding it into “bits.” A bit can be in one of two states: 0 or 1. Researchers have been looking for ways to use subatomic particles as “quantum bits,” which can exist not just in two distinct states but in superpositions of both at once, to store and process far larger amounts of information. Quantum bits are the building blocks of quantum computers.
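As a rough illustration of the difference, here is a minimal Python sketch, written for this article rather than taken from the study: a quantum bit is described by two complex amplitudes rather than a single binary value.

    import numpy as np

    # A classical bit is either 0 or 1. A qubit's state is a pair of
    # complex amplitudes (a, b) for the basis states 0 and 1, normalized
    # so that |a|^2 + |b|^2 = 1; it can hold both values in superposition.
    qubit = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition of 0 and 1

    # Measurement collapses the qubit: each outcome occurs with probability
    # equal to the squared magnitude of its amplitude.
    print(np.abs(qubit) ** 2)  # [0.5 0.5]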

One such means is to harness the intrinsic properties of photons, the particles of light, for instance by encoding information as quantum bits in a light beam through digitizing patterns of the electromagnetic field. During quantum computation, however, the encoded information can leak from the light waves, causing errors to accumulate.
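One widely studied encoding of this kind is the Gottesman-Kitaev-Preskill (GKP) code; the article does not name the scheme the researchers use, so it is mentioned here only as background. It writes the logical 0 and 1 of a quantum bit as regularly spaced “combs” of values of one quadrature q of the field:

    |0_L⟩ ∝ Σ_n |q = 2n√π⟩        |1_L⟩ ∝ Σ_n |q = (2n+1)√π⟩        (n ranging over the integers)

A noise-induced shift in q larger than √π/2 is decoded as the wrong logical value, which is one concrete way encoded information can be lost from a light wave.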

Researchers have been trying to “squeeze” light to minimize this information loss. Squeezing is a process that suppresses the tiny quantum-level fluctuations, termed noise, in one property of the electromagnetic field, at the unavoidable cost of increasing them in the complementary property.

This noise introduces a degree of uncertainty into the phase and amplitude of the electromagnetic field. Squeezing is therefore an effective tool for implementing quantum computers optically, although its current usage is not sufficient on its own.
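For reference, the balance between these two uncertainties is set by Heisenberg's uncertainty relation for the field's amplitude (q) and phase (p) quadratures, standard quantum optics rather than a result of this study:

    Δq · Δp ≥ ħ/2

A squeezed state pushes one uncertainty below the vacuum level while enlarging the other, Δq = e^(−r) √(ħ/2) and Δp = e^(+r) √(ħ/2), where r is the squeezing parameter (commonly quoted in decibels as 10 log₁₀ e^(2r)).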

Akihisa Tomita, an applied physicist at Hokkaido University, and his team have proposed a method that uses this approach to drastically reduce errors. They reported their findings in the journal Physical Review X.

They developed a theoretical model that uses both the properties of quantum bits and the modes of the electromagnetic field in which they exist. In this approach, squeezing the light makes it possible to remove error-prone quantum bits when quantum bits cluster together.
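To see why stronger squeezing suppresses errors, here is a minimal Monte Carlo sketch in Python, assuming the GKP-style comb encoding described above; it is our illustration, not the authors' code. Gaussian quadrature noise causes a logical error whenever it pushes the state past half the comb spacing:

    import numpy as np

    rng = np.random.default_rng(0)

    def logical_error_rate(squeezing_db, trials=200_000):
        # Squeezing shrinks the quadrature noise variance below the
        # vacuum level (normalized here to 1/2).
        variance = 0.5 * 10 ** (-squeezing_db / 10)
        noise = rng.normal(0.0, np.sqrt(variance), trials)
        # Decoding rounds to the nearest comb tooth; a shift beyond
        # sqrt(pi)/2 lands on the wrong logical value.
        return np.mean(np.abs(noise) > np.sqrt(np.pi) / 2)

    for db in (3, 6, 10):
        print(f"{db} dB of squeezing -> error rate {logical_error_rate(db):.5f}")

Even this toy model shows the error rate falling steeply as squeezing increases; the model described in the article goes further, removing quantum bits flagged as error-prone when quantum bits cluster together.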

This model is 10 billion times more tolerant of errors than current experimental methods, meaning it can tolerate up to one error in every 10,000 calculations.

“The approach is achievable using currently available technologies, and could further advance developments in quantum computing research.”

Akihisa Tomita, Hokkaido University
