What Is the Computational Power of the Universe?

These are the latest articles and videos I found most interesting.

  1. What Is the Computational Power of the Universe?
  2. Los Alamos works on a biologically realistic computer network
  3. Future Decoded Quantum Computing
  4. Race for quantum supremacy hits theoretical quagmire

What Is the Computational Power of the Universe?

Video by National Institute of Standards and Technology

Can a close look at the universe give us solutions to problems too difficult even for a planet-sized computer to solve? Physicist Stephen Jordan considers this question and more in our latest video.


Los Alamos works on a biologically realistic computer network

Video by Los Alamos National Lab

Neuroscientists and computer scientists call this field neuromimetic computing: building computers inspired by how the brain’s cerebral cortex works. Cortical processing relies on billions of small biological “switches,” called neurons, which store and process information in order to learn.

Using an approach called neural networks, researchers are developing computers that simulate neurons and their interconnections, so that the machines can learn about their surroundings, interpret data, and make predictions based on it.
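To make the idea concrete, here is a minimal sketch in Python (purely illustrative, not the Los Alamos software, and with made-up toy data): each simulated neuron is a weighted sum of its inputs passed through a nonlinearity, and learning means adjusting the interconnection weights to reduce prediction error.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 100 examples with 3 input features and a noisy target value.
    x = rng.normal(size=(100, 3))
    y = x @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=100)

    # One layer of 8 simulated "neurons" feeding a single output neuron.
    W1 = rng.normal(scale=0.1, size=(3, 8))
    W2 = rng.normal(scale=0.1, size=(8,))

    def forward(x):
        h = np.tanh(x @ W1)      # neuron activations
        return h @ W2, h         # prediction and hidden activations

    learning_rate = 0.05
    for step in range(500):
        pred, h = forward(x)
        err = pred - y
        # Adjust the interconnection weights by gradient descent (backpropagation by hand).
        W2 -= learning_rate * h.T @ err / len(x)
        W1 -= learning_rate * x.T @ ((err[:, None] * W2) * (1 - h ** 2)) / len(x)

    print("mean squared error after training:", np.mean((forward(x)[0] - y) ** 2))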

In practice, however, researchers attempting to simulate neural processing at anything close to the scale and complexity of the brain’s cortical circuits have been stymied by limitations on computer memory and computational power.

All that has changed with the new Trinity supercomputer at Los Alamos, which became fully operational in mid-2017. Trinity has unique capabilities designed for the stockpile stewardship mission, which includes highly complex nuclear simulations that have replaced the testing of nuclear weapons. That same capability lets Trinity take a fundamentally different approach to large-scale cortical simulations, enabling an unprecedented leap in the ability to model neural processing.

To test that capability on a limited-scale problem, computer scientists and neuroscientists at Los Alamos created a “sparse prediction machine” that runs on a neural network on Trinity. A sparse prediction machine is designed to work like the brain: researchers expose it to data—in this case, various videos of a car driving down a road—without labeling the data in any way. Then the program sorts through that data frame by frame, focuses on the important information, and develops a prediction about the car’s motion.

With Trinity’s power, the Los Alamos team simulates the way a brain handles information in its neurons but uses the fewest neurons at any given moment to explain the information at hand. That’s the “sparse” part, and it makes the brain very efficient—and, hopefully, a computer more efficient, too.
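A rough sketch of that “sparse” idea, under my own simplified assumptions rather than the actual Los Alamos model, is to explain each input with only a handful of active neurons at a time, for example by keeping just the k strongest responses:

    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs, n_neurons, k = 16, 64, 4        # only k neurons may be active per input

    # Random "receptive fields" for each neuron, stored as normalized columns.
    D = rng.normal(size=(n_inputs, n_neurons))
    D /= np.linalg.norm(D, axis=0)

    def sparse_code(x, D, k):
        """Greedy top-k coding: pick the k neurons that respond most strongly,
        then fit their activations to reconstruct the input."""
        responses = D.T @ x
        active = np.argsort(np.abs(responses))[-k:]
        coeffs, *_ = np.linalg.lstsq(D[:, active], x, rcond=None)
        code = np.zeros(D.shape[1])
        code[active] = coeffs
        return code

    x = rng.normal(size=n_inputs)             # stand-in for features of one video frame
    code = sparse_code(x, D, k)
    print("active neurons:", np.count_nonzero(code), "of", n_neurons)
    print("reconstruction error:", np.linalg.norm(x - D @ code))

Only a few of the 64 simulated neurons fire for any given input, which is the efficiency the Los Alamos team is after.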


Future Decoded Quantum Computing

Video by Microsoft Research

Learn about Microsoft’s unique approach to Quantum Computing. For more information, visit: https://www.microsoft.com/quantum


Race for quantum supremacy hits theoretical quagmire

Reprinted from Nature
by Philip Ball

It’s far from obvious how to tell whether a quantum computer can outperform a classical one, says Philip Ball.

Quantum supremacy might sound ominously like the denouement of the Terminator movie franchise, or a misguided political movement. In fact, it denotes the stage at which the capabilities of a quantum computer exceed those of any available classical computer. The term, coined in 2012 by quantum theorist John Preskill at the California Institute of Technology, Pasadena [1], has gained cachet because this point seems imminent. According to various quantum-computing proponents, it could happen before the end of the year.

But does the concept of quantum supremacy make sense? A moment’s thought reveals many problems. By what measure should a quantum computer be judged to outperform a classical one? For solving which problem? And how would anyone know the quantum computer has succeeded, if they can’t check with a classical one?

Computer scientists and engineers are rather more phlegmatic about the notion of quantum supremacy than excited commentators who foresee an impending quantum takeover of information technology. They see it not as an abrupt boundary but as a symbolic gesture: a conceptual tool on which to peg a discussion of the differences between the two methods of computation. And, perhaps, a neat advertising slogan.

Image: an IBM cryostat wired for a 50-qubit system. (Credit: IBM Research)

Magic number

Quantum computers manipulate bits of information according to the quantum rules that govern the behaviour of matter on the smallest scales. In this quantum world, information can be coded as quantum bits (qubits), physically composed of objects that represent binary 1s and 0s as quantum states. By keeping the qubits in a coherent quantum superposition of states – so that in effect their settings are correlated, rather than being independent as in the bits (transistors) of classical computer circuitry – it becomes possible to carry out some computations much more efficiently, and thus faster, with far fewer (qu)bits than on classical computers.
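A minimal way to see what a coherent superposition of correlated qubits means, and why simulating one classically gets expensive, is to track the full vector of amplitudes directly. The two-qubit toy below is my own sketch using standard textbook gates (Hadamard and CNOT), not any particular vendor's hardware or software:

    import numpy as np

    # Standard single- and two-qubit gates.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates a superposition
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                # correlates (entangles) the two qubits

    # Two qubits start in |00>; the state is a vector of 2**2 = 4 complex amplitudes.
    state = np.array([1, 0, 0, 0], dtype=complex)

    state = np.kron(H, np.eye(2)) @ state          # Hadamard on the first qubit
    state = CNOT @ state                           # then entangle it with the second

    print("amplitudes:", state)                    # (|00> + |11>) / sqrt(2)
    print("measurement probabilities:", np.abs(state) ** 2)

The amplitude vector doubles in length with every added qubit, which is the basic reason large quantum systems are hard to simulate on classical hardware.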

Both IBM and Google have already developed prototype quantum-computing devices. IBM has made a 5-qubit device available for public use as a cloud-based resource, and on 10 November it announced that it had made a 20-qubit device available for commercial users. Its computer scientists also reported on the same day that they had successfully tested a 50-qubit circuit. Google, too, is developing devices with 49–50 qubits on which its researchers hope to demonstrate quantum supremacy by the end of this year [2].

How could anyone know, though, that a quantum computer is genuinely doing something that is impossible for a classical one to do – rather than that they just haven’t yet found a classical algorithm that is clever enough to do the job? This is what makes quantum supremacy a theoretically interesting challenge: are there classes of problem for which it can be rigorously shown that quantum computing can do what classical cannot?

Among the favourite candidates are so-called sampling problems, in which effectively random bits are transformed into bits drawn from a predefined distribution. The Google team in Santa Barbara, California, led by John Martinis, has described an experimental procedure for implementing such a sampling scheme on a quantum computer, and has argued that at the 50-qubit level it could show quantum supremacy [2].
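As a toy illustration of what such a sampling task looks like (my own sketch, not Google's actual procedure), a small random circuit defines a probability distribution over output bitstrings, and the job is to draw samples from it. At this tiny size a classical simulation can both produce and check the samples:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4                                          # qubits; the state has 2**n amplitudes

    def random_unitary(dim):
        """Approximately Haar-random unitary from the QR decomposition of a
        complex Gaussian matrix."""
        z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        q, r = np.linalg.qr(z)
        return q * (np.diag(r) / np.abs(np.diag(r)))

    # Stand-in for a random circuit: one random unitary acting on all n qubits at once.
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                 # start in |0...0>
    state = random_unitary(2 ** n) @ state

    probs = np.abs(state) ** 2                     # the "predefined distribution"
    samples = rng.choice(2 ** n, size=10, p=probs)
    print([format(int(s), f"0{n}b") for s in samples])

For a handful of qubits the distribution can be written down exactly, which is what makes verification possible; the point of the supremacy proposal is that this check becomes infeasible as the qubit count and circuit depth grow.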

Because of that Google paper, 50 qubits has become something of an iconic number. That is why a recent preprint [3] from Edwin Pednault and co-workers at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, showing how, with enough ingenuity, some 49-qubit problems can be simulated classically, has been interpreted in some news reports as a challenge to Google's aim to demonstrate quantum supremacy with only 50 qubits.
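Some back-of-the-envelope arithmetic (my own, not from the preprint) shows why brute-force simulation stalls near this size: a full state vector holds 2^n complex amplitudes, so memory doubles with each added qubit and reaches petabytes around 49 or 50 qubits, which is why simulating such circuits classically takes ingenuity rather than simply storing the whole state.

    # Memory for a brute-force state-vector simulation: 2**n complex amplitudes,
    # at roughly 16 bytes each for double precision.
    BYTES_PER_AMPLITUDE = 16

    for n in (30, 40, 49, 50):
        gib = (2 ** n) * BYTES_PER_AMPLITUDE / 2 ** 30
        print(f"{n} qubits: {gib:,.0f} GiB of amplitudes")
    # 49 qubits works out to about 8 million GiB (roughly 8 PiB), far beyond the
    # memory of any single machine.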

It’s all about depth

But it’s not really that. Quantum-computing experts are now finding themselves obliged to repeat a constant refrain: it’s not just about the number of qubits. One of the main measures of the power of a quantum circuit is its so-called depth: in effect, how many logical operations (‘gates’) can be implemented in a system of qubits before their coherence decays, at which point errors proliferate and further computation becomes impossible. How the qubits are connected also matters. So the true measure of the power of a quantum circuit is a combination of factors, which IBM researchers have called the “quantum volume”.
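A crude way to see why depth and error rates matter as much as qubit count (an illustrative toy model of my own, not IBM's quantum-volume metric): if each gate succeeds with probability one minus its error rate, the chance of an error-free run decays exponentially with the total number of gates, which caps the usable depth.

    import math

    def max_useful_depth(n_qubits, gate_error, target_fidelity=0.5):
        """Circuit layers (roughly one gate per qubit per layer) before the
        probability of an error-free run drops below target_fidelity."""
        return int(math.log(target_fidelity) / (n_qubits * math.log(1 - gate_error)))

    for err in (0.01, 0.005, 0.001):
        print(f"gate error {err}: ~{max_useful_depth(50, err)} useful layers on 50 qubits")

Under this toy model, a 50-qubit device with one-per-cent gate errors runs out of coherence after only a layer or two, which is why the true measure of power combines qubit count, achievable depth, and connectivity rather than qubit count alone.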

This means that the extent to which a quantum-computational task is challenging to perform classically depends also on the algorithmic depth, not just on how many qubits you have to throw at it. Martinis says that the IBM paper is concerned only with small-depth problems, so it’s not so surprising that a classical solution still exists at the 49-qubit level. “We at Google are well aware that small-depth circuits are easier to classically compute”, he says. “It is an issue we covered in our original paper.”

Scott Aaronson, a computer scientist at the University of Texas at Austin, agrees that the IBM work doesn’t obviously put quantum supremacy further out of reach. “It is an excellent paper, which sets a new record for the classical simulation of generic quantum circuits,” he writes – but “it does not undercut the rationale for quantum supremacy experiments.”

Indeed, he says, the truth is almost the opposite: the paper shows that it’s “possible to simulate 49-qubit circuits using a classical computer, [which] is a precondition for Google’s planned quantum supremacy experiment, because it’s the only way we know to check such an experiment’s results.” In essence, the IBM paper shows how to verify the quantum result right up to the edge of what is feasible – so computer scientists and engineers can be confident that things are OK when they go beyond it. The goal, Aaronson says, can be likened to “get[ting] as far as you can up the mountain, conditioned on people still being able to see you from the base.”

These views seem to sit comfortably with the IBM team’s own perspective on their work. “I think the appropriate conclusion to draw from the simulation methods we have developed is that quantum supremacy should properly be viewed as a matter of degree, and not as an absolute threshold,” says Pednault. “I, along with others, prefer to use the term ‘quantum advantage’ to emphasize this perspective.”

Theorist Jay Gambetta at IBM agrees that for such reasons, quantum supremacy might not mean very much. “I don’t believe that quantum supremacy represents a magical milestone that we will reach and declare victory,” he says. “I see these ‘supremacy’ experiments more as a set of benchmarking experiments to help develop quantum devices.”

In any event, demonstrating quantum supremacy, says Pednault, “should not be misconstrued as the definitive moment when quantum computing will do something useful for economic and societal impact. There is still a lot of science and hard work to do.”

Which, of course, is just applied science as normal. The idea of quantum supremacy sets a nice theoretical puzzle, but says little about what quantum computers might ultimately do for society.

Nature

doi:10.1038/nature.2017.22993

Correction

An earlier version of this story erroneously stated that IBM had created a 20-qubit device for public use. The 20-qubit device is available only to commercial users; IBM does, however, have a 5-qubit device for public use.

References

1. Preskill, J. Preprint at https://arxiv.org/abs/1203.5813 (2012).
2. Neill, C. et al. Preprint at https://arxiv.org/abs/1709.06678 (2017).
3. Pednault, E. et al. Preprint at https://arxiv.org/abs/1710.05867 (2017).
