These are the latest articles and videos I found most interesting.
 What Is the Computational Power of the Universe?
 Los Alamos works on a biologically realistic computer network
 Future Decoded Quantum Computing
 Race for quantum supremacy hits theoretical quagmire
What Is the Computational Power of the Universe?
Video by National Institute of Standards and Technology
Can a close look at the universe give us solutions to problems too difficult even for a planet-sized computer to solve? Physicist Stephen Jordan considers this question and more in our latest video.
Los Alamos works on a biologically realistic computer network
Video by Los Alamos National Lab
Neuroscientists and computer scientists call this field neuromimetic computing: building computers inspired by how the brain’s cerebral cortex works. Cortical processing relies on billions of small biological “switches,” called neurons, which store and process information in order to learn.
Using an approach called neural networks, researchers are developing computers that simulate neurons and their interconnections. Such computers can then learn about their surroundings, interpret data, and make predictions based on it.
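The idea can be sketched in a few lines. This is a minimal illustration only, with nothing taken from the Los Alamos code: a tiny two-layer network of simulated “neurons” adjusts its interconnection weights until it can predict the XOR pattern from examples.

```python
import numpy as np

# Minimal sketch (not the Los Alamos code): a two-layer network of
# simulated "neurons" learns to predict XOR from example data.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1 = rng.normal(size=(2, 8))        # input -> hidden "interconnections"
W2 = rng.normal(size=(8, 1))        # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)             # hidden neurons fire
    return h, sigmoid(h @ W2)       # network's prediction

_, out0 = forward(X)
initial_loss = np.mean((out0 - y) ** 2)

for _ in range(5000):               # learning: nudge weights to reduce error
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    W1 -= X.T @ d_h

_, out = forward(X)
final_loss = np.mean((out - y) ** 2)
print(initial_loss, final_loss)     # prediction error shrinks with training
```

The real systems differ in scale and in learning rule, but the loop is the same shape: predict, compare with reality, adjust the connections.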
In practice, however, researchers attempting to simulate neural processing at anything close to the scale and complexity of the brain’s cortical circuits have been stymied by limitations on computer memory and computational power.
All that has changed with the new Trinity supercomputer at Los Alamos, which became fully operational in mid-2017. Trinity has unique capabilities designed for the stockpile stewardship mission, which includes highly complex nuclear simulations that have replaced the testing of nuclear weapons. All this capability means Trinity allows a fundamentally different approach to large-scale cortical simulations, enabling an unprecedented leap in the ability to model neural processing.
To test that capability on a limited-scale problem, computer scientists and neuroscientists at Los Alamos created a “sparse prediction machine” that runs on a neural network on Trinity. A sparse prediction machine is designed to work like the brain: researchers expose it to data—in this case, various videos of a car driving down a road—without labeling the data in any way. Then the program sorts through that data frame by frame, focuses on the important information, and develops a prediction about the car’s motion.
With Trinity’s power, the Los Alamos team simulates the way a brain handles information in its neurons but uses the fewest neurons at any given moment to explain the information at hand. That’s the “sparse” part, and it makes the brain very efficient—and, hopefully, a computer more efficient, too.
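The “fewest neurons to explain the information at hand” idea is essentially sparse coding. A minimal, assumed illustration (not the Trinity sparse prediction machine) using greedy matching pursuit: a signal is explained by activating only as many dictionary “neurons” as needed.

```python
import numpy as np

# Hedged sketch of sparsity (not the Trinity code): explain a signal using
# as few dictionary atoms ("neurons") as possible, via matching pursuit.
rng = np.random.default_rng(1)
D = rng.normal(size=(16, 50))
D /= np.linalg.norm(D, axis=0)        # 50 unit-norm atoms, 16-dim signals

truth = np.zeros(50)
truth[[3, 27]] = [2.0, -1.5]          # the signal truly uses just 2 atoms
x = D @ truth

code = np.zeros(50)                   # sparse activations, mostly zero
residual = x.copy()
for _ in range(2):                    # activate only the fewest atoms needed
    k = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
    a = D[:, k] @ residual
    code[k] += a
    residual -= a * D[:, k]           # remove what that atom explains

print(np.count_nonzero(code), np.linalg.norm(residual))
```

Most entries of `code` stay at zero, which is what makes the representation cheap: only the handful of active neurons have to be stored and updated at any moment.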
Future Decoded Quantum Computing
Learn about Microsoft’s unique approach to Quantum Computing. For more information, visit: https://www.microsoft.com/quantum
Race for quantum supremacy hits theoretical quagmire
Reprinted from Nature
by Philip Ball
It’s far from obvious how to tell whether a quantum computer can outperform a classical one, says Philip Ball.
Quantum supremacy might sound ominously like the denouement of the Terminator movie franchise, or a misguided political movement. In fact, it denotes the stage at which the capabilities of a quantum computer exceed those of any available classical computer. The term, coined in 2012 by quantum theorist John Preskill at the California Institute of Technology, Pasadena^{1}, has gained cachet because this point seems imminent. According to various quantum-computing proponents, it could happen before the end of the year.
But does the concept of quantum supremacy make sense? A moment’s thought reveals many problems. By what measure should a quantum computer be judged to outperform a classical one? For solving which problem? And how would anyone know the quantum computer has succeeded, if they can’t check with a classical one?
Computer scientists and engineers are rather more phlegmatic about the notion of quantum supremacy than excited commentators who foresee an impending quantum takeover of information technology. They see it not as an abrupt boundary but as a symbolic gesture: a conceptual tool on which to peg a discussion of the differences between the two methods of computation. And, perhaps, a neat advertising slogan.
Magic number
Quantum computers manipulate bits of information according to the quantum rules that govern the behaviour of matter on the smallest scales. In this quantum world, information can be coded as quantum bits (qubits), physically composed of objects that represent binary 1s and 0s as quantum states. By keeping the qubits in a coherent quantum superposition of states – so that in effect their settings are correlated, rather than being independent as in the bits (transistors) of classical computer circuitry – it becomes possible to carry out some computations much more efficiently, and thus faster, with far fewer (qu)bits, than on classical computers.
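The superposition-and-correlation idea can be seen in a two-qubit statevector toy. This is an assumed illustration, not IBM or Google code: two qubits placed in the entangled superposition (|00⟩ + |11⟩)/√2 give perfectly correlated readouts.

```python
import numpy as np

# Toy statevector sketch (illustration only): a Hadamard gate puts qubit 0
# into superposition, then a CNOT correlates qubit 1 with it.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips qubit 1 if qubit 0 is 1

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = np.kron(H, np.eye(2)) @ state          # superpose qubit 0
state = CNOT @ state                           # correlate the settings

probs = np.abs(state) ** 2                     # measurement probabilities
for bits, p in zip(["00", "01", "10", "11"], probs):
    print(bits, round(p, 3))                   # only 00 and 11 occur, 50/50
```

The qubits are no longer independent: a measurement never yields 01 or 10, which is the kind of correlation that classical bits in independent transistors cannot reproduce.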
Both IBM and Google have already developed prototype quantum-computing devices. IBM has made a 5-qubit device available for public use as a cloud-based resource and on 10 November it announced that it had made a 20-qubit device available for commercial users. Its computer scientists also reported on the same day that they had successfully tested a 50-qubit circuit. Google, too, is developing devices with 49–50 qubits on which its researchers hope to demonstrate quantum supremacy by the end of this year^{2}.
How could anyone know, though, that a quantum computer is genuinely doing something that is impossible for a classical one to do – rather than that they just haven’t yet found a classical algorithm that is clever enough to do the job? This is what makes quantum supremacy a theoretically interesting challenge: are there classes of problem for which it can be rigorously shown that quantum computing can do what classical cannot?
Among the favourite candidates are so-called sampling problems, in which, in effect, random bits are transformed into bits that come from a predefined distribution. The Google team in Santa Barbara, California, led by John Martinis, has described an experimental procedure for implementing such a sampling scheme on a quantum computer, and has argued that at the 50-qubit level it could show quantum supremacy^{2}.
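At miniature scale, a sampling task of this flavour looks like the sketch below (a hedged 3-qubit toy, assumed for illustration; the classically hard regime is near 50 qubits): apply a random circuit, represented here as a single random unitary, then draw bitstrings from the resulting output distribution.

```python
import numpy as np

# Hedged miniature of random-circuit sampling (illustration only, far from
# 50 qubits): random bits in, bitstrings from a predefined distribution out.
n = 3
rng = np.random.default_rng(42)

# A Haar-like random unitary via QR decomposition of a complex Gaussian.
A = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
U, _ = np.linalg.qr(A)

state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                         # start in |000>
state = U @ state                      # apply the random "circuit"

probs = np.abs(state) ** 2             # the circuit's output distribution
probs /= probs.sum()                   # guard against floating-point drift

samples = rng.choice(2**n, size=5, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```

At 3 qubits the statevector has 8 entries and this is trivial; at 50 qubits it has 2^50 ≈ 10^15 entries, which is why classically computing the distribution, rather than sampling it on quantum hardware, becomes the bottleneck.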
Because of this paper, 50 qubits has become something of an iconic number. That’s why a recent preprint^{3} from Edwin Pednault and co-workers at IBM’s Thomas J. Watson Research Center in Yorktown Heights, New York, showing how, with enough ingenuity, some 49-qubit problems can be simulated classically, has been interpreted in some news reports as a challenge to Google’s aim to demonstrate quantum supremacy with only 50 qubits.
It’s all about depth
But it’s not really that. Quantum-computing experts are now finding themselves obliged to repeat a constant refrain: it’s not just about the number of qubits. One of the main measures of the power of a quantum circuit is its so-called depth: in effect, how many logical operations (‘gates’) can be implemented in a system of qubits before their coherence decays, at which point errors proliferate and further computation becomes impossible. How the qubits are connected also matters. So the true measure of the power of a quantum circuit is a combination of factors, which IBM researchers have called the “quantum volume”.
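As a back-of-envelope sketch with assumed numbers (not real device specs), coherence time and gate error both bound the usable depth, and a quantum-volume-style figure in the spirit of IBM’s metric combines qubit count with that achievable depth:

```python
# Back-of-envelope sketch with assumed numbers (not a real device spec):
# coherence and gate error limit circuit depth; a quantum-volume-style
# metric combines qubit count with achievable depth.
coherence_time_us = 50.0    # assumed qubit coherence time (microseconds)
gate_time_us = 0.1          # assumed two-qubit gate duration

# How many gate layers fit before coherence decays.
depth_limit = int(coherence_time_us / gate_time_us)

n_qubits = 50
error_per_gate = 0.005      # assumed two-qubit gate error rate

# Rough depth before accumulated errors dominate across all qubits.
effective_depth = int(1 / (n_qubits * error_per_gate))

# Quantum-volume-style figure: the smaller of width and depth sets the
# size of circuit the device can actually run.
quantum_volume_like = min(n_qubits, effective_depth) ** 2
print(depth_limit, effective_depth, quantum_volume_like)
```

With these assumed numbers, 50 qubits buy an effective depth of only about 4 layers before errors dominate, illustrating why raw qubit count alone says little about a device’s power.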
This means that the extent to which a quantum-computational task is challenging to perform classically depends also on the algorithmic depth, not just on how many qubits you have to throw at it. Martinis says that the IBM paper is concerned only with small-depth problems, so it’s not so surprising that a classical solution still exists at the 49-qubit level. “We at Google are well aware that small-depth circuits are easier to classically compute”, he says. “It is an issue we covered in our original paper.”
Scott Aaronson, a computer scientist at the University of Texas at Austin, agrees that the IBM work doesn’t obviously put quantum supremacy further out of reach. “It is an excellent paper, which sets a new record for the classical simulation of generic quantum circuits,” he writes – but “it does not undercut the rationale for quantum supremacy experiments.”
Indeed, he says, the truth is almost the opposite: the paper shows that it’s “possible to simulate 49-qubit circuits using a classical computer, [which] is a precondition for Google’s planned quantum supremacy experiment, because it’s the only way we know to check such an experiment’s results.” In essence, the IBM paper shows how to verify the quantum result right up to the edge of what is feasible – so computer scientists and engineers can be confident that things are OK when they go beyond it. The goal, Aaronson says, can be likened to “get[ting] as far as you can up the mountain, conditioned on people still being able to see you from the base.”
These views seem to sit comfortably with the IBM team’s own perspective on their work. “I think the appropriate conclusion to draw from the simulation methods we have developed is that quantum supremacy should properly be viewed as a matter of degree, and not as an absolute threshold,” says Pednault. “I, along with others, prefer to use the term ‘quantum advantage’ to emphasize this perspective.”
Theorist Jay Gambetta at IBM agrees that for such reasons, quantum supremacy might not mean very much. “I don’t believe that quantum supremacy represents a magical milestone that we will reach and declare victory,” he says. “I see these ‘supremacy’ experiments more as a set of benchmarking experiments to help develop quantum devices.”
In any event, demonstrating quantum supremacy, says Pednault, “should not be misconstrued as the definitive moment when quantum computing will do something useful for economic and societal impact. There is still a lot of science and hard work to do.”
Which, of course, is just applied science as normal. The idea of quantum supremacy sets a nice theoretical puzzle, but says little about what quantum computers might ultimately do for society.
Nature
doi:10.1038/nature.2017.22993
Corrections
An earlier version of this story erroneously stated that IBM had created a 20-qubit device for public use. The 20-qubit device is available only to commercial users; IBM’s 5-qubit device remains available for public use.
References
1.  Preskill, J. Preprint at https://arxiv.org/abs/1203.5813 (2012).

2.  Neill, C. et al. Preprint at https://arxiv.org/abs/1709.06678 (2017).

3.  Pednault, E. et al. Preprint at https://arxiv.org/abs/1710.05867 (2017).