Quantum Technology & National Security

These are the latest articles and videos I found most interesting.

  1. Quantum Technology & National Security
  2. Have We Missed Half of What the Neocortex Does? Allocentric Location as the Basis of Perception
  3. Quantum computing – Schrödinger’s cats can calculate faster
  4. How We Learn to Interpret Speech
  5. New dataset to help AI with video understanding

Quantum Technology & National Security

Quantum Supremacy ≠ Useful

The wave function of an atom extends over roughly 10 nanometers, which is about the size of a modern silicon transistor.

To factor a 1024-bit number using Shor’s algorithm, we would need between 50 million and 3 trillion (!!!) qubits. So far we have managed to build about 50. For comparison, today’s microprocessors contain roughly a billion transistors, so running Shor’s algorithm would require a system thousands of times more complex than that. No entangled systems of that size exist in nature; we have no idea how, or even whether, such a system would work. If we manage to build one, we will in effect have created a new state of matter!
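The arithmetic behind such estimates can be sketched in a few lines. This is a back-of-the-envelope illustration, not a rigorous resource analysis: it assumes the commonly cited 2n+3 logical-qubit circuit for factoring an n-bit number, and the error-correction overhead (physical qubits per logical qubit) is a placeholder, since real overheads depend on hardware error rates and the code used.

```python
# Back-of-the-envelope estimate only. Assumptions: a 2n+3 logical-qubit
# Shor circuit for an n-bit number, and an illustrative error-correction
# overhead; real figures vary by orders of magnitude.

def logical_qubits(n_bits):
    """Logical qubits for Shor's algorithm under the 2n+3 assumption."""
    return 2 * n_bits + 3

def physical_qubits(n_bits, overhead=10_000):
    """Physical qubits assuming `overhead` physical qubits per logical one."""
    return logical_qubits(n_bits) * overhead

n = 1024
print(logical_qubits(n))           # 2051 logical qubits
print(f"{physical_qubits(n):,}")   # 20,510,000 physical at 10^4 overhead
```

Even this optimistic low end lands in the tens of millions of physical qubits, which is why the gap to today’s ~50-qubit devices is so stark.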

Talk by Professor Michael J. Biercuk

Quantum technologies have the potential to defeat modern cryptosystems and to provide otherwise unimaginable computational capabilities. However, the current strategic-analysis landscape for quantum technology is obscured in part by aggressive marketing and by the overall complexity of the field. On November 28, 2017, Professor Michael J. Biercuk cut through the hype to explain quantum technology, its promise, and the reality of the state of the art.

Have We Missed Half of What the Neocortex Does? Allocentric Location as the Basis of Perception

Talk by Jeff Hawkins (Numenta) at the Center for Brains, Minds and Machines (CBMM)

In this talk I will describe a theory that sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. We propose that the second input is a representation of allocentric location. The allocentric location represents where the sensed feature is relative to the object being sensed, in an object-centric reference frame. As the sensors move, cortical columns learn complete models of objects by integrating sensory features and location representations over time. Lateral projections allow columns to rapidly reach a consensus on which object is being sensed. We propose that the representation of allocentric location is derived locally, in layer 6 of each column, using the same tiling principles as grid cells in the entorhinal cortex. Because individual cortical columns are able to model complete complex objects, cortical regions are far more powerful than currently believed. The inclusion of allocentric location offers the possibility of rapid progress in understanding the function of numerous aspects of cortical anatomy.
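The "columns reach a consensus" idea can be illustrated with a toy model. This is my own sketch, not Numenta’s implementation: each column keeps the set of objects consistent with its own (feature, allocentric location) observation, and lateral voting is modeled simply as intersecting those candidate sets. All the object names and feature/location labels below are hypothetical.

```python
# Toy illustration (not Numenta's algorithm): columns narrow down a shared
# set of candidate objects by intersecting their individual hypotheses.

def column_candidates(observation, object_models):
    """Objects whose model contains the sensed (feature, location) pair."""
    return {name for name, pairs in object_models.items() if observation in pairs}

def lateral_consensus(observations, object_models):
    """Intersect every column's candidate set to reach a consensus."""
    candidates = None
    for obs in observations:
        col = column_candidates(obs, object_models)
        candidates = col if candidates is None else candidates & col
    return candidates

# Hypothetical object models: sets of (feature, allocentric location) pairs.
models = {
    "mug":    {("rim", "top"), ("handle", "side"), ("flat", "bottom")},
    "ball":   {("curve", "top"), ("curve", "side"), ("curve", "bottom")},
    "bottle": {("rim", "top"), ("curve", "side"), ("flat", "bottom")},
}

# One touch is ambiguous; a second touch disambiguates.
print(lateral_consensus([("rim", "top")], models))                      # {'mug', 'bottle'}
print(lateral_consensus([("rim", "top"), ("handle", "side")], models))  # {'mug'}
```

The point of the sketch is that no single column needs to see the whole object: ambiguity falls away as columns pool their constraints.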

I will be discussing material from these two papers. Others can be found at www.Numenta.com/papers

Quantum computing – Schrödinger’s cats can calculate faster

Talk by Yonatan Cohen

Quantum mechanics, born at the beginning of the 20th century, revolutionized our understanding of nature and implies that nature is far more counterintuitive and far richer than anyone could previously imagine. In the 1980s, physicists realized that this richness could be used to construct a new kind of computer, one that can significantly outperform any classical computer, which ignores the quantum world. In the decades that followed, physicists made enormous progress in learning how to design and control quantum systems, turning the once far-away dream of a large-scale quantum computer into a realistic and closer-than-ever prospect.

What are the basic principles on which quantum computers work and how do quantum computers beat classical computers? How can quantum computers be built and what are the major challenges?
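A minimal way to see the basic principle is the textbook state-vector model: a qubit is a unit vector of two complex amplitudes, gates are unitary matrices, and measurement probabilities follow the Born rule. The sketch below is a plain-Python illustration of that model (not any vendor’s API), and also shows why classical simulation of many qubits is expensive.

```python
import math

# Textbook state-vector sketch: a qubit holds two complex amplitudes;
# a gate is a 2x2 unitary matrix applied by matrix-vector multiplication.

def apply_gate(gate, state):
    """Apply a 2x2 gate matrix to a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]        # Hadamard gate
ket0 = [1.0, 0.0]            # the |0> basis state

psi = apply_gate(H, ket0)    # equal superposition of |0> and |1>
probs = [abs(a) ** 2 for a in psi]   # Born rule: measurement probabilities
print(probs)                 # ~[0.5, 0.5]

# An n-qubit register needs 2**n amplitudes, which is why classically
# simulating large quantum systems blows up exponentially.
n = 30
print(f"{n} qubits -> {2**n:,} complex amplitudes")
```

A quantum computer manipulates that exponentially large amplitude vector directly in hardware; a classical simulator has to store and update every amplitude explicitly, which is one intuition for the potential speedup.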

How We Learn to Interpret Speech

Video by Charles Taylor at The Royal Institution

Balloons are fun at parties, but with a bit of work, you can also make them talk. In this clip from the 1989 CHRISTMAS LECTURES, Charles Taylor demonstrates what ‘talking’ balloons can tell us about how we process and understand speech.

New dataset to help AI with video understanding

Video by MIT
Moments in Time Dataset

As part of the MIT-IBM Watson AI Lab, scientists developed the Moments in Time Dataset, a collection of one million three-second video clips designed to help AI interpret video content.
