These are the latest articles and videos I found most interesting.
- Artificial Intelligence
- Expert Panel Debunks AI Hype
- How the brain processes language
- First Science from Juno at Jupiter
- History of Ideas: Ancient Greece
Artificial Intelligence
by Professor Martyn Thomas CBE @ Gresham College
Alan Turing famously proposed a test of artificial intelligence. What has been achieved? Professor Stephen Hawking has said that real artificial intelligence will mean the end of mankind. Is that a real threat? Are there limits to what a silicon brain might do?
Can Computers Think?
- If they can, how would we tell for certain?
- This is an open (and possibly circular) philosophical question, related to solipsism
- Alan Turing decided it was better to ask whether we can tell from a conversation if we are talking to a computer or to a person.
- The Imitation Game, which became the Turing Test
Frequently Raised Objections
- The Theological Objection
- The Heads in the Sand Objection
- The Mathematical Objection
- The Argument from Consciousness
- Arguments from Various Disabilities
- Lady Lovelace’s Objection
- The Argument from Continuity in the Nervous System
- The Argument from Informality of Behaviour
Limitations of Machine Learning
The recent Royal Society report on ML listed some research challenges:
- creating systems that discover cause and effect and not just correlations
- creating systems whose workings can be understood
- verifying ML systems so they can be trusted
- preserving privacy whilst sharing datasets
- eliminating systemic or social bias from ML systems
- ensuring that ML systems are secure from cyberattack
The transcript and downloadable versions of the lecture are available from the Gresham College website.
- Machine learning: the power and promise of computers that learn by example – Summary
- Machine learning: the power and promise of computers that learn by example
- Machine learning at The Royal Society
Expert Panel Debunks AI Hype
Neural networks seen as huge but limited
SAN JOSE, Calif. — Neural networks have hit the peak of a hype cycle, according to a panel of experts at an event marking the 50th anniversary of the ACM Turing Award. The technology will see broad use and holds much promise, but it is still in its early days and has its limits.
Many panelists said that artificial intelligence is a misnomer for neural networks, which do not address fundamental types of human reasoning and understanding. Instead, they are tools to take along on the long journey toward building AI.
The discussion of deep learning was particularly relevant given Turing’s vision that machines would someday exceed humans in intelligence. “Turing predicted [that] AI will exceed human intelligence, and that’s the end of the race — if we’re lucky, we can switch them off,” said Stuart Russell, a professor of computer science at Berkeley and AI researcher, now writing a new version of a textbook on the field.
“We have at least half a dozen major breakthroughs to come before we get [to AI], but I am pretty sure they will come, and I am devoting my life to figure out what to do about that.”
He noted that a neural network is just one part of Google’s AlphaGo system, which beat the world’s best Go players.
“AlphaGo … is a classical system … and deep learning [makes up] two parts of it … but they found it better to use an expressive program to learn the rules [of the game]. An end-to-end deep learning system would need … [data from] millions of past Go games that it could map to next moves. People tried and it didn’t work in backgammon, and it doesn’t work in chess,” he said, noting that some problems require impossibly large data sets.
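The contrast Russell draws, between an end-to-end learner that maps board positions straight to moves and a hybrid that combines hand-written game rules with a learned evaluation, can be sketched in a few lines of Python. This is not AlphaGo’s code: the toy “game”, the stand-in value function, and all the names below are made up purely to illustrate the shape of the two approaches.

```python
# Illustrative sketch only (not AlphaGo): contrasting an end-to-end learner that
# maps positions directly to moves with a hybrid that combines exact, hand-written
# game rules with a learned evaluation function. The toy "game" is a number line.

import random

def legal_moves(position):
    """Hand-coded rules of the toy game: from any position you may step left or right.
    In a real system (chess, Go) this part is an exact, expressive program."""
    return [position - 1, position + 1]

def learned_value(position):
    """Stand-in for a trained value network that scores how promising a position looks.
    Here it is a fixed function so the sketch runs without any training data."""
    return -abs(position - 10)  # pretend positions near 10 are winning

def hybrid_move(position):
    """Hybrid approach: use the exact rules to generate candidates, then let the
    learned evaluation choose among them (a one-ply search, for brevity)."""
    return max(legal_moves(position), key=learned_value)

def end_to_end_move(position, past_games):
    """End-to-end approach: map the position directly to a move by imitating what was
    played in identical past positions, which needs a huge corpus of (position, move)
    pairs to cover the game at all."""
    matches = [move for pos, move in past_games if pos == position]
    return random.choice(matches) if matches else random.choice(legal_moves(position))

print("hybrid move from 3:", hybrid_move(3))
print("end-to-end move from 3:", end_to_end_move(3, past_games=[(3, 4), (3, 2)]))
```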
Russell characterized today’s neural nets as “a breakthrough of sorts … fulfilling their promise from the 1980s … but they lack the expressive power of programming languages and declarative semantics that make database systems, logic programming, and knowledge systems useful.”
Neural nets also lack the wealth of prior understanding that humans bring to problems. “A deep-learning system would never discover the Higgs boson from the raw data” of the Large Hadron Collider, he added. “I worry [that] too much emphasis is put on big data and deep learning to solve all our problems.”
Limits in self-driving cars, image recognition
Neural nets hold significant promise but also face real limits in areas such as self-driving cars and image recognition, said other top researchers.
“I work on self-driving cars … systems [that] must be robust,” said Raquel Urtasun, who teaches machine learning at the University of Toronto and runs Uber’s advanced research center there. “This is quite challenging for neural nets because they don’t model uncertainty well.”
Neural nets “will say [that] there is a 99% probability [that] a car is there … but you can’t tolerate false positives … when you make a mistake, you need to understand why you made a mistake.”
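Urtasun’s point about overconfident probabilities can be illustrated with a small sketch. One common (though by no means the only) way to expose a network’s uncertainty is Monte Carlo dropout: run the model several times with random units switched off and look at the spread of its answers. The tiny two-layer network and its weights below are invented for illustration; nothing here comes from her actual systems.

```python
# A minimal sketch of why a single softmax score ("99% probability a car is there")
# carries no sense of how unsure the model is, and of Monte Carlo dropout, one common
# way to expose that uncertainty: run the network several times with random hidden
# units switched off and look at the spread of the outputs. The network and its
# weights are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # hypothetical "trained" weights, layer 1
W2 = rng.normal(size=(8, 2))   # hypothetical "trained" weights, layer 2 (car / no car)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, dropout_p=0.0):
    h = np.maximum(0.0, x @ W1)                       # ReLU hidden layer
    if dropout_p > 0.0:
        h = h * (rng.random(h.shape) > dropout_p)     # randomly drop hidden units
    return softmax(h @ W2)

x = rng.normal(size=4)                                # one made-up sensor reading

# A single deterministic pass yields one point estimate with no notion of spread.
print("single pass P(car): %.2f" % forward(x)[0])

# Monte Carlo dropout: repeat the pass with dropout left on and inspect the spread.
samples = np.array([forward(x, dropout_p=0.5)[0] for _ in range(200)])
print("MC dropout P(car): mean %.2f, std %.2f" % (samples.mean(), samples.std()))
```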
She agreed with Russell of Berkeley that “deep learning won’t solve all our problems.” Blending neural nets with graphical models “is an interesting area” that might help systems tap the kind of prior knowledge that humans bring to bear.
Given their limits, users need to “understand [that machine-learning] systems can have biases … [and sometimes] will make unfair decisions,” she said.
Urtasun attributed the success of today’s neural nets to “a few tricks that make training better, but [there’s been] no fundamental change [to the core algorithms] in the last 25 years. Breakthroughs came in part from the availability of big data sets and better hardware that made it possible to train larger scale models,” she said.
Nevertheless, deep learning has “enabled apps we hadn’t thought about in health, transportation — we see it almost everywhere.”
Stanford’s Fei-Fei Li, now on sabbatical as chief scientist at Google Cloud, agreed that neural nets are at a peak of hype with real promise and real limits. She just finished teaching 770 students in Stanford’s largest class to date on neural nets.
Li characterized the moment as the end of the beginning, in which machine learning has emerged from lab experiments to commercial deployments. A broad set of industries and scientific fields are “being impacted by massive data and data analytics capabilities,” she said.
Nevertheless, “the euphoria that we’ve solved most problems is not true. While we celebrate the success [of ImageNet in image recognition], we hardly talk about its failures … many challenges remain that involve reasoning.”
“An AI algorithm makes the perfect chess move while the room is on fire,” she said, repeating a joke coined by another researcher about the lack of contextual awareness in deep learning.
More broadly, “we have very limited understanding of what human cognition is. Because of that, both fields are at the very beginning.”
The bulls and bears of neural networks
It’s too early to say just how far neural networks will take us, argued the most bullish member of the panel, Ilya Sutskever, co-founder and research director of OpenAI and a former research scientist at Google Brain.
“These models are hard to understand. Machine vision, for example, was incomprehensible as a program, but now we have an incomprehensible solution to an incomprehensible problem,” he said.
Although the back-propagation algorithms at the core of neural networks have been around for years, the hardware to run them has only become available recently. New architectures in the works for neural nets promise that “in the next few years, we’ll see amazing computers that will show much progress,” added Sutskever.
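For readers who have not seen it, the back propagation Sutskever refers to fits in a short numpy sketch: a two-layer network trained by gradient descent on the XOR problem. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices, not anyone’s production settings.

```python
# Back propagation reduced to a minimal runnable sketch: a two-layer network trained
# by full-batch gradient descent on XOR, in plain numpy.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)       # XOR targets

W1 = rng.normal(size=(2, 8))                          # input -> hidden weights
W2 = rng.normal(size=(8, 1))                          # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)
    # Backward pass: push the error back through each layer via the chain rule.
    dp = (p - y) * p * (1 - p)        # gradient of squared error at the output
    dW2 = h.T @ dp
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh
    # Gradient descent update.
    W2 -= 1.0 * dW2
    W1 -= 1.0 * dW1

print(np.round(p, 2))   # typically ends up close to the XOR targets [0, 1, 1, 0]
```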
Speaking in a separate panel, Doug Burger, distinguished engineer working on FPGA accelerators at Microsoft’s Azure cloud service, agreed. “Despite being at the peak of the hype curve, neural networks are real … there’s something deep and fundamental here [that] we don’t fully understand yet.”
Startups, academics, and established companies are working on processors to accelerate neural nets, many built around reduced-precision matrix and vector multiplication, he noted. “That will play out over three or four years, and what will come after that is really interesting to me.”
Fellow panelist Norm Jouppi agreed. The veteran microprocessor designer and lead of the team behind Google’s TPU accelerator called neural nets “one of the biggest nuggets” in computer science today.
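The reduced-precision arithmetic these accelerators rely on can also be sketched briefly: quantise both operands of a matrix multiply to 8-bit integers, multiply in integer arithmetic, and rescale the result. The symmetric, per-tensor scaling below is just one simple scheme, chosen for clarity rather than accuracy.

```python
# A rough sketch of reduced-precision matrix arithmetic: quantise both operands of a
# matrix multiply to signed 8-bit integers, multiply in integer arithmetic (what an
# accelerator's multiply-accumulate array would execute), then rescale.

import numpy as np

def quantize(x, bits=8):
    """Map a float array onto signed integers of the given width; return the integer
    array and the scale factor needed to map it back to floats."""
    qmax = 2 ** (bits - 1) - 1                        # 127 for 8 bits
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).astype(np.int32), scale

rng = np.random.default_rng(2)
A = rng.normal(size=(64, 128)).astype(np.float32)     # e.g. a batch of activations
B = rng.normal(size=(128, 32)).astype(np.float32)     # e.g. a layer's weights

Aq, a_scale = quantize(A)
Bq, b_scale = quantize(B)

# Integer matrix multiply, then one floating-point rescale of the result.
approx = (Aq @ Bq) * (a_scale * b_scale)
exact = A @ B

print("max abs error:", float(np.abs(approx - exact).max()))
print("relative error:", float(np.linalg.norm(approx - exact) / np.linalg.norm(exact)))
```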
Michael I. Jordan, a machine-learning expert at Berkeley, was the bear in the AI panel. Computer science remains the overarching discipline, not AI, and neural nets are a still-developing part of it, he argued.
“It’s all a big tool box,” he said. “We need to build the infrastructure and engineering [around neural nets, and] we are far away from that. We need to have systems thinking with math and machine learning.”
Like other speakers, he pointed to human reasoning capabilities outside the scope of neural nets. “Natural language processing is very hard. Today, we are matching strings to strings, but that’s not what translation is.”
For example, he noted enthusiasm in China over chatbots. The automated conversation agents can engage humans, but without support for abstractions and semantics, they can’t say anything that’s true about the world.
“We are in an era of enormous learning, but we are not [at AI] yet,” he concluded. Nevertheless, he agreed that neural nets are significant enough that they need to become a part of a revised computer science curriculum.
How the brain processes language
by Ina Bornkessel-Schlesewsky @ University of South Australia
Predicting “When” in Discourse Engages the Human Dorsal Auditory Stream: An fMRI Study Using Naturalistic Stories.
Language is the most powerful communicative medium available to humans. Nevertheless, we lack an understanding of the neurobiological basis of language processing in natural contexts: it is not clear how the human brain processes linguistic input within the rich contextual environments of our everyday language experience. This fMRI study provides the first demonstration that, in natural stories, predictions concerning the probability of remention of a protagonist at a later point are processed in the dorsal auditory stream.
Katerina Danae Kandylaki, Arne Nagels, Sarah Tune, Tilo Kircher, Richard Wiese, Matthias Schlesewsky, and Ina Bornkessel-Schlesewsky; The Journal of Neuroscience, November 30, 2016, 36(48):12180–12191
First Science from Juno at Jupiter
by NASA Jet Propulsion Laboratory
Scientists from NASA’s Juno mission to Jupiter discussed their first in-depth science results in a media teleconference on May 25, 2017, at 2 p.m. ET (11 a.m. PT, 1800 UTC), when multiple papers with early findings were published online by the journals Science and Geophysical Research Letters.
Juno launched on Aug. 5, 2011, from Cape Canaveral Air Force Station, Florida, and arrived in orbit around Jupiter on July 4, 2016. In its current exploration mission, Juno soars low over the planet’s cloud tops, as close as about 2,100 miles (3,400 kilometers). During these flybys, Juno probes beneath the obscuring cloud cover of Jupiter and studies its auroras to learn more about the planet’s origins, structure, atmosphere and magnetosphere.
History of Ideas: Ancient Greece
A healthy mind can only dwell in a healthy body
We know we’re meant to think that Ancient Greece was a cradle of civilisation; but what exactly did the Greeks contribute to humanity? Here is a list of some of their greatest and most relevant achievements.