Robotics and the Myths of Autonomy

The following are notes I took while reading David A. Mindell's book ‘Our Robots, Ourselves: Robotics and the Myths of Autonomy’. It is a must-read for every roboticist. The ideas and wisdom in the book are especially important for engineers working on autonomous or self-driving cars, who need to keep asking themselves whether they are solving the right problems.

Civilization advances by extending the number of important operations we can perform without thinking about them.

‒ Alfred North Whitehead

All autonomous systems are joint human-machine cognitive systems.

There are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines.


Our Robots, Ourselves: Robotics and the Myths of Autonomy

by David A. Mindell


David Mindell at Microsoft Research

David Mindell at Google

Human, Remote, Autonomous

Late in the night, high above the Atlantic Ocean in the long, open stretch between Brazil and Africa, an airliner encountered rough weather. Ice clogged the small tubes on the aircraft’s nose that detected airspeed and transmitted the data to the computers flying the plane. The computers could have continued flying without the information, but they had been told by their programmers that they could not.

The automated, fly-by-wire system gave up, turned itself off, and handed control to the human pilots in the cockpit: thirty-two-year-old Pierre-Cédric Bonin and thirty-seven-year-old David Robert. Bonin and Robert, both relaxed and a little fatigued, were caught by surprise, suddenly responsible for hand flying a large airliner at high altitude in bad weather at night. It is a challenging task under the best of circumstances, and one they had not handled recently. Their captain, fifty-eight-year-old Marc Dubois, was off duty back in the cabin. They had to waste precious attention to summon him.

Even though the aircraft was flying straight and level when the computers tripped off, the pilots struggled to make sense of the bad air data. One man pulled back, the other pushed forward on his control stick. They continued straight and level for about a minute, then lost control. On June 1, 2009, Air France flight 447 spiraled into the ocean, killing more than two hundred passengers and crew. It disappeared below the waves, nearly without a trace.

In the global, interconnected system of international aviation, it is unacceptable for an airliner to simply disappear. A massive, coordinated search followed. In just a few days traces of flight 447 were located on the ocean’s surface. Finding the bulk of the wreckage, however, and the black box data recorders that held the keys to the accident’s causes, required hunting across a vast seafloor, and proved frustratingly slow.

More than two years later, two miles deep on the seafloor, nearly beneath the very spot where the airliner hit the ocean, an autonomous underwater vehicle, or AUV, called Remus 6000 glided quietly through the darkness and extreme pressure. Moving at just faster than a human walking pace, the torpedo-shaped robot maintained a precise altitude of about two hundred feet off the bottom, a position at which its ultrasonic scanning sonar returns the sharpest images. As the sonars pinged out to about half a mile on either side, the robot collected gigabytes of data from the echoes.

The terrain was mountainous, so the seafloor rose quickly. Despite its intelligence, the robot occasionally bumped into the bottom, mostly without injury. Three such robots worked in a coordinated dance: two searched underwater at any given time, while a third rested on a surface ship in a three-hour pit stop with its human handlers to offload data, charge batteries, and take on new search plans.

On the ship, a team of twelve engineers from the Woods Hole Oceanographic Institution, including leader Mike Purcell, who spearheaded the design and development of the search vehicles, worked in twelve-hour shifts, busy as any pit crew. When a vehicle came to the surface, it took about forty-five minutes for the engineers to download the data it had collected into a computer, then an additional half hour to process those data to enable a quick, preliminary scroll-through on a screen.

Looking over their shoulders were French and German investigators, and representatives from Air France. The mood was calculating and deliberate, but tense: the stakes were high for French national pride, for the airliner’s manufacturer, Airbus, and for the safety of all air travel. Several prior expeditions had tried and failed. In France, Brazil, and around the world, families awaited word.

Interpreting sonar data requires subtle judgment not easily left solely to a computer. Purcell and his engineers relied on years of experience. On their screens, they reviewed miles and miles of rocky reflections alternating with smooth bottom. The pattern went on for five days before the monotony broke: a crowd of fragments appeared, then a debris field—a strong signal of human-made artifacts in the ocean desert. Suggestive, but still not definitive.

The engineers reprogrammed the vehicles to return to the debris and “fly” back and forth across it, this time close enough that onboard lights and cameras could take pictures from about thirty feet off the bottom. When the vehicles brought the images back to the surface, engineers and investigators recognized the debris and had their answer: they had found the wreckage of flight 447, gravesite of hundreds.

Soon, another team returned with a different kind of robot, a remotely operated vehicle (ROV), a heavy-lift vehicle specially designed for deep salvage, connected by a cable to the ship. Using the maps created by the successful search, the ROV located the airliner’s black box voice and data recorders and brought them to the surface. The doomed pilots’ last minutes were recovered from the ocean, and investigators could now reconstruct the fatal confusion aboard the automated airliner. The ROV then set about the grim task of retrieving human remains.

The Air France 447 crash and recovery linked advanced automation and robotics across two extreme environments: the high atmosphere and the deep sea. The aircraft plunged into the ocean because of failures in human interaction with automated systems; the wreckage was then discovered by humans operating remote and autonomous robots.

While the words (and their commonly perceived meanings) suggest that automated and autonomous systems are self-acting, in both cases the failure or success of the systems derived not from the machines or the humans operating on their own, but from people and machines operating together. Human pilots struggled to fly an aircraft that had been automated for greater safety and reliability; networks of ships, satellites, and floating buoys helped pinpoint locations; engineers interpreted and acted on data produced by robots. Automated and autonomous vehicles constantly returned to their human makers for information, energy, and guidance.

Air France 447 made tragically clear that as we constantly adapt to and reshape our surroundings, we are also remaking ourselves. How could pilots have become so dependent on computers that they flew a perfectly good airliner into the sea? What becomes of the human roles in activities like transportation, exploration, and warfare when more and more of the critical tasks seem to be done by machines?

In the extreme view, some believe that humans are about to become obsolete, that robots are “only one software upgrade away” from full autonomy, as Scientific American has recently argued. And they tell us that the robots are coming—coming to more familiar environments. A new concern for the strange and uncertain potentials of artificial intelligence has arisen out of claims that we are on the cusp of superintelligence. Our world is about to be transformed, indeed is already being transformed, by robotics and automation. Start-ups are popping up, drawing on old dreams of smart machines to help us with professional duties, physical labor, and the mundane tasks of daily life. Robots living and working alongside humans in physical, cognitive, and emotional intimacy have emerged as a growing and promising subject of research. Autonomy—the dream that robots will one day act as fully independent agents—remains a source of inspiration, innovation, and concern.

The excitement is in the thrill of experimentation; the precise forms of these technologies are far from certain, much less their social, psychological, and cognitive implications. How will our robots change us? In whose image will we make them? In the domain of work, what will become of our traditional roles— scientist, lawyer, doctor, soldier, manager, even driver and sweeper—when the tasks are altered by machines? How will we live and work?

We need not speculate: much of this future is with us today, if not in daily life then in extreme environments, where we have been using robotics and automation for decades. In the high atmosphere, the deep ocean, and outer space humans cannot exist on their own. The demands of placing human beings in these dangerous settings have forced the people who work in them to build and adopt robotics and automation earlier than those in other, more familiar realms.

Extreme environments press the relationships between people and machines to their limits. They have long been sites of innovation. Here engineers have the freest hand to experiment. Despite the physical isolation, here the technologies’ cognitive and social effects first become apparent. Because human lives, expensive equipment, and important missions are at stake, autonomy must always be tempered with safety and reliability.

In these environments, the mess and busyness of daily life are temporarily suspended, and we find, set off from the surrounding darkness, brief, dream-like allegories of human life and technology. The social and technological forces at work on an airliner’s flight deck, or inside a deep-diving submersible, are not fundamentally different from those in a factory, an office, or an automobile. But in extreme environments they appear in condensed, intense form, and are hence easier to grasp. Every airplane flight is a story, and so is every oceanographic expedition, every space flight, every military mission. Through these stories of specific people and machines we can glean subtle, emerging dynamics.

Extreme environments teach us about our near future, when similar technologies might pervade automobiles, health care, education, and other human endeavors. Human-operated, remotely controlled, and autonomous vehicles represent the leading edge of machine and human potential, new forms of presence and experience, while drawing our attention to the perils, ethical implications, and unintended consequences of living with smart machines. We see a future where human knowledge and presence will be more crucial than ever, if in some ways strange and unfamiliar.

And these machines are just cool. I’m not alone in my lifelong fascination with airplanes, spacecraft, and submarines. Indeed, technological enthusiasm, as much as the search for practical utility, drives the stories that follow. It’s no coincidence that similar stories are so often the subject of science fiction— something about people and machines at the limits of their abilities captures the imagination, engages our wonder, and stirs hopes about who we might become.

This enthusiasm sometimes reflects a naive faith in the promise of technology. But when mature it is an enthusiasm for basic philosophical and humanistic questions: Who are we? How do we relate to our work and to one another? How do our creations expand our experience? How can we best live in an uncertain world? These questions lurk barely below the surface as we talk to people who build and operate robots and vehicles.

Join me as I draw on firsthand experience, extensive interviews, and the latest research from MIT and elsewhere to explore experiences of robotics and automation in the extreme environments of the deep ocean and in aviation (civil and military) and spaceflight. It is not an imagination of the future, but a picture of today: we’ll see how people operate with and through robots and autonomy and how their interactions affect their work, their experiences, and their skills and knowledge.

Our stories begin where I began, in the deep ocean. Twenty-five years ago, as an engineer designing embedded computers and instruments for deep-ocean robots, I was surprised to find that technologies were changing, in unexpected ways, the work of oceanography, the ways of doing science, the meaning of being an oceanographer.

The realization led to two parallel careers. As a scholar, I study the human implications of machinery, from ironclad warships in the American Civil War to the computers and software that helped the Apollo astronauts land on the moon. As an engineer, I bring that research to bear on present-day projects, building robots and vehicles designed to work in intimate partnership with people. In the stories that follow I appear in some as a participant, in others as an observer, and in still others as both.

These years of experience, research, and conversation have convinced me that we need to change the way we think about robots. The language we use for them is more often from twentieth-century science fiction than from the technological lives we lead today. Remotely piloted aircraft, for example, are referred to as “drones,” as though they were mindless automata, when actually they are tightly controlled by people.

Robots are imagined (and sold) as fully autonomous agents, when even today’s modest autonomy is shot through with human imagination. Rather than being threatening automata, the robots we use so variously are embedded, as are we, in social and technical networks. In the pages ahead, we will explore many examples of how we work together with our machines. It’s the combinations that matter.

It is time to review what the robots of today actually do, to deepen our understanding of our relationships with these often astoundingly capable human creations. I argue for a deeply researched empirical conclusion: whatever they might do in a laboratory, as robots move closer to environments with human lives and real resources at stake, we tend to add more human approvals and interventions to govern their autonomy. My argument here is not that machines are not intelligent, nor that someday they might not be. Rather, my argument is that such machines are not inhuman.

Let us name three mythologies of twentieth-century robotics and automation. First, there is the myth of linear progress, the idea that technology evolves from direct human involvement to remote presence and then to fully autonomous robots. Political scientist Peter W. Singer, a prominent public advocate for autonomous systems, epitomizes this mythology when he writes that “this concept of keeping the human in the loop is already being eroded by both policymakers and the technology itself, which are both rapidly moving toward pushing humans out of the loop.”

Yet there is no evidence to suggest that this is a natural evolution, that the “technology itself,” as Singer puts it, does any such thing. In fact there is good evidence that people are moving into deeper intimacy with their machines.

We repeatedly find human, remote, and autonomous vehicles evolving together, each affecting the other. Unmanned aircraft, for example, cannot occupy the national airspace without the task of piloting manned aircraft changing too. In another realm, new robotic techniques for servicing spacecraft changed the way human astronauts serviced the Hubble Space Telescope. The most advanced (and difficult) technologies are not those that stand apart from people, but those that are most deeply embedded in, and responsive to, human and social networks.

Second is the myth of replacement, the idea that machines take over human jobs, one for one. This myth is a twentieth-century version of what I call the iron horse phenomenon. Railroads were initially imagined to replace horses, but trains proved to be very poor horses. Railroads came into their own when people learned to do entirely new things with them. Human-factors researchers and cognitive scientists find that rarely does automation simply “mechanize” a human task; rather, it tends to make the task more complex, often increasing the workload (or shifting it around). Remotely piloted aircraft do not replicate the missions that manned aircraft carry out; they do new things. Remote robots on Mars do not copy human field science; they and their human partners learn to do a new kind of remote, robotic field science.

Finally, we have the myth of full autonomy, the utopian idea that robots, today or in the future, can operate entirely on their own. Yes, automation can certainly take on parts of tasks previously accomplished by humans, and machines do act on their own in response to their environments for certain periods of time. But the machine that operates entirely independently of human direction is a useless machine. Only a rock is truly autonomous (and even a rock was formed and placed by its environment). Automation changes the type of human involvement required and transforms but does not eliminate it. For any apparently autonomous system, we can always find the wrapper of human control that makes it useful and returns meaningful data. In the words of a recent report by the Defense Science Board, “there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines.”

To move our notions of robotics and automation, and particularly the newer idea of autonomy, into the twenty-first century, we must deeply grasp how human intentions, plans, and assumptions are always built into machines. Every operator, when controlling his or her machine, interacts with designers and programmers who are still present inside it—perhaps through design and coding done many years before.

The computers on Air France 447 could have continued to fly the plane even without input from the faulty airspeed data, but they were programmed by people not to. Even if software takes actions that could not have been predicted, it acts within frames and constraints imposed upon it by its creators. How a system is designed, by whom, and for what purpose shapes its abilities and its relationships with the people who use it.

My goal is to move beyond these myths and toward a vision of situated autonomy for the twenty-first century. Through the stories that follow, I aim to redefine the public conversation and provide a conceptual map for a new era.

As the basis for that map, I will rely throughout the book on human, remote, and autonomous when referring to vehicles and robots. The first substitutes for the awkward “manned,” so you can read “human” as shorthand for “human occupied.” These are of course old and familiar types of vehicles like ships, aircraft, trains, and automobiles, in which people’s bodies travel with the machines. People generally do not consider human-occupied systems to be robots at all, although they do increasingly resemble robots that people sit inside.

“Remote,” as in remotely operated vehicles (ROVs), simply makes a statement about where the operator’s body is, in relation to the vehicle. Yet even when the cognitive task of operating a remote system is nearly identical to that of a direct physical operator, great cultural weight is attached to the presence or absence of the body, and the risks it might undergo. In the most salient example, remotely fighting a war from thousands of miles away is a different experience from traditional soldiering. As a cognitive phenomenon, human presence is intertwined with social relationships.

Automation is also a twentieth-century idea, and still carries a mechanical sense of machines that step through predefined procedures; “automated” is the term commonly used to describe the computers on airliners, even though they contain modern, sophisticated algorithms. “Autonomy” is the more current buzzword, one that describes one of the top priorities of research for a shrinking Department of Defense.

Some clearly distinguish autonomy from automation, but I see the difference as a matter of degree, where autonomy connotes a broader sense of self-determination than simple feedback loops and incorporates a panoply of ideas imported from artificial intelligence and other disciplines. And of course the idea of the autonomy of individuals and groups pervades current debates in politics, philosophy, medicine, and sociology. This should come as no surprise, as technologists often borrow social ideas to describe their machines.

Even within engineering, autonomy means several different things. Autonomy in spacecraft design refers to the onboard processing that takes care of the vehicle (whether an orbiting probe or a mobile robot) as distinct from tasks like mission planning. At the Massachusetts Institute of Technology, where I teach, the curriculum of engineering courses on autonomy covers mostly “planning”—how to get from here to there in a reasonable amount of time without hitting anything. In other settings autonomy is analogous to intelligence, the ability to make human-like decisions about tasks and situations, or the ability to do things beyond what designers intended or foresaw. Autonomous underwater vehicles (AUVs) are so named because they are untethered, and contrast with remotely operated vehicles (ROVs), which are connected by long cables. Yet AUV engineers recognize that their vehicles are only semiautonomous, as they are only sometimes fully out of touch.

The term “autonomous” allows a great deal of leeway; it describes how a vehicle is controlled, which may well change from moment to moment. One recent report introduces the term “increasing autonomy” to describe its essentially relative nature, and to emphasize how “full” autonomy—describing machines that require no human input—will always be out of reach. For our purposes, a working definition of autonomy is: a human-designed means for transforming data sensed from the environment into purposeful plans and actions.
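
Read as an engineering pattern, this working definition is essentially a sense-plan-act pipeline. Here is a minimal sketch in Python; the sensor reading, the rule, and the 2.0-meter threshold are all invented for illustration and come from me, not the book.

```python
# A minimal sense-plan-act sketch of the working definition above: a
# human-designed transformation from sensed data into purposeful action.
# The sensor reading, the rule, and the 2.0 m threshold are all invented.

def plan(nearest_obstacle_m: float, goal_speed_m_s: float) -> float:
    """The 'purposeful plan': hold the goal speed unless something is too close."""
    return 0.0 if nearest_obstacle_m < 2.0 else goal_speed_m_s

def step(sense, act, goal_speed_m_s: float = 1.5) -> None:
    """One pass through the pipeline: environment -> sensed data -> plan -> action."""
    act(plan(sense(), goal_speed_m_s))

# Stub sensor and actuator, standing in for real hardware:
step(sense=lambda: 5.0, act=lambda v: print(f"command speed {v} m/s"))
```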

Language matters, and it colors debates. But we need not get stuck on it; I will often rely on the language (which is sometimes imprecise) used by the people I study. The weight of this book rests not on definitions but on stories of work: How are people using these systems in the real world, experiencing, exploring, even fighting and killing? What are they actually doing?

Focusing on lived experiences of designers and users helps clarify the debates. For example, the word “drone” obscures the essentially human nature of the robots and attributes their ill effects to abstract ideas like “technology” or “automation.” When we visit the Predator operators’ intimate lairs we will discover that they are not conducting automated warfare—people are still inventing, programming, and operating machines. Much remains to debate about the ethics and policy of remote assassinations carried out by unmanned aircraft with remote operators, or the privacy concerns with similar devices operating in domestic airspace. But these debates are about the nature, location, and timing of human decisions and actions, not about machines that operate autonomously.

Hence the issues are not manned versus unmanned, nor human-controlled versus autonomous. The questions at the heart of this book are: Where are the people? Which people are they? What are they doing? When are they doing it?

Where are the people? (On a ship … in the air … inside the machine … in an office?)

The operator of the Predator drone may be doing something very similar to the pilot of an aircraft—monitoring onboard systems, absorbing data, making decisions, and taking actions. But his or her body is in a different place, perhaps even several thousand miles away from the results of the work. This difference matters. The task is different. The risks are different, as are the politics.

People’s minds can travel to other places, other countries, other planets. Knowledge through the mind and senses is one kind of knowledge, and knowledge through the body (where you eat, sleep, socialize, and defecate) is another. Which one we privilege at any given time has consequences for those involved.

Which people are they? (Pilots … engineers … scientists … unskilled workers … managers?)

Change the technology and you change the task, and you change the nature of the worker—in fact you change the entire population of people who can operate a system. Becoming an air force pilot takes years of training, and places one at the top of the labor hierarchy. Does operating a remote aircraft require the same skills and traits of character? From which social classes does the task draw its workforce? The rise of automation in commercial-airline cockpits has corresponded to the expanding demographics of the pilot population, both within industrialized countries and around the globe. Is an explorer someone who travels into a dangerous environment, or someone who sits at home behind a computer? Do you have to like living on board a ship to be an oceanographer? Can you explore Mars if you’re confined to a wheelchair? Who are the new pilots, explorers, and scientists who work through remote data?

What are they doing? (Flying … operating … interpreting data … communicating?)

A physical task becomes a visual display, and then a cognitive task. What once required strength now requires attention, patience, quick reactions. Is a pilot mainly moving her hands on the controls to fly the aircraft? Or is she punching key commands into an autopilot or flight computer to program the craft’s trajectory? Where exactly is the human judgment she is adding? What is the role of the engineer who programmed her computer, or the airline technician who set it up?

When are they doing it? (In real time … after some delay … months or years earlier?)

Flying a traditional airplane takes place in real time—the human inputs come as the events are happening and have immediate results. In a spaceflight scenario, the vehicle might be on Mars (or approaching a distant asteroid), in which case it might take twenty minutes for the vehicle to receive the command, and twenty minutes for the operator to see that the action has occurred. Or we might say that craft is landing “automatically,” when actually we can think of it as landing under the control of the programmers who gave it instructions months or years earlier (although we may need to update our notions of “control”). Operating an automated system can be like cooperating with a ghost.

These simple questions draw our attention to shifts and reorientations. New forms of human presence and action are not trivial, nor are they equivalent—a pilot who risks bodily harm above the battlefield has a different cultural identity from one who operates from a remote ground-control station. But the changes are also surprising—the remote operator may feel more present on the battlefield than pilots flying high above it. The scientific data extracted from the moon may be the same, or better, when collected by a remote rover than by a human who is physically present in the environment. But the cultural experience of lunar exploration is different from being there.

Let’s replace dated mythologies with rich, human pictures of how we actually build and operate robots and automated systems in the real world. The stories that follow are at once technological and humanistic.

We shall see human, remote, and autonomous machines as ways to move and reorient human presence and action in time and in space. The essence of the book boils down to this: it is not “manned” versus “unmanned” that matters, but rather, where are the people? Which people? What are they doing? And when?

The last, and most difficult, questions are:

How does human experience change? And why does it matter?

Air

In late 2013, a joint industry-FAA working group on flight deck automation assessed the current state of technology and pilot skill. They reviewed twenty-six recent incidents and accidents and found that pilots may rely too heavily on automated systems, not understand them sufficiently, and make errors while using them. Moreover, the knowledge base and set of skills required of pilots has expanded. While some traditional skills are now “reversionary,” to be used only in emergencies or failures, pilots must still be proficient with them. The definition of “normal” pilot skills has changed, and more is expected of today’s pilots than even in the recent past.

Kathy Abbott, a PhD computer scientist, is the FAA’s chief scientific and technical advisor for flight deck human factors and co-chair of the working group. She points out that automation does not remove human involvement in the operation of the airplane, but rather moves it around. “We’re not eliminating human error” by automating tasks, Abbott explains, “we’re just changing where the error occurs.”

. . .

After World War II, the U.S. government began requiring professional flight engineers on commercial aircraft above eighty thousand pounds, which included most of the day’s large airliners. Ironically, some aircraft, such as the DC-6, had already been designed for a two-person cockpit when the rule was imposed. Yet the third man was still required, even though he had little to do. He therefore rode in the jump seat between the pilots, and helped provide an additional pair of eyes to avoid conflicting traffic.

There followed a period of conflict between pilots and flight engineers. Increasingly, airlines put professional pilots in this third seat; pilots’ unions refused to accept flight engineers as members. Flight engineers countered by arguing that the pilots themselves had little to do during the bulk of the flight. During congressional hearings they presented photographs of pilots reading newspapers, sleeping, or flirting with female flight attendants sitting on their laps while en route.

. . .

Beyond Utopian Autonomy

ABE, Pioneering Robotic Undersea Explorer, Is Dead at 16. NEW YORK TIMES, MARCH 15, 2010

The Autonomous Benthic Explorer (ABE) running survey track lines over a hydrothermal vent field, using a scanning sonar to collect bathymetric data for precision mapping. ABE’s measurements of temperature and chemical data are represented by the shaded line behind the vehicle, illustrating its detection of a hydrothermal plume

(COURTESY CENTER FOR ENVIRONMENTAL VISUALIZATION, UNIVERSITY OF WASHINGTON)

In this emerging world, we can imagine autonomy as a strangely shaped three-dimensional cloud in the ocean, with vehicles constantly moving back and forth across its boundaries. Now imagine that ABE is your car, and the 3-D cloud of autonomy is in your neighborhood. At certain times, in certain places, the car has some kinds of autonomy—to stay within a highway lane, for example, or drive in a high-speed convoy. At other times, such as when far from a cell tower, or driving in snow when ice obscures the car’s sensors, the autonomous capabilities are reduced, and the driver must be more involved. You drive into and out of the cloud, delicately switching in and out of automatic modes.
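
To make the “cloud” concrete, here is a minimal sketch (mine, not the book’s) of how a car might decide which autonomous capabilities it can currently offer; the condition flags and rules are invented for illustration only.

```python
# Illustrative sketch of "driving into and out of the cloud of autonomy."
# The condition names and the rules are invented for illustration only.

def available_autonomy(on_mapped_highway: bool,
                       sensors_clear: bool,      # e.g., not obscured by snow or ice
                       connectivity_ok: bool):   # e.g., within reach of a cell tower
    """Return the set of autonomous capabilities the car may offer right now."""
    capabilities = set()
    if on_mapped_highway and sensors_clear:
        capabilities.add("lane_keeping")
        if connectivity_ok:
            capabilities.add("high_speed_convoy")
    # Outside these conditions, the driver must be more involved.
    return capabilities

# Snow on the sensors collapses the cloud and hands the work back to the driver:
print(available_autonomy(on_mapped_highway=True, sensors_clear=False, connectivity_ok=True))
# -> set()
```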

Indeed the prospect of driverless cars—they might have been called “automobiles” had that term not been taken a century ago—is generating broad public enthusiasm and debate around autonomy. Google has been the most outspoken player, with a carefully controlled publicity campaign. (Most of their work has been proprietary so we must rely on public statements to assess the program.)

Automobile manufacturers, of course, have been adding various kinds of automation to their cars for decades, from automatic transmissions to cruise control and antilock brakes. My family’s Volvo has software that can slam on the brakes at any moment if it detects the car is about to hit something. (When driving that car I must place a great deal of trust in that software.) In general, auto manufacturers are continuing their incremental approach, selling automation as “safety features” rather than autonomy.

Mercedes-Benz has announced a concept car with an interior envisioned as a “mobile living space,” where people can read and relax under the direction of the car’s autonomy. Unlike the Google car, however, Mercedes imagines that “passengers are able to interact intuitively with the connected vehicle” in a “symbiosis of the virtual and real world.” Google, on the other hand, has been promoting a vision of complete autonomy. As one Google engineer compares their approach to those of the car companies: “They want to make cars that make drivers better. We want to make cars that are better than drivers.” The ride-sharing giant Uber recently hired a large group of roboticists away from Carnegie Mellon, in an apparent effort to automate their cars.

Google has been testing self-driving cars on California roads since 2009, claiming hundreds of thousands of miles of accident-free highway driving. They travel routes mapped with great precision by Google’s human-driven survey cars; the maps serve as virtual railway tracks for the cars (indeed, they are as yet unable to drive on roads without these detailed maps). The drives have included human safety drivers and software experts who can turn the autonomy on and off. “The idea was that the human drives onto the freeway, engages the system, [and] it takes them on the bulk of the trip—the boring part—and then they reengage,” said Google engineer Nathaniel Fairfield.

A ride in one of these vehicles led the New York Times’ John Markoff to conclude that “computerized systems that replace human drivers are now largely workable and could greatly limit human error,” potentially supporting Google’s goal of cutting the number of U.S. highway deaths in half. Google’s rhetoric around the project has the kind of Silicon Valley optimism that typically surrounds software systems.

Roboticist Sebastian Thrun, lead engineer for the project, envisions a future of utopian autonomy “without traffic accidents or congestion.”

A number of critics of Google’s approach have pointed out its limitations. Most of the work has been done in northern California or other western states. The Google car’s successful driving tests in Nevada were run under tight constraints from the company for good weather and simple routes (the company also sought to avoid disclosure of details of the safety drivers’ disengagements of the autonomous operation).

The vehicles’ algorithms had difficulty negotiating construction sites, requiring the safety driver to take control. Technology journalist Mark Harris has recently shown that becoming a safety driver for one of these cars can require weeks of training, suggesting that computer-to-human handoffs remain complex and risky.

In contrast to the wide-open West of car commercials and Google’s trials, urban driving entails a great deal of social interaction, as we drive through a messy, complicated, and dynamic physical and social landscape. Google admits this problem is ten to one hundred times harder than driving on highways. Once again, autonomy within a human context proves much tougher than the abstracted technical problem.

MIT’s John Leonard, who helped develop some of the basic algorithms that driverless cars use for localization and mapping, points out how much driving depends on social interaction. My late friend Seth Teller, formerly an MIT roboticist, perceptively observed that urban driving consists of hundreds of “short-lived social contracts between people,” as we scan the streets, make eye contact, let people in and wave “thank you.” Computers are slowly getting better at assigning labels to the physical world and to different kinds of objects. Yet as Predator pilots can attest, techniques to similarly interpret human identities and intentions remain primitive.

Only half joking, Leonard contends that driving in Boston can be considered operating in an extreme environment. He put a video camera on the dashboard of his car and is collecting examples of driving situations that are difficult for algorithms to handle: merging onto a busy road at rush hour; staying within road lines obscured by dust or snow; turning left across several lanes of traffic. In the snowy Boston winter of 2015, the three-dimensional landscape of urban driving would change overnight, as snow piles nine feet high narrowed the roads and altered traffic patterns.

What have we learned from extreme environments that might shed light on possible futures for autonomous cars? We know that driverless cars will be susceptible to all of the problems that surround people’s use of automation in the environments we have examined—system failures, variability of skills among users, problems of attention management, the degradation of manual skills, and rising automation bias as people come to rely on automated systems.

The most challenging problem for a driverless car will be the transfer of control between automation and the driver—what we might call “the Air France 447 problem.” Any life-critical system has to have ways to handle anomalies when an individual sensor or component fails, or when things in the world just don’t work out as expected. The more complex the system, the more potential anomalies hidden in the corners. While these anomalies may be rare, there are more than a billion car trips per day in the United States, greater than ten thousand times the number of daily airline flights.

Google’s car might recognize a situation it couldn’t handle and warn the driver to take back control. Perhaps it will have a “check autonomy” light, analogous to the opaque “check engine” light in your car today—though it will have to be much more informative to be effective. What happens when the light goes on at high speed and the trade-off has to happen quickly? If people are freed from the tedium of driving and are sleeping, or reading a book, how will they rush into the loop quickly enough to avoid oncoming traffic, or a pothole, or a collision?

As on Air France 447, drivers would have to rise to this challenge at the most difficult moments of driving. And in contrast to the operators we have been following in extreme environments, automobile drivers are relatively untrained and comprise a broad spectrum of physical abilities, acquired skill, and socioeconomic and educational backgrounds. As challenging as they are, the extreme environments of the deep ocean, air, and outer space are relatively “clean”; the driving environment is much more cluttered and uncertain.

Perhaps in response to these critiques, Google changed its approach to get rid of these troublesome people—drivers. As engineer Nathaniel Fairfield says, Google discovered that “people are lazy,” and found that they “go from plausible suspicion to way overconfidence.” Following its experiments, the company concluded that human drivers are not trustworthy enough to collaborate with their software and changed their model to zero driver intervention. Google engineers speak about the “lazy driver,” the 93 percent of car accidents estimated to derive from human error. (Of course, human-factors specialists have long understood that human errors often are the result of poor system design and poor work practices.)

Google introduced a new version of its car in 2014, one that seems designed to be friendly and unthreatening to the public. It travels at low speeds to reduce danger, has no steering wheel or console, and removes input from the human driver altogether. In the words of project director Chris Urmson, the company is “working toward the goal of vehicles that can shoulder the entire burden of driving.” These fully autonomous cars would be “designed to operate safely and autonomously without requiring human intervention.”

The new car’s interface consists only of buttons to start and stop the engine, and a screen that shows the route (one wonders how the driver will tell the car where to go). The driver will be transformed into a passenger whose only job “is to kick back, relax, and enjoy the ride.” “It doesn’t have a fallback to human— it has redundant systems,” said Fairfield. “It has two steering motors, and we have various ways we can bring it to a stop.” Videos set to lighthearted music appeared on YouTube showing blind, disabled, and elderly passengers enjoying pleasant rides in the California sunshine on clean, open roads.

Providing mobility for the disabled or elderly is certainly a laudable goal, but Google’s new approach amounts to a retreat and a missed opportunity. Ironically for such a high-technology company, Google’s rhetoric takes a step backward into the twentieth century, archaically reimagining the driver as a passive observer. Their “new” approach succumbs to all three of the twentieth-century mythologies of robotics and automation:

  • automobile technology will logically advance toward complete, utopian autonomy (the myth of linear progress);
  • autonomous driving systems will eliminate human drivers in the driving task (the myth of replacement); and
  • autonomous cars can operate entirely on their own (the myth of full autonomy).

Our ventures into extreme environments have taught us how this utopian autonomy succumbs to the realities of harsh environments where lives are at risk. Indeed Google’s utopian autonomy is a more brittle, less functional solution than a rich, human-centered automation. It’s true that including human drivers in automated cars raises problems of mode confusion, attentiveness, and interfaces, but these are tractable— especially using what we have learned from extreme environments. Instead, Google engineers are succumbing to naive automation, defining the problem in ways that Google-enabled software can solve. This definition appeals to engineers because of its self-contained nature: driving as purely a problem of navigation and collision avoidance.

This definition may be acceptable for narrow, local applications (I would not be surprised to see such driverless cars in niche applications, ferrying passengers in parking lots or college campuses, much like the monorails at airports do). But the full spectrum of driving as a social activity is remarkably broad, encompassing a diversity of geographic, economic, cultural, and other components. Google presents no evidence that their code even recognizes this wonderful variation. Moreover, simply having people in the car raises ethical stakes of risk, agency, and reliability that have to be addressed. Google’s engineers repeatedly congratulate themselves for rising to the most difficult challenges, yet here they have punted the meaningful, socially beneficial problems in favor of narrow algorithmic solutions.

And, of course, even with Google’s autonomous cars, the people are still “in there,” they’ve just moved to a different place and time. Let’s look inside an algorithm as an example of how deeply humanly crafted apparently autonomous code can be. Consider the first documented collision between autonomous cars. It happened during the 2007 DARPA Urban Challenge, a competition sponsored by the Defense Advanced Research Projects Agency that generated some of the technology on which the Google car is based. Google’s Chris Urmson was the lead engineer on the winning team, and a number of other participants are now on the Google team.

In the incident, the MIT car, called Talos, was passing the Cornell car, dubbed Skynet, which was having trouble with its planning algorithm and was stuttering along slowly by the side of the road. Computers on board Talos classified Skynet as “a cluster of static objects” rather than a moving vehicle, and turned to pass in front of it. But the Cornell car was not stationary; it was moving in a lurching pattern that Talos did not recognize. Skynet bolted forward just as Talos cut in front of it, leading to a small collision. Neither team won the competition.

To their credit, the teams got together and published the details of the crash. Numerous algorithms and sensors were involved, but a key element was the MIT car’s failure to classify the Cornell car as a moving object and to derive its future path. Ironically, the MIT strategy was to avoid classifying objects as one thing or another (“car” or “guardrail”), which can be prone to errors, and instead to only classify them as moving or not moving. But when analyzing objects detected by the vehicle’s sensors, the velocity data contained random noise (as all data does), so the MIT system used a threshold velocity of 3 meters/second to filter the data. Anything faster than that was considered “moving” and anything slower than that was considered “not moving.”

How was that threshold set? By one engineer estimating the difference between stopped and moving and dialing it in to the algorithm. I asked my colleague Jon How, one of the principals on the project, how many such thresholds there are in a system like that. His reply: “Many, many, many.” In fact the “configuration file” for the MIT vehicle contained nearly a thousand lines of text, setting hundreds of variables: sensor positions and calibrations, fudge factors to align the sensors with one another, how to deal with sun dazzle, etc. Machine learning techniques can help reduce this reliance on parameters, but they still rely on human programmers for their basic structure. How points out that core algorithms generally rely heavily on accurate models of uncertainty in the world. As he observes, “The problem of autonomy is fundamentally the problem of living in an uncertain world.”
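
As a rough illustration of the kind of hand-set parameter being described, here is a sketch (mine, not the teams’ actual code) of thresholding noisy velocity estimates to label a tracked object as moving or not:

```python
# Sketch of the moving / not-moving classification discussed above. The 3 m/s
# threshold mirrors the one mentioned in the text; the rest (sampling, noise
# handling) is invented for illustration.

import statistics

MOVING_THRESHOLD_M_PER_S = 3.0   # one of the "many, many, many" hand-set parameters

def is_moving(velocity_samples_m_per_s) -> bool:
    """Label a tracked object as moving if its noisy speed estimate, averaged
    over recent samples, exceeds the hand-tuned threshold."""
    speed = statistics.fmean(abs(v) for v in velocity_samples_m_per_s)
    return speed > MOVING_THRESHOLD_M_PER_S

# A lurching vehicle whose average speed falls under the threshold is judged
# "not moving" -- roughly the failure mode in the Talos/Skynet collision.
print(is_moving([0.0, 4.0, 0.5, 3.5, 0.0]))   # False
```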

This brief look inside an early autonomous car’s code points to how deeply such “autonomous” cars are suffused with human judgment, in countless little details like the threshold we looked at, and in more profound ways, like their models of uncertainty. Recall our original picture of autonomy as a set of pipes that takes in sensory inputs and transforms them into goal-directed actions. It’s wonderful technology to behold, but the pipes and transformations are human designed.

In a vehicle like a car they can kill you.

Lawyers and legal scholars are just beginning to consider the liability problems involved in driverless cars. If your notion of autonomy is that the vehicle is making decisions on its own, then the chain of intention that constitutes liability might be broken. Who is responsible when your Google car drives you into a ditch? This is not just a matter of letting lawyers write contracts, but gets to the fundamental notion of autonomy: if the system is really working on its own, then how can it be the manufacturer’s fault when something goes wrong? (Some believe that traditional product liability will apply here just fine: if the company makes the product, they will be liable.) More practically, how does one certify as safe the software in an autonomous car?

The certification approach for software on life-critical systems like airliners is fairly robust, but cumbersome and expensive: rigorous testing, running through every possible piece of code at least once, careful control of changes. These standards also certify the human processes of planning, designing, and writing the code according to rigorous requirements, as well as quality assurance and managing upgrades once it’s released. But in their current form these procedures are not compatible with systems that claim full autonomy, where the number of possible courses of action verges on the infinite. What’s more, like synthetic vision systems in aviation, autonomous cars like Google’s must rely on high-integrity databases served with frequent updates. Miss last week’s update, and you could drive right into a construction site, or a snow pile.

How will we certify Google’s models of uncertainty and risk? Every automated path-planning algorithm contains some version of these unknowns. The planning works by optimizing “cost functions”— constantly asking, What is the least “costly” (in terms of time, energy, risk or some other variable) way to get from here to there? But the cost functions themselves embody human judgments about priorities. On one drive, with your kids in the backseat, you might like to drive conservatively; the cost function here should weigh safety more highly than speed. On another trip, perhaps you’re alone and in a hurry, and you’d like to push the performance a bit more, take on higher risk. Perhaps you’re running low on fuel and would like to raise the importance of fuel efficiency.

As a thought experiment, consider whether your autonomous car should have a knob on it labeled “risk.” Want to get home faster? Dial up the risk knob. The system drives more aggressively, you get home a little sooner, and an additional insurance premium is automatically debited from your account. (And what about the other drivers you’re putting at risk? Should you contribute to their insurance too?) Driving with your kids in the back? Dial down the risk knob and follow the traffic laws to the letter.
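
A hedged sketch of what such a cost function might look like, with the “risk knob” as an explicit, human-set weight; the terms, weights, and units here are my illustration, not any manufacturer’s actual planner.

```python
# Illustrative path cost: the planner prefers the candidate path with the
# lowest cost. Every weight below embodies a human judgment about priorities.

def path_cost(travel_time_s: float, energy_kwh: float, collision_risk: float,
              risk_knob: float = 1.0) -> float:
    """Weighted sum over time, energy, and risk for one candidate path.
    collision_risk is a unitless estimate in [0, 1]; turning risk_knob up
    tolerates more risk, turning it down penalizes risk more heavily."""
    TIME_WEIGHT = 1.0        # cost units per second
    ENERGY_WEIGHT = 50.0     # cost units per kWh
    RISK_WEIGHT = 10_000.0   # baseline penalty for risk
    return (TIME_WEIGHT * travel_time_s
            + ENERGY_WEIGHT * energy_kwh
            + (RISK_WEIGHT / max(risk_knob, 1e-6)) * collision_risk)

# Kids in the back seat: dial the knob down and risky paths become expensive.
print(path_cost(600, 1.2, 0.01, risk_knob=0.5))   # 860.0
# Alone and in a hurry: dial it up and the same path looks cheaper.
print(path_cost(600, 1.2, 0.01, risk_knob=2.0))   # 710.0
```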

Consciously or not, we make these decisions every time we get behind the wheel. In Google’s automated car, an engineer in a cubicle somewhere is making those decisions for you. If we’d like to make these decisions ourselves, then we’d need an interface. What does the map of your neighborhood look like when it’s highlighted for risk? When the 3-D clouds of autonomy are made visible?

These thought experiments lead us to an alternative approach to Google’s—rethinking, rather than eliminating, the driver. The enticing, shadowy pictures of the laser-scanned landscape that reveal the autonomous car’s internal models now become the basis of a new interface, and a new experience of driving. As my colleague Bill Mitchell used to say, “The dashboard should be an interface to the city, not an interface to the engine.”

Screen capture image of the representations inside an early driverless car. The circles emanating from the car represent the scans of a laser radar (LIDAR), which detects the road and trees alongside it. The algorithms then filter and classify these data to generate paths for the car to follow. Could this type of image be the interface for a new way of driving? (COURTESY EDWARD OLSON, MIT)

Gone are the tense, shoulder-tightening feedback loops of lane-keeping and speed control. Now we are in a supervisory role, one that commands high-level vehicle behaviors, but still allows time on the wheel. We use rich, sensor- and algorithm-enhanced models of the environment to move across a spectrum of automation moment by moment, driving into and out of clouds of autonomy and risk.

It will not be easy to get this right, but solving this problem has greater potential than putting our hopes into a utopia of full autonomy. We should be able to aid the elderly and disabled, enable other tasks (like texting or reading) while driving, and enhance safety, while still keeping central the value of human presence.

Google’s goals and rhetoric for their autonomous cars have changed regularly, and are likely to change again in a fast-moving public conversation. Their public statements likely reflect some difference of vision among the members of their engineering team. Yet the company’s cultural power is such that they have framed the debate, from state legislatures to car dealerships, and they are not alone. Both the National Highway Traffic Safety Administration and the Society of Automotive Engineers define levels of automation in cars that explicitly or implicitly embed the myth of linear progress in a progressive series that culminates in “full automation.” Neither group’s standards explicitly allow for hybrid modes where some tasks and not others might be highly automated, nor for information-rich automation that might keep the driver involved.

What the Google car proponents have missed, and many in the technology press have also overlooked, is that the liability, certification, and risk issues are not incidental “societal” issues that stand in the way of this otherwise independent invention. Rather, they are crucial parts of the question of autonomy, and crucially involved in framing the future of our robots and ourselves. It is not just a technical issue; it is about who is in control.

As examples of this alternative approach, I conclude with two projects in their early stages that seek to enhance users’ roles and awareness of the internal states of autonomy. Each seeks to engineer a human/machine team from the very beginning, rather than to design a highly automated machine to which a user must adapt.

An unmanned helicopter approaches a landing zone at high speed. It scans the terrain with a laser—much like the laser scanner on top of the Google car—that zips across the surface and takes thousands of measurements. The laser feeds a digital 3-D model of the topography being built inside the computer in real time. Inside, simulations fly flight paths through the model into the immediate future, applying sophisticated algorithms to the data. They identify where the terrain flattens out and determine where there is enough room, free of trees, wires, and obstacles for the helicopter to land. They analyze whether each area is flat enough to ensure the helicopter will not roll over when it touches down.
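
A minimal sketch of the kind of test being described (obstacle-free, flat enough, smooth enough) over a gridded elevation model; the grid format, thresholds, and checks are my assumptions, not the algorithms of the real system described below.

```python
# Illustrative landing-zone check over a small elevation grid (values in meters).
# Thresholds and representation are invented; the real algorithms are far richer.

import numpy as np

MAX_SLOPE = 0.15          # worst rise/run tolerated without risk of rolling over
MAX_RELIEF_M = 0.30       # allowable height variation within the footprint
CELL_SIZE_M = 1.0

def is_safe_landing_zone(elevation_patch: np.ndarray, obstacle_mask: np.ndarray) -> bool:
    """Return True if a candidate patch is obstacle-free, gently sloped, and smooth."""
    if obstacle_mask.any():                        # trees, wires, rocks seen by the laser
        return False
    dz_dy, dz_dx = np.gradient(elevation_patch, CELL_SIZE_M)
    worst_slope = np.sqrt(dz_dx**2 + dz_dy**2).max()
    relief = elevation_patch.max() - elevation_patch.min()
    return worst_slope <= MAX_SLOPE and relief <= MAX_RELIEF_M

# A gently sloping 5 m x 5 m patch with nothing sticking up out of it:
patch = np.outer(np.linspace(0.0, 0.2, 5), np.ones(5))
print(is_safe_landing_zone(patch, np.zeros((5, 5), dtype=bool)))   # True
```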

Autonomous helicopter of the Autonomous Aerial Cargo/Utility System (AACUS) program scanning, selecting, and approaching a landing zone in a mountainous area to deliver cargo or perform medical evacuation. The landing support specialists on the ground negotiate the landing zones with the vehicle through an iPad interface and a radio link. (COURTESY AURORA FLIGHT SCIENCES)

As the helicopter approaches the landing zone, trees block its view. Relying on the laser scanner, the computer finds a path a little to the right where there is a gap between the trees, commands a line through it, and the vehicle lands.

This scenario describes a demonstration from a real project, a full-size autonomous cargo helicopter built by Aurora Flight Sciences, a Manassas, Virginia-based maker of unmanned aircraft, as part of a government research program. It was demonstrated in flight in February 2014. The program’s goal is to send unmanned helicopters into hazardous areas to deliver cargo, and potentially evacuate casualties, with no risk to pilots. I was part of this project team and designed its architecture for human interaction with autonomy. The idea was to consider the autonomous system as part of a human/machine team, not only when designing the interface, but when designing the core algorithms too.

After the demo, the Wall Street Journal reported on the project with the headline: “Navy Drones with a Mind of Their Own.” In the press, it seems, the myth of full autonomy is alive and well.

Yet where were the people? Everywhere, it turns out. Because this was a prototype, a safety pilot sat on board, his arms folded, watching the computer fly but ready to take over and kick it out of the loop at a moment’s notice. A mature fielded system would ideally eliminate the safety pilot altogether. But the safety pilot may turn out to be not as redundant as we think—he or she may just be doing a new kind of job.

More important, when the helicopter lands, a person waits in the landing zone. After all, what good is a cargo delivery without someone, or a group of people, to unload, unpack, and consume the stuff? Those people will need to have courage and a great deal of trust in the machine, for standing in a field with a large helicopter under software control bearing down on you is not a relaxing place to be. Our team interviewed dozens of people, known as landing zone specialists, who do this for a living with piloted helicopters. Most had experience in Iraq and Afghanistan, and a number recalled looking up into the sky and seeing unmanned vehicles flying around. They had the uncomfortable feeling of not knowing to whom the vehicles belonged, or what missions they were performing. The last thing these people wanted working with them in a war zone was a “drone with a mind of its own.” What they wanted was a reliable partner that would do as it was told.

This meant the people in the landing zone had to be able to reject the landing. So we gave the landing zone specialist an iPad. With a few minutes of training he or she could interact with the helicopter in a brief negotiation. The human would suggest a landing area; the computer might reject it if the zone did not meet its safety constraints. The computer would then offer a few alternatives. The human could either pick an alternative or command the helicopter to wave off.
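
A rough sketch of the shape of that negotiation; the function names, data, and constraint check are invented for illustration, and the real interface and software were, as noted below, far harder to get right.

```python
# Illustrative sketch of the human/autonomy landing-zone negotiation.
# Names, data structures, and the constraint check are invented for this example.

def negotiate_landing(human_choice, safe_alternatives, meets_constraints, ask_human):
    """human_choice: the zone the specialist suggests from the iPad.
    safe_alternatives: zones the autonomy has already judged safe.
    meets_constraints(zone): the autonomy's safety check for a suggested zone.
    ask_human(alternatives): returns a chosen zone, or None to wave off."""
    if meets_constraints(human_choice):
        return human_choice                      # accept the human's suggestion
    return ask_human(safe_alternatives)          # offer alternatives; None means wave off

# Example: the suggested zone fails the safety check, so the human picks an alternative.
result = negotiate_landing("LZ-A", ["LZ-B", "LZ-C"],
                           meets_constraints=lambda zone: zone != "LZ-A",
                           ask_human=lambda alternatives: alternatives[0])
print(result)   # LZ-B
```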

As it turned out, designing this negotiation, with an interface and software system to execute it in under a minute's time and a set of internal states for the autonomy that would be comprehensible to the human, proved to be among the most challenging parts of the program.

Yet the helicopter’s successful demonstration raised a question for human pilots: if we can add a scanner and algorithms good enough to identify landing zones, wouldn’t a human pilot want that too? As the program moves forward, it may well involve adding these autonomous capabilities to a piloted helicopter as well as to an unmanned one. Medical evacuation helicopters in our own communities, for example, land in uncertain environments under extremely demanding conditions. We are beginning to explore how these novel sensors and algorithms may help their human pilots achieve higher levels of performance and safety.

Some of these ideas are already coalescing into the new notion of optionally piloted aircraft, or OPAs. Aviation Week & Space Technology, the industry's leading magazine, has been publishing its Pilot Reports on new aircraft for decades. Its 2012 pilot report on an aircraft called Centaur was the first in which the pilot doing the test never touched the controls.

The Centaur optionally piloted aircraft (OPA), a highly modified twin-engine general aviation aircraft built by Aurora Flight Sciences. The Centaur can be flown by a pilot in the front seat, remotely from a computer interface on the ground, or from the same computer interface from the backseat. (COURTESY AURORA FLIGHT SCIENCES)

The reporter conducted the test while sitting in the backseat of the small, twin-engine aircraft. Up front sat a person acting as the safety pilot, his arms calmly resting on his lap. Resting beside him, in what is ordinarily the copilot’s seat, was an engineered series of linkages, actuators, and servos. The safety pilot pulled a lever to engage the mechanisms, and they began moving the pilot’s control stick and pressing the rudder pedals. The actuators are double and redundant; if one set fails, another will immediately take over. The safety pilot can disengage the mechanism with a single pull of the lever if something goes wrong; otherwise he does not touch the controls.

In the backseat, the “operator” commands the plane through a laptop, using an interface identical to that of the ground control station for an unmanned vehicle. Through the screen, he can change altitude, fly to waypoints, take off, or land. Pushing the “launch” button begins an autonomous takeoff. The computer holds the brakes, pushes the throttles forward, checks the engines and instruments, and releases the brakes for the takeoff roll. The plane accelerates, takes to the air, and begins to climb out on a semi-autonomous flight.
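The launch sequence reads almost like a checklist, and can be sketched as one. The `aircraft` interface and its methods below are hypothetical, not Centaur's flight-computer API, and the rotation speed and climb altitude are purely illustrative numbers.

```python
def autonomous_takeoff(aircraft):
    """The 'launch' button sequence described above, as an assumed sketch."""
    aircraft.hold_brakes()
    aircraft.set_throttle(1.0)                # full power against the brakes

    # Run-up checks before committing to the takeoff roll.
    if not (aircraft.engines_nominal() and aircraft.instruments_nominal()):
        aircraft.set_throttle(0.0)
        aircraft.release_brakes()
        return False                          # stay on the ground

    aircraft.release_brakes()                 # begin the takeoff roll
    aircraft.rotate_at(speed_kts=75)          # illustrative rotation speed
    aircraft.climb_on_flight_plan(to_altitude_ft=3000)
    return True
```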

As an OPA, Centaur—named after the mythical half-human, half-horse creature—can fly in a normal mode under pilot control, as a certified aircraft as though fresh from the factory. But it also can be flown in semi-automated mode, as in the test when the reporter issued high-level commands through the laptop in the backseat. Or it can be flown unmanned, with the same computer interface operating on the ground through a radio link.

Engineers at Aurora made Centaur by converting a commercially available aircraft, the Diamond DA-42, to this OPA. They added the mechanism in the front seat and a host of independent electronics, and digitized the flight manuals and emergency procedures into computer algorithms.

Unmanned aircraft are currently mostly illegal in U.S. national airspace, but Centaur is certified by the FAA to fly in its unmanned mode if a safety pilot is aboard. Hence the aircraft can help develop sensors, algorithms, and procedures for unmanned systems. It has been flown with a pilot aboard to Alaska, where it has permits to operate unmanned to collect data for climate research.

Centaur employs a transitional technology, suitable for flight testing and engineering development while regulations and techniques are worked out. Within it, however, are the seeds of a new way of piloting, for eventually, even in U.S. airspace, the safety pilot's job may be transformed into one of operating through the laptop interface.

Are we moving into a future of airliners without pilots? Probably not for the foreseeable future, but Centaur shows us how much of the technology that makes that possible exists today. In a technical sense, the automation to taxi an airliner, command a takeoff, follow a route, and autoland is all well proven. The unmanned-aircraft problem appears to have been solved—but only when divorced from its human context.

Again, we have learned that fully autonomous operation is the lesser problem. We can say that the technology exists today, but for the innovation to have social importance and contribute to human welfare requires not only the machines but also the social, political, and economic systems that situate them in human life and enhance our practices and our experiences—a much more open problem.

Airliners need to be safety certified so that they don’t fall on people. They need to provide not only statistical safety for passengers, but also the experience of safety. They need to operate not only in every emergency we can think of, but in nearly all of the emergencies we can’t think of. This is why, when placed within human settings of reliability, risk, liability, and trust, the unmanned aircraft problem, like the driverless car problem, has not been solved. Building trust in such systems will require years of demonstrations, operations, and smart engineering to prove reliability.

More likely, and certainly sooner, we will begin to see airliners operating with reduced crews. Long-haul flights routinely bring three or more pilots along, to trade off duties even during the dull hours en route. Reducing the number of people required will have a direct impact on costs, if it can be done at an equivalent level of safety.

The trouble is, large airliners are not certified for single-pilot operation. But as expensive pieces of capital equipment they will be with us in their current forms for decades to come. So to reduce the crews we'd have to build add-in devices that can interact with the airplane as a human does now, and interact with the remaining pilot in an effective way. Were we to replace a copilot with a machine during a long flight, the machine might need only to monitor for emergencies and handle problems long enough to wake a napping pilot and make him or her aware of the situation. Such a device, of course, would have to pass the "Air France 447" test of a sudden, challenging alarm—but people have not always passed that test either. (The recent Germanwings crash highlights the challenging social problem of how to safeguard against suicidal pilots.)

The Defense Advanced Research Projects Agency, which also sponsored the early driverless car competitions mentioned earlier, has laid down the challenge of how to add automation to any aircraft quickly and cheaply. If one were to build such a technology, it might support a pilot in a variety of other ways as well—performing routine tasks, looking up procedures, drawing on a database of past events, offering suggestions to improve performance. Avoiding the myth of replacement, it would not simply do the copilot's job but would rather depend on a new division of labor between the pilot and the assistant, allowing each to play to their strengths. Thinking this problem through, and building a prototype, is part of a research program sponsored by DARPA; the result may be a system that can serve in this copilot role, but it could also turn any aircraft into an unmanned or remotely piloted vehicle.

The idea is to build some kind of device that will sit in the copilot's seat and interact with the cockpit as a human does—by reading the instruments on the panel with machine vision, grabbing hold of the flight controls to "fly" the aircraft, and even flipping switches and grabbing levers around the cockpit. The project is called the Aircrew Labor In-Cockpit Automation System (ALIAS).

ALIAS has multiple, overlapping goals. Probably the most far out is to turn any aircraft—including a large transport aircraft or a helicopter—into an unmanned aerial vehicle, with minimal invasiveness to the existing certified aircraft. With the right data link, an ALIAS-enabled aircraft could be flown remotely. A more immediate goal is for ALIAS to serve as a "pilot's assistant," helping with routine tasks during high-workload periods (much as the small robot R2-D2 assisted Luke Skywalker in Star Wars). Then the human pilot could operate the entire flight by commanding tasks on an iPad-like interface from the front seat.

The pilot’s assistant could enable flying long-duration flights with reduced crews. Rather than taking the place of the copilot, ALIAS entails rethinking the relationship between pilot and copilot—rethinking the basic task of flying an airplane. Long before we see fully automated airliners, we will see flight enabled by ALIAS-like solutions, likely at first on long-duration cargo flights over water.

I am part of Aurora’s team that won a contract to design and build ALIAS. ALIAS will use machine-learning techniques to adapt to a new aircraft, machine vision to see and interpret cockpit displays, and some form of robotic actuators to move the controls and flip the switches. It will need training to adapt to new aircraft, which includes the basic flight manuals and procedures but also involves “watching” a human pilot fly the aircraft and gathering data on how the person performs. It will seek to encode the techniques of the most qualified pilots.

The major challenge of ALIAS, however, is not the algorithms but the cooperation with human partners. As we've learned from commercial airliners, the last thing any cockpit needs is another black box with a complex interface and unclear internal states to confuse a pilot. The challenge, then, is to rethink pilots' tasks in a way that raises the level of abstraction at which they work without adding undue complexity, which of course requires rethinking the role of the pilot.

A pilot may fly in a manned mode one day, in a remote mode the next day, and autonomously on still another. Moreover, the human pilot may choose to let ALIAS fly for one hour and intervene manually in the next. ALIAS-like systems might even allow the pilot to sleep on long flights, provided one can demonstrate the machine’s ability to handle the aircraft in an emergency for long enough to wake up the pilot, inform him or her of the situation, and enable him or her to intervene as necessary. Allowing the pilot to nap is proven to reduce fatigue and improve performance, but will require us to revise our notion of the pilot as the heroic, always-on, fully attentive operator to that of an extremely capable human with cognitive and physiological limits.

ALIAS is high-risk and filled with uncertainty, as all DARPA programs are, and it is unclear whether (and how) it might eventually be useful. But as a research program, a kind of advanced thought experiment, it makes explicit what we have been learning for decades: far from a linear progression from human to remote to autonomous aircraft, what we are seeing is a convergence. Human, remote, and autonomous are evolving together, blurring their boundaries.

Autonomy in the Human World

The idea of autonomy denies that we are born into a world that existed prior to us. It posits an essential aloneness; an autonomous being is free in the sense that a being severed from all others is free. To regard oneself this way is to betray the natural debts we owe the world, and commit the moral error of ingratitude. For in fact, we are basically dependent beings, one upon the other, and each on a world that is not of our own making.
