Line-Drawing Exercises: Autonomy and Automation

An existential worry lies beneath concerns about the future of AI. We as humans fear that we will lose our autonomy as we pursue automation with such abandon. It feels urgent that we examine what we care most about in humanity as we race to develop the science and technology of automation.
Published: Feb 05, 2018 · DOI: 10.21428/cb377052

A Matter of Design

Debates about the future of artificial intelligence evoke strong feelings.  The promise of solving some of the knottiest human-created and human-related problems through enhanced intelligence is exciting.  The prospect, say, of ending the threat of climate change to our ecosystem is downright thrilling. The ability to develop a raft of new drugs and to understand when to apply them, along with other therapies, will transform human lives for the better. Businesses see greater efficiencies, new product lines, and safer workplaces on the horizon, giving rise to giddy projections of growth and higher corporate profits.  These are all plainly good things that ought to come out of our drive for more sophisticated machine learning, automation, and other strands of artificial intelligence in the coming decades.  

On the flip side, we worry—with reason—about other effects of the relentless pursuit of artificial intelligence: machines that will surpass human intelligence and perhaps take over the earth tend to top the list.  Even if this ultimate fear may be far-fetched or at least far-off, nearer-term concerns about AI occupy our minds today.  AIs used to assist in making decisions based on large data-sets in the criminal justice system may perpetuate aspects of our racist history and present.  AIs that help us know what to read or listen to may lead to the proliferation of fake news, pull us further apart from one another in the public sphere, and reward the haves over the have-nots even more than in the past.  Many types of jobs, particularly for the working class, are already disappearing and others will soon follow.  There is no doubt that the current race to build and implement AIs is—in the first instance—serving the large, entrenched technology giants such as Amazon, Apple, Google, and Facebook.  The rich are getting richer once again, perhaps for good this time.  The future of work for those who cannot build AIs looks more uncertain than ever.

An existential worry lies beneath these concerns about the future of AI. We as humans fear that we will lose our autonomy as we pursue automation with such abandon. One near-term example: mobility. There is an irony, and perhaps a lurking tension, to the fact that freedom- and car-loving Americans are leading the race to develop autonomous vehicles. While today’s 16-year-olds are still learning to drive, it seems fairly certain that their children will not participate in the same kind of driver’s education when it is their turn to seek a license. Perhaps a fossil-fuel-burning car, driven by an actual human, will become a luxury item for which the rich will pay high taxes for the right to pollute the environment and take more risk on the roads. We hurtle in the direction of self-driving, cleaner vehicles because they will be safer and will diminish the amount of pollution released into the atmosphere, perhaps ensuring a longer lifespan on earth for all humans. In doing so, we plan to give up a form of autonomy many love: getting behind the wheel of a car we have paid for, that we own, and that we enjoy taking out on the open road. The story told by the Jack Kerouacs of the future will have a very different feel.

In pursuing automation with such abandon, we threaten our own autonomy as humans.  

It feels urgent that we examine what we care most about in humanity as we race to develop the science and technology of automation. There are zones of human experience that we do not want to cede to machines—even machines that we ourselves have trained. Perhaps it is fine for a robot to collect the crumbs from under our living room tables or, better yet, scrub the toilets; perhaps we welcome ideas for the next show we might binge-watch. No doubt there is moral force to the techniques for poverty reduction that AIs offer. But it seems highly likely that most humans will wish to retain autonomy in other spheres. Art, music, theatrical performance, human relationships, and love all seem like good candidates to be reserved for human autonomy. At least from today’s perspective, the idea of watching and hearing Yo-Yo Ma play Tchaikovsky on the summer stage at Tanglewood in Western Massachusetts is more appealing than hearing an AI play a technically extraordinary rendering of the same piece, even through the world’s best pair of headphones. We cheer and delight as Kevin Olusola of Pentatonix plays an enhanced version of a Bach cello suite to which he has added his own glorious beat-boxing overlay. A computer doing the same thing would not be the same. These seem, at least for now, easy cases of where humanity should persist.

For the purposes of this argument, stipulate that there are cases in which it is fine for automation to proceed (vacuuming rugs) and instances in which we would have real qualms if humans were to be replaced (playing cello suites). Set aside, too, the libertarian argument that we should place no constraints on innovation. Imagine instead that we can and should, as a society, decide in advance where a high degree of automation, supported by various flavors of AI, would be most welcome and where it would be undesirable.

As a matter of design, we will reach hard cases, and soon, when it comes to certain fields of human endeavor. Consider domains such as health care, education, and law. Each of these domains (among many others) will give rise to tricky design decisions in the near term. If we think computers can do a better job teaching our students mathematics, at what point do we turn instruction over to them? If AIs can help us do a better job preparing more young people for a more complex, automated, interconnected world, should we put them to work? If AIs can help to limit infant mortality, will we press them into service? To the extent that AIs can assure us of a more equitable set of outcomes in legal disputes, will we continue to rely only on human judges and juries to render decisions?

While many examples could serve, let’s use health care to help determine the types of decisions for which it makes sense to deploy AI and those for which it might reasonably be resisted.

The Promise of AI for Health Care

Imagine a patient suffering from a terrible, painful, chronic disease, untreatable today and certain to bring about an early death. The patient is someone you love: a relative for whom you would do just about anything to make them better.

In his opening manifesto, Joi Ito writes of the believers in the Singularity, a threatening “new religion”:

“To them, this wonderful tool, the computer, has worked so well for everything so far that it must continue to work for every challenge we throw at it, until we have transcended known limitations and ultimately achieve some sort of reality escape velocity.” 

The problem of this life-shortening disease is among those that AI promises to help solve.  This challenge—to save a life—is among the first of the many challenges being thrown at this extraordinary new system of AI that has so many people worried.

AI might help this patient you care about in a wide range of ways—to be determined, but promising all the same. One form of this promise has to do with research into specific diseases. The combination of what until recently was fashionably called “big data” with increasingly powerful computers offers the ability to model all sorts of ways to address the challenge of curing this disease. There are, today and into the future, larger and larger sets of epidemiological data available to researchers. Bench scientists can compile terabytes’ worth of research by other scientists—including their published articles and unpublished, messy information—into a data set that a computer can help to analyze. Data from clinical trials, regulatory filings, and patient records can be brought together into an intelligible format and rendered interoperable.
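
To make the idea of rendering such data interoperable slightly more concrete, here is a minimal sketch in Python using pandas; the file names, column names, and shared key are all invented for illustration, not drawn from any real system.

```python
import pandas as pd

# Hypothetical sources: trial results, regulatory filings, and patient
# records, each keyed on a shared (invented) compound identifier.
trials = pd.read_csv("clinical_trials.csv")      # compound_id, outcome, ...
filings = pd.read_csv("regulatory_filings.csv")  # compound_id, approval_status, ...
records = pd.read_csv("patient_records.csv")     # compound_id, adverse_events, ...

# Once the sources share a key and a format, they can be joined into a
# single table that an analysis, or a model, can query.
merged = (
    trials
    .merge(filings, on="compound_id")
    .merge(records, on="compound_id")
)
print(merged.head())
```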

It is hard to imagine that the use of artificial intelligence to query such a growing set of data would be considered harmful or undesirable. The likelihood that a biotech start-up or major pharmaceutical firm could cure this dreaded disease rises; the likelihood that the patient gets a treatment that improves her quality of life, or lengthens her life, rises in turn. Surely the researcher, the owners of the biotech firm, and the firm’s shareholders become wealthier too—most would call this economic growth a positive outcome as well, spurring job creation and consumer well-being as a byproduct of this successful cure.

Artificial intelligence could help the patient through another route. A second theory has less to do with bench scientists sifting through an unthinkable amount of data to come up with a new therapeutic approach and more to do with considering the problem from the viewpoint of the patient herself. One flavor of the Singularity involves the effects of fusing physical, digital, and biological data into a mass that might be understandable to powerful computers.

Imagine this patient’s life before the onset of this particular disease. From even before she is born, data are collected about her in discernible formats. Her complete genomic sequence is chief among these data. It is combined with relevant data from both her parents’ health histories. A complete charting of her vital statistics over time is fused with these core elements of her health file. Real-time data from her FitBit feed into this record: her steps, her heart rate every few hours, her weight when she steps on her connected scale. If she is diligent, perhaps she has recorded what she has eaten and exactly what, and how much, she drinks. The comments of her doctors at periodic office (or online) visits complement these other data.
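
A minimal sketch of what such a fused health file might look like as a data structure, in Python; every field name here is hypothetical, chosen only to mirror the sources named above, and a real record system (one built on FHIR, for instance) would be far richer.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HealthRecord:
    """Hypothetical fused health file; all field names are illustrative."""
    patient_id: str
    genome_sequence_uri: str              # pointer to her complete genomic sequence
    parental_history: list[str]           # relevant conditions from both parents
    vitals: dict[date, dict[str, float]]  # vital statistics charted over time
    # Real-time wearable readings, e.g. ("2018-02-05T08:00", "heart_rate", 62.0):
    wearable_feed: list[tuple[str, str, float]] = field(default_factory=list)
    diet_log: list[str] = field(default_factory=list)          # self-reported food and drink
    clinician_notes: list[str] = field(default_factory=list)   # notes from office visits
```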

Assume, too, that these data are held by a trusted third party, in carefully encrypted formats, along with similar data on millions of other people. These rivers of data join together into small pools associated with individuals and into giant oceans when combined with the data about many others. Set aside for a moment worries about privacy violations and hackers—real concerns, to be sure, but not the focus here. For this patient, the risks and trade-offs associated with the creation and existence of such a file full of highly personal data are worth it.

The trade-offs for this patient are worth accepting because artificial intelligence allows her personal physician to predict the onset of this disease and to take steps to avoid it. The AI that runs over all these data has found a series of risk factors in her particular combination of genomic, behavioral, and other information. A change to her lifestyle or the use of a particular therapy could stall the onset of the disease or enable her to avoid it altogether.
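
As a toy illustration of the kind of risk model at work here, the sketch below trains a logistic regression on synthetic data using scikit-learn; the features, weights, and data are all invented, and a real clinical model would demand validated data, careful calibration, and regulatory review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1,000 synthetic patients, three invented features:
# a genomic marker, resting heart rate, weekly activity level.
X = rng.normal(size=(1000, 3))

# Invented ground truth: the marker raises risk, activity lowers it.
y = (1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A new patient with a worrying marker and little activity.
new_patient = np.array([[0.9, 0.1, -1.5]])
print(f"Predicted onset risk: {model.predict_proba(new_patient)[0, 1]:.0%}")
```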

From this patient-centric viewpoint, the use of large data-sets, and AI running over them, might once again greatly improve or substantially extend her life. She in turn would give more joy to others and produce more in her life, contributing to the economy; maybe she is herself one of the scientists coming up with life-saving drugs or finding ways to share these therapies with needy patients around the world.

In both of these scenarios—one focused on new drug development, the other on the use of patient-specific data—the AI involved seems to play a wholly positive role. It is hard to imagine anyone making a serious argument that the AI should not be developed or used in this particular fashion. (An argument against the creation, storage, and amalgamation of the individual health data is another matter—and could very credibly be launched. Though it is outside the scope of this paper, we need a serious discussion of how to protect personal data in the already exploding market for health-related data in the United States, a market that is regulated in only a haphazard fashion.)

There are more or less endless relatively untroubling applications of AI in this health care setting. Another highly plausible notion: the advent of human-computer interactions at the point of diagnosis and treatment evaluation. It is easy to imagine doctors in the relatively near future practicing a form of “computer-assisted” medicine, in which diagnoses are suggested based upon the extraordinary amount of data collected and shared about past and current patients. It is hard to imagine a regulatory regime that would disallow physicians from using this superior decision-making process to help their patients. On the contrary, one could imagine a world in which it would be malpractice to ignore the advice of an AI that had a high degree of certainty about an outcome.
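
A small sketch of this “computer-assisted” pattern, in which the AI proposes and the clinician disposes; the confidence threshold and condition names below are invented for illustration.

```python
# The AI suggests; the physician remains the decision-maker.
CONFIDENCE_THRESHOLD = 0.90  # hypothetical bar for surfacing a suggestion

def suggest_diagnoses(predictions: dict[str, float]) -> list[str]:
    """Return diagnoses the model is confident enough to surface,
    sorted by confidence, for clinician review."""
    confident = [(p, dx) for dx, p in predictions.items() if p >= CONFIDENCE_THRESHOLD]
    return [dx for p, dx in sorted(confident, reverse=True)]

# Invented model output for one patient encounter.
model_output = {"condition_a": 0.97, "condition_b": 0.41, "condition_c": 0.93}
for dx in suggest_diagnoses(model_output):
    print(f"AI suggests {dx}; awaiting physician confirmation.")
```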

From a design perspective, we should seek to determine the point at which concern about this Singularity might creep into this scenario.  On the far end of the spectrum, how would we feel if no person was involved in the treatment and decision-making for this patient at all?  What if the entire process—data collection, data storage, data analysis, explanation of the data to the patient, determination of the course of treatment—were all performed by the AI?  Some might make the case that the AI would do a much better job than any physician on earth at these functions.  A regulatory regime might even mandate that the AI’s decisions are final: if the AI says that the patient can be saved a certain way, then it should be so; if the AI says the cost of saving this patient is too high, then she shall not get the treatment required and will die in the natural course.  All of a sudden, the AI scenario sounds a lot less appealing.

The job of design in law is often a matter of line-drawing.  In between two scenarios, where should a line be drawn?  When two people have competing claims, who should expect to prevail?  How should rules be set to determine the outcome when two systems, ideas, or processes come into conflict?

At the outset, the line should be drawn somewhere between the use of AIs to create better life-saving drugs and the turning over of the entire system of health information to machines to render all of our decisions about individual and collective care. Three of the many concerns about these types of decisions stand out: trust, process, and integrity.

A lack of trust holds us back from embracing a world of machine-driven decision-making. We have yet to see the way in which machines would make these types of decisions. Would they do so in a way that we would consider “humane”? Would those who have designed these machines seek profit over fairness, justice, and other societal values? The issue is not so much whether we doubt that machines could be trained to make more consistent decisions about human health in the aggregate, but whether we would trust, in an individual case or overall, that the choices would be humane. As Joi Ito points out in his manifesto, the companies racing ahead in developing AIs—including in health care—are run by those who believe in the Singularity and are pursuing enormous profits along the way.

Another concern has to do with process. When humans make decisions about something such as the health care of another person, we expect the ability to check that decision via a second opinion. And ultimately, the patient must make an informed choice to undertake a particular course of treatment. If the health care decision-making process were developed with no such procedural elements, there is no chance that it would meet with approval anytime soon. The proponents of the Singularity might wave away these objections. After all, couldn’t the machines be designed in such a way that their decisions would be open to review and reversal? There might even be multiple designs of machines with different sets of skills, just as humans have different skills.

A third (and far from final) concern has to do with integrity. One of the early fears about the data-driven, machine-oriented world we are hurtling toward is that the decisions to be made will not be just. The way that we design our decision-making processes in computers is certain to replicate our own biases. Our history and our present are riddled with examples of our biases getting in the way of some people, and some groups of people, enjoying their lives to the fullest. A primary difficulty with the Singularity is the fear of reducing these blind spots into code that will make an infinite series of life-altering decisions.

There is of course a recursive quality to this concern. If we are worried about the blind spots that we build into computers, shouldn’t we be worried about the blind spots of our current doctors, health administrators, and health policy makers? We should be. The inefficiency of the current system seems here to be a feature, not a bug: the likelihood that a particular bias is systemic and affects everyone is low today, while a bias would be universal in a machine-driven system. Part of what is holding us back is that a machine-driven system might be more efficient and more accurate, but it would not have the escape valves we associate with human imperfection.

These design decisions are not far out in the future. They are close upon us. Health care offers a window into the challenges we will have to address quite soon. The use of AIs must be trustworthy, subject to a reasoned governance process, and carried out with integrity. The end must be just, as with any other decision-making process.

Joi Ito turned in his manifesto to the idea of art—specifically, music—as a way to explain his concerns about the Singularity. Perhaps another art form, visual art, could help too. When looking at a painting—say, a version of Georges Seurat’s “A Sunday on La Grande Jatte” (1884)—the image famously makes little sense at close range. As the observer steps back a few feet, the picture hangs together with beauty. In making choices about where AIs should be entrusted with life-and-death decisions, we may be standing too close to the picture if we look only at individual points of decision. As we make these design choices about the proper place and role of AIs in society, we must find the right distance from which to gaze at the image.
