
How To Become A Centaur

Essay Competition Winner | The old story of AI is about human brains working against silicon brains. The new story of IA will be about human brains working with silicon brains. As it turns out most of the world is the opposite of a chess game: Non-zero-sum—both players can win.

Published on Jan 08, 2018
Garry cringed, like someone had just spit in his breakfast. Pawn to f5. Blue remained silent, like it had just spit in someone else’s breakfast. Rook to e7: taking Garry’s queen. This was Game 6, but Garry had already lost his nerve when Blue beat him at the end of Game 2, and they had been drawing ever since. Garry made the move that would be his last. Bishop to e7: taking the rook that took his queen. Blue responded. Pawn to c4. Garry quickly recognized this was a set-up for Blue to invade with its queen — and knew there was no hope after that.

Garry Kasparov resigned, in less than 20 moves. On May 11th, 1997, IBM’s Deep Blue became the first AI to beat a human World Chess Champion.

You can now download a chess-playing AI better than Deep Blue on your laptop.

From the ESPN documentary, The Man vs The Machine: the moment Garry Kasparov shrugged his shoulders and walked away.

The Story of AI

Here’s the story we’ve been telling ourselves about AI for decades: it’s man versus machine, creators versus their creation, a ball of wrinkly meat versus a smooth block of silicon. Whether it’s our immediate worries about AI (machines stealing your job, self-driving cars making deadly mistakes, autonomous killer drones) or the more far-fetched concerns about AI (taking over the world and turning us all into pets and/or paperclips), it all comes from the same root fear: the fear that AI will not share our human goals and values. And what’s worse, we’ve told ourselves that the relationship between ourselves and our AI is like a chess game:

Zero-sum — one player’s win is another player’s loss.

Garry demanded a rematch. He accused IBM’s humans of secretly helping out Blue, and besides, this match he’d lost in 1997 was a rematch after he’d decisively beaten Deep Blue in 1996. Another rematch would only be fair.

IBM said no. They killed Blue, then packed up and went home. (RIP Deep Blue, 1989-1997.)

However, Garry couldn’t help but imagine: what if a human did work together with an AI? The next year, in 1998, Garry Kasparov held the world’s first game of “Centaur Chess”.1 Similar to how the mythological centaur was half-human, half-horse, these centaurs were teams that were half-human, half-AI.

But if humans are worse than AIs at chess, wouldn’t a Human+AI pair be worse than a solo AI? Wouldn’t the computer just be slowed down by the human, like Usain Bolt trying to run a three-legged race with his leg tied to a fat panda’s? In 2005, an online chess tournament, inspired by Garry’s centaurs, tried to answer this question. They invited all kinds of contestants — supercomputers, human grandmasters, mixed teams of humans and AIs — to compete for a grand prize.2

Not surprisingly, a Human+AI Centaur beats the solo human. But — amazingly — a Human+AI Centaur also beats the solo computer.

This is because, contrary to unscientific internet IQ tests on clickbait websites, intelligence is not a single dimension. (The “g factor”, also known as “general intelligence”, only accounts for 30-50% of an individual’s performance on different cognitive tasks.3 So while it is an important dimension, it’s not the only dimension.) For example, human grandmasters are good at long-term chess strategy, but poor at seeing ahead for millions of possible moves — while the reverse is true for chess-playing AIs. And because humans & AIs are strong on different dimensions, together, as a centaur, they can beat out solo humans and computers alike.

But won’t AI eventually get better at the dimensions of intelligence we excel at? Maybe. However, consider the “No Free Lunch” theorem, which comes from the field of machine learning itself.4 The theorem states that no problem-solving algorithm (or “intelligence”) can outdo random chance when averaged across all possible problems: instead, an intelligence has to specialize. A squirrel intelligence specializes in being a squirrel. A human intelligence specializes in being a human. And if you’ve ever had the displeasure of trying to figure out how to keep squirrels out of your bird feeders, you know that even squirrels can outsmart humans on some dimensions of intelligence. This may be a hopeful sign: even humans will continue to outsmart computers on some dimensions.

Now, not only does pairing humans with AIs solve a technical problem — how to overcome the weaknesses of one with the strengths of the other — it also solves a moral problem: how do we make sure AIs share our human goals and values?

And it’s simple: if you can’t beat ‘em, join ‘em!

The rest of this essay will be about AI’s forgotten cousin, IA: Intelligence Augmentation. The old story of AI is about human brains working against silicon brains. The new story of IA will be about human brains working with silicon brains. As it turns out, most of the world is the opposite of a chess game:

Non-zero-sum — both players can win.

In the next few sections, I’ll talk about the past, present, and possible future of IA — how we humans have built tools to amplify our intellectual strengths, and overcome our intellectual weaknesses. I’ll show how humans are already working with AIs in various fields, from art to engineering. And finally, I’ll give some rough ideas on how you can design a good partnership with an AI — how to become a centaur.

Together, humans and AI can go from “checkmate”, to “teammate”.

The Story of IA

Doug Engelbart taped a brick to a pencil, and tried to write with it.5 He sure knew how to use his Cold War military research money.

Historic photo from the Doug Engelbart Institute.

In 1962 — decades before Garry Kasparov played chess with centaurs, years before the early internet was invented, even a while before the first supercomputer — Doug Engelbart was investigating how our tools shape our thoughts. At the time, most of Doug’s peers just saw computers as a way to crunch numbers faster. However, he saw something deeper: he saw a way to augment the human mind.

Not that humans augmenting their own abilities is anything new. We don’t have claws or fangs, so our ancestors augmented their physical abilities with spears and arrows. We don’t have large working memories, so our ancestors augmented their cognitive abilities with abacuses and writing. And these tools didn’t just make human lives easier — they completely changed how humans lived. Writing especially: it wasn’t “just” a way to record things, it led to the creation of mathematics, science, history, literary arts, and other pillars of modern civilization.

That’s why Doug tied that brick to a pencil — to prove a point. Of all the tools we’ve created to augment our intelligence, writing may be the most important. But when he “de-augmented” the pencil, by tying a brick to it, it became much, much harder to even write a single word. And when you make it hard to do the low-level parts of writing, it becomes near impossible to do the higher-level parts of writing: organizing your thoughts, exploring new ideas and expressions, cutting it all down to what’s essential. That was Doug’s message: a tool doesn’t “just” make something easier — it allows for new, previously-impossible ways of thinking, of living, of being.

Doug Engelbart chased this dream for several years, and on December 9th, 1968, showed the world a new computer system that brought the idea of intelligence amplification to life. This event is now known as The Mother of All Demos,6 and it’s a fitting title. For the very first time, the world saw: the computer mouse, hypertext, video conferencing, collaborative work in real-time, and so much more, in — let me remind you — 1968. That was 16 years before the Apple Macintosh, 35 years before Skype, and 38 years before Google Docs.

Stills from The Mother of All Demos, presented in San Francisco by Doug Engelbart (right) in 1968.

Over the next few decades, the wonders in The Mother of All Demos slowly reached the public. The personal computer gave ordinary people the power of computing, something only governments and big corporations could afford previously. A particle physics lab in Switzerland released a little thing called the “World Wide Web”, which let people share knowledge using things called “web pages”, and people could even create connections between pieces of knowledge using something called a “hyperlink”.

Steve Jobs once called the computer a bicycle for the mind. Note the metaphor of a bicycle, instead of something like a car — a bicycle lets you go faster than the human body ever can, and yet, unlike the car, the bicycle is human-powered. (Also, the bicycle is healthier for you.) The strength of metal, with a human at its heart. A collaboration — a centaur.

Things were looking good for the Intelligence Augmentation movement.


Nowadays, few people have even heard of IA, especially compared with its cousin, AI. But it’s not just that the term fell out of fashion. Doug Engelbart envisioned that the computer would be a tool for intellectual and artistic creativity; now, our devices are designed less around creation, and more around consumption. Forget AI not sharing our values — even non-AI technology stopped supporting our values, and in some cases, actively subverts them.7

We hoped for a bicycle for the mind; we got a La-Z-Boy recliner for the mind.

But thankfully, IA’s story does not end there. In recent years, there’s been a resurgence of interest in IA. Ironically, it’s in part due to a fear of humans “falling behind” AI — this is the exact reason why Elon Musk founded Neuralink, a company that’s researching how to make brain implants that link our minds directly to computers. But as Doug Engelbart and Garry Kasparov have shown, you don’t need a direct brain-machine interface to augment our intelligence. The interface that evolution has already gifted us — eyes, ears, hands and a body — works pretty darn well. You can ride the bicycle for the mind, without literally jamming metal into it.

But just as IA shows that it doesn’t have to be humans versus machines, it doesn’t have to be IA versus AI. For the last half-century, the story of AI and the story of IA have been chugging along on different tracks — but in the next decade, these two stories may be on a collision course.

How To Become A Centaur

There was another shock in store for Garry Kasparov. Remember that 2005 online chess tournament, between supercomputers, human grandmasters, and Human+AI centaurs? I forgot to mention who actually won the grand prize.

At first, Garry wasn’t surprised when a human grandmaster with a weak laptop could beat a world-class supercomputer. But what stunned Garry was who won at the end of the tournament — not a human grandmaster with a powerful computer, but rather, a team of two amateur humans and three weak computers! The three computers were running three different chess-playing AIs, and when they disagreed on the next move, the humans “coached” the computers to investigate those moves further.

As Garry put it: “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”

The centaur, from ancient Greek mythology, was a majestic being born of a goddess. Bojack Horseman, from the Netflix original series, is a depressed alcoholic who hurts everyone around him. Despite both of them being half-human, half-horse creatures, one is clearly a more successful combination than the other. And this brings us to the most important lesson about human-machine collaboration:

When you create a Human+AI team, the hard part isn’t the “AI”. It isn’t even the “Human”.

It’s the “+”.

A quadcopter body, designed by a Human+AI team. (slide from Maurice Conti's 2016 talk).

So, how do you find the best “+” for humans and AI? How do you combine humans’ and AI’s individual strengths, to overcome their individual weaknesses? Well, to do that, we first need to know exactly what humans’ and AI’s strengths and weaknesses are.

Human nature, for better or worse, doesn’t change much from millennium to millennium. If you want to see the strengths that are unique and universal to all humans, don’t look at the world-famous award-winners — look at children. Children, even at a young age, are already proficient at: intuition, analogy, creativity, empathy, social skills. Some may scoff at these for being “soft skills”, but the fact that we can make an AI that plays chess, but not one that can hold a normal five-minute conversation, is proof that these skills only seem “soft” to us because evolution’s already put in the 3.5 billion years of hard work for us.

And if you want to see the weaknesses of humans, go to school. This is the stuff that’s hard for human intelligences, and requires years of training to gain even a basic competency: arithmetic, computation, memory, logic, numeracy. Note that these are all things your phone can do better and faster than the smartest human alive. (And we wonder why kids feel school is meaningless…)

Now, those are the strengths & weaknesses of humans — what about the strengths & weaknesses of AI? Honestly, it’s a fool’s errand to try to predict what specific things AI can or can’t do in the far future. Thirty years ago, nobody predicted we’d have self-driving cars by now. (Then again, we predicted we’d have flying cars by now.) Since we can’t predict anything specific, let’s think generally about what kinds of tasks, so far, AI has had a relative advantage or disadvantage.

Computers are, obviously, best at computing. They’re good at crunching trillions of numbers, scanning billions of data points, considering millions of possibilities. Numbers may be AI’s greatest strength — but numbers are also its greatest weakness. Right now, you can only train an AI if you have a “cost function”, that is, if there are quantitatively better or worse answers. This is why AIs have bested grandmasters at chess and Go — where it’s clear that win > draw > lose — but are awkward at best at holding a conversation, creating inventions, making art, negotiating business, or formulating scientific hypotheses — where you can’t simply rank all your answers on a single dimension from best to worst. In those kinds of tasks, you’d want a human being, who can step back from a single answer and ask, “why?” or “how?” or “what if?”

In other words: AIs are best at choosing answers. Humans are best at choosing questions.

And that’s how the winning Human+AI team of the 2005 online tournament chose their “+”. The two amateur humans gave questions to their three weak computers, and when the computers gave back differing answers, the humans gave them even deeper questions.
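That coaching loop is easy to sketch in code. The toy below is purely illustrative, not the 2005 team’s actual software: a “position” is just a number, the “engines” are trivial scoring rules, and `centaur_move` and `deep_judge` are names invented here. The point it demonstrates is the division of labor: when the cheap engines agree, their answer is taken as-is; only when they disagree is the expensive, human-directed deeper analysis spent on the disputed candidates.

```python
# Toy sketch of the centaur process: several weak "engines" each propose a
# move; agreement is accepted cheaply, disagreement triggers a deeper,
# human-directed comparison of just the disputed moves.

def centaur_move(position, engines, deep_judge):
    """Pick a move using an ensemble of weak engines.

    engines    -- list of functions: position -> proposed move
    deep_judge -- expensive tie-breaker (the "human coaching" step):
                  (position, candidate_moves) -> best move
    """
    proposals = [engine(position) for engine in engines]
    candidates = set(proposals)
    if len(candidates) == 1:                 # engines agree: trust the cheap answer
        return proposals[0]
    # Engines disagree: direct deeper analysis at the disputed candidates only.
    return deep_judge(position, sorted(candidates))

# Demo: a "position" is a number, a "move" adds 1, 2, or 3 to it.
greedy = lambda pos: max([1, 2, 3], key=lambda m: pos + m)              # always picks 3
cautious = lambda pos: min([1, 2, 3], key=lambda m: abs(pos + m - 10))  # aims for 10

def deep_judge(pos, moves):
    # Pretend "deeper search": score each disputed move more carefully
    # (here, squared distance from the target value 10).
    return max(moves, key=lambda m: -(pos + m - 10) ** 2)

print(centaur_move(8, [greedy, greedy, cautious], deep_judge))  # engines disagree here
```

The design choice worth noticing is the “+”: machine agreement is automatic and cheap, while scarce human attention is spent only where the machines’ answers diverge.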

The chessboard isn’t the only place Human+AI centaurs have had success. From art to engineering, the last few years have seen the rise of centaurs in multiple fields:

  • In 2002, Sung-Bae Cho created a tool where you and an AI create fashion designs together. The tool simulates the process of evolution, but on dresses. The AI provides the “genetic variation” by randomly generating variants of dresses, and you provide the “natural selection” by using your sense of aesthetics to pick the dresses that will go on to “reproduce” in the next generation.

  • In 2016, Maurice Conti demonstrated another case of evolutionary AI working with a human, to create a quadcopter body. The human sets goals and constraints for the AI (“try to make the body as light as possible, while still remaining sturdy and having four propellers”) and the AI “evolves” a quadcopter body in response. The human can then “reply” to the AI, by setting further goals or constraints.

  • In 2016, Zhu et al. created a painting tool where you draw in the rough outlines, and an AI photo-realistically fills in the gaps. The human and the AI have an artistic “conversation” through pictures. For example, the human can draw some green lines on the bottom, and the AI replies with several possible photo-realistic grassy fields to choose from. Then, the human can draw a black triangle above that, and the AI replies with several pictures of a mountain behind a grassy field. Through this push and pull between human & machine, art is made.
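The first two examples share one pattern, often called interactive evolution: the machine supplies random variation, and a person’s taste acts as the fitness function. Below is a minimal, hypothetical sketch of that loop; it does not reflect the actual tools by Cho or Conti. A “design” here is just a list of numbers, and the `pick` callback stands in for the human (simulated, for the demo, by a preference for larger sums).

```python
import random

def evolve(population, pick, mutate, generations):
    """Alternate machine variation with (human) selection.

    pick   -- the "human" step: population -> favourites to keep
    mutate -- the "machine" step: design -> randomly varied design
    """
    for _ in range(generations):
        chosen = pick(population)                      # selection: keep favourites
        offspring = [mutate(random.choice(chosen))     # variation: breed new variants
                     for _ in range(len(population) - len(chosen))]
        population = chosen + offspring
    return population

random.seed(0)
# Eight random starting "designs", each a list of four numbers in 0..9.
start = [[random.randint(0, 9) for _ in range(4)] for _ in range(8)]
pick = lambda pop: sorted(pop, key=sum, reverse=True)[:3]   # simulated taste: bigger sums
mutate = lambda d: [max(0, min(9, g + random.choice([-1, 0, 1]))) for g in d]

final = evolve(start, pick, mutate, generations=25)
print(max(sum(d) for d in final))
```

Because the favourites survive unchanged into each next generation, the best design never gets worse; the human only ever has to answer the easy question “which of these do you like?”, while the machine handles the combinatorics.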

In all these examples of centaurs, the human chooses the questions, in the form of setting goals and constraints — while the AI generates answers to the human’s questions, usually showing multiple possibilities at once, in real-time. But it’s not just a one-way conversation: the human can then respond to the AI’s answers, by asking deeper questions, picking and combining answers, and guiding the AI using human intuition.

So, when you think of augmenting human intelligence with AI, think less of assimilating into The Borg, and more of a spirited conversation between Kirk & Spock — a mix of intuition and logic that surpasses either one alone.

Since the design of Human+AI systems is such a new field — in fact, it’s pretty generous to call it a “field”, it’s more like a small patch of grass — there are lots of unsolved problems, like:

  • What kinds of questions should a human ask? In all the above examples, the question is usually “what possible solutions fit these goals & constraints?”

  • How should humans and AIs communicate? You don’t have to use words, or even code; the painting example has the human and AI communicate through pictures!

  • How can multiple humans or multiple AIs work together? All the above examples had just one human working with one AI, but the winner of the 2005 Centaur Chess tournament had two humans and three AIs — how can this scale to dozens, thousands, even millions of people and/or machines?

AIs choose answers. Humans choose questions. And given all the possibilities, the promises and pitfalls of technology in the coming decades, the next question for us humans to choose is:

What’s next?

"Wheels for the Mind" Apple Poster, 1980

The Story of Us

For the last few decades, the story of AI has been one of a rising hero — or is it of a rising villain? In 1997, an AI beat Garry Kasparov at chess, and in 2011 and 2016, AIs beat the world’s top humans at Jeopardy! and Go. And now, many fear that AI will take over our jobs, or even take over humanity itself.

Meanwhile, the story of IA has been one of a tragic fall. Starting out strong with Doug Engelbart’s Mother of All Demos, the idea of IA has slowly been forgotten, as technology shifted from tools for creation and more towards tools for consumption. Someone stole the wheels off the bicycle for our mind.

But now, these two story threads may be starting to wrap together, forming a new braid in history: AIA — Artificial Intelligence Augmentation.8 IA can give AI the human partnership it needs in order to remain aligned with our deepest goals and values. And in return, AI can give IA some new replacement wheels for the bicycle of our mind.

I’d like to tell you what the future holds. But if you tell someone something good is inevitable, it can cause self-defeating complacency — and if you tell someone something bad is inevitable, it can cause self-fulfilling despair.

Besides, answers are for AIs. As a human, you deserve questions.

For example: IA may be able to align AI’s goals with humans’ goals, but how can we align augmented humans’ goals with non-augmented humans’ goals? Are we just replacing a divide between humans and AIs with a divide between humans and humans 2.0? Forget getting humans and AIs to live in peace, how do we even get humans and humans to live in peace? We know how to create tools to augment our intelligence, but can we create tools to augment our empathy? Our communities? Our sense of meaning and purpose?

I don’t know. I don’t know what the answers are.

However, humanity has had a long history of borrowing ideas from nature. In just the field of machine learning alone, artificial neural networks were inspired by biological neural networks, and genetic algorithms were inspired by the process of biological evolution itself. So, if there’s just one idea you take away from this entire essay, let it be Mother Nature’s most under-appreciated trick: symbiosis.

It’s an Ancient Greek word that means: “living together.” Symbiosis is when flowers feed the bees, and in return, bees pollinate the flowers. It’s when you eat healthy food to nourish the trillions of microbes in your gut, and in return, those microbes break down your food for you. It’s when, 1.5 billion years ago, a cell swallowed a bacterium without digesting it, the bacterium decided it was really into that kind of thing, and in return, the bacterium — which we now call “mitochondria” — produces energy for its host.

Symbiosis shows us you can have fruitful collaborations even if you have different skills, or different goals, or are even different species. Symbiosis shows us that the world often isn’t zero-sum — it doesn’t have to be humans versus AI, or humans versus centaurs, or humans versus other humans. Symbiosis is two individuals succeeding together not despite, but because of, their differences. Symbiosis is the “+”.

A new chapter in humanity’s story is beginning, and we — living together — get to write what happens next.

Denis Hurley:

I agree that humans need to work with AI and robotics. They assist us, they don’t replace us. Last summer, I wrote a similar piece, but I think it’s better to strive to be like Chiron, specifically. Centaurs were unruly beasts. Chiron was an educator. He managed to use the best traits in humans and wild animals.

Kaveh Alagheband:

It is interesting that in this article conventional algorithms are considered to be AIs. I am not even sure that “You can now download a chess-playing AI better than Deep Blue on your laptop” is factually true for conventional users, because the strongest system runs on 2nd-generation TPUs. The chess-playing AI AlphaZero is not even mentioned in the whole article. This new, and fundamentally different, approach played the top conventional chess engine, and the result was 28 wins to 0. AlphaZero, however, does not give out traditional numerical evaluations, and so the human operator in the centaur idea is not quite the same today.

I am not saying that the main idea of the article is unattainable, but in the environment of “resisting reduction” the way chess is used here is quite reductive.

Kaveh Alagheband:

It might be interesting to see how a chess AI went through the whole history of chess in 4 hours, playing against itself (with no human interaction), and how its development of the openings is strikingly similar to the human history of chess.

rahime edibali:

Ever since Mary Shelley wrote Frankenstein, the fear of artificial creations has been retold on various platforms, and today that fear attaches to artificial intelligence (AI). For people who have no involvement with AI, there is little point in being afraid of it: sustained anxiety can only do harm, and they will suffer twice if the bad things they expected actually happen. Below are some of the most frequently mentioned anxieties about the economic effects of bringing robots into business life.


The Same Anxiety Has Existed Since Industrialisation

Karl Marx, writing during the age of steam, described the automation of the proletariat as a necessary feature of capitalism.[1] Yet automation need not serve capitalism alone. If we look at the health sector, we can see how urgently such improvements are needed. Humans can choose to redistribute capital in order to replace income lost to robots.[2] Today, researchers are primarily interested in designing one-way systems that can read brain signals and send them to devices such as prosthetic limbs and cars.[3] At the same time, robotic technologies that collect and interpret unprecedented amounts of data about human behaviour threaten both access to information and freedom of choice.[4]


Implementations in the Sectors

Current discussions of economic policy focus on how to improve workers’ job and wage prospects. That makes sense, since robots and artificial intelligence are not on the brink of learning how to do every job.[5]

MIT’s Computer Science and Artificial Intelligence Laboratory recently developed a system that allowed groups of robots to assemble IKEA furniture.[6] The economist Carl Benedikt Frey and the machine learning expert Michael Osborne, both of Oxford University, have concluded that 47 percent of U.S. jobs are at high risk from automation. In the nineteenth century, they argue, machines replaced artisans and benefited unskilled labor.

In the twentieth century, computers replaced middle-income jobs, creating a polarized labor market. Over the next decades, they write, “most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labour in production occupations, are likely to be substituted by computer capital.”[7] The same may hold in agriculture, which was the dominant employer of humanity between the dawn of the agricultural revolution and the nineteenth century.[8]

But robots can have positive effects as well as negative ones. Today, more than 65 million people are confined to wheelchairs, contending with many more obstacles than their walking peers and sitting in a world designed for standing. Thanks to robotics, the next two decades will likely see the end of the wheelchair.[9]

Furthermore, there are abundant gains to be had from using robots in place of human labor. Even imagining yourself doing certain kinds of jobs is hard to bear; what about doing them every day until your retirement? Such degrading jobs can be done by robots. Inspecting and maintaining sewers, for example, is punishing both mentally and physically, as is dangerous work like inspecting damaged turbines or exploring the depths of oceans and volcanoes. Robots could be used so that human dignity is not degraded, and so that human lives are not risked in vain.

Legal Burdens Waiting to Be Resolved

The arrival of robots in our daily lives will also challenge our ability to make legislation. There is a legal gap around questions like these: who is to blame for a robot’s fault? Who is to be punished: the owner of the robot, the writer of its code, or the robot itself? Law is a normative science, and legislation is usually written only after bad things have happened; but those bad things could become imminent as we expand our use of robots. Recently we witnessed vandalism against a robot called HitchBOT, which had set out to hitchhike around the world.[10] It can be dismissed as an oddity, but perhaps vandalism against robots should be punished: if anyone spreads aggression through society without any sensible, legal reason, that behaviour can become an example for other humans who seek ways to do evil.



In the twenty-first century, stable, long-term employment with a single employer will no longer be the norm, and unemployment or underemployment will no longer be a rare and exceptional situation. Intermittence will increasingly prevail, with individuals serving as wage earners, freelancers, entrepreneurs, and jobless at different stages of their working lives.[11] As Brynjolfsson and McAfee argue, the second machine age has already begun. Knowing that your mental advantages might be even greater than your physical ones, the best you can do is be prepared, stay alert to the dangers, and fully enjoy the benefits of this age.

[1] Erik Brynjolfsson and Andrew McAfee, “Will Humans Go the Way of Horses?”, Foreign Affairs, July–August 2015, p. 8.
[2] Erik Brynjolfsson and Andrew McAfee, “Will Humans Go the Way of Horses?”, Foreign Affairs, July–August 2015, p. 12.
[3] Illah Reza Nourbakhsh, “The Coming Robot Dystopia”, Foreign Affairs, July–August 2015, pp. 25–26.
[4] Illah Reza Nourbakhsh, “The Coming Robot Dystopia”, Foreign Affairs, July–August 2015, p. 26.
[5] Erik Brynjolfsson and Andrew McAfee, “Will Humans Go the Way of Horses?”, Foreign Affairs, July–August 2015, p. 14.
[6] Daniela Rus, “The Robots Are Coming”, Foreign Affairs, July–August 2015, p. 6.
[7] Martin Wolf, “Same As It Ever Was”, Foreign Affairs, July–August 2015, p. 20.
[8] Martin Wolf, “Same As It Ever Was”, Foreign Affairs, July–August 2015, p. 21.
[9] Illah Reza Nourbakhsh, “The Coming Robot Dystopia”, Foreign Affairs, July–August 2015, p. 24.
[10] Al Jazeera, 14 August 2015.
[11] Nicolas Colin and Bruno Palier, “The Next Safety Net”, Foreign Affairs, July–August 2015, p. 31.


Paula Lang:

There are six types of symbiosis. The type you are referring to is mutualistic symbiosis. As in, mutually beneficial.

Another type, commensalistic symbiosis is where one organism benefits while the other is neither harmed nor helped.

This presents a psychological slippery slope, which leads humans to slide from mutualism to commensalism with a shrug. What’s the harm?

The next slippery slope presents itself with the next lower category, parasitic symbiosis. I gain, you lose. If my gain is exponential and your loss is minuscule then I am tempted to go for it.

With two individuals the benefit/harm calculation is easy. Add new variables (i.e., society) and suddenly we find ourselves (through AIA) encouraging “The Giving Tree” behaviors.

¡Con cuidado! (Take care!)