Myth and the Making of AI
A myth runs deep in Western culture that can be traced through everything from Wild West novels to space exploration and the origin stories of Silicon Valley startups. It is a myth that crosses creative boundaries, driving blockbuster sales of Steve Jobs biographies, inspiring visitors to Michelangelo’s ceiling frescoes in the Sistine Chapel, and fueling criticisms of Beyoncé when her albums credit dozens of writers. It is repeated in history books and celebrated in TED talks. It is the myth of the lone pioneer. This lone pioneer may be a hero with a thousand faces, but he is a singular hero. His journey celebrates the self-discovery that comes with creating, but at its heart it also affirms a reductive identity based on self-sufficiency. Everyone else is invisible in his story.
Today’s race for AI has spawned the greatest hero tale of all, with the Singularity driving a new wave of pioneer narratives featuring man as maker and machine as protagonist. Yet this cultural mythology contradicts the characteristic that most distinguishes the rising AI age: ambiguity. Creating technology is now a practice of designing for the uncertain and coding for the indeterminate. In the history of design and technology, human and technical variability have long been treated as error: deviations from the norm to be simplified away, either in service of mass-scale market growth or because a diversity of possibilities threatens individual control.
Yet human lives are full of ambiguous interdependencies. In his essay “Resisting Reduction,” Joichi Ito asserts that, when successful, these systems of interdependencies form value exchanges that flourish by drawing from “diversity and the richness of experience.” Interdependence is a necessary reality. Every pioneer needs a patron, every artist needs a group of creatives to inspire and provoke. No pioneer, no artist, no inventor ever makes it alone. These human truths lead us to question the reductive cultural myth of self-sufficiency as the highest form of worth and instead to affirm one of adaptive interconnectivity.
Flourishing systems retain and value this natural interconnection. Ito concludes his essay with a stirring call for a new kind of participant design—a design of systems as and by participants—that champions a robust interdependency in the creation of complex adaptive systems. Unfortunately, the characteristics that define the field of AI are increasingly ones of secrecy, distrust, and competition, postures that presume scarcity and a struggle over limited resources. Scarcity motivates fear, which is rapidly becoming a dominant cultural narrative between man and machine.
One result of these postures is that teams, corporations, and institutions are increasingly incentivized to remain isolated from each other in the face of ambiguous technical challenges. This is problematic because if the system that determines who makes AI is reductive and marked by disconnection, we cannot expect the outcome to be anything but the same. A system of robust participant design does not lessen ambiguity, but it can enable participants to benefit by leaning into it as a way of working.
Applying the concept of participatory design to how AI is made requires answers to several practical questions. Who determines who makes? Who has the greatest expertise in interdependence? What actual methods can be applied by developers, designers, and creatives when the aim of making is flourishing? If AI is to participate in a system that is adaptable and sustainable because it is inherently diverse, the qualities of such a system would look very different from the qualities of a system created from a posture of scarcity.
Examining a few key myths can help reveal the roots of reductive systems in technology and how we might transcend these deep-seated paradigms. The stories we tell frame the choices we make. One way to gain insight into the connection between makers and their creations is to study existing factors that break and build human relationships. To begin, flourishing systems that promote interconnectivity would have diverse touchpoints, they would value interdependence, and they would cultivate a strong sense of belonging.
Systems that Have Diverse Touchpoints
The evolution of human-computer interfaces is infused with a strong reductive myth of the average human being. This myth endures even though it is widely acknowledged that people are incredibly diverse and largely unpredictable. In the quest for simplified ways to design interfaces to such systems, it was easier to strip away human diversity, reducing people to a set of basic assumptions about their bodies and circumstances. But then, we forgot to add this diversity back into the design process. We now lack the tools and methods to do so. The complexity of AI solutions demands that we reconsider these approaches and expand the ways we account for the prevalence of such ambiguity.
Designers use many techniques to envision the people who will interact with their solutions, from detailed personas to massive databases of customer feedback. These normalizing techniques were heavily influenced by a 19th-century Belgian astronomer and mathematician named Adolphe Quetelet. Quetelet applied the astronomer’s practice of averaging many observations to human beings and discovered that traits such as height and weight clustered in a bell curve around an average value.
Invigorated by his discovery, Quetelet started measuring more aspects of human beings, creating physical, mental, behavioral, and moral categories of people. Everywhere he looked, he found bell curves. He became consumed with what he deemed the human ideal, the perfect average measurement across all those dimensions. Quetelet held that individual people should be measured against that perfect average. From this comparison, one could calculate the innate degree of “abnormality” of an individual person. Diversity and variations in human beings were treated as degrees of error. His ideas were contagious and enduring, especially in the social sciences. Normal-based methods of diagnosing illness led to advancements in public health. However, eugenics, and its horrific assertions about the superiority of abilities, races, and classes of people, also grew from Quetelet’s idea of the perfect average human.
The power of the bell curve still echoes through the design of society, from classrooms to computers. Left-handed students are seated in desks made with the assumption that normal human beings are right-handed. Important features of smartphone applications are placed based on where the average user, presumably right-handed, is likely to reach for them. The first personal computers were designed for a mythic average human who could dedicate a high degree of visual and cognitive attention to navigating a graphic user interface, to the exclusion of anyone who didn’t match this profile. As greater numbers of people use technology in exponentially diverse ways, in different contexts and environments, greater numbers of people are also experiencing moments of exclusion.
A common misconception is that the center of the curve represents an 80% majority of the population and 80% of the important product problems to solve. This leads many teams to treat the remaining 20% as outliers or edge cases, a category of work that is often deferred or neglected. In fact, edge cases can be a useful starting point for creating better solutions. However, having an edge case implies the existence of a normal, average human. When it comes to design, what if a normal, average human is simply a myth?
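The myth collapses under even a toy model. As a rough simulation sketch (the assumptions are mine, not the text’s: traits modeled as independent standard normal variables, and “average” defined as falling within 0.3 standard deviations of the mean; the function name is illustrative), the share of people who are average on every trait shrinks toward zero as more traits are measured:

```python
import random

random.seed(0)

def share_average(num_dimensions, num_people=100_000, band=0.3):
    """Fraction of simulated people who fall within +/- band standard
    deviations of the mean on every one of num_dimensions traits."""
    count = 0
    for _ in range(num_people):
        # A person is "average" only if every independent trait is near the mean.
        if all(abs(random.gauss(0, 1)) <= band for _ in range(num_dimensions)):
            count += 1
    return count / num_people

for d in (1, 2, 5, 10):
    print(d, share_average(d))
```

With one trait, roughly a quarter of the simulated population counts as average; with ten traits, almost no one does. Designing for the multidimensional average human is designing for nobody in particular.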
Reductive ways of thinking about people lead to reductive touchpoints in the design of a system. Imagine a playground full of only one kind of swing. This swing requires you to be a certain height with two arms and two legs. The only people who will come to play are people who match this design, because the design welcomes them and no one else. And yet there are many different ways to design an experience of swinging. You could adjust the shape and size of the seat. You could keep a person stationary and swing the environment around them. Participation doesn’t require a particular design, but a particular design can prohibit participation.
The same applies to technology. Each feature created by designers and developers determines who can interact and who is left out. When we create a diversity of ways to interact with a system, more people can access that experience. More importantly, they can participate with each other within that system. This natural interconnection and interplay between elements is important to any flourishing system and any healthy human habitat. Unlike the fixed objects in a playground, the elements of digital environments are far more malleable and responsive, ideal for adaptive systems that interact with multiple human beings at once. How might we build better ways to recognize exclusion and regulate negative feedback as inherent parts of a system?
One simple starting point is to identify the types of activities and experiences that are most important to a human environment, physical or digital. We can identify the range of human abilities—physical, cognitive, and social—that are important when using a system. We can design touchpoints that work well for excluded communities, but also extend access to anyone who experiences a similar kind of exclusion on a temporary or situational basis. The result would be a system that enables diverse kinds of participation.
Systems that Value Interdependence
Social independence is another reductive myth that leads to disconnection. Technologies that emerge from cultures that idolize independence often optimize solutions for one lone person. Even in solutions that aim to connect individuals, such as transit systems or social media, people can be treated as a collection of individuals, tallied by the unique likes they receive on a post, rather than as a collective unit in which the interdependence between individuals is constantly reshaping the nature of the system.
Conceptualizing interdependence and recognizing it in practice can be challenging for anyone who idolizes independence. Interdependence is often conflated with negative notions of human weakness or indulgence, or simply dismissed as relevant only to people who are very young or advanced in age, the times in our lives when we depend heavily on other human beings to support us. And yet no society thrives solely on the skills of its hunters and warriors. No society is sustained through only one kind of contribution. All societies thrive when systems of interdependent skills are manifested in economies that include different types of novices and masters. Interdependence is about matching these complementary skills together and balancing mutual contributions in diverse forms of value exchanges.
People in professions that focus on human relationships, such as educators, sociologists, and personal assistants, often develop a mastery of interdependence as a matter of practice. Interdependence is also a necessary practice for many members of marginalized communities, where collective creativity and resourcefulness are matters of survival when confronted with lack of access to social power and resources. Interdependence can likewise be important for people who employ human assistants and assistive technologies; for people with disabilities, working closely with personal assistants can be a vital aspect of daily life.
Because many aspects of society are designed around the myth of people as socially independent, members of excluded communities often face the greatest physical, cognitive, and social mismatches when interacting with these touchpoints. This myth limits not only who can participate in the system, but also who can contribute to the evolution of that system through design, creating a self-reinforcing loop that omits the people who may have the greatest expertise in how interdependence enables flourishing.
The rise of AI means more digital agents will facilitate everyone’s interactions with society. Transcending the social independence paradigm in an effort to design a ubiquitous interplay with such agents could start with studying the diverse types of value exchanges that exist in communities that already cultivate interdependence: exchanges of art for labor, or of food for childcare. Designing for interdependence changes who can contribute to a society, what they contribute, and how they make that contribution. If we treat our innate ability to connect with one another as a precious resource and a source of social vitality, what kind of AI could we build?
Systems that Create a Sense of Belonging
Lastly, disconnection is often perpetuated by the reductive myth of culture fit. When there’s only one fixed path to becoming the maker of a system, that path will determine who makes. Whether we consider early childhood education or corporate hiring practices or the internal processes that teams use to build and communicate, the path to becoming a contributor to AI is narrow. This ensures that the design of AI will be informed by only the select few people who fit and survive the cultural requirements to participate.
One way to revise this myth is to hire people from excluded communities, especially people with disabilities, into positions where they can influence and inform the design of emerging systems. This is a richer form of participatory design, though it perhaps doesn’t go far enough in enabling value exchanges that confer a sense of worth for all. A practice known as inclusive design pushes beyond participation and places the highest value on contribution.
Inclusive design is first designing with, and not just for, excluded communities. Then it involves extending the benefits of solutions to anyone who might experience a similar kind of exclusion on a temporary or situational basis. Inclusive design doesn’t mean designing one thing for all people. It emphasizes designing a diversity of ways to participate, so that everyone has a sense of belonging in a place. It starts with challenging the most prevalent mental model of inclusion.
Based upon the Latin root, claudere, which means to shut, inclusion literally means to shut in. This evokes an image of a circular enclosure, with some people contained within the circle and others who are shut out. This mental model informs how we think about inclusive solutions. Is the goal for the people inside the circle to create openings in the enclosure and magnanimously invite excluded communities to participate with them? Is the goal for outsiders to forcibly break into the circle? Or should we eliminate the circle altogether to intermix freely in a utopian state? Perhaps all are incorrect.
What if, rather than a rigid enclosure, inclusion were a cycle of choices that each designer, developer, educator, or leader is constantly making and remaking as they create solutions for someone other than themselves? In this model, what is ultimately made and released into the world is a byproduct of who makes and the assumptions they make about who receives their solutions. This is critical, especially when hundreds, if not thousands, of people are working together to manifest a complex system.
The final features of these objects and experiences give strong indicators of who does and doesn’t belong in a place. Imagine a touchscreen at a subway ticket station that only works for people who can see and touch a screen. Or a job application that can only be submitted over a high-bandwidth internet connection. Or a video game controller that requires two hands to play. Each design choice—the contours and materials, the default language, and the underlying logic of a solution—will quickly let you know whether it is made for you.
This is why participation might not go far enough. Creating flourishing systems will require more than just extending a warm invitation to give input and feedback on potential designs. It will be a matter of entrusting the design of these systems to the contributions of the most excluded communities.
Moments of technological transition are an ideal time to introduce inclusive design. We can engineer these new models to ensure they don’t lead to exclusionary design practices that only fit a nonexistent average human. Without inclusion at the heart of the AI age, we risk amplifying the cycle of exclusion on a massive scale. It won’t just be perpetuated by human beings. It will be accelerated by self-directed machines that are simply reproducing the intentions, biases, and preferences of their human creators.
The Stories We Tell
Myths are derived from culture, and their retelling perpetuates the shape of that culture for future generations. The culture of technology is rife with mythologies, and it can be tempting to allow the sparkling narratives of genius and riches to camouflage the much more mundane truth that technology emerges from our collective ability to work together. It is truly a reflection of how we relate to one another, embodied through a series of choices we make. When we look upon each of those creations, as with any life we birth, our acceptance or rejection of that creation determines the power it holds over us.
One lone pioneer has captured the imagination of technologists like no other: a restless young college student who works alone by night for years on a secretive project, an inspired invention whose completion alters the course of his life forever. It is the story of Victor Frankenstein, as told by Mary Shelley in her novel Frankenstein, which has endured as a morality tale in modern tech narratives despite being published in 1818.
Frankenstein is everywhere. A quote from it opens Chris Paine’s AI documentary Do You Trust This Computer?; it headlines an algorithmic research project and film from Columbia University’s Digital Storytelling Lab, Frankenstein AI: A Monster Made by Many; and it is being celebrated in a cross-disciplinary Frankenstein Bicentennial Project sponsored by Arizona State University and the National Science Foundation. MIT Press also released a special edition of the novel “annotated for scientists, engineers, and creators of all kinds.”
However, Frankenstein is not as straightforward as modern audiences might assume. The fact that Shelley never names the creature in her story has enabled generations of readers to project onto it any number of moral ambiguities. From the creature’s behavior, critics have divined the politics of the French Revolution, the significance of slave uprisings in Haiti and the West Indies, important racial and feminist themes, and, much more recently, dire consequences for makers of modern technology.
Shelley clearly uses science and technology as vehicles to tell her tale. But reading it with an interpretation focused exclusively on technology neglects other nuances. As noted in the preface to the MIT edition, “Frankenstein is unequivocally not an anti-science screed, and scientists and engineers should not be afraid of it. The target of Mary’s literary insight is not so much the content of Victor’s science as the way he pursues it.”
Themes of connection and disconnection are woven throughout the original story and right into the heart of its most pivotal scene, the dreary night when Victor finally animates his creature after two years of obsessive work. A spark, then suddenly, in a shock of revulsion—“now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart”—he runs away.
The creature grasps the power of these themes by the end of the story. In the final pages of Volume III, it admits to destroying Victor by destroying all that he loved, severing the deepest connections possible by murdering the three people closest to him. First by choice, and then by violence, everyone else becomes invisible in Victor’s story.
Despite their enduring popularity as mythology for modern technologists, we can examine lone pioneer narratives—including the one in Frankenstein—not only through the lens of technology itself but also through the way technology is pursued in these stories. If there are morals for creators to draw from, certainly one is that human lives are full of ambiguous interdependencies, and to deny these connections is the antithesis of flourishing. Disconnection between makers and their creations, and disconnection from each other, is a prevalent practice in technology as a way of reducing uncertainty. We can disrupt this paradigm by challenging the assumptions at its foundation rather than accepting them as absolute truth.
So how will we pursue the making of AI? The technology we create is a byproduct of our choices. We are naturally interconnected with our creations and with each other as we create, but no cautionary tale will change our course when millions of isolated makers are inherently disconnected from each other in the ways that they invent. To create in siloed disconnection out of fear is to deny the truth of our lives, and, in a stunning lack of forethought, to leave untapped the greatest assets of our collective creativity: the powerful adaptive interconnections that can fuel entirely new systems of flourishing. As we reshape our systems to make this human truth more evident, and as we each contribute to the making of something greater than ourselves, we’ll experience new ways to see into the nature of what we’re creating. In turn, we might learn how to create systems, for AI and beyond, that enable the survival and flourishing of the connections that are most precious to us.
Holmes, Kat. Mismatch: How Inclusion Shapes Design. Cambridge, MA: MIT Press, 2018.
Rose, Todd. The End of Average: How We Succeed in a World That Values Sameness. New York, NY: HarperOne, 2016.
Shelley, Mary. Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds. Edited by David H. Guston, Ed Finn, and Jason Scott Robert. Cambridge, MA: MIT Press, 2017.