
Playing Intelligence

How games and game engines can provide new ways of comprehending networked and distributed forms of intelligence.

Published on Apr 11, 2019

Intelligence as a Spectrum

Our image of artificial intelligence has been shaped by popular culture more than one might initially think. AI in films has mostly been portrayed as anthropomorphic in nature: talking cars, a robot that wants to become a boy, a replicant that is “more human than human.” For the last couple of decades, popular culture has shaped an image of artificial intelligence as something humanlike: self-contained and autonomous. In reality, the opposite is true. AI and human beings are part of an ecology of intelligences. Intelligence is not a binary quality, but more of a spectrum. An ant on its own is rather unintelligent. Yet an army of ants figuring out how to transport leaves over a certain distance in the most efficient manner is what one would call an emergent, collective intelligence. An ecosystem that has been adapting to changes in climate over thousands and thousands of years can be described as a kind of slow intelligence. Intelligence comes in many different forms, scales, and speeds.

Unfortunately, we lack adequate ways of accessing and understanding this new form of intelligence, or rather this accumulation of different intelligences. Whereas representations of AI in fiction have often failed to capture the character of progress in the field, it is, ironically, another pop culture medium that shows potential for expanding our understanding of it: games. I argue that video games and game engines offer new possibilities as interfaces for networked forms of AI.

Demystifying AI

When an average user interacts with some form of artificial intelligence, three key characteristics too often remain black-boxed during the interaction: its networked, modular architecture; its distributed nature; and the role of time and learning.

Patchwork Intelligence: Subdividing Thinking

In the summer of 2017, a video of Barack Obama went viral. What at first looked like any other videotaped announcement by the former president was actually heavily manipulated by an algorithm designed by researchers at the University of Washington. The spoken content of the video had been taken from a different, previous announcement by Obama. His lip movements were artificially generated and synced to the audio file by an algorithm, making it impossible to distinguish between the real and the fictional. Researchers at the Montreal Institute for Learning Algorithms went even further and created a program called ObamaNet. Instead of relying on actual spoken words, their tool is able to create its own audio files from text, which are then used to generate synced lip movements. Other contemporary AI research ranges from de-rendering holistic images into individual components to generating low-resolution videos from any kind of text file.

Those AI experiments are just as impressive in their technical sophistication as they are creepy in their potentially harmful implications and use cases. How did we get here? Who defined text-to-photorealistic-lip-sync as a crucial design problem to solve? Looking at the history of machine learning, it becomes apparent that there is a range of this-to-that translation problems the AI community has been trying to master for a long time. Speech recognition research, for example, traces back to the 1970s, when DARPA funded a five-year Speech Understanding Research program. In the 1990s, the first commercial speech-to-text software became available. Text-to-speech, speech-to-text, text-to-image, image-to-text, sound-to-image, image-to-sound, 2d-to-3d, 3d-to-2d, etc.: the machine learning community seems to have developed an obsession with translating. These nearly canonical this-to-that challenges seem to function as quantifiable ways for researchers to compete with each other (just as playing chess once lent itself to comparing progress in the field).

If you cannot translate from A to C directly, you break the problem down into sub-problems, translating from A to B and then from B to C. What we see emerging is a rag rug of different modules: a collage of smaller and bigger AIs, each highly specialized in what it does (e.g., a network that generates video frames conditioned on keyframes generated by an LSTM network, synced to audio generated by a text-to-speech network). It is imaginable that game engines could be used as an interactive visualization tool to make the connected nature of these patchwork AIs more apparent and explorable.
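As a minimal illustration of this patchwork idea, the sketch below chains three hypothetical this-to-that stages in the spirit of the lip-sync pipeline described above. Every function is a placeholder rather than a real model, and all names are invented for illustration.

```python
# A minimal sketch (not any real system) of how "this-to-that" sub-AIs
# can be chained into a patchwork pipeline. Each stage stands in for a
# specialized model; here they are only placeholder functions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str       # e.g. "text-to-speech"
    run: Callable   # the specialized sub-AI (placeholder here)

def text_to_speech(text: str) -> list:
    # placeholder: a real model would return an audio waveform
    return [ord(c) % 16 for c in text]

def speech_to_keyframes(audio: list) -> list:
    # placeholder: a real LSTM would predict mouth keyframes from audio
    return [(i, sample) for i, sample in enumerate(audio)]

def keyframes_to_video(keyframes: list) -> list:
    # placeholder: a real generative network would render video frames
    return [f"frame conditioned on keyframe {kf}" for kf in keyframes]

PIPELINE: List[Stage] = [
    Stage("text-to-speech", text_to_speech),
    Stage("speech-to-keyframes", speech_to_keyframes),
    Stage("keyframes-to-video", keyframes_to_video),
]

def run_pipeline(data, stages=PIPELINE):
    for stage in stages:
        data = stage.run(data)  # A -> B -> C, one translation at a time
        print(f"{stage.name}: produced {len(data)} items")
    return data

if __name__ == "__main__":
    run_pipeline("four score and seven years ago")
```

The point is structural: each stage only knows how to turn one representation into the next, and the apparent intelligence lives in the chain rather than in any single module.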

Distributed Computation: Seeing the World Through Different Lenses

The modular patchwork intelligence of numerous sub-AIs is manifested in the physical world. Our smartphones alone make up an earth-spanning sensor network collecting data from millions of users, considering that the average smartphone contains 20 or more sensors. The planetary-scale networks of sensors and computers, harvesting and processing data, are largely unmapped, and it is fair to say that the general public is unaware of their sheer scale. Rather than the McLuhanian way of referring to these objects as extensions of us, a more object-oriented view seems appropriate. The relationship between human beings and machines is not a one-way street: The landscape of networked sensors and computers experiences us as one of many stimuli. Jennifer Gabrys, professor of sociology at Goldsmiths, University of London, refers to philosopher Gilbert Simondon’s notion of “in-forming” (an act of both rational processing and experiencing) in her 2016 book Program Earth: Environmental Sensing Technology and the Making of a Computational Planet: “Sensing is not just the process of generating information but also a way of forming experience.” (p. 11) Monitoring our sensing planet, then, not only results in rational insights; it enables us to render and experience our world from multiple viewpoints. These renders enable us to see new kinds of patterns by “bring[ing] entities into communication even if not directly connected, an environment influencing genetic adaptation and evolution, and to the contrary, even an environment to which living entities are indifferent.” (Gabrys, 2016, p. 11) Furthermore, one can argue that observing and rendering are generative ways of computing, resulting in images of not one but many worlds. Therefore, environmental forms of computation enable ways of monitoring and optimizing just as much as they unfurl open-ended and speculative processes.

Similarly, artificial intelligence systems make sense of the world in terms of what they have been trained for. They too render the world not as one, but as many. As most AI systems are constructs made of smaller, more specialized sub-AIs, they tend to render abstractions of an accumulation of already rendered worlds. These processes of making sense of the world and rendering it are highly complex, multilayered, and self-feeding. The dissociation between everyday life and the abstract mechanics of networked systems that shape our world, however, is enormous. Applications made in game engines could become interfaces to visualize and interact with these spatial sensor and intelligence networks of enormous scale.

Level Up: Time and Progression

When a new AI sensation makes it into the news (e.g., OpenAI’s “dangerously” good text generator), the public gets to see the curated result of a long process. Behind the scenes, machine learning researchers have to carefully calibrate and experiment with the number of training epochs, initialization schemes, training sets, the number of layers, and so on in order to achieve a desired outcome. The path to a polished result is paved with countless trials and errors, unimpressive intermediate results, and surprising byproducts.

When training ML agents, such as NPCs in a game engine, one can follow the algorithm and watch it as it learns. Similar to the concept of breaking tasks down into several connected sub-AIs, an agent in a game engine learns tasks step by step and then combines them: A virtual dog in a game might first be trained to walk, then be trained to catch a stick, and finally, through rewards, to fetch it back. Games work in a similar, progressive fashion; levels get increasingly challenging and complex as new elements are introduced, parameters are increased or decreased, and new patterns emerge.
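The sketch below is a deliberately tiny stand-in for this kind of staged, reward-driven training. It uses no game engine or reinforcement learning library; the tasks, the reward function, and the update rule are all invented for illustration.

```python
# A toy sketch of reward-driven training in stages: the "dog" agent is
# trained on one task at a time, and the only feedback it receives is a
# scalar reward. No game engine or RL library is involved.
import random

TASKS = ["walk", "catch_stick", "fetch_stick"]   # hypothetical curriculum

def reward(task: str, action: str) -> float:
    # placeholder reward: 1.0 when the action matches the task, else 0.0
    return 1.0 if action == task else 0.0

def train(episodes_per_task: int = 200) -> dict:
    # the "policy" is just per-task action preferences, nudged by rewards
    policy = {task: {a: 0.0 for a in TASKS} for task in TASKS}
    for task in TASKS:                 # one stage at a time: walk, catch, fetch
        for _ in range(episodes_per_task):
            if random.random() < 0.2:  # explore occasionally
                action = random.choice(TASKS)
            else:                      # otherwise exploit the best-known action
                action = max(policy[task], key=policy[task].get)
            # move the preference toward the observed reward
            policy[task][action] += 0.1 * (reward(task, action) - policy[task][action])
    return policy

if __name__ == "__main__":
    learned = train()
    for task, prefs in learned.items():
        best = max(prefs, key=prefs.get)
        print(f"stage '{task}': preferred action -> {best}")
```

Watching the preferences drift toward the right action, stage by stage, is a miniature version of what one can observe when following an agent learning inside a game engine.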

Impressive AI is not an out-of-the-box phenomenon or the creation of a single genius, but the result of a long iterative, collaborative process. By opening the research process up and being transparent about its failures along the way, we can move from a binary utopia/dystopia discussion toward a more informed conversation on training AIs.

Games as Training: From Board Games to Self-Driving Cars

To demonstrate how games and game engines could help demystify AI, I will zoom out a bit and look at the shared history of games, simulations, strategy, forecasting, and artificial intelligence.

From Kriegsspiel to AlphaGo

The roots of today’s computer simulations go back to the Prussian kriegsspiel, a kind of elaborate board game developed by Lieutenant Georg Leopold von Reiswitz. The Prussian military used it around the beginning of the 19th century. Initially, two miniature troops fought on a chesslike field made of a grid system. It was a simple, intuitive, and playful way for the Prussian military to quickly simulate different scenarios. Over time more rules and factors were added to the game: three-dimensional terrains, the speed of horses, unforeseen events such as sudden communication difficulties, random factors such as weather conditions, and unknown factors such as the actual size of the opponent’s army, its strengths and weaknesses. General Julius von Verdy du Vernois further refined von Reiswitz’s idea by making the rules more flexible and giving power to a game overseer, which made it possible to play these simulations in real time. Having become a tool that Prussian officers increasingly relied on, the war game was taken up and refined by other countries in the 20th century. In parallel, tabletop war games became consumer goods (and propaganda tools) in the run-up to the Second World War. Technological developments in the second half of the century led to computer-assisted military simulations, reaching new levels of detail and accuracy.

The idea of simulating scenarios in order to minimize risks and gain strategic advantages has since translated into other fields such as economics, politics, and marketing. Simulation and forecasting are used in everything from positioning ride-hailing services throughout a city to predictive policing. Cloud gaming technologies today are capable of running enormous simulations in real time, used both for open-world games and for strategic tools of all kinds. In general, simulations tend to be about narrowing down paths in order to optimize for likely outcomes, as opposed to opening up new, unconventional paths.

In parallel, using games as a training playground for algorithms has become increasingly popular in computer science since at least 1949, when mathematician Claude Shannon identified chess as an ideal starting point for exploring the concept of a thinking machine. In his paper “Programming a Computer for Playing Chess,” Shannon characterizes chess as a game whose difficulty sits within a satisfactory range, that has a clear goal, and that involves some kind of thinking in order to master it. Over the years computers were trained on a range of different board games. In 1997 IBM’s Deep Blue made history by beating world chess champion Garry Kasparov. In recent years the victory of DeepMind’s AlphaGo over Go world champion Lee Sedol drew attention from around the globe as another breakthrough in AI research. Central to these recent successes is the idea of rewarding an algorithm for successful moves: rather than hard-coding a strategy into the algorithm, researchers let it learn by itself from the rewards it receives.
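To make the reward idea concrete, here is a rough, hand-rolled sketch of self-play value learning on the simple game of Nim (players alternately take one to three objects from a pile; whoever takes the last one wins). It is not how Deep Blue or AlphaGo work; it is only a toy demonstration that a playable strategy can emerge from rewards alone, with no strategy hard-coded.

```python
# A toy sketch of learning a game strategy purely from rewards: tabular
# value estimates updated through self-play on single-pile Nim.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)          # remove 1, 2, or 3 objects from the pile
Q = defaultdict(float)       # Q[(pile, action)] = learned value of that move
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def choose(pile: int) -> int:
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < EPSILON:              # explore
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(pile, a)])   # exploit

def train(episodes: int = 20000, start_pile: int = 10) -> None:
    for _ in range(episodes):
        pile, history = start_pile, []
        while pile > 0:                        # both "players" share the table
            action = choose(pile)
            history.append((pile, action))
            pile -= action
        # whoever took the last object wins; propagate the outcome backwards,
        # flipping its sign for the other player and discounting each step
        reward = 1.0
        for state, action in reversed(history):
            Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
            reward = -reward * GAMMA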

Buckminster Fuller & the World Peace Game

Designer and architect Buckminster Fuller saw the potential of games and the importance of goals and rewards. By experimenting with unusual game mechanics, he turned the war game concept on its head. Fuller envisioned a so-called World Game whose ultimate goal was to make everyone a winner. Also known as the World Peace Game, it required players representing all nations to play in a participatory manner in order to make use of Earth’s limited resources without interfering with other humans or gaining advantages at their expense. Convinced that his simulation offered a way to tackle shared, worldwide problems, Fuller envisioned the game as a serious democratic alternative to voting.

Referring to the popularity of MMOs (massively multiplayer online games) and the sheer computational power available today, artist Jonathon Keats speculates in his 2016 Fuller biography You Belong to the Universe on using them to run crowd-sourced simulations, resulting in a new form of policymaking:

“[MMOs] could play a more direct role in governance. One of Fuller’s ideas — that gaming could serve as an alternative to voting — could potentially be realized with a plurality of people gaming national and global eventualities. For any given issue, different proposals could be gamed in parallel. As some games collapsed, gamers would be able to join more viable games until the most gameable proposal was played through by all. That game would be a surrogate ballot, the majority position within the game serving as a legislatively or diplomatically binding decision. Provided that citizens consented from the start, it would be fully compatible with democratic principles — and could break the gridlock undermining modern democracies.”

While Keats’s proposal is radical and its viability may be questioned, it recognizes the generative and educational potential of a pop culture medium. A new generation of artists, designers, and developers is already working with game engines and simulations, exploring the medium’s qualities and traps in a critical manner.

Subversive Simulating

While participatory gamelike interfaces for networked forms of AI have yet to be built, a look at how artists and designers appropriate game engines in novel ways shows that games and simulations can indeed be used to demystify AI. Rather than using simulations to narrow paths down, they reappropriate them to open up new paths and reveal the mechanics of their simulations and games.

Humans of Simulated New York

For an artist residency at DBRS Innovation Labs, artist Fei Liu and developer Francis Tseng designed a digital simulation piece titled Humans of Simulated New York. The core idea was to create a highly detailed simulation of New York City, which could then generate narratives about its simulated residents. The simulation used as much New York-specific data as possible, drawing on census, market, employment, and other data. Liu and Tseng then built an economic model for their agents (simulated residents) to inhabit and act in. By using real-world data, aspects such as structural inequality are captured in the simulation. Other parameters are left for the player to experiment with, in order to see how New York and New Yorkers would change under different scenarios. A feature of the project allowed players to play as one of the fictional, simulated citizens and see how their life is shaped by the overall structure of the simulation.
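The toy sketch below is not the artists’ actual model; it only gestures at the general shape of such an agent-based simulation. The numbers and rules are invented stand-ins for census-derived data, meant to show how unequal starting conditions persist in the simulated outcome.

```python
# A toy agent-based sketch (not the artists' actual model): agents seeded
# with unequal starting wealth act in a very simple economy, and structural
# inequality carries through into the outcome.
import random

random.seed(7)

class Citizen:
    def __init__(self, wealth: float, skill: float):
        self.wealth = wealth
        self.skill = skill

    def step(self, wage_rate: float, rent: float) -> None:
        # earn in proportion to skill, pay a flat rent; wealth cannot go negative
        self.wealth = max(0.0, self.wealth + wage_rate * self.skill - rent)

def run(steps: int = 50, n: int = 1000) -> None:
    # unequal starting conditions stand in for census-derived data
    citizens = [Citizen(wealth=random.lognormvariate(0, 1.0) * 100,
                        skill=random.uniform(0.5, 1.5)) for _ in range(n)]
    for _ in range(steps):
        for c in citizens:
            c.step(wage_rate=10.0, rent=8.0)
    wealths = sorted(c.wealth for c in citizens)
    top10 = sum(wealths[int(0.9 * n):]) / sum(wealths)
    print(f"share of total wealth held by the top 10%: {top10:.0%}")

if __name__ == "__main__":
    run()
```

Changing the parameters (wage rate, rent, the initial wealth distribution) corresponds to the kind of experimentation the project invites; changing the rules inside `step` itself corresponds to the deeper intervention discussed below.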

By no means does Humans of Simulated New York pretend to be a faithful representation of New York City. Tseng and Liu are also not proposing that such a simulation should be run by policy makers, though in the context of smart cities we will likely see attempts to do so. The project first and foremost makes a point about how the assumptions of the simulation’s creators shape the imaginative potential of the simulation itself. In response, one would need to allow the player to change not only the parameters but the rules of the simulation itself, a step the artists want to explore in their next projects.

Ian Cheng

Artist Ian Cheng uses game engines as a tool to build live simulations. These live simulations are at their core games that learn to play themselves in real time, yet they differ from most simulations in that they lack an overall purpose or goal. Quite the contrary: Cheng is interested in designing simulations with chaotic and unexpected outcomes by assigning particular attributes and behaviors to specific entities and letting them loose on each other:

“Emergence is a key principle to how this all works: it’s the idea that from simple properties and behavioral laws, unexpected complexity can emerge. I write little, individualized fragments in C# that describe a behavior or tendency of an object, I also write a set of laws that modify the overall physics of the virtual environment. The key production principle is that all these behavior writings are micro, never a whole, deterministic architecture or bird’s eye view design. The simulation in the end is a virtual space with a huge accumulation of mini-behaviors and laws that act and react to each other with no master design, just tendencies, all playing out in parallel with each other. In principle, it mimics the way in which nature has built up the complexities of our world, without design, piece by piece.” (2015)
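A minimal sketch of that principle might look like the following. Cheng’s own fragments are written in C# inside a game engine; this toy, written in Python purely for illustration, only shows the idea of small, independent behaviors acting and reacting in parallel with no master design.

```python
# A minimal sketch of emergence from micro-behaviors: each entity carries
# small, independent behavior functions, and the "simulation" is simply all
# of them acting on one another, with no master script.
import random

random.seed(3)

class Entity:
    def __init__(self, name, energy, behaviors):
        self.name, self.energy, self.behaviors = name, energy, behaviors

# micro-behaviors: tiny functions that read and nudge the shared world
def wander(entity, world):
    entity.energy -= 1                       # moving costs energy

def graze(entity, world):
    entity.energy += 2                       # passive gain for plants

def hunt(entity, world):
    prey = [e for e in world if e is not entity and e.energy < entity.energy]
    if prey:
        target = random.choice(prey)
        entity.energy += 3
        target.energy -= 3

def tick(world):
    for entity in list(world):
        for behavior in entity.behaviors:
            behavior(entity, world)
    world[:] = [e for e in world if e.energy > 0]   # exhausted entities vanish

if __name__ == "__main__":
    world = (
        [Entity(f"shrub{i}", 5, [graze]) for i in range(5)]
        + [Entity(f"critter{i}", 8, [wander, hunt]) for i in range(3)]
    )
    for step in range(10):
        tick(world)
    print("survivors:", [f"{e.name}({e.energy})" for e in world])
```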

Unlike simulations used for forecasting and gaining insight, Cheng’s live simulations evolve over human space-time; they cannot be sped up or slowed down. There is also no intention to stop them at a point of perfection, to cover up failures, or to manipulate uneventful periods. They evolve in real time in front of our eyes in a manner that is brutally honest, usually without us being able to intervene. In a piece titled Droning like a Ur, an AI in the shape of a boy was taught to name every object he encountered or that encountered him. As the simulation evolved, objects merged into new composite objects, and the boy renamed them by awkwardly combining their previous names. In Thousand Islands Thousand Laws, a bird that was told to collect objects of a certain mass and stillness picked up the head of a human figure who remained still for too long.

While Cheng’s virtual ecologies are not human-centered, his work makes the case that the very act of simulating is a characteristically human one. Simulations are not representations, but rather training exercises for the real. Car companies simulate accidents to build safer cars. Astronauts simulate lift-off conditions by experiencing high g-forces. We simulate what to say to someone by first rehearsing it with friends, knowing that the aliveness of a conversation is too unpredictable to fully know in advance. Cheng’s work argues for simulations as a medium for communicating complex forms of emergent intelligence. As Luciana Parisi, co-director of the Digital Culture Unit at Goldsmiths, University of London, writes in her essay “Simulations” in the 2015 publication Ian Cheng: Live Simulations: “[...] both mental and computer simulations are artificial intelligences embedded and yet not reducible to the bounded strata of the brain.”

Everything

Everything by David OReilly is a procedurally generated video game in which players are able to inhabit and control various entities at different scales. Its gameplay revolves around existing as and inhabiting entities, as opposed to acting. Initially the player starts as one of many objects and creatures, and can shift between them. As the player shifts to smaller entities, the scale of the game world shifts accordingly, until one reaches the subatomic level. From there the player is able to go the opposite way and shift to bigger entities at bigger scales, eventually inhabiting entire planets or galaxies.

Taking the example of the human bloodstream with different microbes fighting against each other, British philosopher Alan Watts points out that “[w]hat is in other words conflict at one level (at magnification), is harmony at another level.”

Everything is more an expansive experience than a game in the conventional sense. There are no definitive scores. It is not far-fetched to draw parallels to Speculative Realism, as the game takes a non-anthropocentric, object-oriented approach. OReilly has stated that the aim is to leave players with a feeling of wonder. Watts calls this a “point of emotional investment,” in which one realizes the tremendous interconnectedness of the world.

Toward an Infinite Game

Historically, simulations and AI have been used to find answers to finite games. In a way, strategy video games, military simulations, and forecasting technologies are legacies of the kriegsspiel, and therefore inherit a certain notion of competition. Combined with a belief in exponential growth, these forecasting practices tend to narrow down visions of a potential future. If one puts the emphasis on the relationships, communication, and processes between different entities in a game world, as opposed to optimizing outcomes, one starts to play in a more experimental manner, generating an expanded range of potential fictions and futures. Artists, developers, and designers have started to experiment with games in ways that are agnostic to the concept of winning or losing, playing an infinite game. They take inspiration from the way nature adapts to new conditions and from other forms of emerging intelligence. These experiments embrace the unpredictable and uncontrollable as means of computing new worlds, of opening up other paths. As Luciana Parisi emphasizes: “Simulations are both manifest appearances of human culture and the scientific images of computational processing.” (2015) Ultimately, they provide small but intriguing counter-narratives to the finite game that is the Singularity.

Video games and distributed forms of computation share an object-oriented view of the world. No other cultural medium therefore seems as promising for enabling new ways of relating to artificial intelligence in its distributed form. While the works discussed here were intended to comment on, critique, or expand the notion of game simulations, there is no reason not to apply this object-oriented, infinite-game approach to real-world applications. With rapid advancements in cloud gaming technology and an ever-growing sensor network, one can imagine interfaces for the general public to explore and relate to emerging, distributed, and networked forms of intelligence. Games making use of real-world data in real time can become tools for citizens to take on new and unexpected points of view. They can provide ways to build one’s own worlds, to understand complex systems or counterintuitive processes, to become emotionally invested in a larger picture beyond the anthropocentrism Silicon Valley promotes. They could become ways to experiment, to build one’s own sub-AI rag rug. One can imagine thousands upon thousands of AI network simulations at different scales and speeds, with different aspects, factors, data sets, and degrees of influence on each other. These worlds might indeed change one’s opinions and behaviors in the “real” world. In their goallessness, they would develop more like nature, with no notion of control.


Bibliography

Bratton, B. (2015). Outing Artificial Intelligence: Reckoning with Turing Tests. In: M. Pasquinelli, ed., Alleys of Your Mind: Augmented Intelligence and Its Traumas, 1st ed. [online] Lüneburg: meson press, Hybrid Publishing Lab, Centre for Digital Cultures, Leuphana University of Lüneburg, pp.69-80. Available at: https://meson.press/wp-content/uploads/2015/11/978-3-95796-066-5-Alleys_of_Your_Mind.pdf [Accessed 1 May 2018].

Bratton, B. (2016). The Stack - On Software and Sovereignty. 1st ed. Cambridge: MIT Press.

Carse, J. (2012). Finite and Infinite Games. New York: Free Press.

Cheng, I., Evers, E., Jaskey, J., Kelsey, J., Parisi, L. and Raskin, I. (2015). Ian Cheng: Live Simulations. 1st ed. Leipzig: Spector Books.

Cheng, I. (2018). Ian Cheng: Emissaries Guide to Worlding. 1st ed. London: Koenig Books, London & Serpentine Galleries.

Gabriel, M. (2015). Fields of Sense: A New Realist Ontology. 1st ed. Edinburgh: Edinburgh University Press.

Gabrys, J. (2016). Program Earth: Environmental Sensing Technology and the Making of a Computational Planet. 1st ed. Minneapolis: University of Minnesota Press.

Guattari, F. (2000). The Three Ecologies. Translated from French by Pindar, I. and Sutton, P. London and New Brunswick, NJ: Athlone Press.

Harman, G. (2018). Object-Oriented Ontology: A New Theory of Everything. 1st ed. London: Penguin Random House.

Huizinga, J. (2016). Homo Ludens. Kettering, OH: Angelico Press.

Ito, J. (2018). Resisting Reduction: A Manifesto. Journal of Design and Science (JoDS), [online] (3). Available at: https://jods.mitpress.mit.edu/pub/resisting-reduction [Accessed 1 May 2018].

Keats, J. (2016). You Belong to the Universe. 1st ed. New York: Oxford University Press.

Kofman, A. (n.d.). Les Simerables. [online] Jacobin. Available at: https://www.jacobinmag.com/2014/10/les-simerables/ [Accessed 1 May 2018].

Medina, E. (2014). Cybernetic Revolutionaries. Cambridge: The MIT Press.

OReilly, D. and Watts, A. (2017). EVERYTHING - Gameplay Trailer. [video] Available at: https://www.youtube.com/watch?v=JYHp8LwBUzo [Accessed 23 May 2018].

Simondon, G. (2017). On the Mode of Existence of Technical Objects. Translated from French by Malaspina, C. and Rogove, J., 1st ed. Minneapolis: University of Minnesota Press.

Tseng, F. (2016). Humans of Simulated New York. [online] space and times. Available at: http://spaceandtim.es/projects/hosny/ [Accessed 1 May 2018].

Wiener, N. (1988). The Human Use of Human Beings. New York: Da Capo Press.
