In Resisting Reduction, Joi Ito invites us to consider a new paradigm of technology development: a collaborative model, one where we transmit our ideas, goals, dreams, and complexities to one another, person-to-person, sharing more than what is programmable into an algorithm. It’s about trying to effect change by operating at the level songs do.1 As I read this essay I thought immediately of social work, which does this so naturally. The power of the profession lies in its unusual mix of systems thinking and heart. Social work brings dynamic thinking to some of the stickiest human problems, courage to tough conversations, and warmth to the process of making change at the individual and group level. This human-centered tenacity builds the bones of a new technological era.
Humans—messy, complicated, emotional, rational humans—are what make our social systems so complex. Ito describes this human complexity, beginning at the most basic, cellular level: “individuals themselves are systems composed of systems of systems”.2
These already fascinatingly complex beings create families, friendships, communities, and cultures, and the resulting social systems and networks become exponentially more complex. Technology, now an embedded feature of these human systems, affects each member of a system directly or indirectly; the collateral impact on our individual lives and communities is unfathomable.
Our increasingly automated and measured society is being built with tools powerful enough that their misuse could further divide us, yet those same tools simultaneously offer the opportunity for deeper convergence. As we move forward, will we build technology to reflect our growing diversity? Will we build only what we currently know into our machines, or will we leave space for the unexpected, for the times we surprise ourselves or are surprised by others? And as technology evolves—daily—are we able to develop the social knowledge we require rapidly enough to make these decisions? In the time we need to make them?
Luckily, we have experience and wisdom within our societies to draw on as we develop new technology. And there’s a method for doing this that Joi Ito describes: “participant design”—design of systems as and by participants. This approach will steer us away from a humans-versus-machines mentality, give us opportunities to work together and leverage diverse expertise—and let ourselves be humbled by the magnificent complexity of humankind. If we collectively pay attention to all the ways humans affect technology, and technology affects humans, we can develop a strategy for continuous learning and iteration.
But who are the participants, and how do we invite them into a collaborative model? We know the active participants: those who design technology, the end users, our peers and networks—but what about the unseen participants, those not visible in our mainstream economic and social systems? I’m a social worker, and like many professionals who call themselves by this name, I work with some of the most vulnerable members of society, those who may go unseen or be represented in a reduced way in our data-based systems. They too are participants, and social workers can help bring their voices into system design.
I see interesting opportunities for social workers to contribute to the work ahead in human-centered technology. They are trained to work dynamically with individuals, groups, and systems. Across diverse practice areas, they bear witness to much of the nuance of the human social experience, and their work requires them to continually adapt to changing environments, circumstances, and policies. They are trained to work with the vulnerable, the marginalized, and the neglected, as well as the wealthy, the highly educated, and the well-connected. They see who’s in, and who’s missing. Social workers are continuously evaluating ‘goodness of fit’ for a person within their environment and assessing risk and danger, all while working toward a goal of flourishing. This flexibility, paired with a healthy respect for human agency and social justice, gives them unique (often overlooked) perspectives on the power and responsibility of the technological choices we make.
But we’re not primed to work together. The main challenge is that we lack structures for collaboration and continual iteration. Social workers might have valuable perspectives on the systems work explored in scientific disciplines, but they speak a different language, and the work is siloed. Much of social work is experiential, based on extensive field experience, and needs to be translated into something useful for technologists. Technologists have the technical ability to investigate and exploit data about social experiences and phenomena but lack the history and social context. There is a natural complement here, but we need a systematic approach for bringing these groups together.
My work is focused at the intersection of social work, data, and technology, a space that gives voice to experts of the human domain in the digital realm. I lead the Data and Program Analytics department of a large non-profit organization that provides housing opportunities for the formerly homeless. My department develops software, collects data, conducts analyses, and distributes insights to better understand the work we do and its implications for the larger field of housing and homelessness. This work is my window for observing real-world examples of Ito’s collaborative model. From this perspective, let me share the following thoughts:
● How the ecological theory in social work prepares social workers to be complex systems thinkers and actors
● How social welfare technology interventions can have unintended consequences
● An example from my own work that brought social workers, computer scientists, and analysts together to design a human-centered technology system
As a discipline, social work walks an interesting line between the individual and the collective, between self-determination and environmental influence. Social work exists to foster individual well-being and healthy relationships, and to promote positive interdependent exchanges between persons and their environment. Social work has a Code of Ethics,3 and fundamental to the work is respect for the constellation of the profession’s core values and how they are expressed within a mutual transaction: service, social justice, human dignity, interpersonal relationships, integrity, and competence.4
Social work prepares its professionals with rigorous field experience and a holistic curriculum, which includes theories and frameworks that question and challenge complex social systems. First-year students gain a basic conceptual understanding of feedback loops and nonlinear trajectories, and exposure to systems thinking. This foundation is useful for their subsequent coursework, which can take them in a variety of directions across micro to macro practice: political advocacy, behavioral economics, clinical and therapy modalities, and social enterprise, to name a few. The practice areas and careers social workers find themselves in are numerous and diverse, and what unites them is a systems perspective.
Students are first introduced to ecological systems theory (also called the ecological model) in their first semester of graduate school in a class called Human Behavior and the Social Experience. The ecological model uses concepts from biology and the physical sciences to describe the reciprocity between individuals and their environments. The model presents a means of examining how ‘good a fit’ people are within their physical environment and how the mutual exchange of person and place affects their livelihood. In this way, there is a focus on the edges of systems, where they interact or overlap, and what happens in those spaces when one ends and another begins.
Social work training has an interesting history with systems thinking that, like the profession itself, evolved over time to align with the mission of the work. Systems theory was first introduced in the mid-20th century as a family systems model. Using biological theories to explain the adaptation of organisms to their environments, the theory was first applied to describe the exchange between families and their environment. Early models presented a reduced view of reciprocity, in which it was believed family members were influenced equally by environmental systems of equal power.5
Early models emphasized ‘goodness of fit’ through the adaptation of person and place, but did not acknowledge the influence of power dynamics, the role of the observer, or the relative power of each part of the helping system. Moving in a less reductionist direction, social work systems theory in the 1960s and 1970s assumed a more ecological approach, deconstructing the term "environment" into social determinants with varied levels of power and influence, depending on individual features and a person’s desired level of connectedness to the system.6
In 1979, Urie Bronfenbrenner pushed ecological systems theory into practice as a model that identifies four environmental systems with and within which an individual interacts: the microsystem, mesosystem, exosystem, and macrosystem. These nested systems represent ecosystems of immediate to distant influence in a person’s life.
From the initial wisdom of the physical sciences, ecological theory in social work further evolved in the 1980s to incorporate pressing factors such as the influence of power and oppression, the role of the observer in recording history, and the socio-cultural functions that speed up or slow down the pace at which social exchanges occur. Paramount to this theory in social work practice is an ecological perspective on race, ethnicity, and gender, and the power implicit in the transactional exchange. This is really the special sauce of social work: it does not hold a reductionist view of an individual’s experience; instead, it continuously considers the features and social constructs at play in a situation, interpreted through a social justice lens.
In education and in practice, social workers are trained to see the invisible social forces that inform behavior and social organization. With this approach, a person’s motivations can be as important as their actions. Trained to perceive the gestalt, they build their work sensibly and dynamically in response. In Resisting Reduction, Ito beautifully describes an intervention approach reminiscent of social work:
Better interventions are less about solving or optimizing and more about developing a sensibility appropriate to the environment and the time. In this way they are more like music than an algorithm.7
There is a popular debate in the field of social work over whether it is more art or science. Maybe it’s a synergy of art and science.8 Practitioners bring the unseen layers of influence into the light and challenge the forces in the system that are not viable for the whole. They evaluate entropy in the system by identifying and deconstructing the factors that do not promote growth: intergenerational poverty, historical oppression, systemic and institutionalized racism. Social work theory aims to understand these determinants to strengthen intervention, instilling an approach akin to what Joi Ito describes as “Humility over Control”. Social workers are a voice for humans in the automated age.
What happens when technology tools designed for social welfare are developed without co-design from the social service providers and the ‘beneficiaries’ who will use them? Let’s look at an example Virginia Eubanks highlights in her book Automating Inequality. Despite the best attempts to build a peer-reviewed, evidence-based intervention, we still witness collateral damage from technology intervention in social systems, and to the people served within them.
One of the case studies included in Automating Inequality focuses on the coordinated entry system for housing opportunity in Los Angeles. This policy approach has gained national popularity in recent years and is known colloquially as “the Match.com of homeless services”.9 In a nutshell, a coordinated entry program is a systematic approach designed to connect those experiencing homelessness with the limited available housing stock in the city. Given the obvious supply-and-demand problem, the approach uses a survey tool, the VI-SPDAT (Vulnerability Index-Service Prioritization Decision Assistance Tool), to identify the most vulnerable people in need of housing and assign each a priority score for housing.10
From my perspective, HUD did its due diligence when assessing possible survey tools. Prior to adoption, it convened expert panels11 and assessed possible tools against the following criteria, considering the degree to which each was: valid, reliable, inclusive, person-centered, user-friendly, strengths-based, sensitive to lived experience, housing-first oriented, and transparent.12 According to HUD, the VI-SPDAT tool is “designed to help guide case management and improve housing stability outcomes and provide an in-depth assessment that relies on the assessor’s ability to interpret responses and corroborate those with evidence.”13 These criteria certainly echo best practices in social work (for anyone working on the front lines of homelessness), and yet, as evidenced in Automating Inequality, there is a breakdown somewhere between the intention of the technology and its actual impact on individuals’ lives. After meeting with case managers and hearing individuals’ stories, Eubanks eloquently reflects,
If homelessness is inevitable—like a disease or a natural disaster—then it is perfectly reasonable to use triage-oriented solutions that prioritize unhoused people for a chance at limited housing resources. But if homelessness is a human tragedy created by policy decisions and professional middle-class apathy, coordinated entry allows us to distance ourselves from the human impacts of our choice to not act decisively. As a system of moral valuation, coordinated entry is a machine for producing rationalization, for helping us convince ourselves that only the most deserving poor people are getting help. Those judged “too risky” are coded for criminalization. Those who fall through the cracks face prisons, institutions, or death. 14
So how did this go astray? As Eubanks so accurately observed, homelessness is not a simple person-to-home matching problem. The causes of homelessness (like its solutions) are as complex as the people experiencing it. Since we can’t program into a machine all the reasons and feelings and actions that rendered people homeless, we create shortcuts: we design surveys and predictive analytics that give us a sense of objectivity and a facade of equity.
Our history follows us, even when we run in new, innovative directions, ‘disrupting’ the status quo. This is not to criticize these technology-policy interventions; I believe we are moving in the right direction. What we still need is a deeper ecological look at complex problems such as homelessness, with consideration for how these tools might limit, even as they expand, our ability to connect people to resources. Joi Ito brings us to the same conclusion:
Today, it is much more obvious that most of our problems—climate change, poverty, obesity and chronic disease, or modern terrorism—cannot be solved simply with more resources and greater control. That is because they are the result of complex adaptive systems that are often the result of the tools used to solve problems in the past, such as endlessly increasing productivity and attempts to control things. This is where second-order cybernetics comes into play—the cybernetics of self-adaptive complex systems, where the observer is also part of the system itself.
Data about people and social conditions reflects the influence of the methods by which it was collected: the people, places, language, culture, and social context. It is further shaped by the methods of cleaning, standardization, and analysis applied to it. It’s not feasible to consider the entire individual, social, and political context when conducting analyses, but it’s myopic to ignore it entirely. There is still great value in investigating social phenomena (or the subset we have the privilege to analyze), but given the limitations of both humans and machines, we develop these heuristics on what we know, with what we have, at the time we have it. A panacea is unlikely.
The tools and methods of data science, including predictive models, help us understand our history. While machine learning is positioned to anticipate and forecast what is coming, in my opinion the true power of predictive analytics lies in how it helps us reflect on what has already happened: to deeply consider the factors that influence our trajectory, and to assess whether we can—or should—trust the expected future results. Nate Silver, who has become a household name for election predictions, reminds us of the limits of probability in what he calls the “prediction paradox”: “the more humility we have about our ability to make predictions, the more successful we can be in planning for the future”.15 Machine learning requires a humility that constantly acknowledges the model is limited and incomplete, and requires evaluation (and reevaluation) to see how it performs when disturbed, much as you shake a ladder before you climb it to test how solidly it is positioned. The model is only as good as its data, and social workers can help hold the ladder.
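To make the ladder-shaking habit concrete, here is a minimal sketch in Python. The scoring function and its weights are entirely hypothetical (not the VI-SPDAT or any real assessment tool); the point is only the evaluation practice: perturb the inputs slightly and watch how far the output moves.

```python
import random

# Hypothetical weights for a toy priority-scoring model (illustrative only,
# not any real tool): a simple weighted sum over survey answers.
WEIGHTS = {"years_homeless": 2.0, "health_conditions": 1.5, "income": -0.5}

def priority_score(answers):
    """Weighted sum of survey answers."""
    return sum(WEIGHTS[k] * v for k, v in answers.items())

def shake_the_ladder(answers, trials=1000, noise=1.0, seed=0):
    """Perturb each answer slightly, many times, and report the baseline
    score and the largest resulting change. A large spread suggests the
    output is fragile with respect to small differences in how questions
    were asked or answered."""
    rng = random.Random(seed)
    base = priority_score(answers)
    worst = 0.0
    for _ in range(trials):
        perturbed = {k: v + rng.uniform(-noise, noise) for k, v in answers.items()}
        worst = max(worst, abs(priority_score(perturbed) - base))
    return base, worst

base, worst = shake_the_ladder(
    {"years_homeless": 3, "health_conditions": 2, "income": 1}
)
print(base)   # 8.5 for these weights
print(worst)  # how far the score can drift under small input noise
```

In a real setting the perturbations would model plausible variation (interviewer effects, question wording, a person’s willingness to disclose), but even this toy version makes the habit visible: the number a system produces is not a fact about a person, it is a fact about a model plus its inputs.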
I am hopeful that we are moving in the direction of a more resilient and adaptable technological era, but we still have much work to do to bring diverse actors together in true collaboration. My work picks up roughly where Eubanks’ case study leaves off. There is a sweeping “data for good” movement, and governments, NGOs, and big corporations are finding places where their work overlaps. My work in data about and for social welfare faces the same challenges and limitations that Eubanks outlines. And it is motivated by the same goals of system improvement: to identify and involve the right stakeholders, and to examine the problem from more angles. Put simply, the goal is to help more people, in better ways. Aware of the opportunities and limitations, my team’s design approach for data systems and analysis strategy is based on the best “sensibility approaches” we can develop, informed by experts in the field, evaluated by the users, and amended in a continuous loop of feedback and improvement.
As Sasha Costanza-Chock, a fellow author in this edition, echoes in Design Justice, A.I., and Escape from the Matrix of Domination:
Different people experience algorithmic decision support systems differently, and we must redesign these systems based on the lived experience of those they harm. This is essential if we want to make space for many worlds, many ways of being, knowing, and doing, in our visions of A.I. and of planetary systems transformation. 16
I hope in this work we continue to reach a bit further and learn how the most vulnerable and silenced among us feel being at the other end of a data-based system. It is they who have the most to lose, and (we can only hope) the most to gain, from the systems we build to support them.
I’ll end here with a heartening example from my own work, in which computer scientists, social workers, and data practitioners collaboratively designed a program database for homeless outreach workers. Even within a social service organization, bringing these diverse practitioners together is challenging. It comes more naturally when using a human-centered design approach. The shared design process helps build empathy and mutual understanding, and creates a common language useful for grappling with the knotty question of how to represent people as data points.
Starting with open-ended questions in a creative environment initially promotes curiosity and conversation, and ultimately can bring creative, novel solution ideas to a shared problem.
I am continually heartened when I see social workers gaining comfort sharing ideas about data elements, a space outside of their expertise that is historically dominated by technologists (an elite few). I love seeing social workers inject person-first language into data systems: “person experiencing homelessness” rather than “homeless person”, and seeing them reinvent “unknown” options in demographic fields to describe situations that come up: “didn’t want to disclose”, “doesn’t know the answer”. When they go through the exercise of imagining a database they think about real people they know, care about, have struggled with, and they think about how this system could help them. These small suggestions reflect human-centered values and remind us that behind every data point is a real person, with joys and struggles like our own.
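As one concrete illustration, here is a minimal Python sketch of what “reinventing the unknown option” can look like in a data model. The field and category names are hypothetical, illustrating the pattern rather than our actual schema: explicit, person-first reasons for missing data replace a single catch-all “unknown”.

```python
from enum import Enum

class GenderResponse(Enum):
    """Hypothetical demographic field. Instead of one opaque 'unknown',
    the non-answers social workers actually encounter are named."""
    WOMAN = "woman"
    MAN = "man"
    NON_BINARY = "non-binary"
    DECLINED_TO_DISCLOSE = "didn't want to disclose"
    DOES_NOT_KNOW = "doesn't know the answer"
    NOT_YET_ASKED = "not yet asked"

# The states that represent absent data, kept distinct so reports can
# say *why* a value is missing rather than lumping everything together.
MISSING_STATES = {
    GenderResponse.DECLINED_TO_DISCLOSE,
    GenderResponse.DOES_NOT_KNOW,
    GenderResponse.NOT_YET_ASKED,
}

def is_missing(value: GenderResponse) -> bool:
    """True when the value is a reason for absence, not an answer."""
    return value in MISSING_STATES

print(is_missing(GenderResponse.DECLINED_TO_DISCLOSE))  # True
print(is_missing(GenderResponse.NON_BINARY))            # False
```

The design choice matters downstream: a “declined to disclose” rate tells an analyst something about trust and rapport, while “not yet asked” tells a program manager something about workflow; a single “unknown” bucket would erase both signals.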
I am equally heartened when I see the technologists, the computer scientists and data practitioners, slow down their pace during the “gaining empathy” phase. They begin by asking questions (not unlike in any software development model), but in the context of social welfare, these questions have the opportunity to expose them to more than typical end-user needs and to tap into some less tangible human struggles and motivations.
The diversity of perspective going in leads to a more fully formed system with more understandable outputs. Working together from the beginning builds comfort and a relationship between these groups that makes feedback and reiteration easier (and more enjoyable). Working together creates the potential for real stories to be told from data points, and breathes life into algorithms.
I’m inspired by the small ways the paradigm Joi Ito presented in Resisting Reduction is manifesting, and I hope we are only at the beginning. We need to construct our technological future in a way that lets us leverage human experience and compassion. For this, I encourage us to look to our colleagues in social work for the guidance to keep our technology path human-centered, paved with empathy, care, and reverence for our shared complex history of social progress.