"The target area is only two meters wide. It's a small thermal exhaust port, right below the main port. The shaft leads directly to the reactor system."
―Jan Dodonna on the attack on the first Death Star
I work in and around government, but I live in Silicon Valley. My days are filled with the ins and outs of government programs like SNAP, Medicaid, probation, pretrial supervision, and clean slate and workforce training programs, but since I have spent the better part of my career in tech media, most of my social circle (and in fact most of the supporters of my work) work at tech startups or at the big consumer tech platform companies. Sometimes the gulf between these two worlds is staggering.
I have a thousand criticisms of government’s risk aversion and slowness to adapt (and am maniacal about the costs of this to society), but my biggest frustration is reserved for the technology world’s tendency to reduce the problems of government to those it already knows how to solve. I know I’m about to get triggered in a conversation with a tech person about the difficulties of working in government when they say the words “But can’t you just…?” Can’t you just throw out the legacy systems and start over? Can’t you just get Congress to mandate modern technology approaches? Can’t you just change the rules of procurement to make this easier? The most extreme version of this is a question I get relatively often: “You’ve been studying this for years now. What’s the answer? What’s the one law we need to pass that will fix all of this?” (And the assumption is that Silicon Valley power and might will in fact be able to get this putative law passed.)
I call this Death Star thinking. Many of us grew up profoundly influenced by Star Wars. What’s the lesson too many people take from that first and most influential episode in the franchise, back in 1977? One incredibly well-placed shot into the thermal exhaust port, and the entire apparatus of our oppression explodes spectacularly. All we really needed were the plans to the Death Star and a very talented fighter pilot guided by the truth (“the Force”). Never mind that countless Death Stars lie ahead as the Imperial war (and the franchise) continues. That one glorious victory and the release it provided became the implicit theory of change for a particular demographic of my generation (in which I would count myself). When engineers and Silicon Valley operators ask me for drafts of the legislation we should write, they’re assuming I’ve got the blueprints of the Death Star and the computers have found the vulnerability.
The complexities of government and the difficulties of reforming it make Death Star thinking powerfully seductive. Just use [insert latest technology here], aim it at the exhaust port, and all your problems will be solved.
It doesn’t quite work that way. In fact, simply adding technology without understanding the complexity of the bureaucratic processes and how they got to be the way they are can just perpetuate the problem, or even make it worse.
Take one real-life example, chronicled by Matthew Weaver, a former Google Site Reliability Engineer who came to the federal government as part of the effort to save healthcare.gov in late 2013 and stayed on to fix technology processes and systems at the Veterans Administration and the Department of Defense. To illustrate not only the complexity but the paths that lead to it, Weaver describes one tiny part of a larger technology project at the Air Force: a component in a suite of software meant to perform a calculation on time-sensitive data collected by distributed measurement stations. The job of this component is to get the data from the stations to the software that does the calculation. Below is a condensed summary of Weaver’s story:
The measurement stations send the data using something called multicast User Datagram Protocol. Multicast UDP has been a basic feature of the Internet Protocol since 1989. It is part of the networking code in almost every operating system in the world, so it costs no money, and every server operating system includes tools to check and debug it. Sending the data over multicast UDP would be fast and reliable.
But the Air Force project does not deliver the data over UDP. Instead, a piece of software subscribes to the multicast UDP messages, reads the data as it arrives, and checks the cryptographic signature on each datagram. It then re-encodes the data into a format called XMLSEC, with another cryptographic signature. XMLSEC has a lot of overhead, so this step uses more memory and CPU time, and it makes each message larger.
The software then uses a protocol called SOAP to put this XMLSEC message onto something called an Enterprise Service Bus (ESB). XMLSEC, SOAP, and ESBs are complicated. They come in lots of incompatible versions and cost a lot of money. An ESB, for example, is an entire software suite in itself. Checking and debugging this software requires special tools and more specialized programming experience, and those tools don’t come by default with any operating system.
The ESB eventually delivers the new XMLSEC message with the monitoring data to yet another piece of software. This software uses SOAP to retrieve the XMLSEC message from the ESB, copies it into memory, and parses out the original data. This step uses a lot of CPU time and memory. After this step, though, the data once again appears in memory just as it did when it first arrived in a multicast UDP packet. Finally, a last piece of software copies the data into a shared segment of memory where the calculation software can access it.
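To make the contrast concrete, here is a minimal sketch, in Python, of what the direct multicast UDP path looks like. Everything specific in it is invented for illustration: the group address, port, and payload handling are hypothetical, since the details of the actual Air Force system are not public. Note that it uses only the standard library and the multicast support built into the operating system.

```python
import socket
import struct

# Hypothetical multicast group and port; the real system's values are not public.
MCAST_GROUP = "239.255.0.7"
MCAST_PORT = 5007

def send_measurement(payload: bytes) -> None:
    """A measurement station sends one reading to everyone listening on the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    # Keep datagrams on the local network segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))
    sock.close()

def receive_measurements() -> None:
    """The calculation side subscribes to the group and reads datagrams as they arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Ask the operating system to join the multicast group.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, station = sock.recvfrom(4096)
        # Hand the raw datagram straight to the calculation software: no re-encoding,
        # no message bus, no intermediate formats.
        print(f"{station[0]}: {data!r}")
```

That is the entire transport. In the system Weaver describes, signature re-checking, XMLSEC re-encoding, SOAP, and an ESB are layered between the arrival of each datagram and the calculation, and then all of that work is undone again at the other end.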
You don’t have to have technical expertise to understand what this approach does to the project’s speed, reliability, and cost. A natural conclusion is that the tendency to solve problems this way results from factors we can observe in government (a contracting ecosystem financially incented to make projects bigger and more complex, a technical workforce with outdated skills) and that it is the job of our lawmakers to check those factors through legislation that changes the underlying conditions and incentives. The problem with that assumption becomes clear, however, when Weaver doggedly traces the rules that drove this decision back to their source. Again, I summarize Weaver’s story:
Why did the contractor put all this inappropriate technology into the system? The contractor blames the customer, and in fact the request for proposal from the Air Force does require solutions that include an ESB, so this is accurate. The Air Force in turn blames the Department of Defense. Again, accurately: the DoD issued guidance as recently as 2010 requiring all technology acquisitions to be “Service Oriented Architectures” that comply with the “Information Enterprise Architecture.” The DoD in turn blames the Office of Management and Budget, which indeed, as recently as 2008, issued requirements that all federal IT projects have a “Service Oriented Architecture,” and the rules for SOA require an ESB. OMB in turn blames the Chief Information Officers (CIOs), the people who are supposed to define technology strategy for their federal agencies. As far back as 1999, the Federal CIO Council had deployed the Federal Enterprise Architecture (FEA), which, by way of its SOA guidance, requires federal technology solutions of any kind to include an ESB.
Why would the CIOs do this? As it turns out, several pieces of legislation instruct the CIOs to issue guidance like this. Two key ones are the Government Performance and Results Act of 1993 (GPRA) and the Clinger-Cohen Act of 1996 (CCA).
So did Congress inadvertently mandate complex, dependency-multiplying technology in all of the federal government’s software projects? Well, GPRA was designed to “improve government performance management.” Clinger-Cohen was designed to “improve the way the federal government acquires, uses and disposes of information technology (IT).” In other words, Congress correctly diagnosed the problem of decreased agility and flexibility in government technology, and the CIOs chose, in part, technologies like the ESB, which according to its Wikipedia entry “promotes agility and flexibility with regard to high protocol-level communication between applications.”
There’s a tremendous irony here. The purpose of a service-oriented architecture is to get away from monolithic Death Star architectures, fragile and full of dependencies, and instead to build complex systems out of a network of smaller, independently deployed components, defined by their interfaces (the set of requests they agree to send out and respond to) rather than by their detailed implementation. You’d think that “I’m going to send out a multicast UDP packet” and “I’m going to expect to receive one” could have served as the service-oriented interface for the Air Force project Weaver was trying to fix. Why didn’t it work out that way? At each level of government, broad guidelines had been hardened and made more specific, until by the time they were ready to be implemented, they created the same type of problem they had been intended to address.
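To see how minimal such an interface could be, consider a sketch of a service contract built on nothing but a documented datagram layout. The format below is made up for illustration (the project’s real data layout is not public); the point is that the entire contract between the stations and the calculation software can fit on a page.

```python
import struct
from dataclasses import dataclass

# A hypothetical wire format, invented for illustration: a station ID, a Unix
# timestamp, and one reading, packed big-endian. This documented layout *is*
# the service interface; any component that honors it can participate.
WIRE_FORMAT = ">HId"  # uint16 station, uint32 timestamp, float64 value

@dataclass
class Measurement:
    station: int
    timestamp: int
    value: float

def encode(m: Measurement) -> bytes:
    """What a station agrees to send out."""
    return struct.pack(WIRE_FORMAT, m.station, m.timestamp, m.value)

def decode(datagram: bytes) -> Measurement:
    """What the calculation side agrees to accept."""
    station, timestamp, value = struct.unpack(WIRE_FORMAT, datagram)
    return Measurement(station, timestamp, value)
```

Published as a one-page specification, an agreement like this is a service-oriented interface in exactly the sense the guidance intended, with no ESB in sight.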
Systems that are rigidly rules-based generally fail to respond appropriately to changes in rules, and you can’t fix a system built on rigid rules by specifying new rules. The laws Congress wrote were applied to a system so complex that no one can accurately predict what will happen when its rules change. And so the laws meant to correct past problems were, as so often happens, trumped by the law of unintended consequences, the one law that is always in effect.
Perhaps lawmakers would benefit from reading Joi Ito’s brilliant essay “Resisting Reduction,” in which he states:
In order to effectively respond to the significant scientific challenges of our times, I believe we must view the world as many interconnected, complex, self-adaptive systems across scales and dimensions that are unknowable and largely inseparable from the observer and the designer.
“Complex” and “interconnected” are certainly accurate descriptors of government, and the scale of government rules and regulations means it is unknowable by any individual. “Self-adaptive” is the hard part. That’s the part that corresponds to the Force in the Star Wars franchise: reliance not on what you know but on what you don’t know, and the trust that allows you to find the best way to deal with the uniqueness of the current situation.
Though many would argue that self-adaptive is not an accurate description of government systems (yet), it is useful to consider the challenges of government through this lens. Ito suggests that:
Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.
So how would government systems “participate as responsible, aware and robust elements of even more complex systems”? Let me offer three stories from the current government reform landscape that suggest progress towards anti-reductive thinking.
First, we see early signs of the evolution of rulemaking and lawmaking in the application of practices from the technology world, not in the technology itself. Modern technology development is user-centered, data-driven, and iterative. Building something minimal, testing it with real users, and then revising it so that it becomes progressively better and more full-featured is at the heart of today’s technology development practices.
The art of crafting regulation is inherently complex, and increasing the number of revisions a rule goes through as it is written would seem to compound the complexity of the end result. But when iterations are driven by contact with real people operating in real-world situations, the result improves. Last year, regulators at the Department of Health and Human Services in Washington, DC were charged with implementing a law called MACRA (the Medicare Access and CHIP Reauthorization Act of 2015), designed to allow Medicare to pay more for better care. The MACRA team wanted the HHS Digital Service team, led by Mina Hsiang, to build the website that would implement the law. What usually happens is that before regulators engage a tech team to build a website for users (in this case, doctors and other providers of medical care), they spend many months on study and research, producing a specification that describes in great detail the rules the web application will encode. Hsiang proposed instead that the regulators give her team an early draft in about a fifth of the time it would normally take, and let her team build an early version of the website based on that draft.
It’s normal practice for a tech team to test a site with users early in the development process; what was different this time is that the regulators could also see how users experienced and interpreted the rules they’d written, and change their language based on user behavior. They could then test the new language in a subsequent (still draft) version of the site, as the tech team put out new versions of the website to its test users. They did this four more times before the regulators called the rules final. The MACRA regulators reported that they’d just written the best rules of their careers, having benefited for the first time from real-world feedback during the process.
This may be little more than a baby step towards “responsible and aware,” but iterative, user-centered practice, even in increments, could be the authentic path towards algorithmic regulation and self-correcting law.
Another way we move towards government systems that “participate as responsible, aware and robust elements of even more complex systems” is by avoiding the temptation to blow them up and start over, and instead beginning by instrumenting them, adding visibility into their complexity and real-world outcomes from the bottom up. This is how we at Code for America have chosen to work in more than one (but not all) of our areas of focus, including the federal food assistance program known as SNAP (the Supplemental Nutrition Assistance Program).
SNAP is a federally funded, locally administered entitlement program. States are generally responsible for managing eligibility (which varies state to state but is governed by federal guidelines), enrollment, benefit distribution, and reporting and analysis. California has the second-lowest rate of SNAP participation in the country (ahead of only Wyoming), despite having spent significant funds over the past two decades attempting to raise enrollment rates. In California, a largely liberal state, low enrollment is not politically driven; it seems instead to result from the unnecessary complexity of the system and the unintentional barriers to enrollment that complexity creates.
California has devolved most administration of the program to its 58 counties, and most of the counties have in turn federated into three separate administrative consortia. Each consortium has built a technology system that manages eligibility, enrollment, benefit distribution, and a few other functions of SNAP administration. The requirements for these systems were defined as the superset of all the requirements each county had developed before the consortia existed, so even among famously complex government requirement sets, these were enormous. And eligibility and enrollment processes still vary from county to county within each consortium, despite the shared technology, so complexity compounds at every layer. The general public experiences that complexity as a series of barriers to taking advantage of this entitlement program.
This is plainly visible in the application process itself. Eligibility is largely determined by income, household size, and a few other factors, but the online application form for one of the California SNAP consortia asks over 200 questions and requires users to click through over 50 screens. The incumbent online application (which cost taxpayers $800M to build and costs $80M a year to maintain) doesn’t work on mobile devices and takes so long to get through that it is largely unusable at public libraries, since library computers usually time out after 30 minutes and the application doesn’t allow users to save work in progress. One of Code for America’s projects has been to create a streamlined, clearer, more respectful online application that works on a mobile phone and can be completed, including uploading required documents with the phone’s camera, in about seven minutes.
This intervention is not a new system but an alternative interface; the data the Code for America application collects is simply used to create a benefits application in the existing system. Its real value has less to do with a better front door, and more to do with what happens next. As users proceed through each step in applying for benefits, the service follows up with them by text or email (most prefer text) and tracks what happens.
What happens is rarely straightforward. Applicants often get communications in formats they can’t access (snail mail they never receive, or that arrives too late to act upon) and in language they don’t understand. (I mean this literally: the SNAP program in many counties offers communications in a variety of languages, and, through clerical mistakes, English-speaking applicants sometimes receive their letters in Mandarin, or vice versa. But we’ve also tested the English-language communications on many college-educated professionals, including this author, and found that the impenetrable legalese makes them unintelligible to readers across a wide range of demographics.) Applicants are asked for documentation they can’t provide, or to fill out additional forms with such threatening legal language that they choose not to proceed.
Too often, the design of technology systems in government allows for little to no redesign of the broader system itself: the people, policies, and business processes around the technology. In the Code for America model, improving the SNAP program is largely a matter of analyzing the data from users to find patterns in the barriers they face, then working with the administrators of the program to address the operational and policy drivers of those barriers. The intervention is a shadow system that highlights what’s not working from the bottom of the system up. The value is in instrumenting the system to provide visibility into the user experience across silos.
This approach is incremental in nature, but it creates a relentless machine for step-by-step change, in which improvements arrive far more quickly than they would through a massive effort to rebuild the systems entirely. Rebuilding without visibility into the barriers users face would just re-encode much of the operational and policy dysfunction that created those barriers in the first place.
The incremental approach of instrumenting systems and knocking down the barriers that then become visible is working for California, and can help make many other government systems more “responsible, aware, and robust.”
The third anti-reductive narrative of change in government speaks to another of Ito’s observations:
Better interventions are less about solving or optimizing and more about developing a sensibility appropriate to the environment and the time.
Tom Steinberg, who works on these same issues in the UK, said as much:
You can no longer run a country properly if the elites don’t understand technology in the same way they understand economics, ideology, or communications.
There is strategy, and there is culture, and we know what they say about what one does to the other. To the extent that culture is a product of people and the narrative that surrounds them, there is much to do to build a workforce and a culture in government capable of the sensibility Ito describes. Government today is full of people with a genuine desire to change the system and enormous will to do so, but the technical and design bench strength that enables constant iteration based on real-time feedback from users is still weak. The development of sensibilities appropriate to the time is held back when agile, adaptive work is limited to pockets within the institution.
I frequently encounter public sector leaders who believe that if only they could get the procurement right, a vendor could build them the right technology and bring their agency or department into the 21st century. Sadly, you can’t procure digital competence, and you can’t buy a sensibility. But you can cultivate and nurture them. We must change deeply ingrained assumptions about the kinds of skills and training we need in government, and about where in our society technical and design talent is valued (not in government!). This is long, hard, important work (and part of what Code for America is engaged in). There is no “solution,” only continued storytelling that shapes a new narrative and supports a new sensibility. But it is an important, valuable agenda that will get less attention and fewer resources than it should, because it doesn’t fit the model of a “solution.”
The need to believe that a Death Star-style solution is at hand, that we have analyzed the plans and found the single point of failure, runs deep in our culture. It’s a false comfort. The work of changing the complex, important, and sometimes maddening systems of government is hard, and even the best of us sometimes indulge in the fantasy that the slog we are engaged in is unnecessary. As a friend said the other day, “That’s why they call it fantasy.” But even the fantasy contains clues that a delusion of knowability is not the way: the shot only hits the target when Luke Skywalker switches off his targeting computer, rejects knowability, and embraces the expansive, intuitive Force, which we might redefine here as the design sensibility that connects with people, understands their needs, and builds systems that work with them rather than against them.
Creating new conditions, new capabilities, and new sensibilities is hard work, but the strength of the institution to function as a set of interconnected, complex, self-adaptive systems is worth fighting for.