
Defining the Dimensions of the “Space” of Computing

The first computing machines were so large they filled entire rooms. Today they are ubiquitous, built invisibly into our environments. While it's tempting to view this change within a predetermined space of progress, we can still shape the future on our own terms.

Published on Apr 22, 2019

In The Sciences of the Artificial, the psychologist and political scientist Herbert Simon describes “designing” as a search within a “space of alternatives.”1 His other views about design aside, Simon’s metaphor of searching a space may inform how we go about the tasks of designing digital artifacts and systems to support them. Simon may lead us to ask: What is the “space” of computing and the dimensions that define it?

Traditionally, we have thought of computing not in terms of a space of alternatives but in terms of improvements over time. Moore’s Law. Faster. Cheaper. More processors. More RAM. More mega-pixels. More resolution. More sensors. More bandwidth. More devices. More apps. More users. More data. More “engagement.” More everything.

What’s more and more has also become less and less — over time. The first computing machines were so large they filled entire rooms. Over the last fifty years, computers shrank so much that one would fit on a desktop, then in a shirt pocket. Today, computers-on-a-chip are built into “smart” devices all around us. And now they are starting to merge into our environments, becoming invisible and ubiquitous.

Early computing machines were rare and expensive. In the 1960s, the University of Illinois released a film bragging about having an astonishing 30 digital computers! A mantra of the early PC world became “one person, one computer.” Today, owning several computing devices is commonplace. In fact, your car probably contains a dozen micro-processors or more.

Clearly, what we think of as “computing” has changed — and will continue to change. No wonder, then, that most of our models of computing are progressions: timelines.

Historical Models of Computing

In 1980, Nicholas Negroponte, who cofounded the MIT Media Lab, spoke about the future of computing as “convergence.”2 He presented publishing, broadcasting, and computing as three rings and noted that they were beginning to overlap and would soon converge. The idea that computing would become a medium, blurring the boundaries between industries and serving as a platform for communication, began to take hold. WiReD followed. Then the Internet.

Negroponte’s model of “convergence”.

In 1991, Larry Tesler, who was vice president for advanced products at Apple, wrote, “Computers began as cumbersome machines served by a technical elite and evolved into desktop tools that obeyed the individual. The next generation will collaborate actively with the user.”3 He described changes that came in intervals of a decade, illustrating them as “The Four Paradigms of Computing” — “batch, time-sharing, desktop, and network” — in an era-analysis table.

Tesler’s “four paradigms of computing” era-analysis table.

In 2018, Our World in Data updated its graph of the adoption rates of consumer technologies from 1900 to 2016.4 The diagram illustrates that technologies are adopted in “waves” — much as economist Joseph Schumpeter predicted. The speed of adoption is also increasing. For example, radios took roughly thirty-five years to reach 90 percent acceptance, whereas cell phones took less than ten.

Technology adoption by household in the United States.

These three ways of understanding the roles computing plays in our lives are time-based. They note the changes that have happened to computing on a timeline, as a linear sequence. Emerging trends are often portrayed as natural progressions or extensions of the sequence, be they VR or blockchain or AI. For example, Google CEO Sundar Pichai announced a suite of new products under the rubric “AI First,” which followed “Mobile” [First], which followed “Internet,” which followed “PC.” (Pichai’s era-analysis table shows computing beginning with PCs.)

Given the accelerating pace of change, the focus on timelines is understandable — all the more so, with the pressure for entrepreneurs, investors, and the press to identify the new, new thing.5 And yet, we have other options. We could, as Simon suggested, look at the space of possibilities.

The “Space” of Computing

Using the metaphor of space, the central question becomes: What are the dimensions? What is up/down? Front/back? Side-to-side? What follows is not an answer so much as a proposal. We make no claim that our list is necessary and sufficient — or mutually exclusive and collectively exhaustive. Instead, we hope to start a conversation, to nudge discourse in another direction, to ask: What else is possible?

Analog vs Digital

Early computing devices were “analog”; for example, the Antikythera mechanism, Charles Babbage’s difference engine, and Vannevar Bush’s differential analyzer. Likewise, early communications systems, the telegraph and telephone, were analog. Some readers may recall modems (modulator-demodulators), which converted digital computer signals to analog telephone signals and back again, in the early days of networked computing.

In the 1940s, binary digital computing emerged, and by the 1960s, it had become standard — the foundation on which the “modern” notion of computing is built, effectively vanquishing analog approaches to computing. So complete was the victory that today “digital” = “computing.”

The progress made possible by “printing” binary digital switches has been astonishing. And yet, something was lost. There are alternatives. For many years, Heinz von Foerster ran (and the U.S. government funded) a Biological Computing Lab (BCL) alongside the Digital Computing Lab (DCL) at the University of Illinois, where he explored analog approaches to computing. In addition, so-called “fuzzy logic” (and other non-Boolean logics) and perhaps the limits of current “neural nets” suggest that a purely binary digital approach to computing may eventually give way to a grand, Hegelian synthesis of analog and digital. Isn’t that the prediction of “cyborgs” and “wet-ware” in science fiction? Isn’t that the potential manifest in recent advances in neuroscience?
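As a small, concrete illustration of what a non-Boolean logic looks like, the sketch below defines a fuzzy membership function in Python. The “comfortable temperature” example and its thresholds are our own, chosen only to show that a fuzzy proposition can be partly true, taking any value between 0 and 1, rather than snapping to true or false.

```python
def comfortable(temp_c: float) -> float:
    """Fuzzy membership: how 'comfortable' a temperature is, on a 0-to-1 scale.
    (Illustrative thresholds; in Boolean logic the answer could only be 0 or 1.)"""
    if temp_c <= 15 or temp_c >= 30:
        return 0.0
    if 20 <= temp_c <= 24:
        return 1.0
    if temp_c < 20:
        return (temp_c - 15) / 5   # ramp up from 15 to 20 degrees
    return (30 - temp_c) / 6       # ramp down from 24 to 30 degrees

for t in (14, 18, 22, 27, 31):
    print(t, round(comfortable(t), 2))   # e.g., 18 -> 0.6, 27 -> 0.5
```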

Centralized vs Distributed

Computing seems to swing, pendulum-like, between centralized and distributed poles.

In the early days of digital computing, mainframe computers were used for centralized processing, for example, organizing payroll, billing customers, and tracking inventory. Timesharing systems and mini-computer clusters began to distribute access — bringing computing closer to users. A decade later, personal computers democratized computing.

Originally, the Internet was designed to be completely “distributed” — with no central nodes to make the network vulnerable. In practice, centralized routing facilities, such as “Internet exchange points” (IXPs), which increase speed and reduce cost, may also make the network vulnerable. Internet pioneers touted its democratizing effects, such as enabling anyone to publish. More recently, Amazon, Facebook, Google, and others have emerged as giant monopolies, centralizing access to information and people.

Related to the centralized-vs-distributed dimension of computing is the shift from stand-alone products to connected products. We argue that stand-alone-vs-connected is not a dimension of computing. Clearly, computing devices “want” to be connected; that is, connected devices offer more value than stand-alone products. (See Metcalfe’s Law.) What’s more, so-called stand-alone products may not be as independent as the name suggests. In many cases, value is co-created through a product’s use. And in some cases, use relies on networks of service, to say nothing of networks of production and distribution. For example, even Kodak’s earliest cameras required film and processing and printing. “Smart, connected products” rely even more on networks to deliver information and services that enhance the product’s value. The question is not: Will the product be connected? But rather: Will the network be centralized or distributed?
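Metcalfe’s Law is the observation that a network’s potential value grows roughly with the square of the number of connected nodes, because n devices can form about n(n-1)/2 pairwise links. A minimal sketch of that scaling follows; the numbers are illustrative, not a valuation model.

```python
def possible_links(n: int) -> int:
    """Number of pairwise connections among n devices (Metcalfe's rough proxy for value)."""
    return n * (n - 1) // 2

for n in (2, 10, 100, 1000):
    print(f"{n:>5} devices -> {possible_links(n):>7} possible links")
# 2 -> 1, 10 -> 45, 100 -> 4950, 1000 -> 499500:
# value grows much faster than the device count itself.
```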

Fixed vs Fluid

Forty years after the advent of personal computers, much of the software we use still produces “dead” documents, meaning the information in them is “fixed” — locked inside. In the desktop computing environment, software applications read and write specific file types, and moving information from one application to another can be difficult. Take word processors as an example: if you want to add a data-driven chart, you need a spreadsheet app; if you want to add a sophisticated diagram, you need a drawing app. Updating the chart or diagram requires returning to the original application, and while Microsoft supports some file linking, most people find it easier to re-export and re-import.

Some “authoring tools” have attempted to address the issue. Apple’s HyperCard application, launched in 1987, showed that text, numbers, images, and scripting could be combined in a single, “fluid” environment.

In 1990, Microsoft offered OLE (object linking and embedding). Two years later, Apple launched OpenDoc, a software component framework aimed at supporting compound documents. In internal discussion at least, Apple considered the possibility of upending the application-first paradigm and replacing it with a document-first paradigm. Yet neither Apple nor anyone else has solved the underlying paradox. Users want to be in their data; but apps support developers; and developers create the variety that brings users to platforms.

Data-pipelining applications, such as IFTTT (If This Then That) (2011), begin to tackle the problem of connecting siloed data in networked environments.

In early 2018, Mike Bostock, one of the creators of D3.js, launched the beta version of Observable, an interactive notebook for live data analysis, visualization, and exploration. The idea of this reactive programming environment was inspired by the principles of explorable explanations, a term coined by Bret Victor.6 It also has roots in notebook interfaces such as those in Python’s Project Jupyter (2014) and Stephen Wolfram’s Mathematica (1988).
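To make the idea of a “fluid,” reactive document concrete, here is a toy sketch, not Observable’s or Jupyter’s actual machinery: cells declare which other cells they depend on, and any value is recomputed by pulling fresh results through that dependency graph, so editing one cell changes everything downstream of it.

```python
class Notebook:
    """Toy reactive notebook: each cell is a function of other cells, by name."""
    def __init__(self):
        self.cells = {}          # name -> (dependencies, function)

    def define(self, name, deps, fn):
        self.cells[name] = (deps, fn)

    def value(self, name):
        deps, fn = self.cells[name]
        return fn(*[self.value(d) for d in deps])   # recompute through the graph

nb = Notebook()
nb.define("data", [], lambda: [3, 1, 4, 1, 5])
nb.define("total", ["data"], lambda d: sum(d))
nb.define("mean", ["data", "total"], lambda d, t: t / len(d))

print(nb.value("mean"))                       # 2.8
nb.define("data", [], lambda: [10, 20, 30])   # edit one cell...
print(nb.value("mean"))                       # ...and its dependents follow: 20.0
```

Real reactive notebooks add caching and push-based updates, but the dependency graph is the essential idea: the document, not the application, holds the connections between pieces of information.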

Controlling vs Collaborating

Simple tools require our active control. In a sense, we “push around” our tools — not just a broom, but also the hammer, and even a sophisticated tool such as a violin. Yet a certain class of tool takes on a degree of independence. After we fix a setpoint, a thermostat operates without our direct involvement. A smart thermostat, such as Nest (2011), may even attempt to “learn” our behaviors and adapt to them, modifying its own setpoint as it goes.
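This shift, from a tool we control directly to one that adapts on its own, can be sketched even for a thermostat: a conventional thermostat is a feedback loop around a fixed setpoint, while a “learning” thermostat adds a second loop that adjusts the setpoint itself. The minimal sketch below is illustrative only; the adjustment rule is our own invention, not Nest’s.

```python
def thermostat_step(current_temp, setpoint, hysteresis=0.5):
    """Classic control: switch heating on or off around a fixed setpoint."""
    if current_temp < setpoint - hysteresis:
        return "heat on"
    if current_temp > setpoint + hysteresis:
        return "heat off"
    return "hold"

def learn_setpoint(setpoint, manual_overrides, rate=0.2):
    """Toy 'learning': nudge the setpoint toward the user's recent manual choices."""
    if not manual_overrides:
        return setpoint
    target = sum(manual_overrides) / len(manual_overrides)
    return setpoint + rate * (target - setpoint)

setpoint = 20.0
print(thermostat_step(18.7, setpoint))             # heat on
setpoint = learn_setpoint(setpoint, [21.5, 22.0])  # the user kept turning it up
print(round(setpoint, 2))                          # 20.35 -- the device adapts
```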

Smart thermostats aside, most of our interactions with computing systems require our active control. For example, in Autodesk’s AutoCAD application (1982), the building will not draw itself (though the software may try to anticipate what we will do next and render it in advance). And yet, we might wish for more.

Negroponte, as founder of the MIT Architecture Machine Group (predecessor to the MIT Media Lab), did. Negroponte was not interested in building a drafting tool; he wanted architects to have meaningful dialogues with the architecture machine, as if they were colleagues, albeit with different capabilities.7

Gordon Pask, a regular visitor to and consultant for the Architecture Machine Group, was interested in second-order cybernetics, conversation theory, and interactions between humans and machines. An example of Pask’s work was “Musicolour,” a device that responded with light to musicians as they played. What was interesting was that the device did not react directly to specific sounds but to the novelty of the music. If the musician continued to play repetitive rhythms, “Musicolour” would get bored and cease to produce any visual output. As a result, the musician would reflect upon and alter what she had been playing, engaging in a “dialogue” with “Musicolour.”8
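One crude way to capture the mechanism Pask describes is habituation: respond to how novel the input is, and fall silent as it becomes repetitive. The sketch below is our own toy reconstruction, not Pask’s actual (analog) circuitry.

```python
def musicolour_like(inputs, decay=0.5):
    """Emit an output intensity based on novelty; repeated input produces 'boredom'."""
    familiarity = {}      # pattern -> how often we've recently 'heard' it
    outputs = []
    for pattern in inputs:
        seen = familiarity.get(pattern, 0.0)
        novelty = 1.0 / (1.0 + seen)            # new patterns score high
        outputs.append(round(novelty, 2))
        familiarity[pattern] = seen + 1
        for p in familiarity:                   # everything else slowly becomes
            if p != pattern:                    # interesting again
                familiarity[p] = max(0.0, familiarity[p] - decay)
    return outputs

print(musicolour_like(["A", "A", "A", "A", "B"]))
# [1.0, 0.5, 0.33, 0.25, 1.0] -- output fades with repetition, revives with novelty
```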

In the era of Facebook and Alexa, it might be wise to revisit notions of collaboration and conversation proposed by Negroponte, Pask, and others.

Momentary vs Long-lasting

Most digital products have short life spans, because new models and new types of products are released all the time. That means formats change frequently, which makes it difficult to revisit past information over the long term. Try finding a DVD or CD player, or even a SyQuest drive or floppy-disk drive. It’s easier to read a book from 1500 than to access a file from 1995.

Have you ever tried revisiting a website only to realize that it has been taken down? To address this, the Internet Archive, a nonprofit digital library, enables Internet users to access archived web pages through its Wayback Machine. (On the other hand, the European Union has promulgated a “right to be forgotten.”)

This dimension parallels the “Shearing Layers” concept, coined by architect Frank Duffy and later elaborated by Stewart Brand as “Pace Layering.”9 In his book How Buildings Learn: What Happens After They’re Built, Brand describes how, within a building, several layers of change occur at different speeds.10 Brand extended pace layering to culture, describing a series of layers moving at different speeds. More generally, we might say that an aspect of a system’s resilience is its ability to adapt at different speeds. So far, our computing systems have not had that sort of resilience.

Peek-into vs Be-inside

Today, we use mobile devices as a sort of portal. We “peek into” the digital world through their rectangular screens. However, the digital world doesn’t have to be trapped inside these tiny rectangles.

The term “virtual reality” first appeared in Damien Broderick’s 1982 novel The Judas Mandala. Since then, research into virtual reality and augmented reality has grown steadily. Both directions move us one step closer to interacting directly with digital environments.

Alternatively, Dynamicland, a computer research group co-founded by Alan Kay and Bret Victor, is researching “Dynamic Reality” through the augmentation of physical reality with cameras, projection, robotics, and more. They are building a communal computing system where an entire physical space is activated, and we can “be inside” a digital environment, while remaining “present” in the physical world and interacting with physical objects. Unlike virtual reality or augmented reality, Dynamicland is not creating a simulacrum or an illusion. By shifting from peeking into a digital environment to being inside a truly ubiquitous computing environment, we can finally move beyond screens and begin to acknowledge not just our fingers and eyes but our entire bodies.

Consuming vs Authoring

Currently, only those who have a programming background or resources can create digital artifacts. This forces the rest of the world to be consumers, rather than creators or authors. When creating digital products, our conception of the target audience is “users.” The notion of “users” sometimes assumes that their main goal is consuming rather than playing, creating, or conversing. What if the digital environment were inclusive enough for anyone to create? How can we provide the tools that will enable everyone to become authors?

Today, most mobile devices are for consuming information; we swipe with our fingers to purchase items, watch videos, browse news, and so on. Mobile devices do not make it easy to create digital artifacts; mobile apps are still coded on desktop and laptop machines.

Even on the World Wide Web, which we visit daily, our principal interaction is “surfing.” Creating websites or even web pages still requires special skills.

Still, blogs, vlogs, Pinterest, and other publishing tools offer hope. And tools such as HyperCard and Minecraft (2009) show what “empowerment” might look like if we enabled not just professional programmers but anyone to create digital artifacts.

Other Possible Dimensions

What’s laid out above is not a conclusion, but perhaps a suggestion to explore and chart the dimensions that constitute the “space” of possible computing futures. By shifting our focus from a roadmap-for-product-improvements to a space-of-possibilities, we can have a different set of conversations around the future we would like to forge.

The dimensions proposed above are by no means complete or definitive. We considered others, including:

  • Serial Processing, Single-Clock vs Parallel (or Concurrent) Processing, Multi-Clock (perhaps an aspect of the centralized vs distributed dimension)

  • The Cathedral vs the Bazaar or Top-down vs Bottom-up or Proprietary vs Open-source (perhaps also aspects of centralized vs distributed)

  • Quality Engineering vs Agile Prototyping (perhaps another aspect of pace layering)

  • Virtual vs Physical (perhaps an aspect of Peek-into vs Be-inside)

  • Rich media vs Text, or GUI vs Command-line, or Mouse vs Keyboard (perhaps, loosely, aspects of consuming vs authoring)

  • Automation vs Augmentation (likewise, an aspect of Consuming vs Authoring or even a super-ordinate category)

Very likely, readers imagine still more possible dimensions. And, in a way, that’s the point — to provoke a discussion about which dimensions are important, to ask: What do we value?

Visualizing Paths in the “Space” of Computing

By examining the dimensions of the “space” of computing, we can “locate” or perhaps even “plot” the “position” of historic, incumbent, and proposed computing systems. Let’s revisit past computing paradigms and locate them in the “space” of computing.

We can take a closer look by examining a specific product. In 1964, IBM launched System/360, the first family of computers designed for a range of applications. It was revolutionary in allowing customers to purchase a low-end machine and then migrate upward over time. What would System/360’s coordinates be if it were located in the “space” of computing? As a mainframe, its computing power was centralized and packaged into its physical form. It was engineered for use without much flexibility, and it “listened” to commands only as operators literally pressed buttons or entered information. The family was delivered between 1965 and 1978, a little over a decade. What was being computed was visible only when the machine output results, and operators could specify what needed to be computed.

Below, each of the six dimensions described above is shown as a row. (Analog vs digital is not included, because most examples from the last 70 years will be digital.) The IBM System/360 is located as a point on each of the dimensions, and a path is drawn connecting each point, forming an overall pattern or configuration for the example. Admittedly, the points are gross approximations and open to debate.

Fast-forward to today: computers have not only shrunk in size but can now also take many different shapes. For example, the Amazon Echo, first introduced in 2014, can sit anywhere in your house and respond to simple questions via voice. Compared with the IBM System/360, where does the Echo sit in the “space” of computing? As a voice assistant, it connects to and communicates with distributed services on the web through APIs. While it can help you keep a shopping list (short term), the information is retained within the Amazon ecosystem (long term). How it responds to you is currently very limited; it requires and assumes that your questions or requests are straightforward and simple. While we are not sure how long the product itself will last, the Echo does (for better or for worse) retain your order history, preferences, and so on. And as a black cylinder, it can communicate to a regular user almost nothing about what it is processing, listening to, and interpreting, beyond what it reports through the Amazon Echo app. The product is very much built for people to consume within the Amazon ecosystem.

In a similar way, we plotted the Amazon Echo.
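As a rough sketch of how such a “snake” diagram might be drawn, the snippet below plots both products along the six dimensions, with 0 as the left pole and 1 as the right pole of each row. The positions are our own gross approximations, as noted above, and matplotlib is assumed.

```python
import matplotlib.pyplot as plt

# Six dimensions, left pole vs right pole (analog vs digital omitted, as above).
dimensions = [
    ("Centralized", "Distributed"),
    ("Fixed", "Fluid"),
    ("Controlling", "Collaborating"),
    ("Momentary", "Long-lasting"),
    ("Peek-into", "Be-inside"),
    ("Consuming", "Authoring"),
]

# Positions 0..1 along each dimension -- gross approximations, open to debate.
products = {
    "IBM System/360": [0.1, 0.2, 0.2, 0.7, 0.3, 0.6],
    "Amazon Echo":    [0.3, 0.5, 0.4, 0.5, 0.4, 0.2],
}

fig, ax = plt.subplots(figsize=(7, 4))
rows = range(len(dimensions))
for name, positions in products.items():
    ax.plot(positions, rows, marker="o", label=name)   # the "snake" path
ax.set_yticks(list(rows))
ax.set_yticklabels([f"{left} vs {right}" for left, right in dimensions])
ax.set_xlim(0, 1)
ax.set_xlabel("left pole (0) to right pole (1)")
ax.invert_yaxis()
ax.legend()
plt.tight_layout()
plt.show()
```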

By looking at the “snake” diagrams for IBM System/360 and Amazon Echo, we can see that the two products have traced two different paths in the “space” of computing. Comparing the two “snake” diagrams side by side, we can begin to comprehend the differences between two products not only at the hardware, operating system, and programming language level, but also in terms of how they augment and extend our human abilities.

As we can see, Amazon Echo is more flexible as a platform and a product, but has been built to be “consumed” more easily than the IBM System/360, which, at least for its audience, was in many ways more open to authoring.

As we have begun to demonstrate above, if we were to trace products in the “space” of computing, we might find that most recent and current products reside on the left-hand side of most of the dimensions. One reason for this may be the tremendous pressure on organizations and their managers to deliver growth.

In the past, when we thought of computing milestones, we naturally associated those milestones with a timeline. In this case, the Amazon Echo was introduced 50 years after IBM introduced System/360. However, by comparing the dimensions of two products from two different computing eras, we can have a conversation about them not in terms of “advances” over time, but in terms of their similarities and differences of purpose and approach.

Our mental models — the frameworks we rely on to contextualize information — support thinking about different goals. The lean startup method helps us iterate quickly and understand what customers like. The business-model canvas helps us externalize a business’s value proposition and the resources needed to operate. User-centered design, as commonly practiced, focuses on creating products for people, largely with the assumption that they will use and consume them, and perhaps little beyond that. While these frameworks have served us in a consumer-focused world, we have few frameworks for thinking about alternative worlds of computing.

Our Responsibility

The future is not predetermined. It remains to be invented.

While trending technologies dominate tech news and influence what we believe is possible and probable, we are free to choose. We don’t have to accept what monopolies offer. (We don’t have to use social media platforms that threaten democracy.) We can still inform and create the future on our own terms. We can return to the values that drove the personal computer revolution and inspired the first-generation Internet.

Glass rectangles and black cylinders are not the future. We can imagine other possible futures — paths not taken — by searching within a “space of alternative” computing systems, as Simon has suggested. In this “space,” even though some dimensions are currently less recognizable than others, by investigating and hence illuminating the less-explored dimensions together, we can co-create alternative futures.

