Philip Sugg

Thinking about Automation

In the 1980s, an aeronautical engineer named Earl Wiener came up with a set of principles about what software automation did to airplanes and pilots. Here are a few of them:

  1. Exotic devices create exotic problems.

  2. Digital devices tune out small errors while creating opportunities for large errors.

  3. Whenever you solve a problem you usually create one. You can only hope that the one you created is less critical than the one you eliminated.

These examples give you the flavor, but Wiener's larger point was that software is an abstraction over an unstable world of human and material factors. He had seen firsthand the effect that automation had on a pilot's ability to fly an airplane, but his points have an obvious relevance to other uses of automation. The same uncertainty applies to an algorithm that checks a bank transaction for money laundering, home software that regulates the lights and temperature, or a cloud-based program that does your taxes for you.

All of these systems operate on the same assumption: that the people who designed them understand real life thoroughly enough to map out its rules, trigger actions, and account for their effects with further rules. To make the contrast clear: no one has succeeded in writing software that designs a successful advertising campaign or performs a heart transplant, for an obvious reason: there isn't the same confidence that anyone understands the rules at play. What separates domains that have been automated from those that have not is someone's confidence that they understand what is possible in that slice of the world. This includes knowledge of the common states, the improbable outcomes (e.g., edge cases), and the conditions of success and failure.
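To make that assumption concrete, here is a toy sketch of a rule-based transaction check, in the spirit of the money-laundering example above. It is a minimal illustration, not anyone's real system; every name, threshold, and watchlist entry is invented for the purpose.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float      # transaction size in dollars
    country: str       # country code of the counterparty (hypothetical)
    daily_count: int   # transactions this account has made today

# The rules encode a designer's belief about which states of the world are
# normal and which are suspicious. Anything the designer did not anticipate
# falls through and is presumed fine.
def flag_for_review(tx: Transaction) -> bool:
    if tx.amount > 10_000:                  # assumed reporting threshold
        return True
    if tx.daily_count > 20 and tx.amount > 1_000:
        return True                         # assumed "structuring" pattern
    if tx.country in {"XX", "YY"}:          # placeholder watchlist
        return True
    return False

print(flag_for_review(Transaction(12_500, "US", 1)))  # True
print(flag_for_review(Transaction(9_999, "US", 1)))   # False: just under the threshold
```

The last line is the problem in miniature: a transaction engineered to sit just under the threshold passes silently, because the rules encode someone's confidence about what is possible in that slice of the world, and the world is not obliged to honor it.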

The meaning of automation for software has lately been obscured by big, intimidating trends. These include "artificial intelligence," or the idea that computers are actively reasoning about evidence they have in front of them, and the proliferation of data about everything (big data, microtargeted data, etc.). Implicit in both developments is the assumption that as data multiplies, most things that matter become data: that they can be represented in a machine-readable, or "machineable," form that computers understand.

But in both cases what we are really talking about is automation. As Wiener argued, automation grants power at the cost of creating new problems that are more complex and harder to understand than the system that was automated.

Automation is the belief that the future can be made to resemble the past, that change can be locked in, that normal is what is routine. The language of automation pervades modern software. All of the following are automation concepts: scale, retention, recurrence, monthly X, compound (exponential) growth, runway, customer acquisition, lifetime value, etc.
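Compound growth is the plainest of these to see through. Here is a minimal sketch, with every rate invented for illustration, of how the projection works and how much weight the "future resembles the past" assumption carries:

```python
def project_customers(current: int, monthly_growth: float, months: int) -> int:
    """Project a customer count forward, assuming the past growth rate holds."""
    return round(current * (1 + monthly_growth) ** months)

# 1,000 customers growing 10% per month, projected a year out:
print(project_customers(1_000, 0.10, 12))  # 3138

# The same projection with the rate assumed three points lower:
print(project_customers(1_000, 0.07, 12))  # 2252
```

A three-point error in the assumed rate moves the one-year projection by nearly a third. That gap is the cost of treating a past rate as a locked-in rule.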

The automation of everything is a uniquely interesting, and challenging, problem for a technical writer. Automation may change the world fast (even faster than anyone intended or anticipated), but the human capacity to document the change, to explain it, to understand it, moves at a slower and steadier pace. Technical writing is the attempt to bring what changes or moves at inhuman speeds down to a level that humans can understand.

I care about helping to make automation better; that is, more targeted, with fewer side effects, at a scale appropriate to the problem. More humane. I'm best positioned to help with that as a tech writer, which in the big picture is someone dedicated to sharing knowledge about invention and innovation that would otherwise go unknown.
