The engineer looks at the law and asks, why is it so sloppy? Take the DMCA-CDA disparity, or the fact that anonymous political robocalls are legal in many states while anonymous political commercials are not. The software engineer wonders why this all can’t be straightened out.
The lawyer looks at software and asks a different question: why is it so oppressive? Software, quite often, doesn’t do what the user wants, and the user is powerless to change it.
At a high level, lawyers and coders are after the same thing: cohesion. The engineer seeks an internal cohesion within the architecture; the lawyer seeks a cohesion of the code with society.
The foundation of most discussions about the intersection of code and law is Lawrence Lessig’s Code and Other Laws of Cyberspace (1999). Lessig, inspired by Joel Reidenberg’s observation that “code is law,” conceived of a model that encompassed both. He explained that for any human behavior, there are generally four types of constraints: economic, legal, normative, and architectural. Of these four modalities, architecture is unique because it is absolute and non-negotiable. The architecture of cyberspace is software code, and few of its denizens likely realize that point.
Lessig has not refined his model further; he has put his energies, for most of the past decade, into promoting openness as an antidote to oppressive code. In 2005, James Grimmelmann, a software engineer completing his law degree, took on Lessig in his article “Regulation by Software” in the Yale Law Journal. Grimmelmann aimed to undermine Lessig’s contention that software was a subset of architecture; software, he reasoned, was different enough to constitute a separate modality.
[Grimmelmann’s paper is not particularly newsworthy today; it is three years old and I don’t know how much he stands by it. Danielle Citron recently cited it in a paper as a key work critiquing Lessig, so I started reading it and producing this analysis. By coincidence, we’ve been invited to sit on a conference panel together.]
On its face, Grimmelmann’s dissent is daft. After all, the four modalities apply universally; architecture merely takes a vastly different form depending on its environment. It can be the civil engineering of buildings and highways; the urban layout of traffic lights and payphones; the mechanical construction of automobiles and turnstiles; the chemical makeup of paint, furnishing fabrics, cigarettes, and other substances. Software is just the architecture of cyberspace. Still, Grimmelmann argues that software is unique enough, and that argument deserves consideration.
Grimmelmann’s argument is multi-pronged, but it essentially rests on the plasticity of software. He cites Frederick Brooks, author of the seminal tract on software project management, The Mythical Man-Month (1975): “Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.” What Brooks was observing was the sheer power of the software architect, who can singlehandedly conjure up a software tool used by millions.
Here is Grimmelmann’s interpretation [emphasis mine]:
That programmers have such flexibility does not necessarily mean that users do. Our hypothetical programmer could easily choose to make her calculator program use decimal notation, scientific notation, or both. But once she has made that choice, the user cannot easily undo it. When users are powerless over software, it is often because programmers have made design decisions that leave users without power. Indeed, this imbalance is part of the effectiveness of regulation by software.
In other words, software is not universally plastic, at least not for everyone. Grimmelmann is actually making a better argument here about the oppressiveness of software than about its distinctness as a modality.
Here’s a better example, one that bridges the mechanical and software architecture worlds. Consider the device on an automobile fittingly named a governor, which limits the speed of a car. A speed governor is not required, nor is a speed set, by the U.S. government; 49 CFR 571.105 (S6.6) leaves it up to the manufacturer. Historically, governors were mechanical devices and could be modified or removed by a hot-rodder (who would presumably know to substitute in high-performance tires as well). But these settings have increasingly become electronic, and harder to modify; as one auto enthusiast laments on a bulletin board: “Manufacturers hard-wire things like that into the rest of your electrical system so your can’t mess with it.”
It would be a mistake to describe fixity as a salient feature of software. On the whole, car manufacturers have been supplying more features directly to the driver. Advanced electronics on newer cars allow the driver to shift to four-wheel drive, or change the suspension, on the fly. The revolution in microelectronics has in fact allowed many technologies to become “smart” and provide greater control to the user.
What is noteworthy about software is the prevalence of intermediaries in the deployment process. When you buy a car, you get what came out of the factory, which is what the engineers designed; the same goes for civil, urban, or chemical architectures. You can customize your car, but you are not going to reproduce it for other drivers. Nor is it cheap to have your car customized by someone else; you do not install patches or modules to it as you would software. By contrast, most software regularly passes through one or more intermediaries: a piece of software is produced by a vendor, implemented by a customer, and operated by the end user, who may well be a customer representative acting as your intermediary. Similarly, you can customize open source software and distribute your version. With no costs of production or distribution, you can have vast power to substitute the architecture of the software, if it allows you to.
Software vendors generally like to restrict usage to people who pay for the product. They also wish to be able to support the software. Thus most vendors compile the software before shipping it; it is additionally governed by restrictive licenses. Sometimes a software license is written to allow the intermediary alone, and no one else, to modify the software.
This is where the genius of Richard Stallman came in two decades ago. (Grimmelmann cites Stallman generally for the quoted passage a few paragraphs above.) Stallman had the revolutionary idea to create a software license which would guarantee rights to every user, regardless of the intermediaries. This he called “copyleft” and made it a part of the GNU General Public License in 1988. There is a central paradox to the GPL: in order to guarantee freedom for every user, it limits the freedom of intermediaries to restrict the freedom of others. An alternative approach was codified a decade later as “open source”: it grants the intermediary the freedom to restrict further usage, through traditional software licenses, if they so choose. (Lessig groups both variants under the term “open code”; both make their source code open, regardless of the license used. Others use FOSS, “Free and Open Source Software.”)
What troubles Stallman and Grimmelmann (and Lessig, and many others) is this artificial fixity of software inherent in proprietary code and restrictive licenses. This is the root of oppressiveness. Furthermore, typical computer software is often quite buggy; it is vulnerable to common failure without common accountability. Here is how Grimmelmann puts it:
In a complex software system, it may be nearly impossible to determine who made the relevant regulatory decision. In any software system that is itself composed of smaller subsystems, the actual basis for decision may well be a composite of rules specified by different programmers.
For this he cites Helen Nissenbaum, “Accountability in a Computerized Society.” Nissenbaum identifies this as the “many hands” problem, wherein many different people have contributed to a piece of software or a system with no clear responsibility established. But Nissenbaum was careful not to pin this solely on computing:
The systematic erosion of accountability is neither a necessary nor inevitable consequence of computerization; rather it is a consequence of co-existing factors discussed above: many hands, bugs, computers-as-scapegoat, and ownership without liability, which act together to obscure accountability. Barriers to accountability are not unique to computing. Many hands create barriers to responsible action in a wide range of settings, including technologies other than computing; failures can beset other technologies even if not to the degree, and in quite the same way, as bugs in computer systems.
In other words, success has many fathers, but failure is an orphan. What Grimmelmann and Nissenbaum agree on is that accountability is too time-consuming to establish because of the way that code is historically produced. When human lives are lost, a prosecutor or a government may bring charges, which sets the discovery process in motion. But the common failures of everyday software escape such common accountability.
This is ultimately not the thrust of Grimmelmann’s paper. As with Lessig, he views people as intermediaries of culture. The current intellectual property regime, using a combination of code and law, has tried to assert control over the cultural artifacts which people wish to copy, remix, and share: texts, music, and video. In some areas this has been devastating (multi-million dollar lawsuits against people accused of filesharing); in other areas it has been accommodating. Many apparent copyright violations are left alone, under a policy Tim Wu has dubbed “tolerated use.” This may be out of sheer pragmatism, or otherwise out of deference to Lessig’s advocacy. Certainly it remains an important battle to fight, but it is not the only one.
A separate path must also consider code writers as intermediaries of responsibility. Software code is not merely used in “cyberspace,” after all. It is used everywhere: in business, government, education, environments not generally confused with a cybernetic realm. In all of these capacities software acts as a regulator. But it is not generally limiting personal autonomy; it is implementing common decisions to administer policy. There is just less of an identifiable cultural-academic movement united to investigate the public policy ramifications of oppressive software. Danielle Citron is one such advocate; as she writes in her forthcoming paper Technological Due Process: “Computer programs seamlessly combine rulemaking and individual adjudications without the critical procedural protections owed either of them.”
It seems to me we ought to be designing common software with common accountability. I’ll cover that in the next essay.