I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
Richard Brautigan, “All Watched Over by Machines of Loving Grace”
In the first post on James Beniger’s Control Revolution, I advanced the claim that the book should be read as a work of synthetic philosophy, which Eric Schliesser defines as
a style of philosophy that brings together insights, knowledge, and arguments from the special sciences with the aim to offer a coherent account of complex systems and connect these to a wider culture or other philosophical projects (or both).
Beniger closes the last section of Chapter 4 (titled, aptly enough, “Reductionism and synthesis”) with the following words:
Today, continuing technological development—more than any charismatic thinker—suggests the integrative machinery we might build from the spare parts amassed by our various disciplines. The rise of the Information Society, more than the corresponding development of information theory, has exposed the centrality of information processing, communication, and control to human society. It is to these fundamental processes and not to any particular level in the hierarchy of living systems that we might hope to reduce our accumulating knowledge of human organization and society.
For Beniger, control theory plays the role of “the special science” as a source of insight that could “offer a coherent account of complex systems” that underlie human organization and society. The central concept underlying Beniger’s view of control is that of programming. In order to understand what he means by programming and why it is so prominent in his account, we will start with Maxwell’s demon.
Of demons and programs
First described in 1867, James Clerk Maxwell’s thought experiment involving a “being who can play a game of skill with the molecules” was intended as a challenge to the second law of thermodynamics (which says that it is impossible to decrease the entropy, or to increase the order, of an isolated system without doing work). This being (which, thanks to Lord Kelvin, is now universally referred to as a “demon”) is stationed next to a door installed in a partition separating a container of gas into two chambers, designated A and B, and can classify the molecules approaching the partition from either chamber into three classes: slow molecules moving from A towards B; fast molecules moving from B towards A; and all others. The demon opens the door only for the molecules in the first two classes and keeps it closed otherwise. Eventually, all the fast molecules end up in A and all the slow molecules end up in B, so that chamber A has a higher temperature than chamber B. This increases the order in the system (makes it “more organized”) compared to the initial equilibrium configuration, seemingly without doing any work (the operation of opening and closing the door is assumed to be perfectly lossless).
From the perspective of control theory, Maxwell’s demon is a feedback controller: it makes a measurement (classifying the molecules into three classes) and takes an action based on that measurement (opening the door for molecules of the first two classes, keeping it closed for molecules of the third). It is the act of measurement that incurs an expenditure of energy, with the net result that the total entropy of the overall system (the feedback interconnection of the demon and the gas) increases, in perfect accord with the second law. Perhaps this should have been obvious to Maxwell, whose 1868 paper “On governors,” on the stability of steam engines controlled by Watt’s centrifugal governor, can rightly be regarded as the first mathematical analysis of a feedback control system. However, the realization had to wait until Leo Szilárd’s groundbreaking analysis in 1929.1 Szilárd’s insight was that, in order to decide whether to open the door or to keep it closed, the demon has to acquire information, and information acquisition carries an energetic cost. It is exactly as Roger Brockett pointed out: feedback, signal processing, and Boolean logic can explain whatever physics cannot.
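To put rough numbers on Szilárd’s argument (a back-of-the-envelope sketch in modern notation, not Szilárd’s own; here $k_B$ is Boltzmann’s constant): each binary sorting decision can lower the entropy of the gas by at most $k_B \ln 2$, while acquiring (and, in the later Landauer-Bennett accounting, eventually erasing) the corresponding bit generates at least $k_B \ln 2$ of entropy on the demon’s side, so the total never decreases:

```latex
\Delta S_{\text{gas}} \;\ge\; -\,k_B \ln 2
  \qquad \text{(per binary sorting decision)}
\\[4pt]
\Delta S_{\text{demon}} \;\ge\; +\,k_B \ln 2
  \qquad \text{(per bit acquired and eventually erased)}
\\[4pt]
\Delta S_{\text{total}} \;=\; \Delta S_{\text{gas}} + \Delta S_{\text{demon}} \;\ge\; 0
```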
This is where Beniger brings in the idea of a program: the operation of the demon can be represented as a flowchart, a graphical depiction of a finite procedure that can be viewed in algorithmic terms. Here, the “basic lessons of Maxwell’s demon—that control involves programming, that programs require inputs of information, that information does not exist independent of matter and energy and therefore must incur costs in terms of increased entropy” are taken as self-evident, and in particular as establishing beyond any doubt a clear link between control and computation. This helps explain why Beniger casts such a wide net. To him it is programs all the way down, and he locates the primordial source of this in the first living systems. In this, he is certainly not alone: the biophysicist Werner Loewenstein, in his book The Touchstone of Life, speaks of proteins as “molecular demons” that extract information from their environment by making measurements and use it to control various processes in the living cell, and the molecular biologist Dennis Bray does something similar in his book Wetware: A Computer in Every Living Cell. If even cellular processes are under the control of programs, it is certainly tempting to see programs operating on multiple levels of analysis.
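To make the flowchart reading concrete, here is a minimal sketch in Python (entirely my own construction, not Beniger’s; all names, thresholds, and distributions are illustrative assumptions) of the demon’s decision procedure written as a program, wrapped in a toy simulation:

```python
import random

THRESHOLD = 1.0  # unitless speed separating "slow" from "fast" (toy value)

def demon_opens_door(speed: float, direction: str) -> bool:
    """The demon's flowchart as a program: admit slow molecules
    moving from A to B and fast molecules moving from B to A."""
    if direction == "A->B" and speed < THRESHOLD:
        return True
    if direction == "B->A" and speed >= THRESHOLD:
        return True
    return False  # everything else: keep the door closed

def simulate(n_steps: int = 100_000, n_molecules: int = 1_000):
    # Start at equilibrium: random speeds, molecules spread over both chambers.
    molecules = [[random.expovariate(1.0), random.choice("AB")]
                 for _ in range(n_molecules)]
    for _ in range(n_steps):
        mol = random.choice(molecules)  # a molecule approaches the partition
        direction = "A->B" if mol[1] == "A" else "B->A"
        if demon_opens_door(mol[0], direction):
            mol[1] = "B" if mol[1] == "A" else "A"  # it passes through

    def mean_speed(chamber: str) -> float:
        speeds = [s for s, c in molecules if c == chamber]
        return sum(speeds) / len(speeds) if speeds else 0.0

    return mean_speed("A"), mean_speed("B")

if __name__ == "__main__":
    t_a, t_b = simulate()
    # Mean speed stands in for temperature; A should come out higher.
    print(f"mean speed in A: {t_a:.2f}, in B: {t_b:.2f}")
```

The point of the exercise is only that the demon’s entire “intelligence” fits in the few lines of demon_opens_door, which is precisely the sort of finite procedure Beniger has in mind.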
Genes, brains, rules, algorithms
Beniger paints a compelling picture of this using the example of rush-hour traffic control. There are, according to him, two ways to account for the latent factors underlying the manifest social behavior of commuters. The process explanation derives the emergent collective behavior from constrained interactions between goal-seeking individuals, the “invisible hand” of rush-hour traffic. The program explanation, on the other hand, invokes various levels of programming:
genetic programming, encoded in each cell of each commuter and determining the distributions of reaction times and stress levels; cultural programming, encoded in neural structures of the brain and defining certain norms and etiquette of the commute; organizational programming, encoded in traffic law and employer regulations and determining patterns of carpooling and parking; and mechanical programming, encoded in the timing devices of traffic lights and helping to maintain the larger patterns planned by traffic control engineers.
In other words, it is the interaction of multiple programs that purportedly explains the emergent complexity of rush-hour traffic. It’s programs all the way down, from the algorithms operating the traffic control network down to the biological programs ticking in the cells of the planners and of the commuters. Beniger opts for the program description because it allows us, as he says, to address the complexity of socioeconomic systems directly “rather than through reification of planning and purpose to the aggregate level.”
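As a toy illustration of what a program explanation might look like if we took it quite literally (the sketch below is mine, not Beniger’s, and every function name, parameter, and threshold is a made-up stand-in for his four levels), consider a single commuter’s behavior at a traffic light as the joint output of four “programs”:

```python
import random

def genetic_program() -> float:
    # Level 1: biology fixes the distribution of reaction times (toy seconds).
    return random.gauss(1.0, 0.2)

def cultural_program(defers_to_others: bool) -> float:
    # Level 2: norms and etiquette of the commute add or remove hesitation.
    return 0.3 if defers_to_others else -0.2

def organizational_program(carpool_lane: bool) -> float:
    # Level 3: traffic law and employer rules shape carpooling and lane choice.
    return -0.5 if carpool_lane else 0.0

def mechanical_program(t: float, cycle: float = 60.0) -> bool:
    # Level 4: the timing devices of the traffic lights (green half the cycle).
    return (t % cycle) < cycle / 2

def commuter_moves(t: float, defers_to_others: bool, carpool_lane: bool) -> bool:
    """Manifest behavior as the joint output of all four levels of
    programming; no single level plans the rush-hour pattern that results."""
    if not mechanical_program(t):
        return False  # red light
    delay = (genetic_program()
             + cultural_program(defers_to_others)
             + organizational_program(carpool_lane))
    return delay < 1.2  # moves before the gap closes (arbitrary threshold)
```

The point is not the (entirely made-up) numbers, but the shape of the explanation: the observed pattern is referred to stored programs at four levels rather than to any plan at the aggregate level.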
Is it really programs all the way down?
Now, as I wrote in the first post, this is one weakness of Beniger’s account. He equates all control with what Jan Willems called “intelligent control,” the usual set-up familiar from cybernetics, involving signal processing and feedback from measurements to actions. This rules out many other control mechanisms at the outset, in natural and artificial systems alike. Intelligent controllers are usually analyzed in terms of signal flow diagrams, which often lend themselves to clean symbolic representations as programs in a formal language, and this viewpoint is indispensable when we implement controllers as embedded systems using digital computers and logic. However, as Willems also pointed out, there are numerous examples of “passive” control devices that do not rely on signal processing at all; instead, by virtue of being physically interconnected with some other system, they act to constrain the variety of behaviors that this other system could generate in the absence of the interconnection. It is something like the “invisible hand,” but manifesting itself through a network of constraints imposed by the interconnection. For example, if I connect two pipes in a hydraulic system, then two constraints operate at the site of the interconnection: equality of pressure (the pressure in pipe 1 equalizes with the pressure in pipe 2) and conservation of flow (the flow out of pipe 1 equals the flow into pipe 2). When modeling such systems computationally, we would impose these constraints by means of appropriate variable assignments in a program, but it would be silly to argue that the actual, physical behavior of fluids in these systems is governed by programs. It’s the fallacy of misplaced concreteness all over again.
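In Willems’ behavioral language (the notation here is mine, but the idea is his: a system is a set of trajectories, and interconnection is variable sharing), the two-pipe example looks like this. If $\mathcal{B}_1$ and $\mathcal{B}_2$ are the sets of pressure-flow trajectories $(p_i, f_i)$ that pipes 1 and 2 can exhibit on their own, with flows oriented out of each pipe, then the interconnected system is just the set of joint trajectories compatible with both pipes and with the interconnection constraints:

```latex
\mathcal{B}_{1 \wedge 2}
  \;=\;
  \bigl\{\, (p_1, f_1, p_2, f_2) \;:\;
      (p_1, f_1) \in \mathcal{B}_1,\;
      (p_2, f_2) \in \mathcal{B}_2,\;
      p_1 = p_2,\;
      f_1 + f_2 = 0
  \,\bigr\}
```

No signal is measured and no action is computed; the “control” consists entirely in shrinking the set of joint behaviors that remain possible.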
To illustrate this point further, Willems uses technological examples like pressure valves, heat sinks, shock absorbers, and the like; for a biological example, we could take the discussion of the vascular system by Alvaro Moreno and Matteo Mossio (which they connect to the ideas of Humberto Maturana and Francisco Varela on organizational closure in biological systems). The role of the vascular system is to deliver oxygen to cells on much shorter timescales than would be possible via uncontrolled diffusive transport. There is no signal processing going on here, just basic fluid dynamics, yet a moment of reflection shows that the vascular system is a controller in the sense of Willems. We could imagine an animal without a vascular system, with blood just sloshing around in an uncontrolled manner, hemmed in only by the physical boundaries of the body; by contrast, the network of blood vessels redirects and channels the flow of blood with much greater efficiency. Of course, Beniger could counter this argument by pointing to the genetic program in the animal’s genome, which is partly responsible for instantiating and maintaining the vascular system during the lifetime of the organism. This, however, is not relevant to the control function of the vascular system itself: there are many instances in which something like a program was involved in the manufacture of a particular control system, yet the system itself, once instantiated, operates as a passive, nonalgorithmic control device. There is no need to think of the operation of the vascular system as algorithmic, even if algorithmic descriptions of it can be profitably used by bioengineers in their computational simulations (for example, for testing and prototyping new medical devices).
The bottom line is that Beniger’s idea of using control theory as a synthetic framework for the analysis of socioeconomic systems is both powerful and compelling, but the emphasis he puts on programming is not warranted. One highly nontrivial idea in the book is the identification of three dimensions of control: existence (or being), experience (or behaving), and evolution (or becoming). These three aspects come together in a big way in the context of autonomy and adaptiveness, and we will take them up next.
(to be continued)
It is now recognized that Szilárd’s work may have been the first analysis of a control system using the notion of information, nineteen years before the publication of A Mathematical Theory of Communication by Claude Shannon. In addition to information theory, it presaged developments in cybernetics and in stochastic thermodynamics.