In the first two posts on James Beniger’s Control Revolution, I put forward an interpretation of that book as an instance of synthetic philosophy (see Eric Schliesser’s latest formulation for the Aristotelian Society). As the title of the book indicates, control is the central concept in Beniger’s account, which equates control with programming.
Like many other writers, Beniger takes Maxwell’s demon as his main illustration of the program-control identity. In its informational instantiation, Maxwell’s demon is described in procedural terms: it measures molecular velocities and takes actions based on those measurements. This mapping between observations and actions is encoded in the demon’s program. Beniger sees programs operating on and between multiple scales, from genetic programming in living cells to cultural programming, and on to the levels of social norms, customs, laws, and policy.1 As I wrote earlier, he argued that the first programs (and thus the first instances of control) emerged in the first living systems; this was followed, successively, by the emergence of culture, bureaucracy, and modern technology, and each of these major transitions was accompanied by the emergence of new types of programs. Since I have already spent enough time critiquing this program-centric perspective, I will devote this third (and final) post on Beniger’s book to his discussion of three levels of control that are often overlooked.
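To make the program reading concrete, here is a toy sketch of the demon’s observation-to-action mapping (my own illustration, not Beniger’s; the speed threshold and the two-chamber setup are assumptions made purely for the example):

```python
import random

FAST_THRESHOLD = 1.0  # assumed cutoff between "fast" and "slow" molecules

def demon_policy(speed, approaching_from):
    """Map an observation (speed, side of approach) to an action on the gate."""
    if approaching_from == "left" and speed >= FAST_THRESHOLD:
        return "open"          # let fast molecules pass into the right chamber
    if approaching_from == "right" and speed < FAST_THRESHOLD:
        return "open"          # let slow molecules pass into the left chamber
    return "keep closed"

# A few simulated observations: the table of if-then clauses above is
# the demon's "program" in Beniger's sense.
for _ in range(3):
    speed = random.expovariate(1.0)
    side = random.choice(["left", "right"])
    print(f"speed={speed:.2f} from {side}: {demon_policy(speed, side)}")
```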
Being, behaving, becoming; or, existence, experience, evolution
In his discussion of the thermostat (another prototypical cybernetic system), Beniger quotes the evolutionary biologist Ernst Mayr:
It is not the thermostat that determines the temperature of a house, but the person who sets the thermostat . . . Negative feedbacks improve the precision of goal seeking, but they do not determine it. Feedback devices are only executive mechanisms that operate during the translation of a program. Therefore, it places the emphasis on the wrong point to define teleonomic processes in terms of the presence of feedback devices. They are mediators of the program, but as far as the basic principle of goal achievement is concerned, they are of minor consequence.
The cyberneticists’ nearly exclusive focus on feedback and behavior, Beniger argues, neglects two other aspects of control: being, i.e., the maintenance of the functional organization that allows feedback control to operate in the first place, and becoming, i.e., the process by which the structure, function, and organization of the system change and evolve over time. In the case of the thermostat, someone (say, an HVAC technician) has to install and maintain the system, and someone else (say, the occupant of the house) has to set the desired temperature: this is the dimension of being. Moreover, if we are talking about systems like the Nest thermostat, there will also be regular software updates and new designs: this is the dimension of becoming. Crucially, though, the thermostat relies on others for its being and becoming in order to keep behaving as intended.
These three levels of control are present in living systems as well; here, the three B’s (Being, Behaving, Becoming) are also conceptualized by Beniger as the three E’s: Existence, Experience, Evolution.
Existence or being, the problem of maintaining organization—even in the absence of external change—counter to entropy
Experience or behaving, the problem of adapting goal-directed processes to variation and change in external conditions
Evolution or becoming, the problem of reprogramming less successful goals and procedures while at the same time preserving more successful ones
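To see how the three levels come apart even in a toy system, here is a minimal Python sketch (my own illustration, with assumed names and parameters, not anything from Beniger): only behave() is the feedback loop the cyberneticists emphasized; the other two methods stand in for the maintenance and the reprogramming that happen outside that loop.

```python
class Thermostat:
    """Toy thermostat; only behave() is classical feedback control."""

    def __init__(self, setpoint=20.0, deadband=0.5):
        self.setpoint = setpoint     # goal, set from outside by the occupant
        self.deadband = deadband
        self.sensor_offset = 0.0     # calibration, maintained from outside
        self.heater_on = False

    # Behaving / experience: feedback from a measurement to an action.
    def behave(self, raw_reading):
        temp = raw_reading + self.sensor_offset
        if temp < self.setpoint - self.deadband:
            self.heater_on = True
        elif temp > self.setpoint + self.deadband:
            self.heater_on = False
        return self.heater_on

    # Being / existence: upkeep of the mechanism itself (recalibration,
    # wiring, power), performed by a technician rather than by the loop.
    def recalibrate(self, offset):
        self.sensor_offset = offset

    # Becoming / evolution: reprogramming goals and procedures, e.g. a
    # firmware update or a new setpoint schedule pushed to the device.
    def reprogram(self, new_setpoint):
        self.setpoint = new_setpoint


t = Thermostat()
print(t.behave(18.0))   # True: the feedback loop turns the heater on
t.reprogram(24.0)       # becoming: the goal itself is changed from outside
t.recalibrate(-0.3)     # being: maintenance of the measuring apparatus
print(t.behave(23.0))   # still True, now under the new program
```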
Beniger is completely correct in saying that the overemphasis on behavior or experience loses sight of both the key infrastructural underpinnings of control systems and their evolution over time. The neglect of the level of being is also the target of Willems’ critique—indeed, maintenance of being is often the role of what he calls passive control devices. For example, the use of rigid rods in mechanical linkages serves as a means of enforcing system-level constraints (the distances between certain pairs of points have to remain the same throughout operation). That way, the entire linkage can function as needed and, in particular, can be part of other systems (such as the steam engine governor). We can legitimately speak of control here, but it is not of a feedback or intelligent control variety. However, since Willems’ interest was mainly in engineered control systems, he did not say much about the level of evolution. For this, one has to turn to biologists and philosophers of biology.
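As a minimal illustration of what such a passive device does (my formulation, not Willems’ own notation): a rigid rod joining points $p_i$ and $p_j$ of a linkage simply imposes the constraint

\[
\lVert p_i(t) - p_j(t) \rVert = \ell_{ij} \quad \text{for all } t,
\]

and that is the whole of its control action: it shapes the set of admissible motions in advance, with no sensing, error signal, or feedback law involved.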
Control’s code script?
Per Beniger, we have to look to the emergence of the earliest living systems in order to isolate the first appearance of all three levels of control. I think he is right about this in general, if not in the specifics. His insistence on the primacy of programs is reminiscent of what the biologist Sydney Brenner wrote in 2012 for the Turing centennial:
Arguably the best examples of Turing's and von Neumann's machines are to be found in biology. Nowhere else are there such complicated systems, in which every organism contains an internal description of itself. The concept of the gene as a symbolic representation of the organism—a code script—is a fundamental feature of the living world and must form the kernel of biological theory.
Here, Brenner is referring to von Neumann’s ideas from the 1940s on universal constructors: self-reproducing automata that are capable of synthesizing other automata from their descriptions, as well as of making copies of themselves. There are inanimate physical systems that are highly ordered, for example crystals. But, unlike living cells, crystals do not contain internal descriptions of themselves. According to Beniger, this is what distinguishes living systems at all levels and scales (including systems composed of other living systems). There are good reasons to question the program interpretation of the genome (as Henri Atlan and Moshe Koppel do, for example). Still, there is clearly a great deal of robustness and flexibility to be gained by using “discrete” (symbolic or linguistic) interfaces to connect “continuous” hierarchical levels in complex systems; you can see this in the writings of Howard Pattee on the emergence of hierarchical control in biological systems or in Manuel DeLanda’s treatment of complex systems as assemblages comprising both material and expressive components. We can see this at work at multiple levels, from cells to organisms to societies. And in all of these examples we can clearly see multiple manifestations of Beniger’s three levels of control.
Von Neumann’s work predated the advances of molecular biology; but, as Brenner points out, he recognized the critical distinction between the program describing a given function and the realization of that function, a distinction that Erwin Schrödinger failed to see in his 1944 book What Is Life? The Physical Aspect of the Living Cell. As documented by Horace Freeland Judson in The Eighth Day of Creation, an extremely fascinating (and also extremely lengthy!) historical account of the molecular revolution in biology,
Today, one sees—as Brenner was first to point out to me—that the absolute distinction between protein and DNA, between the substance that makes up the machinery and the substance that carries the instructions to make the duplicate machine, exactly and fully embodies what von Neumann concluded. But this interesting correspondence could not be recognized until much too late to matter. … His theory of self-reproducing automata was completed from his notes and manuscripts only after his death, and published in 1966.
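Von Neumann’s separation of a passive description from the machinery that reads it, which Judson maps onto the DNA/protein split, can be made vivid with a toy example of my own (not something from Brenner, Judson, or von Neumann): a self-reproducing program has to carry a representation of itself as data, together with a mechanism that both interprets and copies that representation.

```python
# The two lines below print an exact copy of themselves: s is the passive
# description (the "code script"), while the final line is the machinery
# that both interprets the description and copies it back out.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The point of the exercise is only that the description and the constructing machinery are distinct roles, even when both are realized in the same medium.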
As Brenner himself told Judson,
"There is something to be thought about here that has not yet been formulated successfully. John von Neumann wrote a very interesting essay many years ago, in which he asked, How does one state a theory of pattern vision? And he said, maybe the thing is that you can't give a theory of pattern vision—but all you can do is to give a prescription for making a device that will see patterns! In other words, where a science like physics works in terms of laws, or a science like molecular biology, to now, is stated in terms of mechanisms, maybe now what one has to begin to think of is algorithms. Recipes. Procedures.”
However, neither Beniger nor Brenner pays close attention to the important distinction between systems that are rule-following vs. rule-driven vs. rule-governed.2 Rule-following systems are aware of the rules because they have a metalanguage for formulating, analyzing, and synthesizing these rules. Human organizations and individuals are rule-following. Rule-driven systems contain explicit descriptions and physical instantiations of these rules, but lack a metalinguistic capacity for self-reference. Embedded systems and software are examples of rule-driven systems. Finally, rule-governed systems are systems whose behavior is lawful and can be described and modeled using formal systems of rules, but they do not contain encodings of these rules: quantum-mechanical particles, for example, do not have Schrödinger’s equation inscribed in them. In my view, the main weakness of Beniger’s account is his conflation of these three distinct modes in which rules and systems interact.
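To make the middle and last categories concrete, here is a small sketch of my own (the names and numbers are assumptions for illustration only; rule-following would additionally require a metalanguage in which the system can inspect and revise its own rules):

```python
# Rule-driven: the rules exist as explicit, physically instantiated data that
# the system consults at run time (like firmware in an embedded controller).
RULES = {("hot", "occupied"): "cool", ("cold", "occupied"): "heat",
         ("hot", "empty"): "idle", ("cold", "empty"): "idle"}

def rule_driven_controller(temp_state, occupancy):
    return RULES[(temp_state, occupancy)]

# Rule-governed: the behavior is lawful and can be *modeled* by a rule
# (here, Newton's law of cooling), but the system itself contains no
# encoding of that rule; the "law" lives in our description of the object.
def rule_governed_cooling(temp, ambient=20.0, k=0.1, dt=1.0):
    # one Euler step of dT/dt = -k (T - ambient)
    return temp + dt * (-k * (temp - ambient))

print(rule_driven_controller("hot", "occupied"))  # 'cool'
print(round(rule_governed_cooling(90.0), 2))      # 83.0
```

In the first case the rules are physically present inside the system as data it consults; in the second, the rule exists only in the model we use to describe the system.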
Control: closure, constraints, coherence
The notion of multiple interlocking processes at play in complex systems is rather natural, and has been a consistent theme in many writings on the subject. Many key insights can be found in a 1994 technical report by Bernard Gaveau, Charles Rockland, and Sanjoy Mitter. Among other things, they single out the key role of control in the context of complex autonomous systems: achieving and maintaining coherence. This is how they frame this notion:
In some sense, “coherence" is the counterpart for embedded or embodied systems of the notion of “consistency" for a formal system. But we feel that the notion of “coherence" is much richer and more subtle. For example, there is in essence only one way to be inconsistent, namely to give rise to a contradiction, whereas there are many ways for a system to be (potentially) incoherent. This is perhaps the basis for the success of formal logic. Similarly, this is perhaps at the root of the formal mathematical comparability of distinct formal systems, including model-theoretic methods. It is unlikely that correspondingly sharp modes of comparison will be available for autonomous systems. The relevance of “universality" is also in question. Thus, while there is a (unique up to equivalence) universal Turing machine which can simulate any Turing machine, it is unlikely in the extreme that there is anything like a “universal" coherent (autonomous) system. Similarly, we would be much surprised if there were to be anything in this setting corresponding to the notion of a “categorical theory"(such as the axioms for the real number system), i.e., one for which any two realizations are "equivalent" (in any of a variety of senses).
It is important to recognize that “coherence” can accommodate “inconsistency”. This is a key difference between autonomous systems, which are embedded in the world, and formal systems. For a formal system an inconsistency is generally taken as catastrophic. For an autonomous system, on the other hand, inconsistency need not cause a problem until it is necessary for actions to be taken; even then, it is a matter of conflict resolution, possibly on a case by case basis, rather than of rooting out contradictions in the “foundations” of the system. Some examples are: inconsistent beliefs held simultaneously, inconsistent goals, conflicting values, etc. “Resolving” these conflicts may involve the making of “choices”. This “decision-making” process, in order to terminate in timely fashion, may require more than “disinterested” (or “universal”, or “objective”) principles of “rationality”; it may require the involvement of “internalist”, possibly body-based mechanisms.
In another paper, Mitter says that this should be interpreted in terms of Whitehead’s process philosophy:3
The word coherence is being used here in the sense of Whitehead (1978) and is a concept which is broader than logical consistency. It requires viewing the system as a “whole” which always has an environment and a value system (internal). Besides, the system residing in its environment is capable of observation by a multitude of external observers, each observer possessing its own value system.
One can find these ideas in the writings of Robert Rosen on metabolism-repair systems, in the work of Francisco Varela and Humberto Maturana on autopoiesis and organizational closure, and (in an elaborated and extended form) in the work of Alvaro Moreno and Matteo Mossio on biological autonomy.4 The connection to Beniger’s levels of control is quite clear: Unlike engineered systems, such as thermostats, power plants, or AI systems, living systems have internal mechanisms for maintaining coherence via networks of interdependent constraints (existence), while remaining open to their world (experience) and interacting with it both as subject and object (evolution). Similar mechanisms are at work at larger scales (individuals, families, organizations, societies, etc.), although, as Beniger and DeLanda hasten to emphasize contra thinkers like Hegel, Spencer, or Spengler, societies are not living organisms. But, like living organisms, they have to expend effort to stave off the forces of entropy and anomie and to preserve their code script, so that future generations may decide to keep it or to revise it.
1. It is a curious coincidence that “policy” is also control theorists’ term for the observations-to-action mapping of the kind instantiated in Maxwell’s demon.
2. Sunny Y. Auyang, Mind in Everyday Life and Cognitive Science, MIT Press, 2000.
3. See also Steven Shaviro, Without Criteria: Kant, Whitehead, Deleuze, and Aesthetics (the numbers refer to pages in the 1978 edition of Whitehead’s Process and Reality):
The principle of coherence stipulates that “no entity can be conceived in complete abstraction from the system of the universe” (3). In order to exist, a given entity presupposes, and requires, the existence of certain other entities, even though (or rather, precisely because) it cannot be logically derived from those other entities, or otherwise explained in their terms. Coherence means, finally, that “all actual entities are in the solidarity of one world” (67).
In other words, coherence is not logical, but ecological. It is exemplified by the way that a living organism requires an environment or milieu—which is itself composed, in large part, of other living organisms similarly requiring their own environments or milieus. In this way, and despite the difference in vocabulary, Whitehead’s coherence is close to what Deleuze and Guattari call consistency.
4. See the discussion of Moreno and Mossio in my previous post on Beniger’s book.