The Swedish control engineer Karl Åström famously called control “the hidden technology”—as he phrased it, control is “widely used, very successful, [yet] seldom talked about except when disaster strikes.” There is an underappreciated aspect of this which, I think, serves as a good explanatory filter for a lot of what’s been going on recently. Roughly, it amounts to the following: Because control acts to reduce externally perceived complexity, it may lead to a false impression that, just because things have been going reasonably well for so long, certain mechanisms or practices or policies are no longer necessary and can be done away with.
One of the signatures of uncontrolled complexity is instability. When control engineers talk about instability, they refer to systems that can exhibit violent divergence from intended behavior and that, once they enter this dangerous zone, are incredibly resistant to corrective control actions. Properly designed controls keep systems in their stable regions, but the sheer scale of possible dangers calls for respect and humility on the part of the engineer. Unstable systems are everywhere: in markets, energy, public health, biotechnology, the financial industry, transportation, political decision-making, and more. The behavior of individuals and institutions involved in the operation and use of these systems is one of the sources of instability, but it can also contribute to reliable functioning when interconnected with properly designed control policies. These include not only explicit rules and regulations, but also norms, conventions, and culture. Technology can (and should) play a role as well. However, it is precisely because these control mechanisms keep instabilities in check that there is a risk of complacency on the part of individual actors, which can (and, as we are witnessing in real time, does) lead to crises. Below is a toy sketch of what this kind of instability looks like, followed by some salient examples.
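As a minimal illustration of what “resistant to corrective control actions” can mean, consider a deliberately cartoonish scalar system whose deviations grow on their own and whose corrective action is bounded in magnitude (every number here is invented purely for illustration): the same control law that effortlessly holds a small deviation near zero is powerless once the deviation drifts beyond the reach of its limited authority.

```python
# Toy sketch of a "dangerous zone": a scalar process that grows on its own,
#   x[t+1] = a * x[t] + u[t],  with a > 1,
# steered by a correction u that is limited in magnitude (|u| <= u_max).
# While |x| is small, the correction easily holds it near zero; once |x|
# drifts past u_max / (a - 1), even the maximum correction cannot pull it
# back and the state runs away. All numbers are made up for illustration.

a, u_max = 1.2, 1.0

def step(x: float) -> float:
    u = max(-u_max, min(u_max, -a * x))  # try to cancel the growth, but saturate
    return a * x + u

for x0 in (2.0, 8.0):  # the recoverable region here ends at u_max / (a - 1) = 5
    x = x0
    for _ in range(20):
        x = step(x)
    print(f"start at {x0:4.1f} -> after 20 steps |x| = {abs(x):8.2f}")
```

Real systems are vastly more complicated, of course, but the asymmetry is the same: inside the controlled regime the danger is invisible, while outside it the available corrections no longer suffice.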
Air traffic safety
After the horrific plane crash in DC, there was a renewed conversation about air traffic safety in general. So-called “near misses” seem to occur more often than previously thought, yet somehow, because most of these events were not catastrophic, there was no widespread realization that the various layers of control that keep air travel safe, from technological solutions to human operators, are being stretched thin. Increasing numbers of close calls should serve as error signals that alert users and operators to the potential for catastrophic failure. Instead, in practice they seem to make people complacent. They assume that, because nothing terrible has happened so far, nothing terrible can happen in the future.
Vaccination
Vaccination is one of our most successful control technologies, explicitly designed for systems with potentially highly destructive unstable modes (epidemics and pandemics). Its efficacy derives both from feedback control that keeps the populations of pathogens in check and from network effects at the level of human populations that result in herd immunity. It certainly feels like highly infectious diseases, such as measles, mumps, chicken pox, or polio, are basically a thing of the past. The resulting reduction of complexity is overlaid on a general lack of awareness of the feedback loops needed to keep these diseases in check. The interplay of these two effects leads to complacency on the one hand and to vaccine skepticism on the other. The growing numbers of people choosing not to vaccinate their children even against measles and chicken pox threaten to upend the carefully maintained stable equilibrium precisely because that equilibrium has such low externally perceived complexity.
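To see why even a modest erosion of vaccination coverage matters, it helps to recall the standard herd immunity threshold from the simplest homogeneous-mixing picture: an outbreak fizzles out only when the immune fraction of the population exceeds 1 - 1/R0, where R0 is the basic reproduction number. A minimal sketch (the R0 values below are rough, commonly cited ballparks used purely for illustration, not authoritative epidemiological estimates):

```python
# Back-of-the-envelope herd immunity thresholds (illustrative numbers only).
# In the simplest homogeneous-mixing picture, an outbreak dies out when the
# effective reproduction number R0 * (1 - v) drops below 1, i.e., when the
# immune fraction v exceeds 1 - 1/R0.

def herd_immunity_threshold(r0: float) -> float:
    """Minimum immune fraction so that each case infects, on average, fewer than one other."""
    return 1.0 - 1.0 / r0

# Rough, commonly cited ballpark values of R0, used purely for illustration.
for disease, r0 in [("measles", 15.0), ("chicken pox", 10.0), ("polio", 6.0)]:
    print(f"{disease:12s} R0 ~ {r0:4.1f} -> immune fraction needed > {herd_immunity_threshold(r0):.0%}")
```

For a pathogen as contagious as measles, roughly speaking, the margin between a safely controlled equilibrium and a sustained outbreak is only a few percentage points of coverage.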
Public administration and rationality
As we speak, Elon Musk’s DOGE operatives are bulldozing straight past the Chesterton’s fence that surrounds the maze of technocratic solutions and kludges responsible for the day-to-day functioning (such as it is) of the US government. When Jürgen Habermas wrote about the legitimation crisis in 1973, he was talking about the public’s diminished confidence in the ability of the modern administrative state to resolve or mitigate conflicts of values or worldviews that inevitably arise as an emergent property of market liberalism. Habermas viewed this as a crisis of rationality, which he understood specifically in communicative terms and for which a vibrant public sphere is a prerequisite. What we are witnessing is also a crisis of rationality, albeit of a different nature. As Bent Flyvbjerg writes in “Rationality and power,”
the fact that the power of rationality emerges mostly in the absence of confrontation and naked power makes rationality appear as a relatively fragile phenomenon; the power of rationality is weak. If we want the power of reasoned argument to increase in the local, national, or international community, then rationality must be secured. Achieving this increase involves long term strategies and tactics which would constrict the space for the exercise of naked power and Realpolitik in social and political affairs. Rationality, knowledge, and truth are closely associated. “The problem of truth,” says Foucault, is “the most general of political problems.” The task of speaking the truth is “endless,” according to Foucault, who adds that “no power can avoid the obligation to respect this task in all its complexity, unless it imposes silence and servitude.” Herein lies the power of rationality.
That first sentence, “the fact that the power of rationality emerges mostly in the absence of confrontation and naked power makes rationality appear as a relatively fragile phenomenon,” is a crisp statement of the phenomenon noted by Åström, here in the context of the public sphere and its relation to power.1 Rationality, understood in Weberian terms as the matching of available means to desired ends, is a feedback control mechanism, but, to borrow an apt term from Jean Carlson and John Doyle, it is robust yet fragile. It is robust insofar as a bunch of programs written in different dialects of COBOL, some from many decades ago, interface with each other smoothly enough for the US Treasury payment systems to keep chugging along; yet it is fragile when it faces the maw of DOGE’s wood chipper.
The Åström critique
Actors in socioeconomic systems are capable of reflecting on being controlled and acting as a result of this reflection. This can manifest itself in different ways. For example, the Lucas critique in macroeconomics states that the decision rules used by individual economic agents can (and will) change in response to changes in macroeconomic policies, thus potentially invalidating the models that had been informing policy design. Lucas was urging macroeconomists to pay attention to microfoundations, i.e., to the behaviors and interactions of individual agents comprising the economy. Michael Arbib, Oliver Selfridge, and Edwina Rissland put it nicely in “A dialogue on ill-defined control”:
the task of the economic decision-maker is not to control a physical system that knows nothing, but to control a system that is in turn modelling the controllers! And, of course, the economists are part of the economy that is to be controlled. The world, which is the ill-defined system that we are trying to control, is a world full of people. And they are each trying to control certain aspects of a world full of people.
A colorful (and currently relevant) example of the Lucas critique in action concerns whether the government should keep investing in expensive protective measures for facilities that have never been compromised in the past. If this investment is eliminated, however, the facts on the ground will change, and a security violation may well occur, invalidating the very model whose predictions made the investment look unnecessary. The Åström critique, if I may call it that, points to a related but different phenomenon. Applied to sociotechnical contexts, it describes the state of affairs where the agents in a system are so inured to the complexity-reducing function of control that they become completely oblivious both to the dangers of uncontrolled complexity and to the ever-present need for control mechanisms to keep this complexity at bay. Like the Lucas critique, the Åström critique is also a story about microfoundations, but here the individuals or institutions alter their behavior not because of a perceived change in policy, but because of a perceived absence of complexity. Because the control mechanisms have been working as intended, the participants in the system may begin acting as if the controls were not there at all. If this keeps going, we may one day forget what these controls were in the first place and how to implement them. This is our Weberian moment: rationality must be secured.
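To make this dynamic concrete, here is a toy simulation: a deliberately cartoonish scalar system, not a model of any real institution, with every number invented for illustration. A mildly unstable process is held near zero by simple proportional feedback; after a long stretch of uneventful operation the controller is switched off, on the reasoning that nothing bad has been happening.

```python
import random

# Toy illustration of the "Åström critique": a mildly unstable scalar process
#   x[t+1] = a * x[t] + u[t] + noise,  with a > 1  (deviations grow if left alone)
# is kept near zero by proportional feedback u = -k * x. Because the controlled
# output looks boringly flat, an observer might conclude the controller is
# unnecessary and remove it. All numbers are made up for illustration.

random.seed(0)
a, k = 1.05, 0.5          # open-loop growth rate and feedback gain
x = 0.0
controller_on = True

for t in range(200):
    if t == 100:
        controller_on = False   # "nothing bad has happened, so why pay for control?"
    u = -k * x if controller_on else 0.0
    x = a * x + u + random.gauss(0.0, 0.1)
    if t % 25 == 24:
        status = "on " if controller_on else "off"
        print(f"t={t+1:3d}  controller {status}  |x| = {abs(x):10.3f}")
```

The point is not the particular numbers but the shape of the failure: the controller’s quiet success is exactly what makes its removal look safe.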
1. Thanks to Adam Elkus for making me aware of Flyvbjerg’s article in a Bluesky thread.
The Raginsky Critique is an interesting twist on the Preparedness Paradox. I agree that we are seldom aware of the regulatory power of feedback systems. And on top of this, though control theory sympathizers would like to prove otherwise, control theoretic tools are imperfect at identifying potential failure modes and inappropriate levels of operator apathy. It's hard! As much as I love John and his aspirations, I have grown to think that control theoretic feedback only gives actionable insights for a small subset of feedback systems.
Related to the Preparedness Paradox, the Y2K bug was a fun example where people panicked about interconnectedness and probably overcorrected. https://www.argmin.net/p/in-the-year-2000