The Swedish control engineer Karl Åström famously called control “the hidden technology”—as he phrased it, control is “widely used, very successful, [yet] seldom talked about except when disaster strikes.” There is an underappreciated aspect of this which, I think, serves as a good explanatory filter for a lot of what’s been going on recently. Roughly, it amounts to the following: Because control acts to reduce externally perceived complexity, it may lead to a false impression that, just because things have been going reasonably well for so long, certain mechanisms or practices or policies are no longer necessary and can be done away with.
One of the signatures of uncontrolled complexity is instability. When control engineers talk about instability, they refer to systems that can exhibit violent divergence from intended behavior and, once they enter this dangerous zone, become extremely resistant to corrective control actions. Properly designed controls keep systems in their stable regions, but the sheer scale of the possible dangers calls for respect and humility on the part of the engineer. Unstable systems are everywhere—in markets, energy, public health, biotechnology, finance, transportation, political decision-making, and more. The behavior of the individuals and institutions involved in operating and using these systems is one source of instability, but it can also contribute to reliable functioning when interconnected with properly designed control policies. These include not only explicit rules and regulations, but also norms, conventions, and culture. Technology can (and should) play a role as well. However, precisely because these control mechanisms keep instabilities in check, there is a risk of complacency on the part of individual actors, which can (and, as we are witnessing in real time, does) lead to crises. Here are some salient examples.
Air traffic safety
After the horrific plane crash in DC, there was a renewed conversation about air traffic safety in general. So-called “near misses” seem to occur more often than previously thought, yet somehow, because most of these events were not catastrophic, there was no widespread realization that the various layers of control that keep air travel safe, from technological solutions to human operators, are being stretched thin. Increasing numbers of close calls should serve as error signals that alert users and operators to the potential for catastrophic failure. Instead, in practice they seem to make people complacent. They assume that, because nothing terrible has happened so far, nothing terrible can happen in the future.
Vaccination
Vaccination is one of our most successful control technologies, explicitly designed for systems with potentially highly destructive unstable modes (epidemics and pandemics). Its efficacy derives both from feedback control that keeps the populations of pathogens in check and from network effects at the level of human populations that result in herd immunity. It certainly feels like highly infectious diseases, such as measles, mumps, chicken pox, or polio, are basically a thing of the past. The resulting reduction of complexity is overlaid on a general lack of awareness of the feedback loops needed to keep these diseases in check. The interplay of these two effects leads to complacency on the one hand and to vaccine skepticism on the other. The growing numbers of people choosing not to vaccinate their children even against measles and chicken pox threaten to upend the carefully maintained stable equilibrium precisely because this equilibrium has such low externally perceived complexity.
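To put rough numbers on that feedback loop, here is a minimal back-of-the-envelope sketch, not an epidemiological model; the helper functions are hypothetical, and the R0 and vaccine-effectiveness figures are ballpark values commonly cited for measles. Vaccination coverage sets the effective reproduction number, and outbreaks die out only when that number is pushed below one. Note how little erosion in coverage it takes to cross back over the threshold.

```python
# A minimal sketch (illustrative numbers only): herd immunity is the network-level
# feedback effect where sufficient vaccination coverage pushes the effective
# reproduction number below 1, so outbreaks die out instead of growing.

def effective_R(R0, coverage, effectiveness=0.97):
    """Effective reproduction number when a fraction `coverage` is vaccinated."""
    susceptible_fraction = 1.0 - coverage * effectiveness
    return R0 * susceptible_fraction

def herd_immunity_threshold(R0, effectiveness=0.97):
    """Minimum coverage that drives the effective reproduction number below 1."""
    return (1.0 - 1.0 / R0) / effectiveness

R0_measles = 15.0   # commonly cited range is roughly 12-18
for coverage in (0.80, 0.90, 0.95):
    print(f"coverage {coverage:.0%}: R_eff = {effective_R(R0_measles, coverage):.2f}")
print(f"threshold coverage: {herd_immunity_threshold(R0_measles):.1%}")
```

With an R0 in the measles range, even 90 to 95% coverage leaves the effective reproduction number above one, which is roughly why the commonly quoted target for measles is about 95% or higher.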
Public administration and rationality
As we speak, Elon Musk’s DOGE operatives are bulldozing straight past the Chesterton’s fence that surrounds the maze of technocratic solutions and kludges responsible for the day-to-day functioning (such as it is) of the US government. When Jürgen Habermas wrote about the legitimation crisis in 1973, he was talking about the public’s diminished confidence in the ability of the modern administrative state to resolve or mitigate the conflicts of values or worldviews that inevitably arise as an emergent property of market liberalism. Habermas viewed this as a crisis of rationality, which he understood specifically in communicative terms and for which a vibrant public sphere is a prerequisite. What we are witnessing is also a crisis of rationality, albeit of a different nature. As Bent Flyvbjerg writes in “Rationality and power,”
the fact that the power of rationality emerges mostly in the absence of confrontation and naked power makes rationality appear as a relatively fragile phenomenon; the power of rationality is weak. If we want the power of reasoned argument to increase in the local, national, or international community, then rationality must be secured. Achieving this increase involves long term strategies and tactics which would constrict the space for the exercise of naked power and Realpolitik in social and political affairs. Rationality, knowledge, and truth are closely associated. “The problem of truth,” says Foucault, is “the most general of political problems.” The task of speaking the truth is “endless,” according to Foucault, who adds that “no power can avoid the obligation to respect this task in all its complexity, unless it imposes silence and servitude.” Herein lies the power of rationality.
That first sentence, “the fact that the power of rationality emerges mostly in the absence of confrontation and naked power makes rationality appear as a relatively fragile phenomenon,” is a crisp statement of the phenomenon noted by Åström in the context of the public sphere and its relation to power.1 Rationality, understood in Weberian terms as the matching of available means to desired ends, is a feedback control mechanism, but, to borrow an apt term from Jean Carlson and John Doyle, it is robust yet fragile. It is robust insofar as a bunch of programs written in different dialects of COBOL, some dating back many decades, interface with each other smoothly enough for the US Treasury payment systems to keep chugging along; yet it is fragile when it faces the maw of DOGE’s wood chipper.
The Åström critique
Actors in socioeconomic systems are capable of reflecting on being controlled and acting as a result of this reflection. This can manifest itself in different ways. For example, the Lucas critique in macroeconomics states that the decision rules used by individual economic agents can (and will) change in response to changes in macroeconomic policies, thus potentially invalidating the models that had been informing policy design. Lucas was urging macroeconomists to pay attention to microfoundations, i.e., to the behaviors and interactions of individual agents comprising the economy. Michael Arbib, Oliver Selfridge, and Edwina Rissland put it nicely in “A dialogue on ill-defined control”:
the task of the economic decision-maker is not to control a physical system that knows nothing, but to control a system that is in turn modelling the controllers! And, of course, the economists are part of the economy that is to be controlled. The world, which is the ill-defined system that we are trying to control, is a world full of people. And they are each trying to control certain aspects of a world full of people.
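To make the mechanism concrete, here is a minimal sketch: a toy scalar “economy” with made-up numbers, not any model from the macroeconomics literature. A policymaker fits a reduced-form model on data collected under the old policy, designs a new policy as if the fitted coefficients were structural, and then discovers that the realized behavior differs because the agents’ decision rule shifts along with the policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "economy" with made-up numbers: the agents' responsiveness depends on the policy
# gain k, a structural link that the policymaker's reduced-form model ignores.
a0, c, b = 0.9, 0.5, 1.0

def step(x, k):
    u = -k * x + rng.normal(scale=0.1)   # policy rule plus some independent variation
    a_eff = a0 + c * k                   # agents lean against a more aggressive policy
    return a_eff * x + b * u + rng.normal(scale=0.05), u

# 1. Collect data under the old policy and fit the reduced-form model x' = a_hat*x + b_hat*u.
k_old, x = 0.2, 1.0
X, U, Y = [], [], []
for _ in range(2000):
    x_next, u = step(x, k_old)
    X.append(x); U.append(u); Y.append(x_next)
    x = x_next
a_hat, b_hat = np.linalg.lstsq(np.column_stack([X, U]), np.array(Y), rcond=None)[0]

# 2. Design a new policy from the fitted model, treating (a_hat, b_hat) as structural.
target = 0.5                             # desired closed-loop coefficient
k_new = (a_hat - target) / b_hat

# 3. The model's prediction vs. what actually happens once the agents re-optimize.
predicted = a_hat - b_hat * k_new                  # equals the target, by construction
realized = (a0 + c * k_new) - b * k_new            # the agents' rule has shifted under k_new
print(f"predicted closed-loop coefficient: {predicted:.2f}")
print(f"realized closed-loop coefficient:  {realized:.2f}")
```

The fitted coefficients were perfectly good descriptions of the data, but only of the data generated under the old policy; the moment the policy changes, so does the system being described.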
A colorful (and currently relevant) example of the Lucas critique in action concerns whether the government should keep investing in expensive protective measures for facilities that have never been compromised in the past. The prediction “these facilities never get compromised” was made under the old policy of protecting them; if the investment is eliminated, the facts on the ground will change, a security violation may occur, and the predictive model will be invalidated. The Åström critique, if I may call it that, points to a related but different phenomenon. Applied to sociotechnical contexts, it describes the state of affairs where the agents in a system are so inured to the complexity-reducing function of control that they become completely oblivious both to the dangers of uncontrolled complexity and to the ever-present need for control mechanisms to keep this complexity at bay. Like the Lucas critique, the Åström critique is also a story about microfoundations, but here the individuals or institutions alter their behavior not because of a perceived change in policy, but because of a perceived absence of complexity. Because the control mechanisms have been working as intended, the participants in the system may begin acting as if the controls were not there at all. If this keeps going, we may one day forget what these controls were in the first place and how to implement them. This is our Weberian moment: rationality must be secured.
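To close the loop, so to speak, here is the whole story in miniature: a toy simulation with made-up numbers (a scalar linear system, a fixed feedback gain, and a crude “it has been calm, so the controller must be unnecessary” rule), not a model of any real institution. The controlled system looks placid precisely because the control is working; the control is then removed on the strength of that placidity; and the instability that was always there reasserts itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of the complacency loop (made-up numbers, not a model of any real system):
# a feedback controller keeps an open-loop-unstable process looking calm; an observer who
# sees only the calm output concludes the controller is unnecessary and switches it off.
a, k = 1.3, 0.8     # open-loop growth rate and feedback gain; |a - k| < 1, so the loop is stable

x, controlled = 0.0, True
history = []
for t in range(200):
    # The complacency heuristic: after a long stretch of small deviations, the control
    # "obviously isn't needed" and gets removed.
    if controlled and t >= 100 and max(history[-100:]) < 0.2:
        controlled = False
        print(f"t={t}: output has looked calm for 100 steps, so the controller is removed")
    u = -k * x if controlled else 0.0
    x = a * x + u + rng.normal(scale=0.02)
    history.append(abs(x))

print(f"largest |x| in the first 100 steps (controlled): {max(history[:100]):.3f}")
print(f"largest |x| in the last 100 steps:               {max(history[100:]):.3g}")
```

The point of the toy is the observer’s inference, not the dynamics: the only evidence of the controller’s value is the very calm it produces, and that calm gets read as evidence of its uselessness.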
Thanks to Adam Elkus for making me aware of Flyvbjerg’s article in a Bluesky thread.
The Raginsky Critique is an interesting twist on the Preparedness Paradox. I agree that we are seldom aware of the regulatory power of feedback systems. And on top of this, though control theory sympathizers would like to prove otherwise, control theoretic tools are imperfect at identifying potential failure modes and inappropriate levels of operator apathy. It's hard! As much as I love John and his aspirations, I have grown to think that control theoretic feedback only gives actionable insights for a small subset of feedback systems.
Related to the Preparedness Paradox, the Y2K bug was a fun example where people panicked about interconnectedness and probably overcorrected. https://www.argmin.net/p/in-the-year-2000
Can one design a control system that changes the very structures of the system it seeks to control?
I am not an engineer, though I had aspirations to become one long ago, and am fairly critical of engineers in particular and engineering generally. My interactions with engineers in professional environments have only reinforced this position. Rational to the point of stupidity, blinded by theory, oblivious to their systemic role, socially inept: what comes most to mind is an old, somewhat insensitive line from a Cory Doctorow novella that (I'm paraphrasing) "engineers are basically high-functioning autistics [I object to the ableist pejorative implication] who have no idea how normal people behave" (related: the engineer's fallacy). Nor do they ever seem to ask if the problem they are tasked with solving *should* be solved, i.e. controlled, nor if this control policy, decided by someone other than themselves whose intentions are never questioned, perpetuates harm for the sake of stability (i.e. business as usual).
My position in this was as quality assurance, a duct-taper, to use David Graeber's term, someone at the bottom of a corporate system, who corrected (or attempted to correct) the many, many mistakes made by workers, managers, administrators, customers, and yes, engineers. Our staff engineer was shocked when I first mentioned to him the various ways that workers and managers cheated in order to hit their performance goals (Goodhart's law was in full effect), as though it never occurred to him, sitting in the air-conditioned office, staring at graphs and spreadsheets or whatever he looked at, that the pressures that workers and managers were under (i.e. being threatened by their bosses, worked to exhaustion, fear of termination, etc.) might compel them to work around unachievable goals (which are never questioned). To the engineer, whose job was never in danger, the metrics were a feedback mechanism for optimizing performance; to the workers and low-level managers, they were a nightmare that kept operations in perpetual fear and stress, with predictable yet ignored consequences.
Due to many factors (scientific management, inter-department squabbling and miscommunication, employee deskilling, poor training, tyrannical and sociopathic managers, a competitive work culture, unsafe/unhealthy work conditions, massive hierarchization, etc.), it was impossible to change anything within the system itself. Managers blew smoke (i.e. lied) all the way up the chain of command, and were merciless to their subordinates. I suspect that cybernetic feedback (as in the Cybersyn model) would have terrified them, as they were always trying to maintain an illusion of control and competence, which would quickly have been dispelled were executives to see how workers at the bottom were treated. But all this assumes the execs would have cared. Why should they, when profits kept going up and there were always new workers to replace the ones that broke?
One particular event stuck with me (which I will keep vague out of a desire for brevity and anonymity). Some higher-ups decided to change some of the hardware systems that we (my department) relied upon to do our jobs, and tasked Maintenance with implementing the changes. They never informed us what the consequences of these changes would be so that we could give informed feedback that would influence their implementation. The result was that they disconnected machinery we needed to do our jobs from the network but left it plugged in so it looked like it functioned, did not tell us that this was what they were doing, and we (I) did not find out until afterward, when I discovered the changes and it was too late to undo anything. This severely compromised our operation and drastically increased the amount of time necessary to complete tasks in a very fast-paced environment. Still trying to figure out what was wrong (the first rule of QA is you never assume anyone knows what they're doing), I contacted Tech Support only to be referred to an engineer high up in the company, who proceeded to talk to me as though I were an idiot. In his world, systems apparently functioned as he thought they did, employees were rational and knew how to do their jobs, errors were rare, and misunderstandings or conflicts could be resolved simply by sitting down and talking about them. He could not wrap his rational mind around emotions, office politics, incompetence, conflicting motives, and, apparently, social skills. My working world was one governed by Murphy's Law. I saw all our systems, which people like him apparently designed, fail in ways ranging from the repetitively mundane to the spectacular, and instead of being valued, my experiences and insights were dismissed as irrelevant and foolish. Can you tell I'm still salty about it?
The many years of similar experiences ultimately led me to conclude that the structure of the company (the system) produced the behavior I observed (I hadn't yet heard of system dynamics) and that there was nothing I could do about it. The end for me came when some other higher-ups decided to deskill my department, after I had fought (yes, fought) for years to protect it from incursions by efficiency wonks and managerial incompetence. Would it have done any good to explain how deskilling workers makes them worse at their jobs, because it deprives them of a theory of operation, of a knowledge of how their labor fits into a larger system, of the consequences of their and everyone else's actions? That this lack of knowledge would cause them to make more errors, which would in turn require more effort and resources to correct? That the work itself was physically destructive to the human body, the stress psychologically destructive (I don't think I knew anyone who didn't use recreational drugs to cope with it), that the company literally turns people into profit? The term "meat grinder" comes to mind. Engineers made this.
One of the themes in this article seems to center on reconciling the macroeconomic with the microeconomic. In the Humanities spaces I now inhabit, this is well-trodden, if unresolved, ground (e.g. reconciling Marx and Freud). I've noticed how different disciplines seem to come across the same problems completely independently of one another, have completely different language for describing the same phenomena, and don't seem to realize that others are already talking about them and have been for some time. My concern is that some of the systems we see failing, however functional they are, perpetuate oppression, misery, inequality, injustice, and environmental destruction. Does the engineer designing a hydroelectric dam care about the ecological impact, the endemic species that will be extirpated by the reservoir? Do they care about the factory worker who will be physically disabled by middle age from the repetitive and strenuous labor they perform, which the engineer never shall, in order to hit some goal their masters set for them, which maximizes company profit? Should the bank computers with their COBOL programs be kept running in service of this system that works (for a given set of criteria)? Can change be built into such a system? Does the engineer understand their role in this?
Engineering can also be a bubble.