The Raginsky Critique is an interesting twist on the Preparedness Paradox. I agree that we are seldom aware of the regulatory power of feedback systems. And on top of this, though control theory sympathizers would like to prove otherwise, control theoretic tools are imperfect at identifying potential failure modes and inappropriate levels of operator apathy. It's hard! As much as I love John and his aspirations, I have grown to think that control theoretic feedback only gives actionable insights for a small subset of feedback systems.
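To see why that regulatory power is so easy to overlook, here is a minimal sketch (my own toy example with arbitrary gains and noise levels, nothing from the article): a proportional controller quietly absorbing disturbances. Looking only at the regulated output, you would never guess how much work the feedback loop is doing.

```python
# Toy illustration (arbitrary numbers): a proportional controller holds a
# noisy process near its setpoint. The disturbances are large, but the
# regulated output barely moves -- which is exactly why well-functioning
# feedback is so easy to forget about.
import random

def simulate(kp, steps=200, setpoint=20.0, noise=2.0, seed=0):
    """Return the trajectory of x_{t+1} = x_t + kp*(setpoint - x_t) + w_t."""
    random.seed(seed)
    x, history = setpoint, []
    for _ in range(steps):
        disturbance = random.gauss(0.0, noise)   # the shocks the controller must absorb
        x = x + kp * (setpoint - x) + disturbance
        history.append(x)
    return history

def max_deviation(history, setpoint=20.0):
    return max(abs(v - setpoint) for v in history)

print("max deviation without feedback (kp=0.0):", round(max_deviation(simulate(kp=0.0)), 1))
print("max deviation with feedback    (kp=0.9):", round(max_deviation(simulate(kp=0.9)), 1))
```

Remove the feedback (kp=0) and the very same disturbances send the process drifting off; that drift is the counterfactual nobody sees while the regulator is doing its job.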
Related to the Preparedness Paradox, the Y2K bug was a fun example where people panicked about interconnectedness and probably overcorrected. https://www.argmin.net/p/in-the-year-2000
As far as operator apathy is concerned, Lisanne Bainbridge was writing about this very cogently in 1983 in "Ironies of automation" (https://www.adaptivecapacitylabs.com/IroniesOfAutomation-Bainbridge83.pdf), which should be required reading for control theory sympathizers!
"This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator."
"... the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator."
Can one design a control system that changes the very structures of the system it seeks to control?
I am not an engineer, though I had aspirations to become one long ago, and I am fairly critical of engineers in particular and engineering generally. My interactions with engineers in professional environments have only reinforced this position. Rational to the point of stupidity, blinded by theory, oblivious to their systemic role, socially inept: what comes most to mind is an old, somewhat insensitive line from a Cory Doctorow novella that (I'm paraphrasing) "engineers are basically high-functioning autistics [I object to the ableist pejorative implication] who have no idea how normal people behave" (related: the engineer's fallacy). Nor do they ever seem to ask whether the problem they are tasked with solving *should* be solved, i.e. controlled, or whether the control policy, decided by someone other than themselves whose intentions are never questioned, perpetuates harm for the sake of stability (i.e. business as usual).
My position in all this was quality assurance, a duct-taper to use David Graeber's term: someone at the bottom of a corporate system who corrected (or attempted to correct) the many, many mistakes made by workers, managers, administrators, customers, and yes, engineers. Our staff engineer was shocked when I first mentioned to him the various ways that workers and managers cheated in order to hit their performance goals (Goodhart's law was in full effect), as though it had never occurred to him, sitting in the air-conditioned office staring at graphs and spreadsheets or whatever he looked at, that the pressures workers and managers were under (being threatened by their bosses, worked to exhaustion, fear of termination, etc.) might compel them to work around unachievable goals (which are never questioned). To the engineer, whose job was never in danger, the metrics were a feedback mechanism for optimizing performance; to the workers and low-level managers, they were a nightmare that kept operations in perpetual fear and stress, with predictable yet ignored consequences.
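To make the Goodhart point concrete, here is a toy sketch I cooked up (hypothetical numbers and a made-up `gaming_cost` parameter, not my former employer's data): once the target exceeds what honest work can deliver, the reported number keeps hitting the target while the real output the metric was supposed to measure falls.

```python
# Toy illustration of Goodhart's law (hypothetical numbers): when the
# target exceeds honest capacity, the worker fakes just enough to hit the
# number, and the faking itself eats into real output.

def period(target, capacity=100, gaming_cost=0.5):
    """One reporting period: returns (reported, actual) output.

    capacity: the most real work the worker can honestly do (assumed).
    gaming_cost: real output sacrificed per unit of faked output (assumed).
    """
    if target <= capacity:
        return capacity, capacity                      # no pressure to cheat
    fake = (target - capacity) / (1 - gaming_cost)     # fake just enough to hit the target
    actual = max(capacity - gaming_cost * fake, 0.0)   # gaming displaces real work
    reported = actual + fake                           # what the dashboard sees
    return reported, actual

print(f"{'target':>6} {'reported':>9} {'actual':>7}")
for target in (80, 100, 120, 160):
    reported, actual = period(target)
    print(f"{target:6d} {reported:9.0f} {actual:7.0f}")
```

The dashboard says the targets are being met; the thing the metric was a proxy for is quietly collapsing.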
Due to many factors (scientific management, inter-department squabbling and miscommunication, employee deskilling, poor training, tyrannical and sociopathic managers, a competitive work culture, unsafe/unhealthy work conditions, massive hierarchization, etc.), it was impossible to change anything within the system itself. Managers blew smoke (i.e. lied) all the way up the chain of command and were merciless to their subordinates. I suspect that cybernetic feedback (as in the Cybersyn model) would have terrified them, as they were always trying to maintain an illusion of control and competence, which would quickly have been dispelled had executives seen how workers at the bottom were treated. But all this assumes the execs would have cared. Why should they, when profits kept going up and there were always new workers to replace the ones that broke?
One particular event stuck with me (which I will keep vague out of a desire for brevity and anonymity). Some higher-ups decided to change some of the hardware systems that we (my department) relied upon to do our jobs, and tasked Maintenance with implementing the changes. They never told us what the consequences of these changes would be, so we had no chance to give informed feedback that might have influenced the implementation. Maintenance disconnected machinery we needed to do our jobs from the network but left it plugged in so it looked like it still functioned, and did not tell us they had done so; I only discovered the changes afterward, when it was too late to undo anything. This severely compromised our operation and drastically increased the time needed to complete tasks in a very high-paced environment. Still trying to figure out what was wrong (the first rule of QA is that you never assume anyone knows what they're doing), I contacted Tech Support, only to be referred to an engineer high up in the company, who proceeded to talk to me as though I were an idiot. In his world, systems functioned as he thought they did, employees were rational and knew how to do their jobs, errors were rare, and misunderstandings or conflicts could be resolved simply by sitting down and talking about them. He could not wrap his rational mind around emotions, office politics, incompetence, conflicting motives, and, apparently, social skills. My working world was one governed by Murphy's Law. I saw all our systems, which people like him apparently designed, fail in ways ranging from the repetitively mundane to the spectacular, and instead of valuing my experience and insight, he dismissed them as irrelevant and foolish. Can you tell I'm still salty about it?
Many years of similar experiences ultimately led me to conclude that the structure of the company (the system) produced the behavior I observed (I hadn't yet heard of system dynamics) and that there was nothing I could do about it. The end for me came when some other higher-ups decided to deskill my department, after I had fought (yes, fought) for years to protect it from incursion by efficiency wonks and managerial incompetence. Would it have done any good to explain how deskilling workers makes them worse at their jobs, because it deprives them of a theory of operation, of a knowledge of how their labor fits into a larger system, of the consequences of their own and everyone else's actions? That this lack of knowledge would cause them to make more errors, which would in turn require more effort and resources to correct? That the work itself was physically destructive to the human body and the stress psychologically destructive (I don't think I knew anyone who didn't use recreational drugs to cope with it), and that the company literally turns people into profit? The term "meat grinder" comes to mind. Engineers made this.
One of the themes in this article seems to center on reconciling the macroeconomic with the microeconomic. In the Humanities spaces I now inhabit, this is well-trodden, if unresolved, ground (e.g. reconciling Marx and Freud). I've noticed how different disciplines come across the same problems completely independently of one another, develop completely different language for describing the same phenomena, and don't seem to realize that others have already been talking about them for some time. My concern is that some of the systems we see failing, however functional they are, perpetuate oppression, misery, inequality, injustice, and environmental destruction. Does the engineer designing a hydroelectric dam care about the ecological impact, the endemic species that will be extirpated by the reservoir? Do they care about the factory worker who will be physically disabled by middle age from repetitive and strenuous labor the engineer will never perform, done to hit some goal their masters set for them in order to maximize company profit? Should the bank computers with their COBOL programs be kept running in service of this system that works (for a given set of criteria)? Can change be built into such a system? Does the engineer understand their role in this?
Engineering can also be a bubble.
Really nice article.