Dan Davies (whose substack I very much recommend) recently wrote about the sense in which our mathematical handle on systems of sufficiently high complexity (e.g., the brain or the economy) is more metaphorical than quantitative in nature. The complexity manifests itself not only on the object level, pertaining to the system itself as it is constituted as a worldly object, but also on the meta level, pertaining to the complexity of the theories we use to model the system with the usual goals of prediction and control. Dan quotes Stafford Beer’s 1959 book, Cybernetics and Management; I will add a few more quotations from very different sources.
Friedrich Hayek, “The theory of complex phenomena,” in The Critical Approach to Science and Philosophy, edited by Mario Bunge, 1964:
Economic theory is confined to describing kinds of patterns which will appear if certain general conditions are satisfied; it can rarely if ever derive from this knowledge any predictions of specific phenomena. This is seen most clearly if we consider those systems of simultaneous equations which since Leon Walras have been widely used to represent the general relations between the prices and the quantities of all commodities produced and sold. They are so framed that if we were able to fill in all the blanks, i.e., if we knew all the parameters of these equations, we could calculate the prices and quantities of all the commodities. But, as at least the founders of this theory clearly understood, its purpose is not “to arrive at a numerical calculation of prices,” because it would be “absurd” to assume that we can ascertain all the data. Prediction of the formation of this general kind of pattern rests on certain very general factual assumptions (such as that most people engage in trade in order to earn an income, that they prefer a larger income to a smaller one, that they are not prevented from entering whatever trade they wish, etc.) —assumptions which determine the scope of the variables but not their particular values; it is, however, not dependent on the knowledge of the more particular circumstances which we would have to know in order to be able to predict prices or quantities of particular commodities. No economist has yet succeeded in making a fortune by buying or selling commodities on the basis of his scientific prediction of future prices. (Even though some may have done so by selling such predictions!)
To the physicist it often seems puzzling why the economist should bother to formulate such equations, since he admittedly sees no chance of determining the numerical values of the parameters which would enable him to derive the values of the individual magnitudes. Even many economists seem loath to admit that those systems of equations are not a step toward specific predictions of individual events but the final results of their theoretical efforts—a description merely of the general character of the order we shall find under specifiable conditions but can never translate into a prediction of particular manifestations.
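Hayek’s point about the Walrasian equations can be made concrete with a toy example: the system is trivial to write down and solve, but only because every parameter is filled in by fiat. A minimal sketch with a linear two-good economy, all numbers invented for illustration:

```python
import numpy as np

# A toy two-good Walrasian system with linear demand and supply.
# Every parameter below is invented -- Hayek's point is precisely
# that for a real economy we could never ascertain these data.
a = np.array([10.0, 8.0])          # demand intercepts
B = np.array([[2.0, 0.5],          # demand slopes, incl. cross-price effects
              [0.3, 1.5]])
c = np.array([1.0, 2.0])           # supply intercepts
D = np.diag([1.0, 2.0])           # supply slopes

# Market clearing: a - B p = c + D p  =>  (B + D) p = a - c
p = np.linalg.solve(B + D, a - c)  # equilibrium prices
q = a - B @ p                       # equilibrium quantities

print("prices:", p)
print("quantities:", q)
```

The pattern prediction (prices and quantities exist, are positive, and clear both markets) survives any reasonable choice of parameters; the specific numbers printed are entirely an artifact of the numbers typed in.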
Michael Arbib, Oliver Selfridge, and Edwina Rissland, “A dialogue on ill-defined control,” in Adaptive Control of Ill-Defined Systems, ed. by Arbib, Rissland, and Selfridge, 1984:
The task of the economic decision-maker is not to control a physical system that knows nothing, but to control a system that is in turn modelling the controllers! And, of course, the economists are part of the economy that is to be controlled. The world, which is the ill-defined system that we are trying to control, is a world full of people. And they are each trying to control certain aspects of a world full of people. And the problem is: How do we coordinate all these different control systems to achieve some sort of overall goal when each control system is changing the environment of other control systems which in turn adapt and evolve? There is no more important problem than that.
…
The control is distributed. There are indeed systems whose components have different goals, but which can agree on a higher level goal for some kind of overall satisfaction. Perhaps a good marriage is like that. Or a long-lasting peace treaty among nations. So we are all embedded in our own control systems.
…
The trouble is that when you specify a task, it is in the language of human intentionality. There is a belief that arises out of too much contact with computers that the world runs by information. But the universe does not run by information; it runs by dynamics that are constrained. And so systems have their lawfulness, and you can't impose human intentionality on them, regardless. That's the trouble here. You can do it inside computers, which are about the worst artefacts to use as images of the real world that I can think of, because switching networks have totally arbitrary dynamics as symbol manipulators. But they are a very poor sample of the real world. And that's the difficulty.
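The dialogue’s central image, controllers whose actions are each other’s environment, can be sketched in a few lines: two proportional controllers acting on one shared state, each with its own setpoint. The gains and targets below are invented; the point is only the qualitative outcome, namely that the shared state settles at a gain-weighted compromise that satisfies neither controller exactly.

```python
# Two controllers sharing one environment, each with its own goal.
# Each applies a proportional correction toward its own setpoint,
# and in doing so perturbs the environment of the other.
# (Gains and setpoints are invented for illustration.)

def simulate(r1, r2, k1, k2, steps=2000, dt=0.01):
    x = 0.0
    for _ in range(steps):
        u1 = k1 * (r1 - x)   # controller 1 pushes toward its target r1
        u2 = k2 * (r2 - x)   # controller 2 pushes toward its target r2
        x += (u1 + u2) * dt  # both act on the same state
    return x

x_final = simulate(r1=1.0, r2=3.0, k1=1.0, k2=2.0)
# The steady state is the gain-weighted average (k1*r1 + k2*r2)/(k1 + k2),
# not either controller's setpoint.
print(x_final)
```

Here neither controller “fails,” yet neither reaches its goal; coordination on a higher-level goal, as the dialogue puts it, would have to come from outside the two control loops.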
Michael Arbib and Mary Hesse, The Construction of Reality, 1986:
If we look at the implications of recent discussions of the theory ladenness of observation, of realism and the use of scientific models, we find that use of language in scientific theory conforms closely to the metaphoric model. Scientific revolutions are, in fact, metaphoric revolutions, and theoretical explanation should be seen as metaphoric redescription of the domain of phenomena.
Scientific data are initially described either in an "observation" language or in the language of a familiar theory and are then redescribed in terms of a theoretical model that allows two apparently disparate situations to interact in a novel way. For example, sounds and waves on water are both parts of our everyday observation; what is novel is the suggestion that there is something about sound akin to waves—not the wetness or the sight of whitecaps but an underlying regularity of motion. We recognize some positive analogy between the two systems, and the negative analogy creates a tension that can invest the phenomena with new meaning. Metaphor causes us to "see" the phenomena differently and causes the meanings of terms that are relatively observational and literal in the original system to shift toward the metaphoric meaning. Terms such as "harmony," "resonance," and "pitch" come to be used with precise meanings derived from the wave model. Meaning is constituted by a network, and metaphor forces us to look at the intersections and interaction of different parts of the network. In terms of the metaphor, we can find and express deeper analogies between diverse phenomena; or, of course, in the case of bad metaphors we may find we are misled by them.
…
Scientific models are, in the end, intended to satisfy what we have called the pragmatic criterion; this satisfaction will generally require that their local applications can be expressed in locally stable and consistent language and, if necessary, in the form of deductive arguments. This is one limiting case of the view that "all language is metaphorical." However, it does not entail a return to the view that science is distinguished by a special literal use of language in which meanings are given exclusively by empirical states of affairs ("truth conditions") and for which truth is explicated by a simple correspondence theory. It follows that we cannot assume, with some present-day realists, that the relative success of scientific models in satisfying the pragmatic criterion shows that they are ideally intended to be true descriptions of the real, underlying structure of the world. The strong realist view seems, like Bacon's alphabet of nature, to require an ideal universal language exactly matching the world in its essential features. This view neglects the facts that scientific theory has to be based in some natural language or other and that the historical sequence of fundamental theories does not exhibit convergence toward universal truth in any ideal language. Perhaps there is no such language, in which case there are no strictly universal laws of nature, only discoverable regularities in our local (though large) regions of space and time. Or, perhaps, there is such a language, but known to God only, in which case there will be universal laws of nature; but it is a strong assumption to suppose we can attain them. We have no sufficient reasons from the local success of science for making the assumption, and moreover we do not need it for the ordinary business of theorizing in the scientific languages we actually use.
A.G.J. MacFarlane, “Information, knowledge, and control,” in Essays on Control: Perspectives in the Theory and its Applications, edited by H.L. Trentelman and J.C. Willems, 1993:
In the same way that we can understand the mechanisms of the weather while only having a limited ability to predict it (most short term weather prediction depends heavily on the empirical device of looking at what is actually happening via satellite photography), so we can understand the qualitative aspects of how complex systems work while only being able to control them to a limited extent. For example, compare the complexity of the models normally available for a petrochemical refining plant with the complexity of the controllers normally used to control it. It is commonplace to use plant models having hundreds or even thousands of state variables while using controllers which have only tens of state variables. Indeed I was once shown over a large research establishment where there were three adjacent offices all full of dynamical specialists. It was explained to me that the first office contained modellers building very large plant models, the middle office contained people creating reduced order models of the plant—from these same very high order models—for the people in the end office to use in designing the control systems to be used on the plant. I have always considered this to be a deeply instructive parable. All these activities were necessary—the first to understand it, and to devise a point of departure for further modelling work on the process; the second to produce a related, simpler system which reflected only those aspects of plant behaviour whose complexity could be successfully matched by a feasible controller design, which would in turn achieve an attainable level of effective interaction with the plant. (Anyone who thinks that this point of view is exaggeratedly pessimistic should try designing and testing effective controllers for some simple mechanical systems such as a quadruple inverted pendulum.)
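MacFarlane’s three-office parable can be compressed into a few lines of code. The sketch below uses the crudest possible reduction, modal truncation of a diagonal system (keep the slow eigenvalues, drop the fast ones); industrial reduced-order modelling uses balanced truncation and the like, and all numbers here are invented. It shows the middle office’s job: a model with many state variables becomes one with few, preserving only the behaviour a feasible controller could match.

```python
import numpy as np

# MacFarlane's three offices in miniature: a "plant model" with many
# state variables (dx/dt = A x + B u, y = C x), reduced to a handful
# of states for controller design.  Modal truncation of a diagonal
# system stands in for real reduction methods; numbers are invented.

rng = np.random.default_rng(0)
n = 20                                     # "large" plant model
slow = np.array([-1.0, -2.0, -3.0])        # dynamics that matter for control
fast = -50.0 - 100.0 * rng.random(n - 3)   # fast parasitic dynamics
A = np.diag(np.concatenate([slow, fast]))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Reduced-order model: keep only the three slow modes.
r = 3
Ar, Br, Cr = A[:r, :r], B[:r], C[:, :r]

# Compare steady-state (DC) gains, -C A^{-1} B, of full and reduced models.
dc_full = (-C @ np.linalg.solve(A, B)).item()
dc_red = (-Cr @ np.linalg.solve(Ar, Br)).item()
print(dc_full, dc_red)
```

The reduced model discards most of the state variables yet captures the low-frequency behaviour almost exactly, which is what the end office needs: a controller with tens of states interacting with a plant modelled by hundreds.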
Bernard Gaveau, Charles Rockland, and Sanjoy K. Mitter, “Autonomy and adaptiveness: a perspective on integrative neural architectures,” Laboratory for Information and Decision Systems technical report, MIT, 1994:
It is not clear to us, for example, that the general idea of "mathematization" of science which was so successful in physics and, in principle at least, in chemistry, can be applied, even in principle, to an autonomous system context. In other words, the traditional ideology that "mathematical physics" is the ultimate state of an "exact science", may be totally invalid in the present context. The whole methodology of physics, engineering science, and modern technology in general, is the modeling of the systems one wants to describe, in a very precise terminology, either using infinitesimal calculus or other kinds of "calculus" (symbol manipulation), in such a way that one can simulate the systems, predict their behavior, and act on them, if necessary. In the setting of autonomous systems, the role of theory may be rather different, and less centered on sharp prediction of system behavior. We anticipate that this role will be to provide a framework for discussing the nature of the "coherence" of the system, and how it is achieved and maintained. Typically, whether in the context of natural or of engineered systems, the issue of "coherence" is treated as, at best, a background consideration, and certainly not as a focus of inquiry in itself. We believe that any theory of autonomous systems will need to bring "coherence" into the foreground. We expect that, correspondingly, new kinds of mathematics will be necessary.
It may well be that there are "laws", or principles, of coherent organization. However, it is doubtful that these take on the character of physical laws, such as Newton's laws or even the laws of quantum mechanics. Considered in an engineering context, they are much more likely to be akin to the "principles", explicit or intuitive, used by a creative designer. A case in point is the design of "layered hierarchical systems". Physical laws have a role to play here, but the final design incorporates principles which may not be expressible within current mathematical frameworks.