There is an old joke: A mathematician, a physicist, and an engineer were given a red rubber ball and asked to find its volume. The mathematician measured the ball’s diameter and evaluated a volume integral. The physicist immersed the ball in a tub full of water and measured the volume of the displaced water. The engineer looked the ball over, found the model number and the serial number, and then looked the volume up in the “Red Rubber Ball” table. Most people take this as a funny sideswipe at engineers’ hard-nosed pragmatism, but there is an important point to be made here about the epistemological framework of engineering.
The philosopher Gilbert Ryle made a big deal of the distinction between “knowing that” (or declarative knowledge) and “knowing how” (or procedural knowledge). Given the intimate link between engineering and artifacts, the latter form of knowledge comes up all the time in the context of system design and underlies the various functional theories engineers use, although the former type is also important, since it underwrites the lower-level physical theories. The joke about the “Red Rubber Ball” table, however, highlights a third type of knowledge: “knowing when.” In other words, a key tenet of engineering epistemology is that it is not enough to know how to apply a particular strategy; one must also know when to stick with it and when to opt for something else.
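To put the three stances side by side, here is a deliberately toy Python sketch; the model number, serial number, and table entry are invented for illustration. Each procedure yields the ball’s volume, but the “knowing when” lives only in the dispatcher that picks among them:

```python
import math

# "Knowing that": the declarative fact V = (4/3) * pi * r^3,
# applied to a measured diameter.
def volume_from_diameter(diameter: float) -> float:
    return (4.0 / 3.0) * math.pi * (diameter / 2.0) ** 3

# "Knowing how": an operational procedure. The measurement itself is an
# input here, not a model of anything.
def volume_from_displacement(level_before: float, level_after: float) -> float:
    return level_after - level_before

# The engineer's move: a lookup keyed on the artifact's identity.
# (Model, serial, and volume are made up for this sketch; units are cm^3.)
RED_RUBBER_BALL_TABLE = {("RRB-7", "SN-0042"): 523.6}

# "Knowing when": the choice among strategies is itself knowledge, and it
# depends on context that none of the individual procedures contains.
def volume(ctx: dict) -> float:
    key = (ctx.get("model"), ctx.get("serial"))
    if key in RED_RUBBER_BALL_TABLE:      # cheapest and most reliable, if listed
        return RED_RUBBER_BALL_TABLE[key]
    if "diameter" in ctx:                 # fall back to the mathematician
        return volume_from_diameter(ctx["diameter"])
    return volume_from_displacement(ctx["water_before"], ctx["water_after"])

print(volume({"model": "RRB-7", "serial": "SN-0042"}))  # 523.6
print(volume({"diameter": 10.0}))                       # ~523.6
```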
Mark Wilson, in his phenomenal book Physics Avoidance, goes into great detail about what he calls “conceptual strategy,” a pragmatically inflected network of cognitive tools and models that we rely on in our dealings with the world, both in the ordinary, common-sense mode and in the scientific or engineering one:
We are not supernatural intellects; we forever remain the evolved descendants of humble hunter-gatherers, who must cobble together and redirect our modest computational inheritance in the pursuit of more sophisticated objectives. Philosophers often proceed on the presumption that we possess bigger brains and inferential skills than we do, able to juggle descriptive parameters and computational processes far beyond our actual capacities. But with a limited stock of words and smallish brains, we must forever seek roundabout strategies that allow us to handle the extremely large range of challenges that we confront within science and everyday practice (such workaround tactics are called “strategies of physics avoidance” in [a later essay]).
In this, Wilson finds himself in qualified agreement with the “ordinary language” philosophy of J.L. Austin, whom he quotes in the epigraph to “Pragmatics’ place at the table” (Essay 1 of Physics Avoidance):
[O]ur common stock of words embodies all the distinctions men have found worth drawing, and the connections they have found worth marking, in the lifetimes of many generations: these surely are likely to be more numerous, more sound, since they have stood up to the long test of the survival of the fittest, and more subtle, at least in all ordinary and reasonably practical matters, than any that you or I are likely to think up in our arm-chairs of an afternoon—the most favored alternative method.
Wilson’s interest in all of this, as he points out many times, is primarily that of a philosopher of language, but he argues that the computational and modeling practices of engineers carry the same evolutionary lineage through “the lifetimes of many generations”:
To keep their reasoning practices within practical bounds, engineers divide a modeling task into a collection of more or less independent algorithms, controlled by an external register that monitors the overarching strategic purposes to which the localized results eventually contribute. We follow similar policies of strategic guidance in everyday discourse as well, for essentially the same reasons.
The “red rubber ball” joke is then a nice illustration of what Wilson refers to as “investigative moods,” or contextual controls that guide the modeling or design effort. He argues that these investigative moods are indispensable precisely because they “reduce syntactic complexity and allow for significant reasoning compression as well.” While appealing to a look-up table in the context of the joke may seem like overkill, there are many scenarios in which engineers prefer, for a variety of reasons, to rely on empirical or phenomenological knowledge available in the form of tables rather than on the deductive-nomological explanations favored by logical empiricists. The aerospace engineer Walter G. Vincenti, in his thorough study of engineering epistemology, What Engineers Know and How They Know It, gives the example of data tables of the aerodynamic performance of wing sections. Such tables do not explain the airflow over a wing section or why one design might be preferable to another, but they did guide the design of early airplanes and provided the initial steps for making predictions that could later be bootstrapped into more sophisticated theories amenable to computer implementation.
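As a minimal sketch of what table-based prediction looks like in code, consider interpolating a wing section’s lift coefficient from tabulated measurements. The numbers below are made up for illustration, not actual NACA data; the point is that the function predicts without explaining, since nothing in it models the airflow:

```python
import bisect

# A phenomenological table: angle of attack (degrees) -> lift coefficient.
# Values are invented for this sketch; real catalogs of wing-section data
# of this general kind are what Vincenti discusses.
ALPHA = [-4.0, 0.0, 4.0, 8.0, 12.0]
CL    = [-0.2, 0.25, 0.7, 1.1, 1.35]

def lift_coefficient(alpha_deg: float) -> float:
    """Linearly interpolate the table: a prediction with no theory of airflow."""
    if not ALPHA[0] <= alpha_deg <= ALPHA[-1]:
        raise ValueError("outside the tabulated range: the table is silent here")
    i = bisect.bisect_left(ALPHA, alpha_deg)
    i = max(1, min(i, len(ALPHA) - 1))       # pick the bracketing interval
    t = (alpha_deg - ALPHA[i - 1]) / (ALPHA[i] - ALPHA[i - 1])
    return CL[i - 1] + t * (CL[i] - CL[i - 1])

print(lift_coefficient(2.0))   # 0.475, interpolated between 0 and 4 degrees
```

Nothing in lift_coefficient knows why the numbers slope the way they do; the “theory” is entirely in the table, which is exactly the epistemic status Vincenti assigns to the early wing-section catalogs.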
The evolutionary gloss that Wilson puts on J.L. Austin’s view of language (stopping shy of worrying whether one may stumble into “violating Oxbridge linguistic etiquette”) is echoed by Vincenti, although Vincenti appeals instead to the evolutionary epistemology of Donald Campbell and Karl Popper. Evolutionary epistemology takes a more relaxed approach to conceptualizing the growth of knowledge, allowing not only for logic and deductive reasoning, but also for inherently fallible induction or for genetic algorithms that proceed by trial and error, random variation, and selection. Vincenti cites the history of the airfoil-shape studies of David R. Davis and Eastman Jacobs as an example of the interplay between “blind”[1] trial and error and theoretical models:
At the level of the device, the knowledge required is the airfoil’s shape. Davis’s variations in shape were almost completely blind in any meaningful sense, virtually simple cut-and-try even though represented by sophisticated-looking equations. Jacobs chose his variations much less blindly on the basis of a rational theoretical concept and careful analysis of experimental experience; to the considerable extent that boundary-layer phenomena and the effects of surface roughness were uncertain or unknown, however, his variations still had a large element of blindness. Although the theoretical concept seemed sound in principle, there was no assurance at the time that any of the resulting airfoils would be worth retaining. Both Davis’s and the better of Jacobs’s airfoils were selected initially for retention on the basis of wind-tunnel tests and the criterion universal to technology: does it (in some practical sense) work? Davis’s airfoil was soon discarded when Jacobs’s sections appeared and when it was judged to no longer work as airplane speeds went up. Jacobs’s sections died out more slowly for their original laminar-flow purpose when they failed to accomplish that purpose on actual airplanes in flight; they found an unintended niche of retention for a period, however, as high-speed airfoils.
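Campbell’s “blind variation and selective retention” has a direct computational analogue. The sketch below is a minimal (1+1)-style evolutionary loop in Python; the “airfoil” is just a parameter vector and the “wind tunnel” is a black-box score I have invented for illustration, not anything from Vincenti’s account:

```python
import random

def wind_tunnel_score(shape: list[float]) -> float:
    # Stand-in for the wind-tunnel test: the variator never sees inside
    # this function; it only learns "does it (in some practical sense) work?"
    return -sum((x - 0.3) ** 2 for x in shape)

def vary(shape: list[float], step: float = 0.05) -> list[float]:
    # "Blind" in Vincenti's sense: not uniformly random, but proceeding
    # without complete or adequate guidance about what will actually help.
    return [x + random.gauss(0.0, step) for x in shape]

def evolve(n_params: int = 4, generations: int = 500, seed: int = 0) -> list[float]:
    random.seed(seed)
    best = [random.random() for _ in range(n_params)]
    best_score = wind_tunnel_score(best)
    for _ in range(generations):
        candidate = vary(best)             # blind variation
        score = wind_tunnel_score(candidate)
        if score > best_score:             # selective retention
            best, best_score = candidate, score
    return best

print(evolve())  # drifts toward [0.3, 0.3, 0.3, 0.3] without ever "understanding" why
```

In these terms, Davis’s near-pure cut-and-try corresponds to a large step size and no structure in the variation, while Jacobs’s theory-guided search would bias the variations toward promising directions, shrinking but never eliminating the blind component.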
As machine learning and AI are adopted widely beyond just making people click on ads, these ideas acquire greater significance. I will revisit all of this in the next few posts, especially in the context of interpretations of probability and stochastic systems.
[1] Vincenti: “‘Blind’ is used here to connote that variations take place, not randomly, but only without complete or adequate guidance.”