In this and in the next few posts, I plan to lay out my thoughts on what constitutes the unique philosophy of engineering, as distinct from that of science or even technology.1 The basic notion is that of an open system, an entity distinguished from the rest of the world (its environment) but interfacing with it. The system and the environment can affect each other, and engineers are interested in this two-way interaction. The system is then an artifact which has a function or purpose. This is the viewpoint nicely articulated, for example, by Herbert Simon in The Sciences of the Artificial:
Fulfillment of purpose or adaptation to a goal involves a relation among three terms: the purpose or goal, the character of the artifact, and the environment in which the artifact performs. When we think of a clock, for example, in terms of purpose we may use the child’s definition: “a clock is to tell time.” When we focus our attention on the clock itself, we may describe it in terms of arrangements of gears and the application of the forces of springs or gravity operating on a weight or pendulum.
But we may also consider clocks in relation to the environment in which they are to be used. Sundials perform as clocks in sunny climates—they are more useful in Phoenix than in Boston and of no use at all during the Arctic winter. Devising a clock that would tell time on a rolling and pitching ship, with sufficient accuracy to determine longitude, was one of the great adventures of eighteenth-century science and technology. To perform in this difficult environment, the clock had to be endowed with many delicate properties, some of them largely or totally irrelevant to the performance of a landlubber’s clock.
Sanjoy Mitter, in “Some conceptual foundations of systems and computer science” (1987), writes that in engineering
one is interested in systems which are not of nature but man-made, where one might want to create a new device to perform a specific function, or shape the national economy to grow along a particular path or to synthesize a complex system consisting of interconnections of subsystems to perform a complex task. These systems are not isolated in the sense that they interact with an external environment, they do have inputs or external influences, some of which can be controlled and some of which are uncontrolled, the behavior of the system can be observed (perhaps inaccurately) and finally the behavior of the system can be changed by a feedback mechanism which feeds the input via a control mechanism into the system. The concern here is one of synthesis (not of analysis) and the process is one of invention and not of discovery.
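The feedback loop Mitter describes can be made concrete with a toy sketch (my own illustration, not Mitter's): an observed output is fed back through a control mechanism into the input, changing the system's behavior. The plant law, the gain `-0.6`, and the function names are all illustrative assumptions.

```python
# Illustrative sketch of feedback: a control law acting on observations
# of the state reshapes the behavior of an otherwise unstable system.

def step(x, u, a=1.1):
    # Plant x[t+1] = a*x[t] + u[t]; with u = 0 and a > 1 it diverges.
    return a * x + u

def run(x0, controller, steps=20):
    """Simulate the loop: observe x, compute the input u via the
    controller, feed it back into the system."""
    x, xs = x0, [x0]
    for _ in range(steps):
        u = controller(x)   # control mechanism acting on the observation
        x = step(x, u)
        xs.append(x)
    return xs

open_loop = run(1.0, controller=lambda x: 0.0)        # no feedback: grows
closed_loop = run(1.0, controller=lambda x: -0.6 * x) # feedback: decays
print(abs(open_loop[-1]), abs(closed_loop[-1]))
```

With feedback the effective dynamics become x[t+1] = (1.1 - 0.6) x[t] = 0.5 x[t], so the same plant, embedded in a different loop with its environment, exhibits an entirely different behavior.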
Jan Willems, in “Thoughts on system identification” (2006), writes:
An important point that is unfortunately all too often absent or insufficiently emphasized in discussions about mathematical models is the issue of open versus closed systems. In open systems, we aim for a law that describes the relation between system variables, but does not go to the point of stating what will actually happen. The interaction with the (unmodeled) environment is explicitly part of the model, but what the environment will be, is not specified. The paradigmatic example of an open system is an input/output system, which explains the output in terms of the input, but leaves the input as imposed by the environment, unexplained and unspecified. The paradigmatic example of a closed system is an autonomous system. Systems and control theory, signal processing, and computing science seem to be the only areas which clearly address open systems. Mathematicians and physicists, for example, unfortunately usually end up viewing dynamical systems as closed systems.
Open systems interact with their environment; in engineered systems, this interaction, and whether and to what extent it is aligned with the desired function or goals, is the object of design, optimization, implementation, and maintenance. Automobiles, airplanes, computers, bridges, power plants, GPS satellites, the Internet, and the national economy are all open systems. Their openness to their environment is the domain of the contingent: a model of an open system only specifies the relations among the system's attributes, both manifest (these include the observable and manipulable variables crossing the system-environment boundary) and latent (pertaining to the internal organization of the system); it does not mandate specific outcomes. These ideas go all the way back to Aristotle's Nicomachean Ethics: "Téchnē (art) is a state of capacity to produce with a true logos (course of reasoning)." That is, the art of the engineer is to produce (or bring into existence) things that could be otherwise: the actualization of some specific potentialities selected from the set of all possible interactions between the system and its environment (what Willems refers to as the system's behavior).
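The open/closed distinction can be sketched in a few lines of code (a minimal illustration of my own, assuming a simple first-order discrete-time system; the coefficients `a`, `b` and the function names are not from the post). A closed system's model fully determines what happens; an open system's model is a law relating input and output that leaves the input, imposed by the environment, unspecified.

```python
# Closed (autonomous) system x[t+1] = a*x[t]: given an initial condition,
# the model alone dictates the entire trajectory.
def closed_trajectory(x0, a=0.5, steps=5):
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1])
    return xs

# Open system y[t+1] = a*y[t] + b*u[t]: the model is a *relation* between
# the input u and the output y; it does not say what u will actually be.
def satisfies_law(u, y, a=0.5, b=1.0):
    return all(abs(y[t + 1] - (a * y[t] + b * u[t])) < 1e-9
               for t in range(len(u)))

# One trajectory per initial condition for the closed system...
print(closed_trajectory(1.0, steps=3))   # [1.0, 0.5, 0.25, 0.125]

# ...but the open system's behavior contains one trajectory for every
# input the environment might impose:
print(satisfies_law([1.0, 0.0], [0.0, 1.0, 0.5]))   # True
print(satisfies_law([0.0, 1.0], [0.0, 0.0, 1.0]))   # True
```

In Willems' terms, the behavior of the open system is the set of all (u, y) pairs for which `satisfies_law` holds; the model excludes trajectories violating the law but mandates no particular one.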
This viewpoint is often termed, somewhat misleadingly, the “black-box approach.” On this view, the system is conceptualized as a device that produces responses when subjected to external stimuli, and its behavior (in the Willemsian sense) is the set of all possible stimulus-response realizations.2 For example, this is how Leon Chua describes an electrical device:
[A]n electrical device is any electric contrivance having two or more electrically accessible terminals through which electric power may be applied to achieve a useful purpose.
The complete behavioral description of the device is then conceptualized as a Gedankenexperiment, in which the device is connected to an arbitrary excitation network, and then we seek to describe the collection of all possible current and voltage waveforms that can be generated at the interface between the device and the excitation network. It is the possibility of swapping out one excitation network for another that renders the device an open system. Of course, this is too general to be useful, so we have to open up the black box, decompose it into tractable subsystems, study each in isolation (by treating the remainder of the system as its environment), and then reconnect everything and see how all the component subsystems interact. This strategy, which Willems describes as “tearing, zooming, and linking,” relies on the reciprocal interaction between two types of theories: physical and functional. It is the latter that are unique to engineering and systems science in general.
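A toy version of "linking" can be sketched as follows (my own illustration, assuming ideal one-port components; the values R = 2 and V = 10 are arbitrary). Each component is modeled, Chua-style, only through a law on its terminal variables (v, i); interconnecting two components means both laws must hold simultaneously on the shared variables, i.e., the linked behavior is the intersection of the component behaviors.

```python
# Each one-port component is described by a law on its terminal
# variables; interconnection intersects the component behaviors.

def resistor_law(v, i, R=2.0):
    # Ohm's law: behavior is {(v, i) : v = R*i}
    return abs(v - R * i) < 1e-9

def source_law(v, i, V=10.0):
    # Ideal voltage source: behavior is {(v, i) : v = V}, any current
    return abs(v - V) < 1e-9

def linked_behavior(candidates):
    """Linking: keep the (v, i) pairs satisfying *both* laws at once."""
    return [(v, i) for v, i in candidates
            if resistor_law(v, i) and source_law(v, i)]

# Scanning a grid of candidate waveform values, the interconnection
# singles out the operating point v = 10, i = 5:
grid = [(v, i * 0.5) for v in range(0, 21) for i in range(0, 21)]
print(linked_behavior(grid))   # [(10, 5.0)]
```

Swapping `source_law` for a different excitation network changes the linked behavior without touching the resistor model, which is precisely what makes the device an open system.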
1. While the literature on the philosophy of engineering is nowhere near as extensive as that on the philosophy of science, there are nevertheless a number of interesting papers and books which I will often be referring to. As an example, take a look at the “Philosophy and engineering” issue of The Monist.
2. It should be pointed out that Willems rejected any a priori designation of a system’s external attributes as inputs or outputs in favor of a fairly austere empiricism: which of the externally observable attributes of the system are the inputs and which are the outputs should be deduced from the law-like regularities of their covariation, not imposed at the outset.