Daniel Dennett (1942-2024)
We all came out from under Dennett's overcoat of transducers and effectors.
I’ve had a complicated history with the writings of Daniel Dennett, who passed away on April 19th, just three days ago. As a teenager who had grown up reading the science fiction and the philosophical writings of Stanislaw Lem, I was greatly influenced by The Mind’s I, a collection of stories and essays on the nature of the mind and the self that Dennett had put together with Douglas Hofstadter. At some point, I went through a brief New Atheist phase, à la Dennett and Dawkins. Now, two decades later, I have come to appreciate Dennett’s broad intellect and interests, his genuine interest in the sciences of computation and cognition, his appreciation of engineering as the mode of inquiry complementary to natural science, and his wit. I find his radically adaptationist approach to evolution way too reductive, but his openness to engaging with a wide variety of sources was truly one of a kind. The discussion of Friedrich Nietzsche’s ideas was the high point of Darwin’s Dangerous Idea (while the broadside against Stephen Jay Gould and Richard Lewontin was the low point of that book).
My two favorite books among his writings are Elbow Room, in which he provides a compatibilist account of free will, and The Intentional Stance. I want to quote one passage from the latter:
Our methodological solipsism dictates that we ignore the environment in which the organism resides—or has resided—but we can still locate a boundary between the organism and its environment and determine the input and output surfaces of its nervous system. At these peripheries there are the sensory transducers and motor effectors. The transducers respond to patterns of physical energy impinging on them by producing syntactic objects—"signals"—with certain properties. The effectors at the other end respond to other syntactic objects—"commands"—by producing muscle flexions of certain sorts. An idea that in various forms licenses all speculation and theorizing about the semantics of mental representation is the idea that the semantic properties of mental representations are at least partially determinable by their relations, however indirect, with these transducers and effectors. If we know the stimulus conditions of a transducer, for instance, we can begin to interpret its signal—subject to many pitfalls and caveats. A similar tentative and partial semantic interpretation of "commands" can be given once we see what motions of the body they normally produce. Moving toward the center, downstream from the transducers and upstream from the effectors, we can endow more central events and states with representational powers, and hence at least a partial semantic interpretation … .
For the moment, however, we should close our eyes to this information about transducer sensitivity and effector power and treat the transducers as "oracles" whose sources of information are hidden (and whose obiter dicta are hence uninterpreted by us) and treat the effectors as obedient producers of unknown effects. This might seem to be a bizarre limitation of viewpoint to adopt, but it has its rationale: it is the brain's-eye view of the mind, and it is the brain, in the end, that does all the work … . Brains are syntactic engines, so in the end and in principle the control functions of a human nervous system must be explicable at this level or remain forever mysterious.
With his characteristic eloquence and precision, he has more or less summarized the open systems framework of control theory (for example, the idea of the senses as oracles is quite close to the notion of input as something free and unexplained by the system itself). I am sure he would have loved learning about the work of Jan Willems on the behavioral approach to control systems.
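For readers unfamiliar with Willems' framework, here is a minimal sketch of the formal idea (my notation, not Dennett's or Willems' verbatim): a dynamical system is specified not by an input-output map but by its behavior, the set of all signal trajectories it deems possible, and an "input" is distinguished after the fact as a component that the behavior leaves entirely free—much like Dennett's oracular transducers, whose outputs arrive unexplained by the system itself.

```latex
% Behavioral approach (sketch): a system is a triple
%   T = time axis, W = signal space, B = the behavior (admissible trajectories)
\[
  \Sigma = (\mathbb{T}, \mathbb{W}, \mathfrak{B}),
  \qquad \mathfrak{B} \subseteq \mathbb{W}^{\mathbb{T}}.
\]
% Partition each trajectory as w = (u, y). The component u is an *input*
% if it is free: nothing in the behavior constrains it.
\[
  \forall\, u : \mathbb{T} \to \mathbb{U}
  \;\; \exists\, y : \mathbb{T} \to \mathbb{Y}
  \;\;\text{such that}\;\; (u, y) \in \mathfrak{B}.
\]
```

On this view the input/output split is a choice of bookkeeping imposed on the behavior, which is one way of cashing out the idea that the "oracle" status of the senses is a stance taken toward the system rather than an intrinsic property of it.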
Towards the end of his life, he became increasingly worried about the anti-humanist strain of modern AI, which he summarized in “The problem with counterfeit people,” published in The Atlantic last year. While I don’t agree with the alarmist tone of that piece (and I definitely think citing Yuval Harari, or even taking seriously anything he writes, is overkill), it does point to the Janus-like nature of the intentional stance: We tend to ascribe not only mental states, but also internal values, to systems we perceive as acting with intent. This has been indispensable in the development of culture, ethics, and the arts, but it can also lead to irreversible loss of control—not in the doomsday sense in which the AI risk community means it, but in the more mundane (and more ominous) sense that we will no longer be able to rely on our political and economic systems as mechanisms for embodying (if very imperfectly) the idea of Habermasian communicative rationality, which is meant to serve human purposes and underwrite human values. Similar concerns were put forth by the French philosopher Jean-Pierre Dupuy in his book The Mark of the Sacred:
Many philosophers [are] saying that Descartes’s dream—of putting man in the place of God, as the master and possessor of nature—turned into a nightmare, with the result that mastery is now itself in urgent need of being mastered. I fear that they have not the least understanding of what is really at issue. They fail to see that the technology now taking shape at the intersection of a great many fields aims precisely at non-mastery. I repeat: the engineer of tomorrow will be an apprentice sorcerer not by negligence or incompetence; he will be one deliberately. He will begin by imagining and designing complex organisms in the form of mathematical models, and will then try to determine, by systematically exploring the landscape of their functional properties, which behaviors they are capable of supporting. In adopting a “bottom-up” approach of this kind, he will be more an explorer and an experimenter than a builder; his success will be measured more by the extent to which these creatures surprise him than by their agreement with a set of preestablished criteria and specifications. Fields such as artificial life, genetic algorithms, robotics, and distributed artificial intelligence already display this character. In the years ahead the aspiration to non-mastery threatens to reach fruition with the demiurgic manipulation of matter on the atomic and molecular scale by nanotechnologies. Moreover, to the extent that the scientist is now likelier to be someone who, rather than seeking to discover a reality independent of the mind, investigates instead the properties of his own inventions (more as a researcher in artificial intelligence, one might say, than as a neurophysiologist), the roles of engineer and scientist will come to be confused and, ultimately, conflated with each other. Nature itself will become what humans have made of it, by unleashing in it processes over which, by design, there is no mastery.
We should keep these warnings in mind, lest we lose sight of the humanistic spirit of engineering.
Agree much w your take on DD. But his trick is to introduce syntax and semantics = transduction. His illusion and trick for his audience. These interfaces are alike but not =. Unclear that he successfully responded to Searle.
Nice tribute to a very broad thinker. Re: Habermas, is the concern that A.I./LLMs are not accountable for their speech acts in the same way humans are? Or something else?