Since the comments have veered into philosophy, I can ask a question which I originally held back.
Maxim, your philosophical interests typically hop between logical empiricist philosophy and continental metaphysics. Is there a philosophical map in your head that reconciles the metaphysics-denial (to put it crudely) of the logical positivists with the sort of continental philosophy that figures in certain other kinds of arguments you make? (Yuk Hui's _Recursivity and Contingency_, which I read on your recommendation, comes to mind as a concrete example.)
Good question! To set the record straight, though: While I am interested in logical empiricism (aka logical positivism) among other things, I view their fixation on language and verification as misguided. My epistemological sensibilities are more aligned with constructive empiricism, pragmatism, and operationalism, roughly along the lines of Mark Wilson and Hasok Chang. I don't think this should be much of a surprise given that I am a professor of engineering. As far as continental metaphysics goes, I have been influenced by it in two directions. The first is a sharp awareness of the profound limitations of technocratic solutions, which also engenders a certain degree of skepticism toward universalist views of technology. This is the line of thought articulated by Jean-Pierre Dupuy and Yuk Hui. The second is ontology, ethics, and their roles in shaping and influencing what Popper called "metaphysical research programmes." These underlie the "why" of science and technology.
Thank you for sharing your philosophical influences and positions. I like to file people and their (philosophical) opinions away into little boxes, and am now happy to do the same with yours.
I will not push this thread any further than this.
PS: Mario Bunge's _Exact and Scientific Metaphysics_ programme sounds a lot like Popper's, but there must be differences.
Very interesting indeed!
Since you already brought up the cybernetics connection, I think (?) another contact point is the emphasis, from the very beginning, on feedback loops -- both from Rosenblueth, Wiener, and Bigelow and from McCulloch and Pitts. Both stressed that purposeful systems aren't merely collections of simple reflex arcs or simple input-output devices, but act on the basis of internal states as well.
What I find interesting in light of your analysis is that, in fact, McCulloch and Pitts were almost explicit about relating the idea of the system's state to the notion of abstraction (from the section "consequences" in their paper, emphasis mine):
> Causality, which requires description of states and a law of necessary connection relating them, has appeared in several forms in several sciences, but never, except in statistics, has it been as irreciprocal as in this theory. Specification for any one time of afferent stimulation and of the activity of all constituent neurons, each an “all-or-none” affair, determines the state. Specification of the nervous net provides the law of necessary connection whereby one can compute from the description of any state that of the succeeding state, but the inclusion of disjunctive relations prevents complete determination of the one before. Moreover, the regenerative activity of constituent circles renders reference indefinite as to time past. Thus our knowledge of the world, including ourselves, is incomplete as to space and indefinite as to time. **This ignorance, implicit in all our brains, is the counterpart of the abstraction which renders our knowledge useful**. The role of brains in determining the epistemic relations of our theories to our observations and of these to the facts is all too clear, for it is apparent that every idea and every sensation is realized by activity within that net, and by no such activity are the actual afferents fully determined.
Even more broadly, on the neuro side, and though not cited directly in the famous M-P paper, the idea of "reverberating activity" in loopy (in modern terms, recurrent) neural circuits had been suggested and discussed at about the same time and earlier by Lorente de Nó and Lashley, and later by Hebb. McCulloch himself, in his 1949 "The brain as a computing machine" [1], attributes the idea to Lorente de Nó and to Kubie. (There is no explicit citation, but a relevant work by Kubie -- who was a psychoanalyst -- can be found in [2]; there are some nice quotes there about feed-forward vs. recurrent connectivity, again if we modernize the terminology.)
[1] McCulloch, Warren S. "The brain as a computing machine." Electrical Engineering 68.6 (1949): 492-497.
[2] Kubie, Lawrence S. "A theoretical application to some neurological problems of the properties of excitation waves which move in closed circuits." Brain 53.2 (1930): 166-177.
Very apropos quotation from McCulloch and Pitts! By the way, Kleene's 1951 RAND memo "Representation of events in nerve nets and finite automata" both makes the notion of internal state explicit and presents an algebraic framework for analyzing both M-P nets and finite-state automata.
Regarding feedback loops: the distinction between simple reflex arcs and feedback loops tracks the distinction between memoryless systems and systems with memory. However, Wiener & co. did not quite arrive at the notion of what control theorists after 1960 would refer to as the (internal) state, as they were still working in the frequency domain and modeling feedback in terms of transfer functions. Wiener uses the 1930s methods of Bode and Nyquist in _Cybernetics_, for example. Ashby seems to have zeroed in on the idea of the internal state much earlier.
Kleene's memo: https://www.rand.org/pubs/research_memoranda/RM704.html
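To make the Kleene-style correspondence concrete, here is a minimal sketch (my own illustration, not anything from the memo) of a one-neuron McCulloch-Pitts-style net with a self-excitatory loop, next to the two-state finite automaton it implements. The loop neuron's firing is exactly the internal state:

```python
# A one-neuron McCulloch-Pitts net with a self-loop, read as a
# finite-state automaton. The neuron fires at time t+1 iff the sum
# of its inputs at time t meets its threshold; the self-loop makes
# the firing pattern depend on internal state, not just the input.

def mp_latch(inputs):
    """Threshold-1 neuron with a self-excitatory loop: once it
    fires, it keeps firing forever. Returns the firing sequence."""
    y = 0  # internal state: did the neuron fire last step?
    out = []
    for x in inputs:
        y = 1 if (x + y) >= 1 else 0  # threshold = 1
        out.append(y)
    return out

# The same input-output behavior as an explicit two-state automaton.
def dfa_latch(inputs):
    state = "q0"  # q0: never seen a 1; q1: seen a 1
    delta = {("q0", 0): "q0", ("q0", 1): "q1",
             ("q1", 0): "q1", ("q1", 1): "q1"}
    out = []
    for x in inputs:
        state = delta[(state, x)]
        out.append(1 if state == "q1" else 0)
    return out

seq = [0, 0, 1, 0, 0, 1, 0]
assert mp_latch(seq) == dfa_latch(seq)  # identical I/O behavior
```

The point is just that the feedback loop is what gives the net its memory; without the self-loop the neuron would be a memoryless input-output device.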
This is a very nice post, as always. Let me expand a little on the empiricist philosophical angle. There was a strand of more-or-less empiricist philosophy, in the late 19th and early 20th century, which aimed to replace "posits" of seemingly-problematic entities with "logical constructions" out of less problematic ones, often through some sort of equivalence classing. The paradigmatic example was Frege's construction of the natural numbers as equivalence classes under 1-1 correspondence, so that one could say things which in English would sound like "the number 3 is the class of all triples" (and in fact not have that be circular), followed by Wiener's construction of the "ordered pair" from sets. Inspired by examples like this, Whitehead (followed later by Russell) tried to give constructions of spatial points as equivalence classes of all the extended regions of space that we'd ordinarily say included the point. (I'm speaking very loosely.) Similarly, Whitehead tried to construct instants of time out of equivalence classes of overlapping extended time intervals, and point-instants in spacetime similarly. Russell, in _The Analysis of Matter_ (1927), tried to describe particles like electrons in this way, and it's part of what was going on with Carnap's _Logical Structure of the World_, and some other efforts on the part of the Logical Positivists. At one point in graduate school I actually wanted to write a paper on state-space reconstruction as a _successful_ instance of this strategy, but I never did because it seemed too arcane...
(Wesley Salmon, who articulated one version of predictive sufficiency based on equivalence classing around 1970, was a student of Hans Reichenbach, one of the leading Logical Positivists, though whether he was directly thinking of examples like Frege, Wiener or Whitehead, I couldn't say.)
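For what it's worth, here's a minimal sketch of what such a state-space reconstruction looks like in the simplest possible setting. The second-order recurrence and its coefficients are illustrative choices of mine; the point is that a single observation is not a state, but the tuple of the last two observations is:

```python
# Delay-embedding sketch of "state as logical construction": for a
# second-order recurrence, one scalar observation does not determine
# the future, but a pair of consecutive observations does.

def series(n, a=0.5, b=0.3, x0=1.0, x1=0.7):
    """Generate n terms of x_{t+1} = a*x_t + b*x_{t-1}."""
    xs = [x0, x1]
    for _ in range(n - 2):
        xs.append(a * xs[-1] + b * xs[-2])
    return xs

def delay_embed(xs, dim):
    """Reconstructed states: tuples of `dim` consecutive observations
    -- in effect, equivalence classes of histories that agree on
    those coordinates."""
    return [tuple(xs[i:i + dim]) for i in range(len(xs) - dim + 1)]

xs = series(50)
states = delay_embed(xs, dim=2)

# Check that the reconstructed state determines the next observation:
# equal states must always be followed by equal next values.
succ = {}
for s, nxt in zip(states[:-1], xs[2:]):
    assert abs(succ.setdefault(s, nxt) - nxt) < 1e-12
```

The "construction" move is visible in `delay_embed`: the unobserved state is replaced by a structure built entirely out of the observations themselves.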
Right, I think your précis of empiricist ideas in terms of aiming for something like state-space reconstruction is on target. Carnap's late program of rational reconstruction of science, for example, was based on Ramsey sentences (cf. Chapter 26 of his _Philosophical Foundations of Physics_). Ramsey's proposal was to replace theoretical entities with latent variables, so that a scientific theory is a sentence containing manifest variables (observational terms) and existential quantification over these latent variables. This is, essentially, what Jan Willems was doing in behavioral control theory: View the manifest behavior as a section of the full behavior, with existential quantification over latent variables. Some of these latent variables become state variables when they act as conditional independence generators.
From logical empiricism (aka positivism), we can also go to van Fraassen's constructive empiricism, where he introduces the distinction between the extensional set-theoretic approach to (re)construction of scientific theories (Carnap, Suppes) and the intensional approach, which he credits to Hermann Weyl and calls the state-space approach (cf. _The Scientific Image_). Again, what van Fraassen calls the empirical substructure of a theory is its manifest content, and the latent content is the state-space structure. Interestingly, the realization problem in control is also viewed as introducing what control theorists call "splitting variables": a way to "explain" correlations in the manifest content by introducing appropriate decouplings via conditional independence given latent variables.
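The splitting-variable idea can be put numerically in a toy setting: a latent Z "explains" the correlation between two manifest variables by rendering them conditionally independent. All the distributions below are made-up illustrations, nothing from the thread:

```python
# Splitting-variable sketch: a latent Z explains the correlation
# between two manifest binary variables A and B, which become
# conditionally independent given Z. All numbers are made up.
from itertools import product

p_z = {0: 0.5, 1: 0.5}
p_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # p_a[z][a] = P(A=a | Z=z)
p_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}  # p_b[z][b] = P(B=b | Z=z)

# Full joint over (Z, A, B), then the manifest (latent-free) joint.
joint = {(z, a, b): p_z[z] * p_a[z][a] * p_b[z][b]
         for z, a, b in product((0, 1), repeat=3)}
manifest = {(a, b): sum(joint[(z, a, b)] for z in (0, 1))
            for a, b in product((0, 1), repeat=2)}

marg_a = {a: sum(manifest[(a, b)] for b in (0, 1)) for a in (0, 1)}
marg_b = {b: sum(manifest[(a, b)] for a in (0, 1)) for b in (0, 1)}

# The manifest content alone shows correlation: A and B are dependent...
assert abs(manifest[(1, 1)] - marg_a[1] * marg_b[1]) > 1e-3
# ...but conditioning on the latent Z decouples them exactly.
for z, a, b in product((0, 1), repeat=3):
    pz = sum(joint[(z, aa, bb)] for aa, bb in product((0, 1), repeat=2))
    assert abs(joint[(z, a, b)] / pz - p_a[z][a] * p_b[z][b]) < 1e-12
```

The decoupling is built in by construction here; the realization problem runs the other way, starting from the manifest joint and asking for a (minimal) Z that makes the factorization possible.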
As I was reading the early paragraphs, I bet myself that you'd mention the Crutchfield-Shalizi epsilon-machine approach to building optimal predictors. So I am surprised---hence the question.
Do you see connections to _their_ predictive sufficiency approach?
Since I didn't want to write a lengthy post, I had to force myself to resist the temptation to go beyond the few original sources. That said, the Shalizi-Crutchfield predictive sufficiency approach is directly related to many of these ideas, with important precursors in some of the early results on stochastic realization theory, as well as in Frank Knight's prediction process theory.
I have tried to sketch out some of the historical precursors, and successors, in [http://bactra.org/notebooks/prediction-process.html]. (I was not aware of all of these when I was writing my papers with Jim, and it wouldn't surprise me if I have missed others.) I should however have explicitly mentioned Nerode in that notebook, since it was definitely one of the sources! (Though only second-hand, through later accounts of finite automata -- I'd never read the original Nerode paper until just now.)
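Since Nerode came up: a quick sketch of the Nerode-style equivalence classing, written as partition refinement on a small made-up automaton. States with the same future behavior collapse into one class, which is the same basic move as grouping histories by their predictive distributions:

```python
# Nerode-style equivalence classing as automaton minimization by
# partition refinement: merge states that accept the same future
# language. The example machine below is made up for illustration.

def nerode_partition(states, alphabet, delta, accepting):
    # Coarsest predictive distinction: accepting vs. non-accepting.
    parts = [set(accepting), set(states) - set(accepting)]
    parts = [p for p in parts if p]
    while True:
        def block_of(s):
            return next(i for i, p in enumerate(parts) if s in p)
        # Two states stay together iff every symbol sends them into
        # the same current block (same one-step future behavior).
        sig = {s: tuple(block_of(delta[(s, a)]) for a in alphabet)
               for s in states}
        new_parts = []
        for p in parts:
            groups = {}
            for s in p:
                groups.setdefault(sig[s], set()).add(s)
            new_parts.extend(groups.values())
        if len(new_parts) == len(parts):  # fixed point: no more splits
            return new_parts
        parts = new_parts

# A 3-state machine in which q1 and q2 are future-equivalent.
states = ["q0", "q1", "q2"]
alphabet = ["a", "b"]
delta = {("q0", "a"): "q1", ("q0", "b"): "q2",
         ("q1", "a"): "q1", ("q1", "b"): "q0",
         ("q2", "a"): "q2", ("q2", "b"): "q0"}
accepting = ["q1", "q2"]
parts = nerode_partition(states, alphabet, delta, accepting)
# q1 and q2 collapse into one class; the minimal machine has 2 states.
```

Refinement only ever splits blocks, so reaching a pass with no new splits means the partition is the Nerode congruence.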
Thanks!
Also curious to know if you see connections to Hartmanis-Stearns' _Algebraic Structure Theory of Sequential Machines_ work.
There are some connections with the Hartmanis-Stearns book. Another key source is Samuel Eilenberg's two-volume "Automata, Languages, and Machines." Incidentally, Eilenberg mentions the 1958 "Sequential functions" paper by George Raney and says that "this is a remarkable paper since its precision and mathematical clarity are outstanding when compared with the level of writing in this subject in 1958. It has been inexplicably ignored."
Raney's paper: https://dl.acm.org/doi/10.1145/320924.320930