From Socrates onward, a central Western philosophical project has been to reduce reasoning and judgment to explicit, calculable rules, a project that finds its technological culmination in modern artificial intelligence.
By Hubert L. Dreyfus, from *What Computers Can't Do*
Key Arguments
- Dreyfus traces the origin of this aspiration to Socrates’ demand for a standard of piety that could be applied to all actions, treating moral judgment as if it could be based on an explicit rule or procedure.
- Plato generalizes this into the thesis that all genuine knowledge must be stateable in explicit definitions and instructions that anyone can apply, thereby excluding intuitive and tradition‑based skill from the realm of knowledge.
- Hobbes explicitly identifies reasoning with calculation, reducing thought to the manipulation and summation of discrete ‘parcels’ or ‘bits’.
- Leibniz attempts to construct a universal formal language and calculus in which all concepts receive characteristic numbers and all disputes can be settled by calculation.
- Boole develops a binary algebra that formalizes logical operations in a way directly suitable for mechanical or electronic implementation.
- Babbage’s Analytical Engine and subsequent digital computers operate precisely as discrete-state symbol-manipulating devices, realizing in hardware the syntactic conception of reasoning envisioned by Hobbes and Leibniz.
- Dreyfus notes that digital computers, by manipulating abstract symbols according to formal rules without appeal to interpretation or intuition, appear to fulfill the philosophical demand for a completely formalized, syntactic form of reason.
- He links this development to Heidegger’s claim that cybernetics represents the culmination of the Western philosophical tradition, suggesting that the longstanding dream of formalizing reason has reached a practical implementation.
Source Quotes
Since the Greeks invented logic and geometry, the idea that all reasoning might be reduced to some kind of calculation, so that all arguments could be settled once and for all, has fascinated most of the Western tradition's rigorous thinkers. Socrates was the first to give voice to this vision. The story of artificial intelligence might well begin around 450 B.C. when (according to Plato) Socrates demands of Euthyphro, a fellow Athenian who, in the name of piety, is about to turn in his own father for murder: "I want to know what is characteristic of piety which makes all actions pious . . . that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men."
Socrates is asking Euthyphro for what modern computer theorists would call an "effective procedure," "a set of rules which tells us, from moment to moment, precisely how to behave."2 Plato generalized this demand for moral certainty into an epistemological demand. According to Plato, all knowledge must be stateable in explicit definitions which anyone could apply.
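The "effective procedure" that Socrates demands of Euthyphro can be sketched in modern terms as an explicit, exhaustive rule table that dictates a verdict for every case, with no appeal to interpretation. The following toy sketch is not from Dreyfus; the action names and rules are invented purely for illustration:

```python
# A toy "effective procedure" in the quoted sense: an explicit rule set
# that tells us, from moment to moment, precisely how to behave.
# The action names and verdicts below are hypothetical illustrations.

RULES = {
    "keep_promise": "pious",
    "honor_parents": "pious",
    "lie_under_oath": "impious",
}

def judge(action: str) -> str:
    """Apply the explicit standard; no interpretation or intuition allowed."""
    try:
        return RULES[action]
    except KeyError:
        # The Socratic demand requires the standard to cover *all* actions;
        # a gap here is exactly the failure Dreyfus goes on to press.
        raise ValueError(f"no explicit rule covers {action!r}")

print(judge("keep_promise"))  # pious
```

The `ValueError` branch marks the crux of Dreyfus's later critique: a finite explicit rule table must either cover every possible case or fall back on exactly the intuitive judgment it was meant to replace.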
It already expressed a basic moral and intellectual demand, and the success of physical science seemed to imply to sixteenth-century philosophers, as it still seems to suggest to thinkers such as Minsky, that the demand could be satisfied. Hobbes was the first to make explicit the syntactic conception of thought as calculation: "When a man reasons, he does nothing else but conceive a sum total from addition of parcels," he wrote, "for REASON . . . is nothing but reckoning. . . . " 5 It only remained to work out the univocal parcels or "bits" with which this purely syntactic calculator could operate; Leibniz, the inventor of the binary system, dedicated himself to working out the necessary unambiguous formal language.
Of course, we can also write up this practice, since it is at bottom just another theory more complex and particular. . . .10 Leibniz had only promises, but in the work of George Boole, a mathematician and logician working in the early nineteenth century, his program came one step nearer to reality. Like Hobbes, Boole supposed that reasoning was calculating, and he set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a Calculus. . . . "11 Boolean algebra is a binary algebra for representing elementary logical functions.
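Boole's binary algebra can be illustrated with a short sketch (mine, not Dreyfus's): logical operations become arithmetic over {0, 1}, in Boole's own algebraic forms, and the validity of an inference can be "settled by calculation" by exhausting every assignment:

```python
# Boole's binary algebra over {0, 1}: logic as arithmetic, which is
# what makes it directly mechanizable. These are Boole's arithmetic
# forms of the elementary logical functions.
def NOT(x): return 1 - x
def AND(x, y): return x * y
def OR(x, y): return x + y - x * y   # inclusive "or"

def IMPLIES(p, q): return OR(NOT(p), q)

# Reasoning as calculation: verify modus ponens,
# "p and (p implies q), therefore q", by checking all four cases.
valid = all(
    IMPLIES(AND(p, IMPLIES(p, q)), q) == 1
    for p in (0, 1) for q in (0, 1)
)
print(valid)  # True
```

The point of the sketch is the one Dreyfus attributes to Hobbes and Boole: once logical functions are arithmetic on bits, a dispute over validity reduces to a finite computation.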
Thus even an analogue computer, provided that the relation of its input to its output can be described by a precise mathematical function, can be simulated on a digital machine.14* But such machines might have remained overgrown adding machines, had not Plato's vision, refined by two thousand years of metaphysics, found in them its fulfillment. At last here was a machine which operated according to syntactic rules, on bits of data. Moreover, the rules were built into the circuits of the machine.
Once the machine was programmed there was no need for interpretation; no appeal to human intuition and judgment. This was just what Hobbes and Leibniz had ordered, and Martin Heidegger appropriately saw in cybernetics the culmination of the philosophical tradition.15* Thus while practical men like Eckert and Mauchly, at the University of Pennsylvania, were designing the first electronic digital machine, theorists, such as Turing, trying to understand the essence and capacity of such machines, became interested in an area which had thus far been the province of philosophers: the nature of reason itself. In 1950, Turing wrote an influential article, "Computing Machinery and Intelligence," in which he points out that "the present interest in 'thinking machines' has been aroused by a particular kind of machine, usually called an 'electronic computer' or a 'digital computer.'"16 He then takes up the question "Can [such] machines think?"
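The "machine which operated according to syntactic rules, on bits of data" can be made concrete with a minimal discrete-state machine in the style Turing analyzed. This sketch is an illustration of the idea, not anything in Dreyfus's text; the rule table (flip every bit, then halt) is chosen only for brevity:

```python
# A minimal discrete-state symbol manipulator: purely syntactic rules
# applied to symbols on a tape, with no interpretation of what the
# symbols "mean". Illustrative sketch only.

# (state, symbol) -> (symbol to write, head move, next state); "H" halts.
TABLE = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "H"),   # blank cell: stop
}

def run(tape: str) -> str:
    cells, head, state = list(tape) + ["_"], 0, "flip"
    while state != "H":
        write, move, state = TABLE[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run("1011"))  # 0100
```

Everything the machine does is fixed by the lookup table: once programmed, there is no step at which interpretation, intuition, or judgment could enter, which is precisely the feature Dreyfus says fulfills the Hobbes-Leibniz conception of reason.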
Key Concepts
- Since the Greeks invented logic and geometry, the idea that all reasoning might be reduced to some kind of calculation, so that all arguments could be settled once and for all, has fascinated most of the Western tradition's rigorous thinkers.
- Socrates was the first to give voice to this vision.
- Plato generalized this demand for moral certainty into an epistemological demand.
- Hobbes was the first to make explicit the syntactic conception of thought as calculation: "When a man reasons, he does nothing else but conceive a sum total from addition of parcels," he wrote, "for REASON . . . is nothing but reckoning. . . . "
- Like Hobbes, Boole supposed that reasoning was calculating, and he set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a Calculus. . . . "
- At last here was a machine which operated according to syntactic rules, on bits of data.
- This was just what Hobbes and Leibniz had ordered, and Martin Heidegger appropriately saw in cybernetics the culmination of the philosophical tradition.
Context
Opening of section I, where Dreyfus sets the historical-philosophical background to AI by recounting the lineage from Greek logic through early modern rationalism to digital computation.