The early work on perceptrons exemplifies the 'first step fallacy': limited, highly constrained successes in machine learning were rashly extrapolated into sweeping claims about machine intelligence. Dreyfus's example is Alvin Toffler's unqualified assertion that such experiments show machines can learn from their mistakes and outperform human students, offered without any acknowledgment of the narrowness and seriousness of the limitations involved.

By Hubert L. Dreyfus, from What Computers Can't Do

Key Arguments

  • Dreyfus explicitly calls Perceptrons 'a perfect illustration of the first step fallacy', indicating that the perceptron work shows how an initial, modest breakthrough is mistaken for evidence of imminent general intelligence.
  • He identifies Alvin Toffler’s summary of Rosenblatt’s work as 'typical of this fallacious extrapolation', treating Toffler’s popularization as a paradigmatic case of overclaiming.
  • Toffler’s statement that experiments 'demonstrate that machines can learn from their mistakes, improve their performance, and in certain limited kinds of learning, outstrip human students' is presented without any indication of the narrow scope and serious constraints under which these results were obtained.
  • Dreyfus emphasizes that Toffler 'gives no indication of the seriousness of these limitations', underscoring that ignoring domain restrictions and structural constraints is precisely what constitutes the first step fallacy.

Source Quotes

Perceptrons is a perfect illustration of the first step fallacy. (See note 84 above.) Typical of this fallacious extrapolation is Toffler's claim (Future Shock, p. 186) that: "Experiments by . . . Frank Rosenblatt and others demonstrate that machines can learn from their mistakes, improve their performance, and in certain limited kinds of learning, outstrip human students." Toffler gives no indication of the seriousness of these limitations.

Key Concepts

  • Perceptrons is a perfect illustration of the first step fallacy.
  • Typical of this fallacious extrapolation is Toffler's claim (Future Shock, p. 186) that: "Experiments by . . . Frank Rosenblatt and others demonstrate that machines can learn from their mistakes, improve their performance, and in certain limited kinds of learning, outstrip human students."
  • Toffler gives no indication of the seriousness of these limitations.

Context

In the concluding footnotes to Part I, 'Ten Years of Research in Artificial Intelligence (1957–1967)', Dreyfus briefly comments on perceptrons and cites Alvin Toffler’s popular account as an example of how early, narrowly circumscribed AI results were exaggerated into broad claims about machine learning and intelligence, which he labels the 'first step fallacy'.