Dreyfus contends that the work collected in Minsky’s 'Semantic Information Processing' provides no evidence for Minsky’s optimism that AI programs are approaching superior heuristics for managing large knowledge structures, or that artificial intelligence will be 'substantially solved' within a generation. Instead, the programs remain narrowly specialized, ad hoc, and dependent on setting aside the real problem of structuring and storing vast amounts of knowledge, while their authors invoke a new version of the 'first step' fallacy to justify calling them progress.
By Hubert L. Dreyfus, from What Computers Can't Do
Key Arguments
- He cites Minsky’s own quantitative estimates: Quillian’s program 'now contains a few hundred facts,' while Minsky estimates that '"a million facts would be necessary for great intelligence."'
- Minsky openly acknowledges a scaling problem: 'each of "the programs described [in this book] will work best when given exactly the necessary facts, and will bog down inexorably as the information files grow."'
- Yet Minsky still asserts, in another book, that 'within a generation . . . few compartments of intellect will remain outside the machine's realm -- the problem of creating "artificial intelligence" will be substantially solved.' Dreyfus juxtaposes these claims to question their basis.
- Dreyfus concludes that 'Certainly there is nothing in Semantic Information Processing to justify this confidence.'
- He recalls Minsky’s criticism of early programs for lack of generality—'"Each program worked only on its restricted specialty, and there was no way to combine two different problem-solvers."'—and then argues that 'Minsky's solutions are as ad hoc as ever.'
- Minsky himself concedes that 'The programs described in this volume may still have this character, but they are no longer ignoring the problem. In fact, their chief concern is finding methods of solving it.' Dreyfus counters that 'there is no sign that any of the papers presented by Minsky have solved anything.'
- Dreyfus emphasizes that 'They have not discovered any general feature of the human ability to behave intelligently. All Minsky presents are clever special solutions, like Bobrow's and Evans', or radically simplified models such as Quillian's, which work because the real problem, the problem of how to structure and store the mass of data required has been put aside.'
- He characterizes Minsky’s response as 'a new version of the first step fallacy': Minsky claims that 'The fact that the present batch of programs still appear to have narrow ranges of application does not indicate lack of progress toward generality. These programs are steps toward ways to handle knowledge.' Dreyfus maintains that this merely re-labels restricted, non‑generalizing work as 'first steps' without showing a path to generality.
- Dreyfus summarizes Phase II as a 'game' of creating the 'appearance of complexity' while deferring engagement with real complexity: 'In Phase II the game seems to be to see how far one can get with the appearance of complexity before the real problem of complexity has to be faced, and then when one fails to generalize, claim to have made a first step.'
Source Quotes
II Significance of Current Difficulties

What would be reasonable to expect? Minsky estimates that Quillian's program now contains a few hundred facts. He estimates that "a million facts would be necessary for great intelligence."49 He also admits that each of "the programs described [in this book] will work best when given exactly the necessary facts, and will bog down inexorably as the information files grow."50 Is there, thus, any reason to be confident that these programs are approaching the "superior heuristics for managing their knowledge structure" which Minsky believed human beings must have; or, as Minsky claims in another of his books, that within a generation . . . few compartments of intellect will remain outside the machine's realm -- the problem of creating "artificial intelligence" will be substantially solved.51 Certainly there is nothing in Semantic Information Processing to justify this confidence.

As we have seen, Minsky criticizes the early programs for their lack of generality. "Each program worked only on its restricted specialty, and there was no way to combine two different problem-solvers."52 But Minsky's solutions are as ad hoc as ever. Yet he adds jauntily: The programs described in this volume may still have this character, but they are no longer ignoring the problem. In fact, their chief concern is finding methods of solving it.53 But there is no sign that any of the papers presented by Minsky have solved anything. They have not discovered any general feature of the human ability to behave intelligently. All Minsky presents are clever special solutions, like Bobrow's and Evans', or radically simplified models such as Quillian's, which work because the real problem, the problem of how to structure and store the mass of data required has been put aside.

Minsky, of course, has already responded to this apparent shortcoming with a new version of the first step fallacy: The fact that the present batch of programs still appear to have narrow ranges of application does not indicate lack of progress toward generality. These programs are steps toward ways to handle knowledge.54 In Phase II the game seems to be to see how far one can get with the appearance of complexity before the real problem of complexity has to be faced, and then when one fails to generalize, claim to have made a first step. Such an approach is inevitable as long as workers in the field of AI are interested in producing striking results but have not solved the practical problem of how to store and access the large body of data necessary, if perhaps not sufficient, for full-scale, flexible, semantic information processing.
Key Concepts
- Minsky estimates that Quillian's program now contains a few hundred facts. He estimates that "a million facts would be necessary for great intelligence."49
- He also admits that each of "the programs described [in this book] will work best when given exactly the necessary facts, and will bog down inexorably as the information files grow."50
- within a generation . . . few compartments of intellect will remain outside the machine's realm -- the problem of creating "artificial intelligence" will be substantially solved.51
- Certainly there is nothing in Semantic Information Processing to justify this confidence.
- "Each program worked only on its restricted specialty, and there was no way to combine two different problem-solvers."52 But Minsky's solutions are as ad hoc as ever.
- Yet he adds jauntily: The programs described in this volume may still have this character, but they are no longer ignoring the problem. In fact, their chief concern is finding methods of solving it.53
- They have not discovered any general feature of the human ability to behave intelligently. All Minsky presents are clever special solutions, like Bobrow's and Evans', or radically simplified models such as Quillian's, which work because the real problem, the problem of how to structure and store the mass of data required has been put aside.
- Minsky, of course, has already responded to this apparent shortcoming with a new version of the first step fallacy: The fact that the present batch of programs still appear to have narrow ranges of application does not indicate lack of progress toward generality. These programs are steps toward ways to handle knowledge.54
- In Phase II the game seems to be to see how far one can get with the appearance of complexity before the real problem of complexity has to be faced, and then when one fails to generalize, claim to have made a first step.
Context
In the 'Significance of Current Difficulties' subsection, Dreyfus evaluates the overall import of the Phase II programs collected by Minsky, contrasting Minsky’s explicit acknowledgments about scaling and narrowness with his sweeping predictions, and arguing that these works represent ad hoc tricks bolstered by a 'first step' rhetoric rather than genuine progress toward general, human‑like intelligence.