In producing intelligent behavior, a crucial difference separates human ultimate concerns from machine utility functions: human concerns are flexible, made only as specific as the situation demands, and pervasively present at each moment, structuring experience and guiding the selection of what is relevant, whereas a machine's table of objectives is fixed, fully specific in advance, and consulted only episodically to evaluate options.

By Hubert L. Dreyfus, from What Computers Can't Do

Key Arguments

  • Dreyfus raises the question: 'What difference does it make when one is trying to produce intelligent behavior that one's evaluations are based on a utility function instead of some ultimate concern?'
  • He notes, 'One difference, which Watanabe notes without being able to explain, is that a table of values must be specific, whereas human concerns only need to be made as specific as the situation demands.'
  • He links this flexibility to human conceptual and linguistic abilities: 'This flexibility is closely connected with the human ability to recognize the generic in terms of purposes, and to extend the use of language in a regular but nonrulelike way.'
  • He contrasts merely terminal goals with pervasive concern: 'Moreover, man's ultimate concern is not just to achieve some goal which is the end of a series; rather, interest in the goal is present at each moment structuring the whole of experience and guiding our activity as we constantly select what is relevant in terms of its significance to the situation at hand.'
  • By contrast, 'A machine table of objectives, on the other hand, has only an arbitrary relation to the alternatives before the machine, so that it must be explicitly appealed to at predetermined intervals to evaluate the machine's progress and direct its next choice.'
  • He criticizes attempts by Simon and Reitman to simulate motivation: 'Herbert Simon and Walter Reitman have seen that emotion and motivation play some role in intelligent behavior, but their way of simulating this role is to write programs where "emotions" can interrupt the work on one problem to introduce extraneous factors or work on some other problem.'
  • He observes that such programmers 'do not seem to see that emotions and concerns accompany and guide our cognitive behavior. This is again a case of not being able to see what one would not know how to program.'
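The "machine table of objectives" Dreyfus describes can be made concrete with a small sketch (all names here are hypothetical illustrations, not drawn from the text): a fixed, fully specific table of values that bears no intrinsic relation to the alternatives before the agent, and so must be explicitly consulted at predetermined checkpoints to direct the next choice.

```python
# Illustrative sketch (hypothetical, not from Dreyfus): the episodic
# utility-table pattern he contrasts with pervasive human concern.

UTILITY_TABLE = {          # fixed, fully specific values decided in advance
    "gather_data": 3,
    "refine_plan": 5,
    "act_on_plan": 8,
}

def choose_next(alternatives, table):
    """Explicitly appeal to the table to score options and direct the next choice."""
    scored = [(table.get(alt, 0), alt) for alt in alternatives]
    return max(scored)[1]

# Evaluation happens only at predetermined intervals; between checkpoints,
# nothing resembling a concern structures or guides the process.
for step in range(3):
    choice = choose_next(["gather_data", "refine_plan", "act_on_plan"],
                         UTILITY_TABLE)
```

The point of the sketch is what it lacks: the table cannot become "only as specific as the situation demands," and it plays no role except at the moments it is explicitly looked up.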

Source Quotes

What difference does it make when one is trying to produce intelligent behavior that one's evaluations are based on a utility function instead of some ultimate concern? One difference, which Watanabe notes without being able to explain, is that a table of values must be specific, whereas human concerns only need to be made as specific as the situation demands. This flexibility is closely connected with the human ability to recognize the generic in terms of purposes, and to extend the use of language in a regular but nonrulelike way. Moreover, man's ultimate concern is not just to achieve some goal which is the end of a series; rather, interest in the goal is present at each moment structuring the whole of experience and guiding our activity as we constantly select what is relevant in terms of its significance to the situation at hand. A machine table of objectives, on the other hand, has only an arbitrary relation to the alternatives before the machine, so that it must be explicitly appealed to at predetermined intervals to evaluate the machine's progress and direct its next choice. Herbert Simon and Walter Reitman have seen that emotion and motivation play some role in intelligent behavior, but their way of simulating this role is to write programs where "emotions" can interrupt the work on one problem to introduce extraneous factors or work on some other problem. They do not seem to see that emotions and concerns accompany and guide our cognitive behavior. This is again a case of not being able to see what one would not know how to program. Heidegger tries to account for the pervasive concern organizing human experience in terms of a basic human need to understand one's being.

Key Concepts

  • Situational flexibility: human concerns need only be made as specific as the situation demands, whereas a machine's table of values must be fully specific in advance.
  • Recognition of the generic: this flexibility underlies the human ability to recognize the generic in terms of purposes and to extend language in a regular but nonrulelike way.
  • Pervasive concern: interest in a goal is not merely the endpoint of a series; it is present at each moment, structuring the whole of experience and guiding the ongoing selection of what is relevant.
  • Episodic evaluation: a machine table of objectives bears only an arbitrary relation to the alternatives before the machine and must be explicitly consulted at predetermined intervals to direct its next choice.
  • Emotion as interruption: Simon and Reitman simulate motivation by writing programs in which "emotions" interrupt work on one problem to introduce extraneous factors or switch to another problem.
  • Programmability bias: overlooking how emotions and concerns accompany and guide cognition is, for Dreyfus, 'a case of not being able to see what one would not know how to program.'

Context

After distinguishing concerns from objectified values, Dreyfus explicitly contrasts the pervasive, situationally flexible role of human ultimate concerns with the episodic, specific role of machine utility tables, and criticizes AI treatments of emotion and motivation.