16.023 read it tonight

From: Humanist Discussion Group (by way of Willard McCarty <w.mccarty@btinternet.com>)
Date: Tue May 14 2002 - 01:22:09 EDT

                    Humanist Discussion Group, Vol. 16, No. 23.
           Centre for Computing in the Humanities, King's College London
                   <http://www.princeton.edu/~mccarty/humanist/>
                  <http://www.kcl.ac.uk/humanities/cch/humanist/>

             Date: Tue, 14 May 2002 06:14:11 +0100
             From: Willard McCarty <w.mccarty@btinternet.com>
             Subject: kinships and differences

    A book for the attention of anyone interested in our disciplinary kinships:
    The Boundaries of Humanity: Humans, Animals, Machines. James J. Sheehan and
    Morton Sosna, eds. Berkeley: University of California Press, 1991. This
    collection of essays is based on the papers given at a conference at
    Stanford University in April 1987 under the auspices of the Stanford
    Humanities Center <http://shc.stanford.edu/> (which also once published the
    fine journal Stanford Humanities Review
    <http://www.stanford.edu/group/SHR/>, now defunct).

    In the Boundaries volume the most obviously relevant essays are those in
    Part II, Humans and Machines, esp. Allen Newell, "Metaphors for Mind,
    Theories of Mind: Should the Humanities Mind?"; Terry Winograd, "Thinking
    Machines: Can there be? Are we?" (reprinted in D. Partridge and Y. Wilks,
    The Foundations of Artificial Intelligence, Cambridge: Cambridge Univ.
    Press, 1990, pp. 167-189); Sherry Turkle, "Romantic Reactions: Paradoxical
    Responses to the Computer Presence"; Stuart Hampshire, "Biology, Machines,
    and Humanity". Of those the essays by Newell and Winograd come close to
    *required* reading for us -- and make a very interesting contrast of
    attitudes within the AI community of which we must be aware.

    Winograd is an important ally, as the book he did with Fernando Flores,
    Understanding Computers and Cognition, demonstrates. In this piece he
    recognizes the grains of truth from both sides, from the futurologists
    proclaiming the dawn of machina sapiens and from the critics pointing to
    "the vain pretensions of those who seek to understand mind as computation".
    He then draws out the much more complex and interesting picture to which
    these grains lead us.

    Newell's learned arrogance and very interesting rhetorical moves are also
    worth close study. These moves are typical of the genre of pronouncements
    ex cathedra to non-specialists: (1) dismissal of a set of questions, a kind
    of knowledge, or an area of study as unimportant, irrelevant, etc.; (2) deferral
    of a promised fulfilment, or what Jerry Pournelle used to call the "Real
    Soon Now" strategy. By the first he relegates metaphor to the realm of the
    literary, i.e. subjective and decorative, so that computation as a
    "metaphor for mind" can be dismissed as essentially meaningless, in favour
    of a contrastingly scientific "theory of mind". A sideswipe at science
    studies, with reference only to Latour and Woolgar, is supposed to restore
    the notion of clean, unproblematic objectivity to science. From there it's
    a relatively short step to diagrams of cognitive processes as these are
    implemented in a system he is working on -- to which, of course, none of us
    have access. What he says about the system and the research strategy for
    understanding mind is indeed very interesting, but the unexamined notion of
    "theory" in relation to this kind of work undermines its value. Better, I
    would think, to call it a "model of mind", i.e. roughly, a useful,
    tractable fiction employed as a heuristic convenience. The deferral of
    promise is more subtle than in the early days, for example in the article
    he did with Herbert Simon, "Heuristic Problem Solving: The Next Advance in
    Operations Research", Operations Research 6.1 (Jan-Feb 1958): 1-10 -- the
    article is in JSTOR. I quote:

    "We are now poised for a great advance that will bring the digital computer
    and the tools of mathematics and the behavioural sciences to bear on the
    very core of managerial activity--on the exercise of judgment and
    intuition; on the process of making complex decisions.... Even while
    operations research is solving well-structured problems, fundamental
    research is dissolving the mystery of how humans solve ill-structured
    problems. Moreover, we have begun to learn how to use computers to solve
    these problems.... And we now know, at least in a limited area, not only
    how to program computers to perform such problem-solving activities
    successfully; we also know how to program computers to *learn* to do these
    things.... Intuition, insight, and learning are no longer exclusive
    possessions of humans: any large high-speed computer can be programmed to
    exhibit them also." (p. 6)

    A number of predictions follow: that within the next 10 years (they
    specifically set the date at 1 January 1968), a computer "will be the
    world's chess champion... will discover and prove an important new
    mathematical theorem... will write music that will be accepted by critics
    as possessing considerable aesthetic value... [and that] most theories in
    psychology will take the form of computer programs, or of qualitative
    statements about the characteristics of such programs" (pp. 7-8). I also
    direct your attention to their reply to criticisms in Operations Research
    6.3, pp. 449-50, which digs the hole deeper still. A year before the set
    date, Marvin Minsky (the brain-is-a-meat-machine man), in Computation:
    Finite and Infinite Machines (1967), said somewhat more cautiously that
    fulfilment would happen quite soon.

    Given Simon and Newell's pioneering work, which (if I am not wrong) began
    in management science, Terry Winograd's observation at the beginning of his
    article, made about 20 years after Simon and Newell's line in the sands of
    time, has a particularly accurate bite: "Indeed, artificial intelligence
    has not achieved creativity, insight and judgment. But its shortcomings are
    far more mundane: we have not yet been able to construct a machine with
    even a modicum of common sense or one that can converse on everyday topics
    in ordinary language.... '[A]rtificial intelligence' ... can usefully be
    likened to bureaucracy in its rigidity, obtuseness, and inability to adapt
    to changing circumstances. The weakness comes not from insufficient
    development of the technology but from the inadequacy of the basic tenets"
    (pp. 198-9) -- by which he means essentially philosophical tenets that
    largely still prevail. One is reminded of John F. Sowa's statement in
    Knowledge Representation: Logical, Philosophical, and Computational
    Foundations (2000): "Perhaps there are some kinds of knowledge that cannot
    be expressed in logic." (p. 12).

    Michael Williams' point, in Problems of Knowledge (2001), is worth
    recalling: "Demarcational projects use epistemological criteria to sort
    areas of discourse into factual and non-factual, truth-seeking and merely
    expressive, and, at the extreme, meaningful and meaningless. Such projects
    amount to proposals for a map of culture: a guide to what forms of
    discourse are 'serious' and what are not. Disputes about demarcation... are
    disputes about the shape of our culture and so, in the end, of our lives"
    (p. 12).

    The debate is ongoing and important, and as it goes on it gets, as Winograd
    says, more complex. Putting our debate about computing into the broader
    context of humans, animals and machines shows us just how important it is.

    Comments?

    Yours,
    WM


