21.390 machines don't care

From: Humanist Discussion Group (by way of Willard McCarty <willard.mccarty_at_kcl.ac.uk>)
Date: Tue, 4 Dec 2007 08:13:37 +0000

               Humanist Discussion Group, Vol. 21, No. 390.
       Centre for Computing in the Humanities, King's College London
  www.kcl.ac.uk/schools/humanities/cch/research/publications/humanist.html
                        www.princeton.edu/humanist/
                     Submit to: humanist_at_princeton.edu

         Date: Tue, 04 Dec 2007 08:04:22 +0000
         From: "Rabkin, Eric" <esrabkin_at_umich.edu>
         Subject: RE: 21.387 machines don't care

Dear Willard,

We often seek to interact with entities that "don't care" regardless of
whether or not those entities are machines. Physicians supposedly
"don't care" about our bodies the way our lovers might so we are allowed
to believe that our exposing our bodies to physicians risks no personal
judgment (about beauty, say, or desirability) on the part of the
physician. Policemen supposedly don't care if we're stupid or foolish
when we ask for help. Soldiers in combat don't care that the enemy are
human beings against whom they have no personal animus. Not caring is
crucial for facilitating some functions important to humans, and so we
create social categories that theoretically obviate caring. In that
sense, machines may be perfect exemplars of roles we developed
collectively long before even the idea of those machines existed. A
robot surgeon will never be misguided by its feelings; a robot policeman
will never unwittingly insult someone standing across the street from
the location sought. A robot bombardier will not suffer remorse in
releasing its payload and become subsequently less effective. In the
U.S., judges and juries are asked to consider demonstrable facts with no
weight given to any possible emotional responses the facts or witnesses
may occasion. Banking functions are among those we usually want to be
uninfluenced by personal concerns such as impatience or favoritism, just
as we want bankers to perform mathematical calculations without any
possible influence of fatigue.

While your revised reading of "machines don't care" may suggest a
useful revision of the Turing Test, I think your rereading more broadly
highlights the fact that, despite the undeniable centrality of caring
in most people's idea of what a human being is, humans themselves often
"steel themselves" (become machine-like) in order to perform better the
very functions human beings need, and this denial of a "natural" caring
is often considered virtuous, as would be any other discipline adopted
at personal cost to serve socially desired ends.

The problem arises, of course, if the
discipline so influences us that we don't care when we should, if the
surgeon always sees people only as mechanical problems, or the policeman
always sees people only as a job, or the soldier always sees The Other
only as a target. We want those who adopt roles that "don't care" to
maintain a dual cognition with the role as the conscious overlay, an
overlay we want to believe can be put aside. Why? Because we want to
believe that at bottom it is only a role, at bottom the surgeon wants to
heal us, the policeman to guide us, the soldier to protect us. We want
caring to be the fundamental condition of these people, while not
caring is the role.

The disturbance you feel in having to revise your first reading of the
advertisement, I think, comes from having to acknowledge
that sometimes not caring is so important that we may be willing to give
up the notion that not caring is only a role and accept it as the fundamental
identity. We would not want to do that with people because that would
dehumanize them and, by extension, suggest that we too could become
dehumanized and lose our identities. But we don't mind inhuman machines
failing to be human. It just takes a moment sometimes to accept the
notion that the inhuman may serve human desires better than humans can.

All best,

Eric

-------------------------------------------------
Eric S. Rabkin 734-764-2553 (Office)
Dept of English 734-764-6330 (Dept)
Univ of Michigan 734-763-3128 (Fax)
Ann Arbor MI 48109-1003 esrabkin_at_umich.edu
http://www-personal.umich.edu/~esrabkin/

>-----Original Message-----
>From: Humanist Discussion Group [mailto:humanist_at_Princeton.EDU] On
>Behalf Of Humanist Discussion Group (by way of Willard McCarty
><willard.mccarty_at_kcl.ac.uk>)
>Sent: Monday, December 03, 2007 01:28
>To: humanist_at_Princeton.EDU
>
> Humanist Discussion Group, Vol. 21, No. 387.
> Centre for Computing in the Humanities, King's College London
>
>www.kcl.ac.uk/schools/humanities/cch/research/publications/humanist.html
> www.princeton.edu/humanist/
> Submit to: humanist_at_princeton.edu
>
>
>
> Date: Sun, 02 Dec 2007 20:36:52 +0000
> From: Willard McCarty <willard.mccarty_at_kcl.ac.uk>
> Subject: machines don't care
>
>Recently, in my role as lexicographical scout and collector for the
>Dictionary of Words in the Wild (http://dictionary.mcmaster.ca/), I
>spotted a new advert from a telephone-only banking service in the UK,
>First Direct. The slogan on this advert is, "machines don't care".
>Until I figured out the job that this slogan was supposed to be
>doing, I thought it a negative statement, and so began musing about
>getting one's fingers caught in meshing gears, being suddenly trapped
>in some bureaucratic process and so forth. Then I realised that the
>intent was quite the opposite -- the appeal of an automated banking
>service being, I assume, that the machine doesn't care whether you're
>spending too much and so forth. That brought me to the moral claim
>made first by Freud, I think, that the great scientific discoveries
>(Copernicus', Darwin's, Freud's own, etc.) were all blows to human
>vanity, with the clear implication that the strength of science lies
>in its liberation of the human spirit from vanity, its striving toward
>an incorruptible programme of research. If memory serves, Galileo
>himself made such a statement. He would have observed much
>corruption up close.
>
>All that brought me further to the realisation that "machines don't
>care", like "merely engineering", is an epistemologically useful
>statement about the difference between what we build and who we are.
>And that in turn to wondering whether Mr Turing's test should not at
>some point be revised to determine if the unknown operant is a
>feeling human being or a machine simulating emotion. Or perhaps the
>problem is the separation in that test and elsewhere of thinking and
>feeling, which, I think, is in effect what Antonio Damasio argues?
>
>Comments?
>
>Yours,
>WM
>
>Willard McCarty | Professor of Humanities Computing | Centre for
>Computing in the Humanities | King's College London |
>http://staff.cch.kcl.ac.uk/~wmccarty/. Et sic in infinitum (Fludd 1617,
>p. 26).