19.539 (critical) thinking and button pushing

From: Humanist Discussion Group (by way of Willard McCarty <willard.mccarty_at_kcl.ac.uk>)
Date: Thu, 5 Jan 2006 06:18:33 +0000

               Humanist Discussion Group, Vol. 19, No. 539.
       Centre for Computing in the Humanities, King's College London
                   www.kcl.ac.uk/humanities/cch/humanist/
                        www.princeton.edu/humanist/
                     Submit to: humanist_at_princeton.edu

   [1] From: Paul Oppenheimer <paul_at_peoppenheimer.org> (67)
         Subject: Re: 19.531 (critical) thinking and button-pushing

   [2] From: Charles Ess <cmess_at_drury.edu> (191)
         Subject: Re: 19.531 (critical) thinking and button-pushing

--[1]------------------------------------------------------------------
         Date: Thu, 05 Jan 2006 06:05:50 +0000
         From: Paul Oppenheimer <paul_at_peoppenheimer.org>
         Subject: Re: 19.531 (critical) thinking and button-pushing

As Tolkien has Gandalf say, "Perilous to us all are the devices of an
art deeper than we possess ourselves."

On Dec 29, 2005, at 1:21 AM, Humanist Discussion Group (by way of
Willard McCarty <willard.mccarty_at_kcl.ac.uk>) wrote:

> Humanist Discussion Group, Vol. 19, No. 531.
> Centre for Computing in the Humanities, King's College London
> www.kcl.ac.uk/humanities/cch/humanist/
> www.princeton.edu/humanist/
> Submit to: humanist_at_princeton.edu
>
>
>
> Date: Thu, 29 Dec 2005 08:05:43 +0000
> From: Willard McCarty <willard.mccarty_at_kcl.ac.uk>
>
>In Rewriting the Soul: Multiple Personality and the Sciences of
>Memory (Princeton, 1995), Ian Hacking takes a close look at the
>process by which often unquestioning practices of measurement have
>legitimated multiple personality and turned it into an object of
>knowledge. Speaking of our modern tools, he observes that,
>
> >We have long had a multitude of highly sophisticated statistical
> >procedures. We now have many statistical software packages. Their
> >power is incredible, but the pioneers of statistical inference would
> >have mixed feelings, for they always insisted that people think
> >before using a routine. In the old days routines took endless hours
> >to apply, so one had to spend a lot of time thinking in order to
> >justify using a routine. Now one enters data and presses a button.
> >One result is that people seem to be cowed into not asking silly
> >questions, such as: What hypothesis are you testing? What
> >distribution is it that you say is not normal? What population are
> >you talking about? Where did this base rate come from? Most
> >important of all: Whose judgments do you use to calibrate scores on
> >your questionnaires? Are those judgments generally agreed to by the
> >qualified experts in the entire community? (p. 111)
>
>In building our marvellous tools, do we not run a similar risk in
>proportion to their complexity? In cases where fundamental
>intellectual decisions have been made at root level, then in effect
>hidden away by higher-level processes, this would seem clearly the
>case. Thus I recall an historian once remarking that she never used
>databases constructed by other people because she had found too many
>critical decisions had been made below the level of manipulation. She
>may have been wrong in particular instances not to have trusted good
>work, but it seems to me that her point is well taken. What do we do
>to answer it?
>
>Hacking is, however, talking more about the power of distraction than
>the effects of concealment or the consequences of effective
>inaccessibility. It is perhaps for this reason that some of us, with
>tongue not entirely in cheek, have praised the user-hostile
>interface: at least a person must think before reaching for that
>mouse. Again, what can we do to answer his point, made at the
>interface of user and computational artifact?
>
>Comments?
>
>Yours,
>WM
>
>
>
>
>Dr Willard McCarty | Reader in Humanities Computing | Centre for
>Computing in the Humanities | King's College London | Kay House, 7
>Arundel Street | London WC2R 3DX | U.K. | +44 (0)20 7848-2784 fax:
>-2980 || willard.mccarty_at_kcl.ac.uk www.kcl.ac.uk/humanities/cch/wlm/

--[2]------------------------------------------------------------------
         Date: Thu, 05 Jan 2006 06:06:06 +0000
         From: Charles Ess <cmess_at_drury.edu>
         Subject: Re: 19.531 (critical) thinking and button-pushing

Hi Willard,

I appreciated very much your passing on Hacking's comments - which, along
with yours, I find to be spot on.

I would highlight two points - one from Hacking, one from you.

You quote Hacking as saying

>> One result is that people seem to be cowed into not asking silly
>> questions, such as: What hypothesis are you testing? What
>> distribution is it that you say is not normal? What population are
>> you talking about? Where did this base rate come from? Most
>> important of all: Whose judgments do you use to calibrate scores on
>> your questionnaires? Are those judgments generally agreed to by the
>> qualified experts in the entire community? (p. 111)
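
Hacking's contrast can be made concrete in a few lines of code. What
follows is only a minimal sketch, assuming Python with scipy; the data,
variable names, and 0.05 thresholds are invented for illustration and
appear nowhere in Hacking. The first call is the button-push; everything
after it asks, explicitly, some of the "silly questions" he lists.

import numpy as np
from scipy import stats

# Invented sample data: two groups of hypothetical questionnaire scores.
rng = np.random.default_rng(0)
scores_a = rng.normal(loc=50, scale=10, size=30)
scores_b = rng.normal(loc=55, scale=10, size=30)

# The button-push: one call, no questions asked.
stat, p = stats.ttest_ind(scores_a, scores_b)

# What hypothesis are you testing?  H0: the two population means are equal.
# What distribution do you assume?  The t-test assumes roughly normal
# data; Shapiro-Wilk gives a crude check on that assumption.
normal_a = stats.shapiro(scores_a).pvalue > 0.05
normal_b = stats.shapiro(scores_b).pvalue > 0.05

# Are the group variances comparable?  Levene's test probes exactly that.
same_var = stats.levene(scores_a, scores_b).pvalue > 0.05

if normal_a and normal_b:
    # Re-run the t-test, now with the equal-variance assumption stated.
    stat, p = stats.ttest_ind(scores_a, scores_b, equal_var=same_var)
else:
    # Fall back to a rank-based test that does not assume normality.
    stat, p = stats.mannwhitneyu(scores_a, scores_b)

print(f"statistic={stat:.3f}, p-value={p:.3f}")

The particular checks matter less than the fact that each line after the
first answers a question the single button-push leaves silent.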

At the risk of being overly general - one of the very interesting upshots of
what some of us like to call the computational turn in philosophy (as an
umbrella term for the multiple ways in which the emergence of computation
has radically extended and transformed philosophy, at every level - the
classical questions of logic, ontology, epistemology, identity, ethics and
politics - in part as computers both provide new models _and_ ways of
modeling and testing both traditional and contemporary views) is the way in
which the _limits_ of the machinery help highlight specific human
capacities. A broad example would be Dreyfus's extensive critique of strong
AI, one that, by drawing on phenomenological descriptions of how we as
humans learn, know, and act, sharpens our appreciation for the roles of
tacit knowledge (i.e., knowledge that, according to Polanyi, can _not_ be
made fully articulate in formal ways), of embodiment and kinesthetic forms
of knowledge (as these are - currently, at least - out of the reach of our
computational devices) - and, nicely enough, judgment.

Indeed (and more specifically), as many of us are already aware, judgment -
especially Aristotle's notion of _phronesis_, a _practical_ capacity for
ethical and political judgments that precisely requires informing and tuning
through experience (including the experience of failure) - has experienced
something of a renaissance in contemporary ethics. This is in part because
of heightened attention to virtue ethics (both Aristotelian and feminist)
more generally - but also because "judgment" at least in some forms appears
to be beyond the reach of algorithmic computation. That is: especially as
judgment includes the capacity to discern which _general_ principles are
most appropriate to apply to the specifics of a given instance - such
judgment seems necessary prior to the beginning of an algorithmic procedure,
as the latter must assume, in Aristotle's terms, specific first principles
in order to make any sort of start.

I would want to know more about what Hacking means by "judgment" in this
context, in order to avoid equivocating. But on the face of it, his use of
"judgment" squares with Aristotle's (so far as I understand it). If this is
the case, then his point serves as another important articulation of the
significance of judgment in human knowing and _being_ - where judgment may
remain (for the time being, at least) a distinctively human capacity.
(Please don't misunderstand this point. I'm no longer very interested in
debating whether or not machines will ever fully replicate human beings -
i.e., one of the central questions driving the now dated debates regarding
AI. Briefly, I think these debates, along with other problems,
misunderstood the stakes - i.e., whether or not important human dignity and
ethical respect could be maintained if we are indeed reducible to mechanism.
But that's another story for another day.)

In any event - it is also one of the (sadly) forgotten points of recent
history that Norbert Wiener's use of the term "cybernetics" to denote a
self-correcting informational system comes from the Greek _cybernetes_, a
steersman or pilot - where the steersman is taken up by Plato in the
Republic as an analogue of _ethical_ judgment (what became _phronesis_ in
Aristotle), i.e., precisely the capacity to find our way in the face of
competing and conflicting demands - a judgment that is certainly prone to
error, but one that is also capable of self-correction when errors are made
(i.e., we learn from experience, including the experience of failure).
In this light, it may not be too much of a stretch to say that Plato was the
father of humanistic (specifically, ethical) cybernetics - if not quite
humanities computing (smile)?

You, dear Willard, then raise the second point I would like to highlight:

> In building our marvellous tools, do we not run a similar risk in
> proportion to their complexity? In cases where fundamental
> intellectual decisions have been made at root level, then in effect
> hidden away by higher-level processes, this would seem clearly the
> case. Thus I recall an historian once remarking that she never used
> databases constructed by other people because she had found too many
> critical decisions had been made below the level of manipulation. She
> may have been wrong in particular instances not to have trusted good
> work, but it seems to me that her point is well taken. What do we do
> to answer it?

I'm not sure what sort of answer you might think is needed here - I take
your colleague's point as well, and would reinforce it by way of reference
to the now classic work of Albert Borgmann (_Technology and the Character of
Contemporary Life_, 1984 - ! - as well as his more recent book, _Holding on
to Reality_). One of Borgmann's central points is that contemporary
technology "works" precisely by making things easier for us at a surface
level (at the interface, it is fair to say) - but at the cost of an
increasing complexity underneath these surfaces, so that underlying
machinery becomes less and less accessible / intelligible to us. A raft of
examples can be alluded to here - such as automobiles and Pirsig's
motorcycle (which, in the 1970s, could still be repaired, according to
Pirsig, in a zen-like way by using shims crafted on the spot from cola cans)
that have now become much more efficient, long-lasting, and enjoyable to
drive, but at the cost of a dependency on computational devices that take
even basic repairs out of the province of even an accomplished shade-tree
mechanic.

So I'm with you when you comment:

> Hacking is, however, talking more about the power of distraction than
> the effects of concealment or the consequences of effective
> inaccessibility. It is perhaps for this reason that some of us, with
> tongue not entirely in cheek, have praised the user-hostile
> interface: at least a person must think before reaching for that
> mouse.

Indeed, Borgmann makes this same point in his wonderful description
(_Holding on to Reality_) about building and using his first "desktop"
computer in the 1970s, which was programmed by flipping switches...

> Again, what can we do to answer his point, made at the
> interface of user and computational artifact?
>
I'm not sure I have answers, per se - but certainly some more comments.

One, along with Borgmann and others, I am deeply concerned about these sorts
of consequences of contemporary technology - to perhaps exaggerate a bit,
about our increased dependency on technologies we don't understand, as these
foster and reinforce doing things the easy way, and thereby (it appears)
render us less and less capable of taking up what is difficult and complex.
If human and humane life - especially in a contemporary, democratic society
- were easy, this would be no problem. But...

An anecdote: not too long ago at my grocery store, grapefruit were for sale
for 2 for $1.00. I picked up four (I like grapefruit, and that was a good
price). Unlike all the other goodies in my basket, the grapefruit had no
price code for the cashier to scan. The unfortunate young cashier at the
register was literally powerless when confronted with four grapefruit. The
cashier repeatedly asked the price, which I repeated several times. (I
wasn't trying to be sadistic or malicious, but I was genuinely curious to
see what would happen.) Again, 2 for $1.00. How many did I have? Four. And
they're 2 for $1.00? Yes. This went on for quite some time. When it was
finally clear that the cashier was unable to calculate that four grapefruit
would cost $2.00, I offered the calculation, and life moved on again.

Of course, I've no idea how representative, if at all, the befuddled young
cashier might be of younger folk. But like Plato's _cybernetes_, the
cashier serves in my mind as an example of something larger - namely, how
we, in the U.S. at least, seem largely bent on making ourselves more
intellectually inept as we make our lives, on a superficial level at least,
more convenient - precisely through increasing dependency on ever more
complex technologies.

What worries me is not simply that the arithmetically illiterate will have
trouble getting by in a world of money and numbers. More importantly: how
will such minds be able to take up what I (and, I think, most of us who
would call ourselves humanists) take to be the core concepts and questions
of how to live full and meaningful human lives - including central ethical
and political understandings? In particular: how will such minds be able to
take up and incorporate what many of us take to be a core intuition/argument
of democratic communities - that human beings are most centrally free
beings; that if free beings are to remain free, they must exercise consent
over those events and institutions that affect them (I'm paraphrasing Locke,
Jefferson, Cady Stanton, and Martin Luther King, Jr.); and so, to deny them
voice and consent is thus to deny them their core human identity as free
beings? (And all that goes with that - including, from Socrates through
Martin Luther King, Jr., the ability to discern the difference between just
and unjust laws, and to be willing to disobey unjust laws, no matter the
consequences?)

In sum, I think your worries about the interface between the user and the
computational artifact are, again, spot on. If I had my way ... just as I'd
like to see less dependency on the computer as calculator (is it really too
much to hope that a young adult can deal with 4 grapefruit @ 2 for a
dollar?), I'd like to see my students less dependent on the computer (and
the Internet) as researcher, writer, and communicator.
How to do this, and in the right measure, is, of course, the rub.

And beyond that, a larger question. While some of us may be inclined and
fortunate enough to be able to play with Linux and command lines (or, gasp,
even program!) rather than be restricted to point-and-click GUIs; while some
of us may still insist on repairing our machines, plumbing, etc., as we can;
while some of us delight in the play of the mind that comes with poetry and
philosophy, along with the hard work of thinking things through -
modern democracies and liberal/secular societies depend, in the end, on the
hope that most of us can be philosophers and humanists enough to understand
what self-governance entails, how we can meaningfully engage in democratic
debate, deliberation, and choice, etc.

By contrast, as Nietzsche reiterates, Plato and the Greeks famously believed
that only the few were so inclined and capable. My cashier worries me -
Plato, in the end, may be right, however disastrous (in my view) the political
consequences of that correctness will be.

I hate to end on such a dark note - but I hope these comments are at least
helpful in some way.

Thanks, Willard, as always, for such helpful and provocative notes! Indeed:
deepest thanks, and heartiest congratulations on your shepherding HUMANIST
so productively and enjoyably for lo! these twenty years!
And all best wishes for the new year,

Yours,

Charles Ess

Distinguished Research Professor,
Interdisciplinary Studies <http://www.drury.edu/gp21>
Drury University
900 N. Benton Ave., Springfield, MO 65802 USA
Voice: 417-873-7230   FAX: 417-873-7435
Home page: http://www.drury.edu/ess/ess.html

Co-chair, CATaC'06: http://www.catacconference.org
Co-chair, ECAP'06: http://www.eu-cap.org

Professor II, Globalization and Applied Ethics Programmes
Norwegian University of Science and Technology
NO-7491 Trondheim, Norway
http://www.anvendtetikk.ntnu.no/pres/bridgingcultures.php

Exemplary persons seek harmony, not sameness. -- Analects 13.23