10.0672 do we want technological progress?

WILLARD MCCARTY (willard.mccarty@kcl.ac.uk)
Thu, 6 Feb 1997 22:35:24 +0000 (GMT)

Humanist Discussion Group, Vol. 10, No. 672.
Center for Electronic Texts in the Humanities (Princeton/Rutgers)
Centre for Computing in the Humanities, King's College London
Information at http://www.princeton.edu/~mccarty/humanist/

[1] From: Willard McCarty <Willard.McCarty@kcl.ac.uk> (162)
Subject: technological improvement

[The following was extracted, at his suggestion, from a much longer message
sent along to Humanist by Wendell Piez. --WM]

>Date: Thu, 6 Feb 1997 10:03:00 -0500 (EST)
>From: Wendell Piez <piez@rci.rutgers.edu>

---------- Forwarded message ----------
>Date: Wed, 5 Feb 1997 18:46:51 -0500
>From: Steve Talbott <stevet@ora.com>
>Subject: NETFUTURE #40

NETFUTURE

Technology and Human Responsibility

--------------------------------------------------------------------------
Issue #40 Copyright O'Reilly & Associates February 5, 1997
--------------------------------------------------------------------------
Opinions expressed here belong to the authors, not O'Reilly & Associates.

Editor: Stephen L. Talbott

NETFUTURE on the Web: http://www.ora.com/people/staff/stevet/netfuture/
You may redistribute this newsletter for noncommercial purposes.

CONTENTS:
*** Editor's Note
*** Quotes and Provocations
Chinese Cookies
Looking up to Government
Businesses That Grow Unprincipled
David Kline on SLT on David Kline
*** Is Technological Improvement What We Want? (Part 2) (Steve Talbott)
The Worm Was Already in the APL
*** About this newsletter

--------------------------------------------------------------------------
[...material deleted...]
--------------------------------------------------------------------------

*** Is Technological Improvement What We Want? (Part 2) (129 lines)

From Steve Talbott <stevet@ora.com>

In Part 1 (NF #38) of this series I tried to show how technical
improvements in the intelligent machinery around us tend to represent a
deepened threat in the very areas we began by trying to improve. This, so
long as we do not recognize it, is the Great Deceit of intelligent
machinery. The opportunity to make software more friendly is also an
opportunity to make it unfriendly at a more decisive level. I illustrated
this by citing:

* telephone answering systems (improved voice recognition software will
remove some of the present klutziness, but will enable a company to
turn more of its callers' important business over to software agents);

* speed and memory improvements (the accelerated obsolescence resulting
from these improvements suggests that our frustrations in dealing with
equipment that is too slow, dated, and awkward will deepen in direct
proportion to the rates of improvement. Call this, if you like,
Talbott's Law, and add it as a footnote to Moore's Law. Then remember
that all the human significance is in the footnote);

* information management tools (the technological arms race between
information generators and information managers is an endless one, and
the more it heats up, the harder we must work to preserve threads of
meaning amid the churning data and the proliferating tools that are
doing the churning).

The underlying problem, I suggested, was a mismatch between the
technically conceived improvements and the level at which our real
problems occur. There are many other places to look in order to
illustrate this mismatch. But in this installment, I have chosen to
inquire whether the problem is reflected in programming languages
themselves.

IS TECHNOLOGICAL IMPROVEMENT WHAT WE WANT?

The Worm Was Already in the APL

My point has been that a technical advance typically sharpens the
challenge that was presented to us by the original technical limitation.
It is not that our situation *must* worsen. But our predilections toward
abuse of the technology, as expressed in the earlier problem, must now be
reversed in the face of much greater temptation. Where we were failing
with the easier challenge, we must succeed with a harder one. The company
in possession of a new generation of telephone-answering software must
look to its mission statement with redoubled seriousness.

But the best intentions are difficult to execute when the Great Deceit is
built into the software itself. We need to recognize the deceit, not only
in the various software applications, but in the essence of the software
enterprise. Software, of course, is what drives all intelligent
machinery, and it is created through the use of programming languages.
Perhaps the greatest single advance in programming occurred with the
switch from low-level to high-level languages. Did this switch amount to
progress, pure and simple, or can we recognize the Deceit here at the very
root of the modern technological thrust?

The lowest-level machine language consists of numbers, representing
immediate instructions to the computer: "carry out such-and-such an
internal operation." It's not easy, of course, for programmers to look at
thousands of numbers on a page and get much of a conceptual grip on what's
going down. But through a series of steps, higher-level languages were
created, finally allowing program code that looks like this:

do myexit(1) unless $password;
if (crypt($password, $salt) ne $oldpassword) {
    print "Sorry.";
    do myexit(1);
}

Each line of such code typically represents -- and finally gets translated
into -- a large mass of machine code. These more powerful lines may still
look like Greek to you, but to the programmer who has struggled with low-
level languages, they convey, with read-my-lips clarity, assurance of a
drastic slash in the mental taxation of program writing.

Obviously, high-level languages enhance the programmer's technical power.
It is far easier to write code when you can employ the concepts and
terminology of the human domain within which the program will function.
But this heightened technical power dramatically increases the risks. The
more easily we can verbally leap from a human domain to a set of
computational techniques, the more easily we fall into the now more
effectively camouflaged gap between the two. The telephone company
programmer who writes a block of code under the label "answer_inquiry"
is all too ready to assume that the customer's concern has been answered,
even if the likelihood is that it has not even been addressed.
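
To make the gap concrete, here is a purely hypothetical sketch -- the
subroutine names, keywords, and canned menus are invented for illustration
and come from no real telephone system. The routine is called
"answer_inquiry", yet all it can do is match a few keywords and play a
recording:

sub answer_inquiry {
    my ($caller_words) = @_;
    # "Answering" here means nothing more than keyword matching.
    return play_recording("billing_menu") if $caller_words =~ /bill|charge|payment/i;
    return play_recording("repairs_menu") if $caller_words =~ /repair|outage|fault/i;
    return play_recording("main_menu");   # everything else is "answered" here
}

sub play_recording {
    my ($name) = @_;
    print "Playing recording: $name\n";   # stand-in for the telephony layer
    return 1;                             # the call is now logged as answered
}

answer_inquiry("Why was I billed twice for a call I never made?");

Whatever the caller meant by that question, the program has "answered" it
the moment a menu starts playing; the label quietly redefines the word.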

The risk here is far from obvious in all its forms. It derailed the
entire discipline of cognitive science, whose whole purpose is to
understand the relation between the human and the computational. The
derailment finally produced one of the classic papers of the discipline,
entitled "Artificial Intelligence Meets Natural Stupidity." In it Drew
McDermott bemoaned the use of "wishful mnemonics" like UNDERSTAND and GOAL
in computer programs. It would be better, he suggested, to revert to
names more reminiscent of machine code -- say, G0034. Then the programmer
might be forced to consider the actual relationship between the human
being and the logical structures of the code.

    As AI progresses (at least in terms of money spent), this malady gets
    worse. We have lived so long with the conviction that robots are
    possible, even just around the corner, that we can't help hastening
    their arrival with magic incantations. Winograd...explored some of the
    complexity of language in sophisticated detail; and now everyone takes
    "natural-language interfaces" for granted, though none has been
    written. Charniak...pointed out some approaches to understanding
    stories, and now the OWL interpreter includes a "story-understanding
    module." (And, God help us, a top-level "ego loop.")

McDermott wrote those words in 1976. But while the problem is now almost
universally acknowledged, it remains endemic to the discipline, subtly
eluding even the efforts by more philosophically minded practitioners to
impose conceptual rigor upon the field.
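
McDermott's remedy is easy to try on the hypothetical sketch above: rename
the routine in the spirit of G0034 and nothing about its behavior changes;
only the implied promise disappears. (The rename below is illustrative and
reuses the play_recording stub from the earlier sketch.)

# Same hypothetical routine, renamed along McDermott's lines.
# The code is untouched; only the wishful claim in the name is gone.
sub g0034 {
    my ($caller_words) = @_;
    return play_recording("billing_menu") if $caller_words =~ /bill|charge|payment/i;
    return play_recording("repairs_menu") if $caller_words =~ /repair|outage|fault/i;
    return play_recording("main_menu");
}

Asked now whether g0034 answers inquiries, the programmer has to look at
what the code actually does rather than at what its old name asserted.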

There is no question that high-level languages represent technical
progress. The programmer gains vastly greater power to *program*. But
this power arises from an ever more illusory match-up between the routine
"speech" of the programmer and the terms of real life. As I have begun to
suggest already and will argue further, more and more of human existence
disappears into the abyss hidden beneath the illusion. As we adapt to the
programmatic structuring of our phone calls, we get better at reconceiving
our business according to the predefined categories of the answering
system; at the same time, we learn not to bother with nonconforming calls.
Our world shapes itself to the software. Eventually, the programmer's
"answer_inquiry" becomes what it *means* to answer an inquiry.

--------------------------------------------------------------------------
*** About this newsletter (29 lines)

NETFUTURE is a newsletter concerning technology and human responsibility.
Publication occurs roughly once per week. The editor of the newsletter is
Steve Talbott, a senior editor at O'Reilly & Associates. Where rights are
not explicitly reserved, you may redistribute this newsletter for
noncommercial purposes.

Current and past issues of NETFUTURE are available on the Web:

http://www.ora.com/people/staff/stevet/netfuture/

To subscribe to NETFUTURE, send an email message like this:

To: listproc@online.ora.com

subscribe netfuture yourfirstname yourlastname

No Subject: line is needed. To unsubscribe, the second line shown above
should read instead:

unsubscribe netfuture

To submit material to the editor for publication in the forum, place the
material in an email message and address it to:

netfuture@online.ora.com

Send general inquiries to netfuture-owner@online.ora.com.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Dr. Willard McCarty, Senior Lecturer, King's College London
voice: +44 (0)171 873 2784 fax: +44 (0)171 873 5801
e-mail: Willard.McCarty@kcl.ac.uk
http://www.kcl.ac.uk/kis/schools/hums/ruhc/wlm/