21.400 capacious memories & sufficient organizing ability

From: Humanist Discussion Group (by way of Willard McCarty <willard.mccarty_at_kcl.ac.uk>)
Date: Sat, 8 Dec 2007 08:45:52 +0000

               Humanist Discussion Group, Vol. 21, No. 400.
       Centre for Computing in the Humanities, King's College London
  www.kcl.ac.uk/schools/humanities/cch/research/publications/humanist.html
                        www.princeton.edu/humanist/
                     Submit to: humanist_at_princeton.edu

         Date: Sat, 08 Dec 2007 08:41:55 +0000
         From: amsler_at_cs.utexas.edu
         Subject: Re: 21.397 capacious memories & sufficient organizing ability

We now have computers proving mathematical theorems whose proofs, when written
out, are so voluminous that no human being can follow them.

Whereas a proof formerly had to be "elegant", ruling out enumeration as a step,
we can now have computers enumerate thousands of special cases and prove each
one individually, well beyond what human beings would consider a workable
methodology.

In a larger sense, the problem is that the proofs will become inaccessible to
humans as a means of studying the growth of knowledge. They will have to be
accepted.

A similar dilemma faced early artificial intelligence investigators at MIT. I
believe it was Samuel's checker-playing software, which could beat human beings
but whose logic for making moves was inaccessible because it was not based on a
theory that could be explained. Marvin Minsky rejected this approach, noting
that it revealed nothing to us about the game of checkers.

This is the larger dilemma facing us. If we build computer software that uses
pure statistical modeling to predict results, we may know the answers without
being able to understand why those are the answers.
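A minimal sketch of what such a predictor looks like (my illustration, not
anything from the message): a memory-based classifier that answers by analogy
to stored cases. Its answers can be correct while offering no theory at all of
why they are correct, which is Mill's "inferring particulars from particulars"
made literal:

    # A purely illustrative nearest-neighbor predictor (hypothetical example):
    # it predicts by proximity to remembered particulars, not from any theory.
    def knn_predict(examples, query, k=3):
        """examples: list of (feature_vector, label) pairs; query: feature_vector."""
        def sq_dist(u, v):
            return sum((a - b) ** 2 for a, b in zip(u, v))
        nearest = sorted(examples, key=lambda ex: sq_dist(ex[0], query))[:k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)   # majority vote, no explanation

    cases = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
             ((5, 5), "B"), ((6, 5), "B")]
    print(knn_predict(cases, (5, 6)))   # prints "B", with no reason beyond resemblance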

Quoting "Humanist Discussion Group (by way of Willard McCarty
<willard.mccarty_at_kcl.ac.uk>)" <willard_at_LISTS.VILLAGE.VIRGINIA.EDU>:

> Humanist Discussion Group, Vol. 21, No. 397.
> Centre for Computing in the Humanities, King's College London
> www.kcl.ac.uk/schools/humanities/cch/research/publications/humanist.html
> www.princeton.edu/humanist/
> Submit to: humanist_at_princeton.edu
>
>
>
> Date: Fri, 07 Dec 2007 06:12:08 +0000
> From: Willard McCarty <willard.mccarty_at_kcl.ac.uk>
>
> In his A System of Logic Ratiocinative and Inductive (1843),
> obtainable in its entirety from The Online Library of Liberty
> (http://oll.libertyfund.org/), John Stuart Mill, arguing against
> Aristotle, writes that the successive general propositions of a
> syllogism are neither steps in reasoning nor intermediate links in a
> chain of inference, but mechanisms we require because of the natural
> constraints of the minds we have. He goes on:
>
> >If we had sufficiently capacious memories, and a sufficient power of
> >maintaining order among a huge mass of details, the reasoning could go
> >on without any general propositions; they are mere formulae for inferring
> >particulars from particulars. (II.iv.3)
>
> Now that we have "sufficiently capacious memories, and a sufficient
> power of maintaining order among a huge mass of details", though
> artificial, where are we in respect of this argument? What has
> happened to these inferential formulae, and how has it happened?
>
> Comments?
>
> Yours,
> WM
>
>
> Willard McCarty | Professor of Humanities Computing | Centre for
> Computing in the Humanities | King's College London |
> http://staff.cch.kcl.ac.uk/~wmccarty/. Et sic in infinitum (Fludd
> 1617, p. 26).
>
Received on Sat Dec 08 2007 - 03:59:27 EST
