3.1056 e-critical editions; bib of SGML/hypertext (196)

Willard McCarty (MCCARTY@vm.epas.utoronto.ca)
Thu, 15 Feb 90 19:56:18 EST

Humanist Discussion Group, Vol. 3, No. 1056. Thursday, 15 Feb 1990.

Date: Thu, 15 Feb 90 17:32:48 CST
From: "Robin C. Cover" <ZRCC1001@SMUVM1>
Subject: RE: ELECTRONIC CRITICAL EDITIONS


Re: Faulhaber on "electronic critical editions" (Vol 3 No. 1041)

Responses by Dominik Wujastyk, Michael Sperberg-McQueen and Malcolm
Brown (Vol 3 No. 1050) on the relationship between SGML and Hypertext
have addressed the key element in the original posting: SGML and
Hypertext are very different things, even if part of the same menu. I
might add that the relationship between these two "great tastes"
(DeRose) is under discussion in several forums, including ISO; I
append at the end of this posting a listing of several bibliographic
sources known to me.

The encoding of "electronic critical editions" cited in
Faulhaber's subject line is quite another matter. As for the
credentials specified by Faulhaber, I am neither "knowledgeable" nor
"strongly opinionated" (I hope), but I am working on the problem as a
member of the Text Representation Subcommittee for the TEI (Text
Encoding Initiative). I would welcome assistance and interaction from
any HUMANISTS who wish to contribute to a private discussion in
connection with this TEI effort. The Text Representation Subcommittee
will discuss the matter of "encoding textual variants" at a meeting in
Oxford later this month; more information may be available at that
time. (Check back)

Briefly: I feel a lot of work remains to be done before we are
prepared to assess how we may best represent knowledge about "textual
variation" (textual evolution, textual parallels) using SGML markup
languages or other "portable" formalisms. In the simplest textual
arenas, or in the event that someone wishes to represent in electronic
format JUST what is visible on a printed page of a critical edition,
the challenge may not be too difficult. Several schemes which can be
expressed in an SGML language are already in use by scholarly editing
and text-processing systems. By "simple" textual arenas, I refer
to: (a) cases in which all textual witnesses are written in the same
language and the same "scripts" (= one level within a stratified
orthographic system); (b) cases in which the witnesses can be seen in
close genetic/stemmatic relationship, not as products of complex
textual evolution through heavy recensional/editorial activity; (c)
cases in which the number of witnesses and amount of necessary textual
commentary represents a small body of information; (d) cases in which
one is not concerned about paleographic information and other
character-level annotations or codicological information.

But I think the assumptions above will not pertain to the work of a
significant number of humanities scholars. The goal of encoding "JUST
what is visible on a printed page" (a traditional apparatus criticus,
for example) might constitute an important and economical step in the
creation of a text-critical database, if assumptions (b) and (c) and
(d) were also germane. But when the textual data and published
knowledge about that "textual" data become very rich, the standard
critical apparatus represents (increasingly) a concession to the
limitations of the traditional paper medium: both physical space and
the ability of a reader to absorb (synthesize, evaluate) large amounts
of textual information in complex relationships. In these more
complex situations (biblical studies, for example), the paper app crit
will contain a selection of data, not all the data (excluding
orthographic variants, for instance, which may be important for
historical linguistics); it will indicate THAT a certain manuscript or
manuscript tradition bears testimony to a certain reading, but will
not indicate the steps of principled evaluation which were used to
make this judgment (language retroversions, for example); it will tell
you THAT a certain manuscript tradition (e.g., "Syriac" in support of
a certain variant of the Hebrew Bible) supports a given reading, but
not which manuscripts exactly, or where, precisely (machine-readable
terms) these Syriac readings may be found.

It is my opinion, then, that to model the "electronic critical
editions" of the 21st century (Faulhaber's quest) after paper editions
would, in some cases, represent a short-sighted goal. Rather than
just "encoding" or "marking up" modern critical editions (a necessary
or desirable step, perhaps), we need to think about the
representation of the knowledge about textual variation, held in
critical editions, to be sure, but also in textual commentaries and in
fully-encoded manuscripts (primary documents) which themselves
constitute the primary data. In short: we need the encoding of ALL
the human knowledge about physical texts, textual "variants" AND the
scholarly judgments about processes of textual evolution. "Hypertext"
and "SGML-based" encoding can then be put to work in applications
software which allows us to study the text with multiple views, even
hypothetical documents created with the aid of an SQL/FQL and the
text-critical database. We may then dispense with the static
(sometimes overly-selective, sometimes overfull, sometimes inaccurate)
app crits and instead enjoy dynamic user-specified app crits
containing particular classes of text-critical information we wish to
see at a given moment; we may have several different app-crits on the
screen, simultaneously. We will be able to do simulations and test
hypotheses by dynamically querying hypothetical texts reconstructed
from an FQL expression.
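The dynamic, user-specified apparatus described above can be sketched
in miniature. Everything here is hypothetical: the Reading record, the
sample sigla and readings, and the apparatus function merely stand in
for a real text-critical database and an SQL/FQL query facility.

```python
# Hypothetical sketch of a text-critical database supporting dynamic,
# user-specified apparatus views; sample data and names are invented.
from dataclasses import dataclass

@dataclass
class Reading:
    verse: str          # location in the base text
    text: str           # the variant reading itself
    witnesses: tuple    # exact manuscript sigla, not just "Syriac"
    kind: str           # e.g. "substantive", "orthographic"

DATABASE = [
    Reading("Gen 1:1", "bereshit", ("MT-L", "MT-A"), "substantive"),
    Reading("Gen 1:1", "bereshith", ("4QGen-b",), "orthographic"),
    Reading("Gen 1:2", "weha-arets", ("MT-L", "Syr-7a1"), "substantive"),
]

def apparatus(database, verse=None, kinds=None):
    """Return only the classes of variants the user asks to see."""
    return [r for r in database
            if (verse is None or r.verse == verse)
            and (kinds is None or r.kind in kinds)]

# One "view" among many: substantive variants only, orthography
# suppressed; a second query could show the full apparatus alongside.
view = apparatus(DATABASE, verse="Gen 1:1", kinds={"substantive"})
```

Because each view is just a query, several differently-filtered
apparatus displays can coexist on screen at once, as envisioned above.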

It is also my judgment that we are quite a distance away from knowing
how to encode knowledge about textual relationships in which
inter-dependencies are complex ("variants," recensions, parallels,
allusions, quotations, evolutionary factors, hermeneutical-
translational factors). But I think SGML embodies one indispensable
ingredient in getting there: encouraging us to assign unique names to
objects in our textual universe, and to other properties of text and
textual relationships. Our conceptions about these textual (literary,
linguistic) objects will inevitably prove to be crude approximations,
but by coding our current understanding about them in syntactically-
rigorous ways (using SGML-based languages), we at least contribute to
a legacy of preserving the text and our understanding of it. This
conception of self-documenting encoding represents an advance over
the less thoughtful practices of antiquity (and over some modern
conceptions of text), which were usually self-destructive.
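The naming discipline described above might look as follows in an SGML
fragment; the element and attribute names are invented for this sketch
and are not drawn from any actual DTD or TEI proposal.

```sgml
<!-- Hypothetical sketch: unique IDs let commentary, stemmata, and
     apparatus entries all reference the same named objects. -->
<witness id="ms.syr.7a1" lang="syriac">Codex Ambrosianus</witness>
<app id="app.gen.1.1-1">
  <lem wit="ms.mt.L">bereshit</lem>
  <rdg wit="ms.syr.7a1" type="retroversion">bereshith</rdg>
</app>
```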

Robin Cover
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Selected bibliography on SGML and HYPERTEXT

(Annotations which sometimes provided the basis for collocating the
two terms have been removed.)

_______ "Hypertext and SGML." <cit>EPSIG News</cit> 1/3 (May 1988) 4.
[Note on Owl International's IDEX product, which reads SGML-structured
documents. See also "Product and Services Update. Owl." <cit>SGML
Users' Group Newsletter</cit> 10 (November 1988) 12; "News from
Vendors. Owl International Incorporated." <cit>SGML Users' Group
Newsletter</cit> 8 (April 1988) 5.]

_______ "Hypertext Standards Committee Formed." <cit><TAG></cit> 10
(July 1989) 18.

_______ "Hypertext Standards Working Group." <cit>SGML Users' Group
Newsletter</cit> 13 (August 1989) 14.

_______ "Other ISO News. Hypertext." <cit><TAG></cit> 14 (October
1989) 3. [Brief mention of a NWI (New Work Item) out for ballot in SC
2, Character Sets and Information Coding, pertaining to "Multimedia
and Hypermedia Information Coded Representation."]

_______ "Publications and Articles. Programmed Hypertext and SGML."
<cit>SGML Users' Group Newsletter</cit> 13 (August 1989) 9-10.
[Discussion of a paper ("Programmed Hypertext and SGML") by Tim Niblett
and Arthur van Hoff, both of the Turing Institute in Glasgow.
Address: The Turing Institute; 36 North Hanover Street; Glasgow G1 2AD
UNITED KINGDOM; tel 44 31 552 6400.]

Andrew, K. "Electronic Publishing Futures: Organizational and
Technical Issues." <cit>Electro/88 Conference Record</cit> [10-12 May
1988 Boston, MA]. Pp. 15/1/1-4. Los Angeles: Electron. Conventions
Manage, 1988.

Barnard, David T.; Crawford, Robert G.; Logan, George M. "Text Mark-Up
and Editing. Creation and Use of a Complex SGML-Tagged Text:
Hayakawa's Synonymy." Pp. 65-67 in <cit>The Dynamic Text. Conference
Guide</cit> [6-9 June 1989 Toronto]. Toronto, Ontario: Centre
for Computing in the Humanities, 1989.

Burnard, Lou D.; Corns, Thomas N.; Flannagan, Roy. "Text Mark-up and
Editing. A Milton Database: Descriptive Markup, Multiple Manuscript
Versions, and the Use of Hypertext." Pp. 67-68 in <cit>The Dynamic
Text. Conference Guide</cit> [6-9 June 1989 Toronto]. Toronto,
Ontario: Centre for Computing in the Humanities, 1989. [Describes a
project underway to create a machine-readable ...]

Hickey, Thomas B. "Using SGML and TeX for an Interactive Chemical
Encyclopaedia." Pp. 187-195 in <cit>National Online Meeting
Proceedings of the Tenth National Online Meeting</cit> [9-11 May 1989
New York, NY]. Medford, NJ: Learned Information, 1989. [ISBN
0-938734-34-2]

Rubinsky, Yuri. "Standards for Hypertext Interchange Need Not Come
out of Thin Air." <cit><TAG></cit> 11 (October 1989) 4-5.

Rubinsky, Yuri. "Comments on an SGML Application for Hyper- and Multi
Media Interchange. Informal Report from the GCA Hypertext/Hypermedia
Standards Forum." <cit><TAG></cit> 11 (October 1989) 5-6. [Report on
the GCA-sponsored one day workshop, July 25, 1989, in Boston.]

Tompa, Frank Wm; Raymond, Darrell R. "Database Design for a Dynamic
Dictionary." Technical Report OED-85-05, University of Waterloo
Centre for the New Oxford English Dictionary, June 1989. 16 pages.

submitted by:

Robin Cover
3909 Swiss Avenue
Dallas, TX 75204
(214) 296-1783/841-3657
BITNET: zrcc1001@smuvm1
INTERNET: robin@txsil.lonestar.org
UUCP: attctc!utafll!robin
UUCP: texbell!txsil.robin