5.0007 Copyright III (1/174)
Elaine Brennan & Allen Renear (EDITORS@BROWNVM.BITNET)
Tue, 7 May 91 23:35:52 EDT
Humanist Discussion Group, Vol. 5, No. 0007. Tuesday, 7 May 1991.
Date: Fri, 3 May 91 15:57:58 CDT
From: robin@utafll.uta.edu (Robin Cover)
Subject: Allen, part 3
I have suggested that the Free Software Foundation (GNU) 'copyleft' scheme may
supply a model for a legal instrument that encourages free distribution,
unrestricted global communication and democratic access to published
humanistic research. I have also argued that competition for access to
knowledge is not in the best interest of researchers. What justification can
be given for traditional "copyright" of these same electronic academic
resources? Copyright has traditionally been used to ensure that the creator
receives proper recognition and respect for an intellectual contribution: both
can be accomplished just as easily through 'copyleft.' Copyright is used to
ensure that one party does not appropriate the intellectual property of another,
establish ownership, and then benefit commercially from its sale and
distribution: 'copyleft' will work just as well. Copyright may be useful, we
are sometimes told, in conjunction with centralized distribution, to ensure
the doctrinal or technical purity of the resource: if electronic data can
be legally obtained only through a single source or through legally-approved
sources, and if it cannot legally be changed, the public interest in purity is
protected. And copyright is used to ensure that the creator ALONE may collect
net revenues from production and distribution of the resource. I should now
like to briefly address these latter two justifications.
Using copyright to protect electronic data against conscious but undesirable
changes--changes "undesirable" in the view of the copyright holder--is highly
questionable in our modern age. Even if this authoritarian measure be
necessary to protect the dogmas of religious fundamentalism, it is neither
intellectually nor technically necessary in the domain of humanistic research.
Within the domain of literary texts, for example, there may be "standard"
critical editions which compete for authority, but for all literature there are
extant representations that vary textually (through unintentional corruption)
and representations that vary editorially or recensionally, and these form the
basis of the most interesting research. To be sure, some groups of "textual
fundamentalists" masquerading as scholars would press into service medieval
arguments about textual standards ("necessary for dialogue and necessary for
coordinated textual research"), and would attempt to defend promulgation of
these "standardized" texts ostensibly for the purposes of text linguistics. We
recognize such agendas for what they are. The electronic age permits and even
encourages the researcher to alter texts according to individual text critical
theory and literary critical theory without adversely affecting the archival
copies or anyone else's research. We should welcome the conscious enhancement
(alias "alteration") of electronic text for scientific purposes.
What about accidental changes or corruptions in data? On one level, we can
never prevent humans from mis-copying or mis-labelling data. We cannot
guarantee that a quotation of the Loeb edition of a Latin text--taken from
paper or electronic medium--is in fact faithful to the Loeb edition. In the
electronic environment, however, with proper authoring tools, it is possible
to programmatically and automatically check the author's quotation against an
archival copy of the Loeb edition in Cambridge, MA. But neither copyright nor
central distribution has anything to do with the technical purity of (electronic)
data any longer. The computer virus craze has popularized many public domain
schemes for wrapping data within an electronic "security envelope" which
guarantees beyond any reasonable doubt that the copying, transmission and
receipt of the data have produced an exact replica.
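By way of illustration only, the sketch below shows how both checks might be
carried out today: verifying that a received copy is bit-for-bit identical to
an archival copy, and verifying that a quotation occurs verbatim in the
archival text. It is written in Python; the file names and the choice of
SHA-256 are merely assumptions for the example, and any strong published
checksum would serve the same purpose.

    import hashlib

    def file_digest(path, algorithm="sha256"):
        """Compute a checksum of a file in chunks (the 'security envelope' idea)."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_is_exact(archival_path, received_path):
        """True only if the received copy is bit-for-bit identical to the archival copy."""
        return file_digest(archival_path) == file_digest(received_path)

    def quotation_is_faithful(quotation, archival_path, encoding="utf-8"):
        """True only if the quoted passage occurs verbatim in the archival text.
        (A real tool would also normalize whitespace and line breaks.)"""
        with open(archival_path, encoding=encoding) as f:
            return quotation in f.read()

    # Hypothetical usage; the file names are invented for the example:
    # copy_is_exact("archive/loeb_aeneid.txt", "download/loeb_aeneid.txt")
    # quotation_is_faithful("arma virumque cano", "archive/loeb_aeneid.txt")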
The same technical points should be understood by scholars who view the role of
copyright and traditional paper publishing as guaranteeing an indelible and
permanent copy of scholarly research. We occasionally hear fears that the
ephemeral electronic world of electrostatic charges is insufficient to
guarantee the robustness and permanence required for the historical record of
scholarship. Ironically, the "ephemeral" electronic world of "ones and zeros"
is vastly superior to paper: paper deteriorates and the typesetter's lead
plates deteriorate; any replication from lead plates or from photography
necessarily involves a loss of information in the replicated copy. With
electronic media, we may theoretically guarantee copying with 100 percent
fidelity for millions of years. It is precisely BECAUSE the fidelity of
copying can be checked at the level of the discrete electronic bit (via
checksums) that electronic media are a far more reliable means of perpetuating
the historical record than paper. Purity of data is a critical issue in textual research,
but it has nothing to do with copyright.
A final argument is that copyright allows the creator of intellectual property
to merchandise writing or research without unapproved competition. Even if a
re-shaping of values leads to diminished privatization and commercialization
of academic research, there will be a long transition period, and some
scholars (with respect to some predictable genres) will not, and perhaps
should not, be asked to relinquish private commercial claims upon their
writings. An infrastructure for university-based and global electronic
publishing will emerge, we hope; see Jerome Yavarkovsky et al., "Coalition for
Networked Information: A University-Based Electronic Publishing Network,"
EDUCOM Review 25/3 (Fall 1990) 14-20. In the interim period, suppose scholars
want to receive royalties for copies of their personal electronic books? How
shall distribution and payment be handled? New models will have to be
designed and tested. "Publishers" may not be necessary at all, though
scholarly editors will be. We are in a transition period when the production
and use of electronic books is problematic for many reasons (see below). But
for the interim, why not adopt the "shareware" concept which has worked for
software, and continues in a very strong vein within the academic community?
I know of no precedent, but why not? This is experimental thinking.
The first step is to make sure the publisher of the paper copy does not
demand exclusive copyright on the electronic version (politely tell such a
thief to go to hell, and find another publisher for the hardcopy). Then
work out a simple shareware (suggested) royalty structure for library use,
individual use, and classroom use. Prepare never to be paid by some
predictable percentage of people -- those who (also) use shareware-software
without paying anything for it. Scholarship should not be held up by these
petty offenders, and we should spend no energy trying to catch them unless
there is a flagrant violation such as open commercial marketing of the
shareware. I think the network-access and shareware model could work for
electronic books, at least for now (a rough sketch of a suggested-royalty
notice follows the list below):
*Shareware software is indeed placed on public file servers in huge quantities
under this quasi-legal arrangement, and does not violate rules governing
non-commercial use of networks (e.g., the BITNET/CREN charter).
*The burden is placed upon the author to nominate a fair suggested price for
use of data: the public judges whether the price is fair or not.
*The shareware fees (royalty structure) should be kept low so that it is clear
that the scholarly public benefits most from the shareware arrangement.
*Network access is preferable to copying and mailing diskettes for many
reasons. Standards for "media cost" should apply to data sent via post.
*One may encourage remote printing of the "book" at low cost if a paper copy
   is critical (e.g., a course textbook distributed in e-copy). In the long run,
university-owned (not-for-profit) on-demand print shops will arise.
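To make the suggestion concrete, here is a rough sketch of the kind of
suggested-royalty notice an author might attach to a distributed e-text. It is
written in Python for uniformity with the other sketches; every category, fee,
and address in it is hypothetical, offered only as a shape for discussion.

    # All categories, amounts, and addresses below are hypothetical examples.
    SUGGESTED_ROYALTIES = {
        "individual use": 5.00,   # one reader, personal copy
        "classroom use": 25.00,   # one course, one term
        "library use":   40.00,   # institutional copy on a local server
    }

    def shareware_notice(title, author, contact):
        """Format the notice an author might distribute with an e-text."""
        lines = [
            f"{title}, by {author}. Distributed as scholarly shareware.",
            "If you keep and use this text, please remit the suggested fee:",
        ]
        for use, fee in SUGGESTED_ROYALTIES.items():
            lines.append(f"  {use}: ${fee:.2f} (suggested, not enforced)")
        lines.append(f"Payment and comments to: {contact}")
        return "\n".join(lines)

    # Hypothetical usage:
    # print(shareware_notice("An Electronic Monograph", "A. Scholar",
    #                        "author@university.edu"))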
Finally, we wish to show awareness of some of the obstacles to electronic
publishing. Opponents of 'openness' will rehearse these difficulties as a
means of convincing scholars that nothing is at stake *yet*, and that
electronic publishing is impractical. As we identify the challenges, we may
perhaps increase the resolve to take active steps to solve the problems. In
truth, neither electronic writing nor electronic "reading" is well supported
by current software. Nor are the political structures. But the challenges can
be met if scholars in sufficient numbers will move beyond the "let-others-
think-about-it-I'm-too-busy-writing" stage to support data collection and
program development initiatives aimed at creating a pleasant electronic
scholarly research environment. The conceptual models for software tools and
telecommunications delivery are available, it would seem: what we need now is
simply the will to overcome traditional inefficient (even destructive) ways of
working. In this transition period (of unknown duration), we face numerous
challenges:
*Many scholars are not "actively" networked, so putting an electronic book on
a file server is not, by itself, an adequate means of distribution. On the
other hand, low-cost access to public networks is feasible even via public
carriers. We should promote the advantages of using academic networks.
*Many scholars won't know what to do with an electronic book, nor understand
why one would want to publish in electronic format, nor perceive what is at
stake in blithely surrendering an (electronic) copyright to a traditional
publisher. Scholars need more basic orientation to electronic methods of
research and writing before they will support efforts to promote electronic
publishing and democratic access. We must ensure that writers and
editorial boards understand the jeopardy into which traditional publishing
practices place electronic research.
*Most software for browsing electronic books (e.g., hypertext links to
footnotes and cross-references) is still too primitive to make reading an
electronic book convincing. The value of an e-book, for the time being,
must lie in accessibility, reduced cost, convenience, and in the additional
access to information afforded by electronic searchability, which is of
variable importance.
*The policies and facilities for electronic distribution and royalty
collection are ill-defined, or non-existent.
*Scholarly recognition for electronic publication (e.g., of linguistic
databases or other data which cannot be represented in paper format) is
very slim. Administrative and peer-review policies must be modernized.
*New mechanisms for refereeing electronic books must be established. In any
case, communication between editor and referee should be electronic.
*Successful electronic books will also have to be available in paper, with two
important consequences: (a) scholarly publishing will move toward publishers
willing to respect the rights of the authors to disseminate books in
electronic format; (b) coordination with the "paper" publisher must be
arranged to ensure commensurate citation systems.
*Graphical data presents a special problem for online reading at this time.
PostScript or other graphics-based copy involves a consequent loss of
searchability. Schemes which limit readability to graphical data should be
opposed: they constitute a form of encryption and restrict access to the
knowledge in the text.
*Textual markup is problematic: visible structure is necessary (one should by
   all means use structural/descriptive markup in preparing the volume), but
   today's generic software does not know how to interpret explicit markup,
   which reduces readability. The Text Encoding Initiative (TEI) should provide
   clear guidelines for authors in this respect, and PC-based software should be
   designed to allow authors to reduce explicit markup in copies distributed for
   popular use (a rough sketch of such a reduction follows this list).
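As an illustration of that last point only: the sketch below shows, in Python,
how explicit descriptive markup might be reduced to a plain reading copy while
the marked-up source remains the archival form. The tag names are invented for
the example, and a real tool would use a proper SGML parser rather than a
regular expression.

    import re

    # A fragment with explicit descriptive (SGML-style) markup; the tag names
    # here are invented for illustration and are not a sanctioned TEI tag set.
    MARKED_UP = ("<p>The <title>Aeneid</title> opens with the words "
                 "<quote>Arma virumque cano</quote>.</p>")

    def reading_copy(text):
        """Strip explicit markup to produce a plainer copy for popular use.
        (A real tool would use an SGML parser, not a regular expression.)"""
        return re.sub(r"<[^>]+>", "", text)

    print(reading_copy(MARKED_UP))
    # -> The Aeneid opens with the words Arma virumque cano.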
The following submissions represent an attempt to encourage/provoke everyone
to think, discuss, and resolve to do something. HUMANISTS: please contribute
alternate opinions and proposals on these issues. (End Part III)
Robin Cover zrcc1001@smuvm1.BITNET robin@utafll.uta.edu Tel: 214/296-1783