Humanist Discussion Group, Vol. 17, No. 65.
Centre for Computing in the Humanities, King's College London
www.kcl.ac.uk/humanities/cch/humanist/
Submit to: humanist@princeton.edu
Date: Sat, 07 Jun 2003 07:14:26 +0100
From: Mark Stevenson <M.Stevenson@DCS.SHEF.AC.UK>
Subject: Computer Speech and Language Special Issue on Word Sense Disambiguation
Second Call for Papers:
Journal of Computer Speech and Language
Special Issue on WORD SENSE DISAMBIGUATION
Guest editors:
Judita Preiss, Judita.Preiss@cl.cam.ac.uk
Mark Stevenson, M.Stevenson@dcs.shef.ac.uk
The process of automatically determining the meanings of words, word
sense disambiguation (WSD), is an important stage in language
understanding. It has been shown to be useful for many natural language
processing applications, including machine translation, information
retrieval (mono- and cross-lingual), corpus analysis, summarization and
document navigation.
The usefulness of WSD has been acknowledged since the 1950s, and the
field has recently enjoyed a resurgence of interest, including the
creation of SENSEVAL, an evaluation exercise, run twice to date, that
allows a basic precision/recall comparison of participating systems.
The current availability of large corpora and powerful computing
resources has made the exploration of machine learning and statistical
methods possible, in contrast to the majority of early approaches,
which relied on hand-crafted disambiguation rules.
This special issue of Computer Speech and Language, due for publication
in 2004, is intended to describe the current state of the
art in word sense disambiguation. Papers are invited on all aspects of
WSD research, and especially on:
* Combinations of methods and knowledge sources.
Which methods or knowledge sources complement each other, and which
provide similar disambiguation information? How should they be combined?
Do better disambiguation results justify the extra cost of producing
systems which combine multiple techniques or use multiple knowledge
sources? Can any method or knowledge source be determined to be better
or worse than another? (A minimal voting combiner is sketched after
this list.)
* Evaluation of WSD systems.
Which metrics are most informative, and would new ones be useful? Can
WSD be evaluated in terms of its effect on another language processing
task, for example parsing? Can evaluations using different data sets
(corpora and lexical resources) be compared? Can the cost of producing
evaluation data be reduced through the use of automatic methods? (A
basic precision/recall scorer is sketched after this list.)
* Sense distinctions and sense inventories.
How do these affect WSD? How does the granularity of the lexicon affect
the difficulty of the WSD task? Are some types of sense distinction
harder than others to recognise in text? What can be gained from
combining sense inventories, and how can this be done?
* The effect of WSD on applications.
To what extent does WSD help applications such as machine translation or
text retrieval? What kind of disambiguation is most useful for these
applications? What is the effect when the disambiguation algorithm makes
mistakes?
* Minimising the need for hand-tagged data.
Hand-tagged text is expensive and difficult to obtain, while untagged
text is plentiful and, effectively, limitless. What techniques can be
used to exploit untagged text? Would weakly or semi-supervised learning
algorithms be useful? What use can be made of parallel text? Can
untagged text be made as useful as disambiguated text? (A generic
self-training loop is sketched after this list.)
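To make the combination question concrete, here is a minimal sketch (in
Python) of majority voting over the outputs of several
single-knowledge-source disambiguators. The method functions and sense
labels are hypothetical; this illustrates one simple combination
scheme, not any particular published system.

    from collections import Counter

    def combine_by_vote(word, context, methods):
        # Majority vote over the sense predictions of several WSD
        # methods. `methods` is a list of callables, each mapping
        # (word, context) to a sense label; ties fall to the label
        # encountered first. All names here are illustrative.
        votes = [method(word, context) for method in methods]
        sense, _count = Counter(votes).most_common(1)[0]
        return sense

More sophisticated combinations (weighted voting, or stacking a
classifier over the individual outputs) fit the same interface.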
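The SENSEVAL-style precision/recall comparison can likewise be sketched
in a few lines. The scorer below assumes a gold-standard mapping from
instance identifiers to sense labels and a system output that may leave
some instances unanswered; precision is measured over the instances
attempted, recall over all instances. The data are invented for
illustration.

    def score_wsd(gold, system):
        # gold:   dict mapping instance id -> correct sense label
        # system: dict mapping instance id -> predicted sense label
        #         (instances the system declines to tag are absent)
        attempted = [i for i in system if i in gold]
        correct = sum(1 for i in attempted if system[i] == gold[i])
        precision = correct / len(attempted) if attempted else 0.0
        recall = correct / len(gold) if gold else 0.0
        return precision, recall

    gold = {"art.1": "art/1", "art.2": "art/2",
            "bar.1": "bar/3", "bar.2": "bar/1"}
    system = {"art.1": "art/1", "art.2": "art/1", "bar.1": "bar/3"}
    print(score_wsd(gold, system))   # 2/3 precision, 2/4 recall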
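Finally, on learning from untagged text: one well-known family of
techniques is bootstrapping in the style of Yarowsky (1995), which
trains a classifier on a small hand-tagged seed set, labels the
untagged pool, and folds the most confident decisions back into the
training data. The loop below is a generic self-training sketch with
hypothetical train/classify interfaces, not a reproduction of any
published algorithm.

    def self_train(seed, untagged, train, classify,
                   threshold=0.9, max_rounds=5):
        # seed:     list of (context, sense) pairs (hand-tagged)
        # untagged: list of contexts without sense labels
        # train:    callable building a classifier from labelled pairs
        # classify: callable (clf, context) -> (sense, confidence)
        labelled, pool = list(seed), list(untagged)
        for _ in range(max_rounds):
            clf = train(labelled)
            confident, remaining = [], []
            for context in pool:
                sense, conf = classify(clf, context)
                if conf >= threshold:
                    confident.append((context, sense))
                else:
                    remaining.append(context)
            if not confident:
                break   # nothing confidently labelled; stop early
            labelled.extend(confident)
            pool = remaining
        return train(labelled)

Whether such a loop can make untagged text as useful as disambiguated
text is exactly the open question raised above.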
Submission Information
Initial Submission Date: 1 October 2003
All submissions will be subject to the normal peer review process for
this journal.
Submissions in electronic form (PDF) are strongly preferred and must
conform to the Computer Speech and Language specifications, which are
available at:
http://authors.elsevier.com/journal/csl
Any initial queries should be addressed to Judita.Preiss@cl.cam.ac.uk