Semantic Analysis

I recently developed an overview of the tasks in SemEval (the series of semantic evaluations conducted under the auspices of the ACL SIGLEX). The nice thing about this exercise was that it put semantic analysis into a larger perspective, making it clearer where things are lacking. The overview groups the tasks into dictionary issues and issues involving how sentence and textual elements fit together, the fruits of which are then available for application areas. After the first Senseval (the precursor to SemEval) was conducted, with a focus on word-sense disambiguation (WSD), the question was raised as to what purpose WSD served. The same question can be asked about all the other tasks. Attempting to answer this question may help to identify further tasks needed in SemEval, and may also help to identify how the various pieces of information may be used in different application areas. In what follows, I offer some opinions, particularly trying to identify other research that is relevant to the SemEval tasks.

In the area of dictionary issues, there are several holes. The main hole is that dictionary entries still do not contain the information necessary to enable disambiguation among multiple senses. Over the past 20 years, corpus linguistics has become the sine qua non among lexicographers. This has revolutionized the construction of dictionary entries, primarily owing to the reliance on corpus evidence rather than made-up examples. With this use of corpus evidence, entries also increasingly attempt to characterize the constructions in which the entries appear (see The Oxford Guide to Practical Lexicography, Atkins & Rundell, 2008). However, these characterizations seem only to contain syntactic information and little semantic information. There are some selectional restrictions or selectional preferences, but these are only minimal. The work of Patrick Hanks, in corpus pattern analysis, attempts to rectify this shortcoming, and indicates how much work needs to be done in this area. Corpus evidence is also being used increasingly to characterize the collocational patterns of entries (see particularly DANTE). As evidence that all this is still insufficient to enable disambiguation, note that these characterizations do not yet fully incorporate the features that have been found useful in various WSD systems, particularly supervised systems, where investigators have tried a considerable panoply of textual attributes. The work of these researchers has not yet been incorporated into lexical databases. A second major hole in dictionaries is that entries do not contain a representation of the content of a sense that can be used to build a larger representation of a text. This hole leads into the second area on which SemEval tasks are focused.
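To make the "panoply of textual attributes" concrete, here is a minimal sketch of the kinds of features supervised WSD systems typically extract: local collocations (ordered adjacent words) and a bag-of-words context window. The sentence, feature names, and window sizes are illustrative, not drawn from any particular system or corpus.

```python
def wsd_features(tokens, target_index, window=3):
    """Extract simple disambiguation features for the word at target_index."""
    features = {}
    # Local collocations: immediately adjacent words, position-sensitive.
    for offset in (-2, -1, 1, 2):
        i = target_index + offset
        if 0 <= i < len(tokens):
            features[f"w[{offset}]"] = tokens[i].lower()
    # Bag-of-words context: unordered words within a wider window.
    lo = max(0, target_index - window)
    hi = min(len(tokens), target_index + window + 1)
    for i in range(lo, hi):
        if i != target_index:
            features[f"bow({tokens[i].lower()})"] = 1
    return features

tokens = "He deposited the check at the bank yesterday".split()
feats = wsd_features(tokens, tokens.index("bank"))
```

A supervised system would feed such feature dictionaries, paired with sense labels, to an off-the-shelf classifier; the point here is only the shape of the evidence, which current dictionary entries do not record.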

Many SemEval tasks attempt to characterize how sentence and textual elements fit together, once some initial syntactic processing has been completed. In these tasks, an attempt is made to characterize chunks of text in a sentence and beyond (e.g., in “full texts”). These tasks include such things as semantic role labeling, semantic relation analysis, and coreference resolution. Now, what we’d really like here is a contribution from the lexicon, i.e., a representation that can be plugged into these analyses. A beginning has been made with frame semantics, via the FrameNet project, where many lexical units trigger a frame consisting of frame elements, and where the frame definition and frame elements may be viewed as definitional in form. With FrameNet’s foray into full-text analysis, a beginning has also been made on intersentential relations. Another useful formalism is the lexicon development environment for use with unification-based linguistic formalisms, e.g., the LKB system, which incorporates lexical items in HPSG systems. Rhetorical Structure Theory provides another way of examining a text in its totality, but this theory has not been developed much of late. Importantly, as John Sowa points out in The Role of Logic and Ontology in Language and Reasoning,

Forty years of research in logic, linguistics, and AI has not produced a successful implementation: no computer system based on that approach can read one page of a high-school textbook and use the results to answer the questions and solve the problems as well as a B student.
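The frame-semantic idea above can be illustrated schematically: a lexical unit evokes a frame, and the frame’s elements are filled from the analyzed sentence. The frame and role names below follow FrameNet conventions, but the lexicon table and filler logic are simplified inventions, not the FrameNet database or API.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    elements: dict = field(default_factory=dict)  # frame element -> filler text

# Hypothetical mini-lexicon: lexical unit -> (evoked frame, core frame elements)
LEXICON = {
    "buy": ("Commerce_buy", ["Buyer", "Goods", "Seller"]),
    "sell": ("Commerce_sell", ["Seller", "Goods", "Buyer"]),
}

def evoke(lexical_unit, fillers):
    """Instantiate the frame evoked by a lexical unit, keeping only its core FEs."""
    frame_name, core = LEXICON[lexical_unit]
    return Frame(frame_name, {fe: fillers[fe] for fe in core if fe in fillers})

f = evoke("buy", {"Buyer": "Kim", "Goods": "a car", "Time": "yesterday"})
```

The point of such a structure is exactly the “pluggable” lexical contribution argued for above: a frame instance is a piece of sense content that downstream analyses (role labeling, coreference) could compose into a larger representation of the text.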

Semantic analysis plays a large role in what may be considered its ultimate application areas, such as information extraction, question answering, document summarization, machine translation, paraphrasing, and recognizing textual entailment (RTE). The contributions of semantic analysis are difficult to assess in these tasks. Each has developed its own methods, and there doesn’t seem to be any overarching analysis that identifies the specific contribution of semantic components. In many of these areas, investigators have begun to perform ablation analyses that seek to identify the relative contributions of their components. In RTE, the situation has become somewhat dire: investigators do not have a clear idea of how results are being achieved. Sammons et al. (2010), in “Ask Not What Textual Entailment Can Do for You”, have proposed a community-wide effort to annotate RTE examples with the inference steps required to reach a decision about the example. This indicates the scale of the effort.
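The ablation analyses mentioned above follow a simple recipe: score the full system, then re-score it with each component disabled, attributing the drop to that component. A schematic sketch, with made-up components and a made-up scoring function:

```python
def ablation(components, score_fn):
    """Return each component's contribution: full score minus score without it."""
    full = score_fn(components)
    return {c: full - score_fn([x for x in components if x != c])
            for c in components}

# Hypothetical scoring: pretend each semantic component adds a fixed
# amount of accuracy on top of a syntactic baseline.
WEIGHTS = {"wsd": 0.03, "srl": 0.05, "coref": 0.02}

def toy_score(active):
    return 0.60 + sum(WEIGHTS[c] for c in active)

contributions = ablation(list(WEIGHTS), toy_score)
```

Real systems, of course, have interacting components, so single-component ablations understate or overstate contributions; that is precisely why the field lacks the overarching analysis lamented above.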


1 Comment »

 
  • Deniz Yuret says:

    You make a distinction between dictionary issues and characterization issues whose definitions were not very clear to me. In my simplified view of the language world there are words, phrases and what they refer to on the one hand, and relations between them on the other. The first involves figuring out named entities, word sense, and co-reference; the second involves figuring out syntax, semantic roles and relations etc. At first I thought your dictionary / characterization dichotomy was similar but I saw co-reference listed in the second part which destroyed my analogy. Could you explain your distinction? Are dictionary issues only WSD and the rest classified under characterization issues?

    It really bothers me when people from other fields question the value of semantic analysis. If the ultimate goal is natural language understanding, semantic analysis is an obvious necessity. Language is always ABOUT something and semantics is the science of understanding what that something is! The way we are doing it right now may be wrong or incompetent. The limited applications of today, which take our current state of technology as given, may be able to make limited use of extra semantic information. These facts should not discourage today’s students of semantics because we really really need it.

    Having said that, I think some research done today under the name of semantics may be misguided. In particular I am suspicious of work that converts one representation to another without adding the possibility of any new inferences. Since nobody knows what a good semantic representation or a meaning representation looks like, I think we should focus instead on what such representations allow us to infer. I like RTE as a representation-agnostic task that tests for inference ability. The problem, as you point out, is that it is too loosely defined. Thus my proposal for TARGETED textual entailment tasks:

    http://www.denizyuret.com/2007/06/targeted-textual-entailments-proposal.html

    These would be little tasks that focus on certain competences (like understanding of time or space), rather than certain applications (like QA). This may be an alternative to the Sammons proposal or complement it.

 
