
47th Annual Meeting of the Association for Computational Linguistics

and

4th International Joint Conference on Natural Language Processing of the AFNLP

2-7 Aug 2009

Do Automatic Annotation Techniques Have Any Impact on Supervised Complex Question Answering?

Shafiq R. Joty, Dept. of Computer Science, University of British Columbia, Vancouver, BC, Canada

Yllias Chali and Sadid A. Hasan, Dept. of Computer Science, University of Lethbridge, Lethbridge, AB, Canada

“Given a complex question (topic description) and a collection of relevant documents, the task is to synthesize a fluent, well-organized 250-word summary of the documents that answers the question(s) in the topic”.

Example: Describe steps taken and worldwide reaction prior to the introduction of the Euro on January 1, 1999. Include predictions and expectations reported in the press.
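As a rough illustration of the output constraint only (not the authors' code; the helper name and the pre-ranked input are assumptions), the sketch below greedily packs sentences already ranked by relevance to the question into a 250-word budget.

```python
def build_summary(ranked_sentences, word_budget=250):
    """Greedily pack the highest-ranked sentences into a word budget."""
    summary, used = [], 0
    for sentence in ranked_sentences:
        n_words = len(sentence.split())
        if used + n_words > word_budget:
            continue  # skip sentences that would overflow the budget
        summary.append(sentence)
        used += n_words
    return " ".join(summary)
```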

- Use supervised learning methods.
- Use automatic annotation techniques.

Research Problem and Our Approach

Motivation

A huge amount of annotated (labeled) data is a prerequisite for supervised training. When humans are employed, the whole process becomes time-consuming and expensive. To produce a large set of labeled data, we therefore prefer an automatic annotation strategy.

Automatic Annotation Techniques

- ROUGE Similarity Measures
- Basic Element (BE) Overlap Measure
- Syntactic Similarity Measure
- Semantic Similarity Measure
- Extended String Subsequence Kernel (ESSK)
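A minimal sketch of the general idea behind such annotation, assuming a ROUGE-1-style unigram overlap between each document sentence and a human reference (abstract) summary; sentences scoring above a threshold are labeled as summary-worthy. The function names and the threshold value are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def unigram_recall(sentence, reference):
    """ROUGE-1-style recall: fraction of reference unigrams covered by the sentence."""
    sent_counts = Counter(sentence.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(sent_counts[w], c) for w, c in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

def auto_annotate(sentences, reference, threshold=0.1):
    """Label each sentence 1 (summary-worthy) or 0 using the overlap score."""
    return [(s, 1 if unigram_recall(s, reference) >= threshold else 0)
            for s in sentences]
```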

Supervised Systems

- Support Vector Machines (SVM)
- Conditional Random Fields (CRF)
- Hidden Markov Models (HMM)
- Maximum Entropy (MaxEnt)
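Once sentences carry automatic labels and feature vectors, any off-the-shelf learner can be trained on them. The sketch below uses scikit-learn's LinearSVC and LogisticRegression (a MaxEnt model) as stand-ins for the SVM and MaxEnt systems; this is an assumption for illustration, not the toolkits used in the paper, and the feature matrix is toy data.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression  # logistic regression = MaxEnt

# X: one feature vector per sentence; y: automatic 0/1 labels (toy values)
X = np.array([[0.4, 0.1, 0.7], [0.0, 0.2, 0.1], [0.6, 0.5, 0.9]])
y = np.array([1, 0, 1])

svm = LinearSVC().fit(X, y)               # SVM sentence classifier
maxent = LogisticRegression().fit(X, y)   # MaxEnt counterpart

print(svm.decision_function(X))           # higher score = more summary-worthy
print(maxent.predict_proba(X)[:, 1])      # probability of the "include" class
```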

Feature Space

Query-related features: n-gram overlap, Longest Common Subsequence (LCS), Weighted LCS (WLCS), skip-bigram, exact word overlap, synonym overlap, hypernym/hyponym overlap, gloss overlap, Basic Element (BE) overlap, and syntactic tree similarity measure.

Important features: position of sentences, length of sentences, Named Entity (NE), cue word match, and title match.
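Two of the query-related features can be sketched directly. The snippet below computes exact word overlap and LCS length for a sentence-query pair, assuming simple whitespace tokenization; it is an illustrative sketch, not the feature extractor used in the paper.

```python
def word_overlap(sentence, query):
    """Exact word overlap between sentence and query (Jaccard-style)."""
    s, q = set(sentence.lower().split()), set(query.lower().split())
    return len(s & q) / max(len(s | q), 1)

def lcs_length(sentence, query):
    """Longest Common Subsequence length over word tokens (dynamic programming)."""
    a, b = sentence.lower().split(), query.lower().split()
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, wa in enumerate(a, 1):
        for j, wb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if wa == wb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]
```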

Experimental Results

Conclusion

Semantic similarity annotation works best for the SVM system. ESSK works well for the HMM, CRF, and MaxEnt systems.