... lexical knowledge methods for word sense disambiguation. Computational Linguistics. J. Stetina, S. Kurohashi, and M. Nagao. 1998. General word sense disambiguation method based on a full ... semantically close. For applications such as machine translation, fine-grained disambiguation works well, but for information extraction and some other applications it is overkill, and ... knowledge sources for word sense disambiguation. Computational Linguistics, 18(1):1-30. R. Mihalcea and D.I. Moldovan. 1999. An automatic method for generating sense tagged corpora. In Proceedings...
... Sessions, pages 73–76, Ann Arbor, June 2005. ©2005 Association for Computational Linguistics. SenseRelate::TargetWord – A Generalized Framework for Word Sense Disambiguation. Siddharth Patwardhan, School ... sub-tasks, each of which is represented by a separate module. Each of the sequential sub-tasks or stages accepts data from a previous stage, performs a transformation on the data, and then passes on ... lexical sample format, which is an XML-based format that has been used for both the SENSEVAL-2 and SENSEVAL-3 exercises. A file in this format includes a number of instances, each one made up...
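The staged design described above, where each module consumes the previous stage's output, can be sketched as a simple chain of callables. The stage names and their internals below are hypothetical illustrations, not the actual SenseRelate::TargetWord modules:

```python
# A minimal sketch of a sequential WSD pipeline: each stage is a
# callable that transforms the data and passes it on to the next.
# Stage names and logic are illustrative only.

def parse_instances(raw):
    # Split raw lexical-sample text into one instance per line.
    return [line.strip() for line in raw.splitlines() if line.strip()]

def select_context(instances):
    # Tokenize each instance (a real stage would window around the target).
    return [inst.split() for inst in instances]

def assign_senses(contexts):
    # Placeholder disambiguation step: tag every target with sense 1.
    return [{"context": ctx, "sense": 1} for ctx in contexts]

def run_pipeline(raw, stages):
    data = raw
    for stage in stages:  # each stage feeds the next
        data = stage(data)
    return data

result = run_pipeline("bank of the river\nbank loan approved",
                      [parse_instances, select_context, assign_senses])
print(len(result))  # 2 instances, each carrying a (placeholder) sense tag
```

Because every stage shares the same call convention, stages can be swapped or reordered without touching the driver, which is the main payoff of this modular design.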
... their aligned translations (and probabil- ... algorithm parameters in machine learning of language. Machine Learning, pages 84–95. I. Dagan and A. Itai. 1994. Word sense disambiguation using a second ... state-of-the-art systems for all languages, except for Spanish, where the results are very similar. As all steps are run automatically, this multilingual approach could be an answer for the acquisition ... results compared to the best systems that were evaluated on the SemEval-2010 Cross-Lingual Word Sense Disambiguation task for all five target languages. 1 Introduction. Word Sense Disambiguation (WSD)...
... the two SENSEVAL tasks. This gave a set of 6 nouns for SENSEVAL-2 and 9 nouns for SENSEVAL-3. For each noun, we gathered a maximum of 500 parallel text examples as training data, similar to what ... sampling with incomplete information. Annals of Mathematical Statistics, 26(4). Yee Seng Chan and Hwee Tou Ng. 2005a. Scaling up word sense disambiguation via parallel texts. In Proc. of AAAI05. Yee ... on data which was automatically gathered from the Internet. The authors reported a 14% improvement in accuracy if they have an accurate estimate of the sense priors in the evaluation data and...
... be available for many examples. The problem of data sparseness increases as more knowledge is exploited, and this can cause problems for the machine learning algorithms. A final disadvantage ... Introduction to Machine Translation. Academic Press, Great Britain. Abolfazl K. Lamjiri, Osama El Demerdash, Leila Kosseim. 2004. Simple features for statistical Word Sense Disambiguation. Proceedings ... 1st_prep_right, back). Rule_2. sense(A, chegar) :- has_rel(A, subj, B), has_bigram(A, today, B), has_bag_trans(A, hoje). Rule_3. sense(A, chegar) :- satisfy_restriction(A, [animal, human], [concrete]);...
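Rules such as Rule_2 above test relational features of an instance: a subject relation, a bigram with that subject, and a bag-of-words translation. A minimal sketch of how such a learned rule could be checked against an instance follows; the instance representation and predicate implementations are assumptions for illustration, not the original system's:

```python
# Sketch of applying a learned relational rule such as Rule_2:
# an instance satisfies it if it has a subject relation to some B,
# the bigram (today, B), and "hoje" among its bag-of-words
# translations. Predicate names mirror the rule body; the instance
# encoding is hypothetical.

def has_bigram(inst, word, arg):
    return (word, arg) in inst["bigrams"]

def has_bag_trans(inst, word):
    return word in inst["bag_trans"]

def rule_2(inst):
    # sense(A, chegar) :- has_rel(A, subj, B), has_bigram(A, today, B),
    #                     has_bag_trans(A, hoje).
    for rel, b in inst["rels"]:  # try each binding of B
        if (rel == "subj" and has_bigram(inst, "today", b)
                and has_bag_trans(inst, "hoje")):
            return True
    return False

inst = {"rels": {("subj", "train")},
        "bigrams": {("today", "train")},
        "bag_trans": {"hoje"}}
print(rule_2(inst))  # True: all three conditions hold for B = "train"
```

Note that the shared variable B must be bound consistently across the conjuncts, which is why the sketch iterates over candidate bindings rather than testing each predicate independently.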
... and accuracy improvement is less than 1% after all the available WSJ adaptation examples are added as additional training data. To obtain a clearer picture of the adaptation process, we discard ... in BC and WSJ, average MFS accuracy, average number of BC training, and WSJ adaptation examples per noun. data, and the rest of the WSJ examples are designated as in-domain adaptation data. The ... posteriori (MAP) estimation, and successfully used it for probabilistic context-free grammar domain adaptation (Roark and Bacchiani, 2003) and language model adaptation (Bacchiani and Roark, 2003). Count-merging...
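Count-merging combines out-of-domain and in-domain counts, with one side scaled by a weight before normalization. A minimal sketch of estimating sense priors this way, with an illustrative weight beta and made-up counts (not the paper's values):

```python
# Count-merging sketch: merge out-of-domain (e.g. BC) and in-domain
# (e.g. WSJ) sense counts, weighting the out-of-domain counts by
# beta, then normalize to obtain sense priors. Numbers illustrative.

def merge_priors(out_counts, in_counts, beta):
    senses = set(out_counts) | set(in_counts)
    merged = {s: beta * out_counts.get(s, 0) + in_counts.get(s, 0)
              for s in senses}
    total = sum(merged.values())
    return {s: c / total for s, c in merged.items()}

bc_counts = {"sense1": 80, "sense2": 20}   # out-of-domain counts
wsj_counts = {"sense1": 5, "sense2": 15}   # in-domain adaptation counts
priors = merge_priors(bc_counts, wsj_counts, beta=0.1)
print(priors)  # in-domain evidence dominates when beta is small
```

A small beta down-weights the plentiful out-of-domain counts so that even a handful of in-domain examples can shift the priors toward the target domain's sense distribution.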
... training data so that we can do a fair comparison between the accuracy of the parallel text alignment approach versus the manual sense-tagging approach. After training a WSD classifier for w ... systems and distinguishing senses: New evaluation methods for word sense disambiguation. Natural Language Engineering, 5(2):113-133. David Yarowsky, Silviu Cucerzan, Radu Florian, Charles Schafer, ... However, large-scale, good-quality parallel corpora have recently become available. For example, six English-Chinese parallel corpora are ... GIZA++. For two of the corpora, Hong Kong Hansards and...
... each word, training and test instances tagged with WordNet senses are provided. There are an average of 7.8 senses per target word type. On average 109 training instances per target word are ... Springer-Verlag, New York, 1995. David Yarowsky and Radu Florian. Evaluating sense disambiguation across diverse parameter spaces. Natural Language Engineering, 8(4):293–310, 2002. David Yarowsky, ... and Matsumoto (2001), Isozaki and Kazawa (2002), Mayfield et al. (2003)) including the word sense disambiguation task (e.g., Cabezas et al. (2001)). Given that SVM and KPCA are both kernel methods,...
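What SVM and KPCA share as kernel methods is that both operate on the Gram matrix K[i][j] = k(x_i, x_j) rather than on raw feature vectors, so the same kernel can serve both. A small sketch computing a Gram matrix in plain Python; the degree-2 polynomial kernel and the data points are illustrative choices, not taken from the paper:

```python
# Both SVM and KPCA work from the same object: the kernel (Gram)
# matrix K[i][j] = k(x_i, x_j). A degree-2 polynomial kernel is
# used here purely for illustration.

def poly_kernel(x, y, degree=2, c=1.0):
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def gram_matrix(data, kernel):
    return [[kernel(x, y) for y in data] for x in data]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = gram_matrix(X, poly_kernel)
# A valid kernel matrix is symmetric; an SVM would optimize over K
# directly, while KPCA would center K and take its eigenvectors.
print(K[0][1] == K[1][0])  # True
```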
... singular and plural proper names, and we also did not count as an error the adjectival reading of words which are always written capitalized (e.g. American, Russian, Okinawian, etc.). ... normally act as proper names: even if such a word is observed in a document only as a proper name (usually as part of a multi-word proper name), it is still not safe to mark it as a proper name ... Disambiguation of capitalized words in mixed-case texts has received little attention in the natural language processing and information retrieval communities, but in fact it plays...
... supervised word sense disambiguation system that attempts to disambiguate all content words in a text using WordNet senses. We evaluate the accuracy of SENSELEARNER on several standard sense-annotated ... the training data to disambiguate the words in the test data set. As a result, the algorithm does not need a separate classifier for each word to be disambiguated, but instead it learns global ... supervised algorithms is however limited only to those few words for which sense tagged data is available, and their accuracy is strongly connected to the amount of labeled data available at hand. Instead,...
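The idea of one global model, as opposed to a separate classifier per word, rests on defining features relative to the target position so that all content words share a single feature space. A minimal sketch under that assumption; the feature set and the trivial memorizing learner are illustrative stand-ins, not SenseLearner's actual models:

```python
# Sketch of the "one global model" idea: features are defined
# relative to the target position (its lemma, neighboring words),
# so a single model covers all content words instead of one
# classifier per word. Features and learner are illustrative only.

def features(words, i):
    # Word-position-relative contextual features around position i.
    left = words[i - 1] if i > 0 else "<s>"
    right = words[i + 1] if i < len(words) - 1 else "</s>"
    return (words[i], left, right)

def train(examples):
    # examples: list of (words, target_index, sense); one shared table
    # serves every target word, since the feature space is common.
    return {features(w, i): s for w, i, s in examples}

def predict(model, words, i):
    return model.get(features(words, i), "unknown")

model = train([(["river", "bank", "erodes"], 1, "bank%geo"),
               (["the", "bank", "lends"], 1, "bank%fin")])
print(predict(model, ["river", "bank", "erodes"], 1))  # bank%geo
```

The point of the sketch is only the shared feature space: any real learner plugged in behind `train` would generalize across target words instead of memorizing contexts.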