the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word.
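The gloss-overlap idea can be sketched as a simplified Lesk procedure. The glosses and sense labels below are toy stand-ins invented for illustration, not entries from a real dictionary.

```python
# Simplified Lesk: pick the sense whose gloss shares the most words
# with the target word's context. Glosses here are toy stand-ins.

def simplified_lesk(word, context, glosses):
    """glosses: dict mapping sense label -> defining gloss string."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in glosses.items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Toy sense inventory for "cone" (illustrative only).
glosses = {
    "cone#fruit": "seed bearing fruit of an evergreen tree such as a pine",
    "cone#shape": "a solid shape with a circular base tapering to a point",
}
print(simplified_lesk("cone", "a pine cone fell from the evergreen tree", glosses))
# cone#fruit
```

Real implementations normalize the text (stemming, stopword removal) and count overlaps in extended glosses, but the selection rule is the same.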
WSD systems are normally tested by comparing their results on a task against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has proven far more difficult. While users can memorize all the possible parts of speech a word can take, it is often impossible for individuals to memorize all the senses a word can take. Moreover, humans do not agree on the task at hand: given a list of senses and sentences, humans will not always agree on which sense a word belongs to.
(i.e., a short defining gloss and one or more usage examples) using a pre-trained word-embedding model. These centroids are later used to select the word sense with the highest similarity of a target word to its immediately adjacent neighbors (i.e., predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can use word sense embeddings to repeat its disambiguation process iteratively.
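The centroid-based selection step can be sketched as follows, in the spirit of MSSA: each sense is represented by the centroid of its gloss's word vectors, and the sense closest to the context's centroid wins. The two-dimensional "embeddings", glosses, and sense labels below are invented toy values, not a trained model or real WordNet data.

```python
# Sketch of centroid-based sense selection: compare the centroid of each
# sense's gloss words against the centroid of the context words.
import math

# Toy 2-d "word embeddings" (illustrative values only).
EMB = {
    "money": (1.0, 0.1), "deposit": (0.9, 0.2), "institution": (0.8, 0.0),
    "river": (0.0, 1.0), "water": (0.1, 0.9), "land": (0.2, 0.8),
}

def centroid(words):
    vecs = [EMB[w] for w in words if w in EMB]
    n = len(vecs)
    return tuple(sum(v[i] for v in vecs) / n for i in range(2))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy glosses for two senses of "bank".
sense_glosses = {
    "bank#finance": ["money", "deposit", "institution"],
    "bank#river": ["water", "land"],
}
# Immediate neighbors of the ambiguous occurrence.
context = ["river", "water"]
ctx_vec = centroid(context)
best = max(sense_glosses, key=lambda s: cosine(centroid(sense_glosses[s]), ctx_vec))
print(best)  # bank#river
```

With real pre-trained vectors the same code applies unchanged; only `EMB` and the glosses would come from the embedding model and the lexical database.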
) objects as nodes and the relationships between nodes as edges. The relations (edges) in AutoExtend can express either the addition or the similarity between its nodes. The former captures the intuition behind the offset calculus, while the latter defines the similarity between two nodes. In MSSA, an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense using a pre-trained word-embedding model and WordNet.
) can also assist unsupervised systems in mapping words and their senses as dictionaries. Some techniques that combine lexical databases and word embeddings are presented in AutoExtend and Most Suitable Sense Annotation (MSSA). In AutoExtend, a method is presented that decouples an object's input representation into its properties, such as words and their word senses. AutoExtend uses a graph structure to map words (e.g. text) and non-word (e.g.
) has become one of the most fundamental blocks in several NLP systems. Even though most traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they can still be used to improve WSD. A simple approach to employing pre-computed word embeddings to represent word senses is to compute the centroids of sense clusters. In addition to word-embedding techniques, lexical databases (e.g.,
) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm, always choosing the most frequent sense, was 51.4% and 57%, respectively.
, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, with each new classifier trained on a successively larger training corpus, until the whole corpus is consumed or a given maximum number of iterations is reached.
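The seed-and-grow loop described here can be sketched as follows. The collocation-count "classifier", the confidence threshold, and all the examples are invented for illustration; a real system would use a proper supervised learner.

```python
# Toy self-training loop for the bootstrapping scheme: train on seeds,
# label the untagged pool, keep only confident labels, retrain, repeat.
from collections import Counter, defaultdict

def train(labeled):
    """Count, per sense, how often each context word co-occurs with it."""
    counts = defaultdict(Counter)
    for context, sense in labeled:
        for w in context.split():
            counts[sense][w] += 1
    return counts

def classify(model, context):
    """Return (best_sense, confidence margin over the runner-up)."""
    scores = {s: sum(c[w] for w in context.split()) for s, c in model.items()}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    margin = ranked[0][1] - (ranked[1][1] if len(ranked) > 1 else 0)
    return ranked[0][0], margin

seeds = [("play the bass guitar", "bass#music"),
         ("caught a bass fish", "bass#fish")]
pool = ["bass guitar solo", "fresh bass fish dinner", "loud bass music"]

labeled, threshold = list(seeds), 1
for _ in range(3):                      # fixed maximum number of iterations
    model = train(labeled)
    newly = [(ctx, s) for ctx in pool
             for s, m in [classify(model, ctx)] if m >= threshold]
    labeled = list(seeds) + newly       # re-label the pool each round
print(newly)
```

Note how the genuinely ambiguous context ("loud bass music") never clears the confidence margin and is left untagged, which is exactly the conservatism the method relies on.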
(2007). The objective of these competitions is to organize different lectures, prepare and hand-annotate corpora for testing systems, and perform comparative evaluations of WSD systems on several kinds of tasks, including all-words and lexical-sample WSD for different languages and, more recently, new tasks such as
– was proposed as a possible solution to the sense discreteness problem. The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness).
The Lesk algorithm is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with
Lexicographers frequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a
A task-independent sense inventory is not a coherent concept: each task requires its own division of word meaning into senses relevant to the task. Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target
Both WSD and part-of-speech tagging involve disambiguating words, i.e., tagging them with labels. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word
Part-of-speech tagging and sense tagging have proven to be very closely related, with each potentially imposing constraints upon the other. The question of whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently scientists have inclined toward testing them separately (e.g. in the
of language examples is also required). The WSD task has two variants: the "lexical sample" task (disambiguating the occurrences of a small sample of target words selected in advance) and the "all words" task (disambiguating all the words in a running text). The "all words" task is generally considered a
Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be
to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web
will provide different divisions of words into senses. Some researchers have suggested choosing a particular dictionary, and using its set of senses to deal with this issue. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones.
have been shown to be the most successful approaches, to date, probably because they can cope with the high-dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck since they rely on substantial amounts of manually sense-tagged
Shallow approaches do not try to understand the text, but instead consider the surrounding words, using rules such as: if bass occurs with words like sea or fishing nearby, it is probably the fish sense. Such rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives
, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed in only 85% of word occurrences. Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings.
Margaret Masterman and her colleagues, at the Cambridge Language Research Unit in England, in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads" as indicators of topics, and looked for repetitions in text using a set-intersection algorithm. It was not very
or discrimination. Then, new occurrences of the word can be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult since the induced senses must be mapped to a known dictionary of word senses. If a
evaluation task is also focused on WSD across two or more languages simultaneously. Unlike the Multilingual WSD tasks, instead of providing manually sense-annotated examples for each sense of a polysemous noun, the sense inventory is built up on the basis of parallel corpora, e.g. the Europarl corpus.
The Yarowsky algorithm was an early example of such an algorithm. It uses the 'one sense per collocation' and 'one sense per discourse' properties of human languages for word sense disambiguation: from observation, words tend to exhibit only one sense in most of a given discourse and in a given collocation.
more realistic form of evaluation, but the corpus is more expensive to produce because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances for the same target word.
The bootstrapping approach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial classifier
Comparing and evaluating different WSD systems is extremely difficult, because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns, most systems were assessed on in-house, often small-scale,
research of the early days of AI have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods, or even to outperform them on specific domains. Recently, it has been reported that simple
may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, with the state of the art around 96% accuracy or better, compared to less than 75% accuracy in word sense disambiguation with
is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful
Classical WSD for other languages uses the respective WordNet of each language as its sense inventory, together with sense-annotated corpora tagged in that language. Researchers often also tap the SemCor corpus and aligned bitexts with English as its
– that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and in a retrieved document; which sense that is, is unimportant.
as a multilingual sense inventory. It evolved from the Translation WSD evaluation tasks that took place in Senseval-2. A popular approach is to carry out monolingual WSD and then map the source-language senses into the corresponding target-word
These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format outside very limited domains. Additionally, due to the long tradition in
, etc. The systems submitted for evaluation in these competitions usually integrate different techniques and often combine supervised and knowledge-based methods (especially to avoid poor performance when training examples are lacking).
The use of selectional preferences (or selectional restrictions) is also useful: for example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., it is not a musical instrument).
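A selectional-preference check can be sketched as a simple filter: the verb's preferred object class rules out senses that do not fit. The classes, preferences, and sense labels below are hand-written illustrations, not derived from a corpus.

```python
# Minimal sketch of selectional preferences: a verb constrains the
# semantic class of its object, which filters the candidate senses.

VERB_OBJECT_PREF = {"cook": "food", "play": "instrument"}  # toy preferences
SENSE_CLASS = {
    "bass#fish": "food",
    "bass#music": "instrument",
}

def admissible_senses(verb, noun_senses):
    wanted = VERB_OBJECT_PREF.get(verb)
    return [s for s in noun_senses if SENSE_CLASS[s] == wanted]

print(admissible_senses("cook", ["bass#fish", "bass#music"]))  # ['bass#fish']
```

In practice the preferences are learned from parsed corpora rather than listed by hand, but the filtering step looks the same.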
implement simple and robust IR techniques that can successfully mine the Web for information to use in WSD. The historic lack of training data has spurred the appearance of some new algorithms and techniques, as described in
In recent years, the choice of WSD evaluation tasks has grown, and the criteria for evaluating WSD have changed drastically depending on the variant of the WSD evaluation task. The variety of WSD tasks is enumerated below:
A lack of lexical resources in Hindi has hindered the performance of supervised models of WSD, while the unsupervised models suffer due to extensive morphology. A possible solution to this problem is the design of a WSD model by means of
, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best.
. In order to test one's algorithm, developers should spend their time annotating all word occurrences. And comparing methods, even on the same corpus, is not valid if there are different sense inventories.
, of trying such approaches in terms of coded knowledge, and in some cases it can be hard to distinguish between linguistic knowledge and world knowledge. The first attempt was that by
depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the
from
knowledge to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting.
WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics.
word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to the French
, to acquire lexical information automatically. WSD has been traditionally understood as an intermediate language engineering technology which could improve applications such as
(OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based.
UKB: Graph Base WSD, a collection of programs for performing graph-based word sense disambiguation and lexical similarity/relatedness using a pre-existing Lexical Knowledge Base
search result clustering by increasing the quality of result clusters and the degree of diversification of result lists. It is hoped that unsupervised learning will overcome the
In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques.
is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by
Semi-supervised or minimally supervised methods: These make use of a secondary source of knowledge such as a small annotated corpus as seed data in a bootstrapping process, or a word-aligned bilingual corpus.
' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded, they were prone to a knowledge acquisition bottleneck.
information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains.
As technology evolves, word sense disambiguation (WSD) tasks grow in different flavors, towards various research directions and for more languages:
Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-aligned bilingual
. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art.
Unsupervised methods: These eschew (almost) completely external information and work directly from raw unannotated corpora. These methods are also known under the name of
word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable in
sets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include
successful, but had strong relationships to later work, especially
Yarowsky's machine learning optimisation of a thesaurus method in the 1990s.
. For each context window, MSSA calculates the centroid of each word sense definition by averaging the word vectors of its words in WordNet's glosses
WordNet::SenseRelate, a project that includes free, open source systems for word sense disambiguation and lexical sample sense disambiguation
are deemed unnecessary). Probably every machine learning algorithm has been applied to WSD, including associated techniques such as
In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, starting with
evaluation tasks focused on WSD across two or more languages simultaneously, using their respective WordNets as the sense inventories or
distinctions, which again is why research on coarse-grained distinctions has been put to the test in recent WSD evaluation exercises.
Bar-Hillel (1960) argued that WSD could not be solved by "electronic computer" because of the need, in general, to model all world knowledge.
BabelNet API, a Java API for knowledge-based multilingual word sense disambiguation in six different languages using the BabelNet
475:, perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base. Also, automatically transferring
Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources,
Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (
Given that natural language requires reflection of neurological reality, as shaped by the abilities provided by the brain's
methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence,
has paved the way for several supervised methods which have been shown to produce higher accuracy in disambiguating nouns.
The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses,
data, consisting of polysemous words and the sentences they occurred in; then WSD is performed on a different
In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized.
rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases.
, as the decisions of lexicographers are usually driven by other considerations. In 2009, a task – named
Babelfy, a unified state-of-the-art system for multilingual Word Sense Disambiguation and Entity Linking
Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation. Later,
corpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system.
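The idea of reading sense distinctions off a word-aligned bitext can be sketched as follows: each distinct translation of an ambiguous word serves as an induced sense label. The aligned pairs below are invented for illustration.

```python
# Sketch of inducing sense distinctions from a word-aligned bitext:
# distinct translations of an ambiguous word act as sense labels.
from collections import defaultdict

# Toy word-aligned examples: (English word, aligned French word, context).
aligned = [
    ("bank", "banque", "deposit money at the bank"),
    ("bank", "rive", "the bank of the river"),
    ("bank", "banque", "the bank raised interest rates"),
]

senses = defaultdict(list)
for en_word, fr_translation, context in aligned:
    senses[fr_translation].append(context)   # translation = induced sense

for label, contexts in sorted(senses.items()):
    print(label, len(contexts))
# banque 2
# rive 1
```

The contexts grouped under each translation then form (noisy) training data for a monolingual classifier, which is what makes the scheme semi-supervised.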
content words around each word to be disambiguated in the corpus, and statistically analyzing those
. These figures are typical for English, and may be very different from those for other languages.
as its sense inventory, and the primary classification input is normally based on the SemCor corpus.
The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem.
) is an international word sense disambiguation competition, held every three years since 1998:
70:, computer science has had a long-term challenge in developing the ability in computers to do
Dictionary- and knowledge-based methods: These rely primarily on dictionaries, thesauri, and lexical
One problem with word sense disambiguation is deciding what the senses are, as different
Other semi-supervised techniques use large quantities of untagged corpora to provide
language-independent NLU combining Patom Theory and RRG (Role and Reference Grammar)
Because of the lack of training data, many word sense disambiguation algorithms use
surrounding words. Two shallow approaches used to train and then disambiguate are
competitions parts of speech are provided as input for the text to disambiguate).
Representing words considering their context through fixed-size dense vectors (
An alternative to the use of the definitions is to consider general word-sense relatedness and to compute the semantic similarity
There are two main approaches to WSD – deep approaches and shallow approaches.
level (e.g., pen as writing instrument or enclosure), but go down one level to
" is slippery and controversial. Most people can agree on distinctions at the
, a multilingual encyclopedic dictionary, has been used for multilingual WSD.
pyWSD, Python implementations of word sense disambiguation (WSD) technologies
evaluation tasks use WordNet as the sense inventory and are largely based on
superior results in practice, due to the computer's limited world knowledge.
of each pair of word senses based on a given lexical knowledge base such as WordNet.
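One classical relatedness measure of this kind is path-based similarity: the shorter the path between two senses in the knowledge base's IS-A hierarchy, the more similar they are. The sketch below runs on a tiny hand-written hypernym graph standing in for a real lexical knowledge base; the edges and the 1/(1 + distance) scoring are illustrative choices.

```python
# Path-based sense similarity over a toy hypernym hierarchy:
# similarity = 1 / (1 + length of the shortest path between senses).
from collections import deque

# Undirected toy IS-A edges, invented for illustration.
EDGES = {
    "dog": ["canine"], "canine": ["dog", "carnivore"],
    "cat": ["feline"], "feline": ["cat", "carnivore"],
    "carnivore": ["canine", "feline"],
}

def shortest_path(a, b):
    """Breadth-first search for the shortest path length from a to b."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # disconnected

def similarity(a, b):
    d = shortest_path(a, b)
    return 0.0 if d is None else 1 / (1 + d)

print(similarity("dog", "cat"))  # 0.2  (path length 4)
```

With a real knowledge base the graph is simply much larger; the search and the scoring stay the same.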
for computer performance. Human performance, however, is much better on
3468:
1614:. Berlin, Germany: Association for Computational Linguistics: 897–907.
Iacobacci, Ignacio; Pilehvar, Mohammad Taher; Navigli, Roberto (2016).
as a reference sense inventory for English. WordNet is a computational
Unsupervised domain relevance estimation for word sense disambiguation
Unsupervised sense disambiguation using bilingual probabilistic models
One of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically.
Computational Linguistics Special Issue on Word Sense Disambiguation
2972:. Proc. of the 14th conference on Computational linguistics. COLING.
Knowledge-rich Word Sense Disambiguation rivaling supervised systems
An unsupervised method for word sense tagging using parallel corpora
corpora for training, which are laborious and expensive to create.
Unsupervised word sense disambiguation rivaling supervised methods
Butnaru, Andrei; Ionescu, Radu Tudor; Hristea, Florentina (2017).
1658:"Unsupervised Most Frequent Sense Detection using Word Embeddings"
Entity Linking meets Word Sense Disambiguation: a Unified Approach
is a combined task evaluation where the sense inventory is first
SemEval-2007 Task 17: English lexical sample, SRL and all words
1800:"AutoExtend: Combining Word Embeddings with Semantic Resources"
1608:"Embeddings for Word Sense Disambiguation: An Evaluation Study"
IEEE Transactions on Pattern Analysis and Machine Intelligence
IEEE Transactions on Pattern Analysis and Machine Intelligence
Transactions of the Association for Computational Linguistics
Inducing Word Senses to Improve Web Search Result Clustering
SemEval-2013 Task 12: Multilingual Word Sense Disambiguation
SemEval-2010 task 3: cross-lingual word sense disambiguation
bottleneck because they are not dependent on manual effort.
SemEval-2007 Task 07: Coarse-Grained English All-Words Task
McCarthy, D.; Koeling, R.; Weeds, J.; Carroll, J. (2007).
classification with the manually sense annotated corpora:
388:: These make use of sense-annotated corpora to train from.
Deep approaches presume access to a comprehensive body of
to specify the senses which are to be disambiguated and a
Wiley Interdisciplinary Reviews: Computational Statistics
Almost all these approaches work by defining a window of n content words around each word to be disambiguated, and statistically analyzing those n surrounding words.
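The window-based setup can be sketched as follows; `context_window` is a hypothetical helper extracting the n tokens on each side of a target token:

```python
def context_window(tokens, index, n):
    """Return the n tokens on each side of tokens[index], used as features."""
    left = tokens[max(0, index - n):index]
    right = tokens[index + 1:index + 1 + n]
    return left + right

tokens = "the bank can refuse the loan".split()
print(context_window(tokens, 1, 2))  # → ['the', 'can', 'refuse']
```

The words in the window (optionally with positions or part-of-speech tags) then serve as the feature vector fed to whatever classifier or similarity measure the approach uses.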
3112:"Introduction to the special issue on the Web as corpus"
Pradhan, S.; Loper, E.; Dligach, D.; Palmer, M. (2007).
Different sense granularities for different applications
Bhattacharya, Indrajit, Lise Getoor, and Yoshua Bengio.
Other approaches differ in their methods:
Most research in the field of WSD is performed by using
By the 1980s large-scale lexical resources, such as the
Snow, R.; Prakash, S.; Jurafsky, D.; Ng, A. Y. (2007).
Using Knowledge for Automatic Word Sense Disambiguation
Scaling up word sense disambiguation via parallel texts
738:(IR). In this case, however, the reverse is also true:
Oxford Advanced Learner's Dictionary of Current English
Foundations of Statistical Natural Language Processing
Word Sense Disambiguation: Algorithms and Applications
Word Sense Disambiguation: Algorithms and Applications
Word Sense Disambiguation: Algorithms and Applications
As human performance serves as the standard, it is an
2911:
2605:"Unsupervised acquisition of predominant word senses"
2078:
1701:
1302:
1278:
1217:
19:"Disambiguation" redirects here. For other uses, see
16:
Identification of which sense of a word is being used
2959:
Electric Words: dictionaries, computers and meanings
2390:
Agirre, E.; Lopez de Lacalle, A.; Soroa, A. (2009).
964:
535:, which allows both labeled and unlabeled data. The
2489:(Tech. note). Brighton, UK: University of Brighton.
1235:
1175:
1088:
174:
3146:Manning, Christopher D.; Schütze, Hinrich (1999).
2956:
2843:Palmer, M.; Babko-Malaya, O.; Dang, H. T. (2004).
2781:Navigli, R.; Litkowski, K.; Hargraves, O. (2007).
2691:Determining word sense dominance using a thesaurus
2551:An information retrieval approach to sense ranking
2520:Gliozzo, A.; Magnini, B.; Strapparava, C. (2004).
1798:Rothe, Sascha; Schütze, Hinrich (September 2017).
1296:
3047:"Word sense disambiguation: The state of the art"
2950:Machine Translation of Languages: Fourteen Essays
2722:
1557:"Enriching Word Vectors with Subword Information"
1473:
1020:
4017:
3487:
1841:
1428:The Oxford Handbook of Computational Linguistics
1134:"Part-of-speech tagging: Part-of-speech tagging"
706:
3284:
2856:
2630:
2593:
2533:
2451:
2405:
2121:
2109:
1956:
1485:
1397:
1272:
380:Semi-supervised or minimally supervised methods
365:There are four conventional approaches to WSD:
278:Sense inventory and algorithms' task-dependency
2798:
1361:
3270:
2997:Agirre, Eneko; Edmonds, Philip, eds. (2007).
2736:
2687:
1932:
1385:
750:
745:Automatic acquisition of sense-tagged corpora
121:data to be disambiguated (in some methods, a
109:Disambiguation requires two strict inputs: a
3228:. New York: Marcel Dekker. pp. 629–654.
2925:
2596:Integrating subject field codes into WordNet
2589:(2nd ed.). Oxford: Elsevier Publishers.
2547:
1944:
1797:
1741:
1193:
917:Word Sense Induction and Disambiguation task
290:
284:
3065:Jurafsky, Daniel; Martin, James H. (2000).
2957:Wilks, Y.; Slator, B.; Guthrie, L. (1996).
1974:
3277:
3263:
2537:Sense discrimination with parallel corpora
2511:: CS1 maint: location missing publisher (
2497:(1997). "Analysis of a handwriting task".
2424:
1038:
526:
3251:by Rada Mihalcea and Ted Pedersen (2005).
3110:Kilgarriff, A.; Grefenstette, G. (2003).
2587:Encyclopaedia of Language and Linguistics
2584:
2561:
2097:
1873:
1855:
1815:
1755:
1715:
1629:
1619:
1582:
1572:
1531:
1506:
1374:Agirre, Lopez de Lacalle & Soroa 2009
1284:
1253:
1120:Association for Computational Linguistics
803:: raw corpora and sense-annotated corpora
337:
298:
222:
3191:Resnik, Philip; Yarowsky, David (2000).
2976:
2965:
2666:
2534:Ide, N.; Erjavec, T.; Tufis, D. (2002).
2493:
2173:R. Navigli, D. A. Jurgens, D. Vannella.
1409:
1181:
1131:
1094:
1082:
3226:Handbook of Natural Language Processing
3018:Journal of Natural Language Engineering
2891:
2701:
2633:"The English Lexical Substitution Task"
2484:
1980:
1897:Gliozzo, Magnini & Strapparava 2004
1461:
1241:
1205:
1056:Navigli, Litkowski & Hargraves 2007
655:Identification of dominant word senses;
569:
434:Dictionary- and knowledge-based methods
247:
4018:
3150:. Cambridge, Massachusetts: MIT Press.
2961:. Cambridge, Massachusetts: MIT Press.
2948:. In Locke, W.N.; Booth, A.D. (eds.).
2940:
2564:Building Large Knowledge-Based Systems
2442:
1968:
1421:
1026:
859:
3258:
3157:"Word Sense Disambiguation: A Survey"
2894:"Automatic word sense discrimination"
2857:Ponzetto, S. P.; Navigli, R. (2010).
2358:
1837:
1835:
1793:
1791:
1496:
1494:
490:
188:Most researchers continue to work on
3736:Simple Knowledge Organization System
2723:Navigli, R.; Crisafulli, G. (2010).
2570:
1308:
1230:Palmer, Babko-Malaya & Dang 2004
376:, without using any corpus evidence.
39:is the process of identifying which
2195:Multilingual versus monolingual WSD
2135:Multilingual versus monolingual WSD
1132:Martinez, Angel R. (January 2012).
643:
426:have shown superior performance in
13:
3248:Word Sense Disambiguation Tutorial
3045:Ide, Nancy; Véronis, Jean (1998).
2989:
2790:. Proc. of Semeval-2007 Workshop (
2631:McCarthy, D.; Navigli, R. (2009).
2594:Magnini, B.; Cavaglià, G. (2000).
2452:Di Marco, A.; Navigli, R. (2013).
1832:
1788:
1491:
1106:A. Moro; A. Raganato; R. Navigli.
678:
591:of context, a task referred to as
14:
4062:
3751:Thesaurus (information retrieval)
3234:
2799:Navigli, R.; Velardi, P. (2005).
2640:Language Resources and Evaluation
2487:"Designing a task for SENSEVAL-2"
2337:. Moin.delph-in.net. 2018-02-05.
2153:Els Lefever and Veronique Hoste.
2079:Kilgarrif & Grefenstette 2003
658:WSD using Cross-Lingual Evidence.
3076:"I don't believe in word senses"
3069:. New Jersey, US: Prentice Hall.
2737:Navigli, R.; Lapata, M. (2010).
2688:Mohammad, S.; Hirst, G. (2006).
1844:Expert Systems with Applications
1424:"13.5.3 Two claims about senses"
1297:Wilks, Slator & Guthrie 1996
1003:Sentence boundary disambiguation
967:
175:Differences between dictionaries
2926:Snyder, B.; Palmer, M. (2004).
2562:Lenat, D.; Guha, R. V. (1989).
2548:Lapata, M.; Keller, F. (2007).
2443:Chan, Y. S.; Ng, H. T. (2005).
2367:from the original on 2018-06-11
2352:
2341:from the original on 2018-03-09
2327:
2316:from the original on 2018-03-12
2302:
2291:from the original on 2018-03-21
2287:. Senserelate.sourceforge.net.
2277:
2266:from the original on 2018-03-22
2252:
2241:from the original on 2014-08-08
2227:
2207:
2187:
2167:
2147:
2127:
2060:Hindi word sense disambiguation
2052:
2038:Diab, Mona, and Philip Resnik.
2032:
2012:
2001:from the original on 2023-07-15
1735:
1724:from the original on 2023-01-21
1695:
1684:from the original on 2023-01-21
1649:
1638:from the original on 2019-10-28
1599:
1548:
1515:
1444:from the original on 2022-02-22
1415:
1314:
1164:from the original on 2023-07-15
289:– that is, 'financial bank' or
169:
21:Disambiguation (disambiguation)
3332:Natural language understanding
3067:Speech and Language Processing
2429:. Reading, MA: Addison-Wesley.
2382:
2335:"Lexical Knowledge Base (LKB)"
2213:Eneko Agirre and Aitor Soroa.
1125:
1100:
510:, parameter optimization, and
63:, it is usually subconscious.
1:
3856:Optical character recognition
2915:Learning to Merge Word Senses
1957:Ide, Erjavec & Tufis 2002
1474:Navigli & Crisafulli 2010
1122:(TACL). 2. pp. 231–244. 2014.
1013:
882:Classic English WSD uses the
807:
764:Machine-readable dictionaries
707:Local impediments and summary
652:Domain-driven disambiguation;
303:Finally, the very notion of "
3549:Multi-document summarization
3197:Natural Language Engineering
1987:. Massachusetts: MIT Press.
587:word occurrences using some
7:
4031:Natural language processing
3879:Latent Dirichlet allocation
3851:Natural language generation
3716:Machine-readable dictionary
3711:Linguistic Linked Open Data
3286:Natural language processing
2952:. Cambridge, MA: MIT Press.
2667:Mihalcea, R. (April 2007).
2122:Magnini & Cavaglià 2000
2110:Agirre & Stevenson 2007
1981:Shieber, Stuart M. (1992).
1486:Di Marco & Navigli 2013
1398:Ponzetto & Navigli 2010
1273:McCarthy & Navigli 2009
983:Controlled natural language
960:
935:
840:(2004), and its successor,
469:graph connectivity measures
104:
83:supervised machine learning
72:natural language processing
10:
4067:
3631:Explicit semantic analysis
3380:Deep linguistic processing
3131:10.1162/089120103322711569
2929:The English all-words task
2752:(4). IEEE Press: 678–692.
2624:10.1162/coli.2007.33.4.553
1866:10.1016/j.eswa.2019.06.026
1362:Navigli & Velardi 2005
751:External knowledge sources
573:
328:computational applications
129:
25:
18:
4036:Computational linguistics
4026:Word-sense disambiguation
3982:
3937:
3892:
3864:
3824:
3769:
3691:
3679:
3610:
3567:
3539:
3474:Word-sense disambiguation
3350:
3327:Computational linguistics
3292:
3209:10.1017/S1351324999002211
3155:Navigli, Roberto (2009).
3119:Computational Linguistics
3054:Computational Linguistics
3030:10.1017/S1351324902002966
2901:Computational Linguistics
2652:10.1007/s10579-009-9084-1
2612:Computational Linguistics
2464:(3). MIT Press: 709–754.
2458:Computational Linguistics
1933:Mohammad & Hirst 2006
1804:Computational Linguistics
1386:Navigli & Lapata 2010
789:Other resources (such as
673:constraint-based grammars
396:word sense discrimination
351:computational linguistics
203:that encodes concepts as
4000:Natural Language Toolkit
3924:Pronunciation assessment
3826:Automatic identification
3656:Latent semantic analysis
3612:Distributional semantics
3497:Compound-term processing
3395:Named-entity recognition
2646:(2). Springer: 139–159.
2427:Language and information
1945:Lapata & Keller 2007
1331:10.1109/CTS.2015.7210442
1194:Snyder & Palmer 2004
533:semi-supervised learning
28:Knowledge:Disambiguation
3904:Automated essay scoring
3874:Document classification
3541:Automatic summarization
3176:10.1145/1459352.1459355
3095:10.1023/A:1000583911091
3074:Kilgarriff, A. (1997).
2941:Weaver, Warren (1949).
2425:Bar-Hillel, Y. (1964).
1422:Mitkov, Ruslan (2004).
993:Judicial interpretation
869:Classic monolingual WSD
756:classified as follows:
527:Semi-supervised methods
516:Support Vector Machines
462:methods reminiscent of
424:support vector machines
412:Naïve Bayes classifiers
3761:Universal Dependencies
3454:Terminology extraction
3437:Semantic decomposition
3432:Semantic role labeling
3422:Part-of-speech tagging
3390:Information extraction
3375:Coreference resolution
3365:Collocation extraction
2820:10.1109/TPAMI.2005.149
2410:. New York: Springer.
2285:"WordNet::SenseRelate"
1118:. Transactions of the
846:semantic role labeling
797:, domain labels, etc.)
698:. The creation of the
418:. In recent research,
338:Approaches and methods
299:Discreteness of senses
291:
285:
229:part-of-speech tagging
223:Part-of-speech tagging
3522:Sentence segmentation
3164:ACM Computing Surveys
2977:Yarowsky, D. (1995).
2966:Yarowsky, D. (1992).
2758:10.1109/TPAMI.2009.36
2438:. New York: Springer.
2310:"UKB: Graph Base WSD"
2124:, pp. 1413–1418.
1971:, pp. 1037–1042.
1909:Buitelaar et al. 2006
1744:Volume 1: Long Papers
1400:, pp. 1522–1531.
1376:, pp. 1501–1506.
1364:, pp. 1063–1074.
1285:Lenat & Guha 1989
1220:, pp. 1005–1014.
785:Collocation resources
736:information retrieval
730:ever accessible, the
603:knowledge acquisition
589:measure of similarity
581:Unsupervised learning
520:memory-based learning
37:-sense disambiguation
3974:Voice user interface
3685:datasets and corpora
3626:Document-term matrix
3479:Word-sense induction
2892:Schütze, H. (1998).
2702:Navigli, R. (2006).
2495:Fellbaum, Christiane
2485:Edmonds, P. (2000).
2470:10.1162/COLI_a_00148
1921:McCarthy et al. 2007
1817:10.1162/coli_a_00294
1621:10.18653/v1/P16-1085
1584:10.1162/tacl_a_00051
1430:. OUP. p. 257.
1325:. pp. 326–333.
998:Semantic unification
850:lexical substitution
791:word frequency lists
713:Unsupervised methods
593:word sense induction
576:Word sense induction
570:Unsupervised methods
464:spreading activation
420:kernel-based methods
392:Unsupervised methods
332:lexical substitution
248:Inter-judge variance
51:or other segment of
3954:Interactive fiction
3884:Pachinko allocation
3841:Speech segmentation
3797:Google Ngram Viewer
3569:Machine translation
3559:Text simplification
3554:Sentence extraction
3442:Semantic similarity
2112:, pp. 217–251.
2100:, pp. 753–761.
2081:, pp. 333–347.
1947:, pp. 348–355.
1935:, pp. 121–128.
1923:, pp. 553–590.
1911:, pp. 275–298.
1899:, pp. 380–387.
1774:10.3115/v1/p15-1173
1766:2015arXiv150701127R
1670:10.3115/v1/N15-1132
1533:10.3115/v1/d14-1162
1412:, pp. 189–196.
1388:, pp. 678–692.
1275:, pp. 139–159.
1208:, pp. 105–112.
1085:, pp. 454–460.
1071:Pradhan et al. 2007
1041:, pp. 174–179.
1008:Syntactic ambiguity
860:Task design choices
452:semantic similarity
450:and to compute the
428:supervised learning
252:Another problem is
242:supervised learning
85:methods in which a
57:language processing
3964:Question answering
3836:Speech recognition
3701:Corpus linguistics
3681:Language resources
3464:Textual entailment
3447:Sentiment analysis
2312:. Ixa2.si.ehu.es.
2220:2013-02-28 at the
2200:2012-04-10 at the
2180:2014-08-08 at the
2160:2010-06-16 at the
2140:2012-04-10 at the
2065:2016-03-04 at the
2045:2016-03-04 at the
2025:2016-01-09 at the
1969:Chan & Ng 2005
1464:, pp. 97–123.
1256:, pp. 91–113.
1114:2014-08-08 at the
975:Linguistics portal
740:web search engines
717:Supervised methods
537:Yarowsky algorithm
491:Supervised methods
481:semantic relations
386:Supervised methods
355:Margaret Masterman
227:In any real test,
4046:Lexical semantics
4013:
4012:
3969:Virtual assistant
3894:Computer-assisted
3820:
3819:
3577:Computer-assisted
3535:
3534:
3527:Word segmentation
3489:Text segmentation
3427:Semantic analysis
3415:Syntactic parsing
3400:Ontology learning
2571:Lesk, M. (1986).
2566:. Addison-Wesley.
2501:. Washington D.C.
1994:978-0-262-19324-5
1959:, pp. 54–60.
1437:978-0-19-927634-9
1340:978-1-4673-7647-1
1311:, pp. 24–26.
1232:, pp. 49–56.
1196:, pp. 41–43.
1073:, pp. 87–92.
1058:, pp. 30–35.
899:Cross-lingual WSD
884:Princeton WordNet
691:lexical resources
512:ensemble learning
508:feature selection
215:. More recently,
209:Roget's Thesaurus
164:domain adaptation
4058:
3990:Formal semantics
3939:Natural language
3846:Speech synthesis
3828:and data capture
3731:Semantic network
3706:Lexical resource
3689:
3688:
3507:Lexical analysis
3485:
3484:
3410:Semantic parsing
3279:
3272:
3265:
3256:
3255:
3229:
3220:
3187:
3161:
3151:
3142:
3116:
3106:
3080:
3070:
3061:
3051:
3041:
3012:
2984:
2973:
2962:
2953:
2947:
2937:
2922:
2920:
2908:
2898:
2888:
2886:
2874:
2872:
2865:
2853:
2851:
2839:
2814:(7): 1075–1086.
2805:
2795:
2789:
2777:
2743:
2733:
2731:
2719:
2717:
2710:
2698:
2696:
2684:
2682:
2675:
2663:
2637:
2627:
2609:
2599:
2590:
2581:
2579:
2567:
2558:
2556:
2544:
2542:
2530:
2528:
2516:
2510:
2502:
2490:
2481:
2448:
2439:
2430:
2421:
2402:
2396:
2376:
2375:
2373:
2372:
2356:
2350:
2349:
2347:
2346:
2331:
2325:
2324:
2322:
2321:
2306:
2300:
2299:
2297:
2296:
2281:
2275:
2274:
2272:
2271:
2262:. Babelnet.org.
2256:
2250:
2249:
2247:
2246:
2231:
2225:
2211:
2205:
2191:
2185:
2171:
2165:
2151:
2145:
2131:
2125:
2119:
2113:
2107:
2101:
2095:
2089:
2088:
2076:
2070:
2056:
2050:
2036:
2030:
2016:
2010:
2009:
2007:
2006:
1978:
1972:
1966:
1960:
1954:
1948:
1942:
1936:
1930:
1924:
1918:
1912:
1906:
1900:
1894:
1888:
1887:
1877:
1859:
1839:
1830:
1829:
1819:
1795:
1786:
1785:
1759:
1739:
1733:
1732:
1730:
1729:
1719:
1699:
1693:
1692:
1690:
1689:
1653:
1647:
1646:
1644:
1643:
1633:
1623:
1603:
1597:
1596:
1586:
1576:
1552:
1546:
1545:
1535:
1519:
1513:
1512:
1510:
1498:
1489:
1483:
1477:
1471:
1465:
1459:
1453:
1452:
1450:
1449:
1419:
1413:
1407:
1401:
1395:
1389:
1383:
1377:
1371:
1365:
1359:
1353:
1352:
1318:
1312:
1306:
1300:
1294:
1288:
1282:
1276:
1270:
1264:
1263:
1251:
1245:
1239:
1233:
1227:
1221:
1218:Snow et al. 2007
1215:
1209:
1203:
1197:
1191:
1185:
1179:
1173:
1172:
1170:
1169:
1150:10.1002/wics.195
1129:
1123:
1104:
1098:
1092:
1086:
1080:
1074:
1068:
1059:
1053:
1042:
1036:
1030:
1024:
977:
972:
971:
946:semantic network
929:testing data set
906:Multilingual WSD
696:parallel corpora
661:WSD solution in
644:Other approaches
294:
288:
76:machine learning
4066:
4065:
4061:
4060:
4059:
4057:
4056:
4055:
4016:
4015:
4014:
4009:
3978:
3958:Syntax guessing
3940:
3933:
3919:Predictive text
3914:Grammar checker
3895:
3888:
3860:
3827:
3816:
3782:Bank of English
3765:
3693:
3684:
3675:
3606:
3563:
3531:
3483:
3385:Distant reading
3360:Argument mining
3346:
3342:Text processing
3288:
3283:
3237:
3232:
3223:
3190:
3159:
3154:
3145:
3114:
3109:
3078:
3073:
3064:
3049:
3044:
3015:
3009:
2996:
2992:
2990:Further reading
2987:
2945:
2918:
2896:
2884:
2870:
2863:
2849:
2803:
2787:
2741:
2729:
2715:
2708:
2694:
2680:
2673:
2635:
2607:
2577:
2554:
2540:
2526:
2504:
2503:
2418:
2394:
2385:
2380:
2379:
2370:
2368:
2357:
2353:
2344:
2342:
2333:
2332:
2328:
2319:
2317:
2308:
2307:
2303:
2294:
2292:
2283:
2282:
2278:
2269:
2267:
2258:
2257:
2253:
2244:
2242:
2233:
2232:
2228:
2222:Wayback Machine
2212:
2208:
2202:Wayback Machine
2192:
2188:
2182:Wayback Machine
2172:
2168:
2162:Wayback Machine
2152:
2148:
2142:Wayback Machine
2132:
2128:
2120:
2116:
2108:
2104:
2096:
2092:
2082:
2077:
2073:
2067:Wayback Machine
2057:
2053:
2047:Wayback Machine
2037:
2033:
2027:Wayback Machine
2017:
2013:
2004:
2002:
1995:
1979:
1975:
1967:
1963:
1955:
1951:
1943:
1939:
1931:
1927:
1919:
1915:
1907:
1903:
1895:
1891:
1840:
1833:
1796:
1789:
1740:
1736:
1727:
1725:
1700:
1696:
1687:
1685:
1654:
1650:
1641:
1639:
1604:
1600:
1553:
1549:
1520:
1516:
1499:
1492:
1484:
1480:
1472:
1468:
1460:
1456:
1447:
1445:
1438:
1420:
1416:
1408:
1404:
1396:
1392:
1384:
1380:
1372:
1368:
1360:
1356:
1341:
1319:
1315:
1307:
1303:
1295:
1291:
1283:
1279:
1271:
1267:
1257:
1252:
1248:
1240:
1236:
1228:
1224:
1216:
1212:
1204:
1200:
1192:
1188:
1180:
1176:
1167:
1165:
1130:
1126:
1116:Wayback Machine
1105:
1101:
1093:
1089:
1081:
1077:
1069:
1062:
1054:
1045:
1039:Bar-Hillel 1964
1037:
1033:
1025:
1021:
1016:
973:
966:
963:
938:
892:source language
877:semi-supervised
862:
810:
753:
709:
681:
679:Other languages
646:
610:word embeddings
578:
572:
529:
493:
479:in the form of
436:
374:knowledge bases
347:world knowledge
340:
301:
280:
250:
225:
177:
172:
132:
123:training corpus
107:
68:neural networks
31:
24:
17:
12:
11:
5:
4064:
4054:
4053:
4048:
4043:
4038:
4033:
4028:
4011:
4010:
4008:
4007:
4002:
3997:
3992:
3986:
3984:
3980:
3979:
3977:
3976:
3971:
3966:
3961:
3951:
3945:
3943:
3941:user interface
3935:
3934:
3932:
3931:
3926:
3921:
3916:
3911:
3906:
3900:
3898:
3890:
3889:
3887:
3886:
3881:
3876:
3870:
3868:
3862:
3861:
3859:
3858:
3853:
3848:
3843:
3838:
3832:
3830:
3822:
3821:
3818:
3817:
3815:
3814:
3809:
3804:
3799:
3794:
3789:
3784:
3779:
3773:
3771:
3767:
3766:
3764:
3763:
3758:
3753:
3748:
3743:
3738:
3733:
3728:
3723:
3718:
3713:
3708:
3703:
3697:
3695:
3686:
3677:
3676:
3674:
3673:
3668:
3666:Word embedding
3663:
3658:
3653:
3646:Language model
3643:
3638:
3633:
3628:
3623:
3617:
3615:
3608:
3607:
3605:
3604:
3599:
3597:Transfer-based
3594:
3589:
3584:
3579:
3573:
3571:
3565:
3564:
3562:
3561:
3556:
3551:
3545:
3543:
3537:
3536:
3533:
3532:
3530:
3529:
3524:
3519:
3514:
3509:
3504:
3499:
3493:
3491:
3482:
3481:
3476:
3471:
3466:
3461:
3456:
3450:
3449:
3444:
3439:
3434:
3429:
3424:
3419:
3418:
3417:
3412:
3402:
3397:
3392:
3387:
3382:
3377:
3372:
3370:Concept mining
3367:
3362:
3356:
3354:
3348:
3347:
3345:
3344:
3339:
3334:
3329:
3324:
3323:
3322:
3317:
3307:
3302:
3296:
3294:
3290:
3289:
3282:
3281:
3274:
3267:
3259:
3253:
3252:
3244:
3236:
3235:External links
3233:
3231:
3230:
3221:
3203:(2): 113–133.
3188:
3152:
3143:
3125:(3): 333–347.
3107:
3071:
3062:
3042:
3024:(4): 279–291.
3013:
3008:978-1402068706
3007:
2993:
2991:
2988:
2986:
2985:
2974:
2963:
2954:
2938:
2936:on 2011-06-29.
2923:
2909:
2889:
2875:
2873:on 2011-09-30.
2854:
2840:
2796:
2778:
2734:
2720:
2718:on 2011-06-29.
2699:
2685:
2683:on 2008-07-24.
2664:
2628:
2618:(4): 553–590.
2600:
2591:
2582:
2568:
2559:
2545:
2531:
2517:
2491:
2482:
2449:
2440:
2431:
2422:
2417:978-1402068706
2416:
2403:
2399:Proc. of IJCAI
2386:
2384:
2381:
2378:
2377:
2363:. Github.com.
2351:
2326:
2301:
2276:
2260:"BabelNet API"
2251:
2226:
2206:
2186:
2166:
2146:
2126:
2114:
2102:
2098:Litkowski 2005
2090:
2071:
2051:
2031:
2011:
1993:
1973:
1961:
1949:
1937:
1925:
1913:
1901:
1889:
1875:2027.42/145475
1831:
1810:(3): 593–617.
1787:
1734:
1694:
1648:
1598:
1547:
1514:
1490:
1478:
1466:
1454:
1436:
1414:
1402:
1390:
1378:
1366:
1354:
1339:
1313:
1301:
1289:
1277:
1265:
1254:Kilgarrif 1997
1246:
1234:
1222:
1210:
1198:
1186:
1174:
1144:(1): 107–113.
1124:
1099:
1087:
1075:
1060:
1043:
1031:
1018:
1017:
1015:
1012:
1011:
1010:
1005:
1000:
995:
990:
988:Entity linking
985:
979:
978:
962:
959:
958:
957:
954:
951:
948:
942:
937:
934:
933:
932:
914:
903:
896:
895:
894:
887:
861:
858:
809:
806:
805:
804:
798:
787:
780:Unstructured:
778:
777:
772:
767:
752:
749:
732:World Wide Web
708:
705:
704:
703:
680:
677:
676:
675:
669:Type inference
666:
659:
656:
653:
645:
642:
574:Main article:
571:
568:
528:
525:
492:
489:
440:Lesk algorithm
435:
432:
416:decision trees
400:
399:
389:
383:
377:
339:
336:
323:Lexicographers
309:coarse-grained
300:
297:
279:
276:
268:coarse-grained
249:
246:
224:
221:
176:
173:
171:
168:
131:
128:
106:
103:
47:is meant in a
15:
9:
6:
4:
3:
2:
4063:
4052:
4049:
4047:
4044:
4042:
4039:
4037:
4034:
4032:
4029:
4027:
4024:
4023:
4021:
4006:
4003:
4001:
3998:
3996:
3995:Hallucination
3993:
3991:
3988:
3987:
3985:
3981:
3975:
3972:
3970:
3967:
3965:
3962:
3959:
3955:
3952:
3950:
3947:
3946:
3944:
3942:
3936:
3930:
3929:Spell checker
3927:
3925:
3922:
3920:
3917:
3915:
3912:
3910:
3907:
3905:
3902:
3901:
3899:
3897:
3891:
3885:
3882:
3880:
3877:
3875:
3872:
3871:
3869:
3867:
3863:
3857:
3854:
3852:
3849:
3847:
3844:
3842:
3839:
3837:
3834:
3833:
3831:
3829:
3823:
3813:
3810:
3808:
3805:
3803:
3800:
3798:
3795:
3793:
3790:
3788:
3785:
3783:
3780:
3778:
3775:
3774:
3772:
3768:
3762:
3759:
3757:
3754:
3752:
3749:
3747:
3744:
3742:
3741:Speech corpus
3739:
3737:
3734:
3732:
3729:
3727:
3724:
3722:
3721:Parallel text
3719:
3717:
3714:
3712:
3709:
3707:
3704:
3702:
3699:
3698:
3696:
3690:
3687:
3682:
3678:
3672:
3669:
3667:
3664:
3662:
3659:
3657:
3654:
3651:
3647:
3644:
3642:
3639:
3637:
3634:
3632:
3629:
3627:
3624:
3622:
3619:
3618:
3616:
3613:
3609:
3603:
3600:
3598:
3595:
3593:
3590:
3588:
3585:
3583:
3582:Example-based
3580:
3578:
3575:
3574:
3572:
3570:
3566:
3560:
3557:
3555:
3552:
3550:
3547:
3546:
3544:
3542:
3538:
3528:
3525:
3523:
3520:
3518:
3515:
3513:
3512:Text chunking
3510:
3508:
3505:
3503:
3502:Lemmatisation
3500:
3498:
3495:
3494:
3492:
3490:
3486:
3480:
3477:
3475:
3472:
3470:
3467:
3465:
3462:
3460:
3457:
3455:
3452:
3451:
3448:
3445:
3443:
3440:
3438:
3435:
3433:
3430:
3428:
3425:
3423:
3420:
3416:
3413:
3411:
3408:
3407:
3406:
3403:
3401:
3398:
3396:
3393:
3391:
3388:
3386:
3383:
3381:
3378:
3376:
3373:
3371:
3368:
3366:
3363:
3361:
3358:
3357:
3355:
3353:
3352:Text analysis
3349:
3343:
3340:
3338:
3335:
3333:
3330:
3328:
3325:
3321:
3318:
3316:
3313:
3312:
3311:
3308:
3306:
3303:
3301:
3298:
3297:
3295:
3293:General terms
3291:
3287:
3280:
3275:
3273:
3268:
3266:
3261:
3260:
3257:
3250:
3249:
3245:
3242:
3239:
3238:
3227:
3222:
3218:
3214:
3210:
3206:
3202:
3198:
3194:
3189:
3185:
3181:
3177:
3173:
3169:
3165:
3158:
3153:
3149:
3144:
3140:
3136:
3132:
3128:
3124:
3120:
3113:
3108:
3104:
3100:
3096:
3092:
3089:(2): 91–113.
3088:
3084:
3083:Comput. Human
3077:
3072:
3068:
3063:
3059:
3055:
3048:
3043:
3039:
3035:
3031:
3027:
3023:
3019:
3014:
3010:
3004:
3000:
2995:
2994:
2982:
2981:
2975:
2971:
2970:
2964:
2960:
2955:
2951:
2944:
2943:"Translation"
2939:
2935:
2931:
2930:
2924:
2917:
2916:
2910:
2906:
2902:
2895:
2890:
2883:
2882:
2876:
2869:
2862:
2861:
2855:
2848:
2847:
2841:
2837:
2833:
2829:
2825:
2821:
2817:
2813:
2809:
2802:
2797:
2793:
2786:
2785:
2779:
2775:
2771:
2767:
2763:
2759:
2755:
2751:
2747:
2740:
2735:
2728:
2727:
2721:
2714:
2707:
2706:
2700:
2693:
2692:
2686:
2679:
2672:
2671:
2665:
2661:
2657:
2653:
2649:
2645:
2641:
2634:
2629:
2625:
2621:
2617:
2613:
2606:
2601:
2597:
2592:
2588:
2583:
2576:
2575:
2569:
2565:
2560:
2553:
2552:
2546:
2539:
2538:
2532:
2525:
2524:
2518:
2514:
2508:
2500:
2496:
2492:
2488:
2483:
2479:
2475:
2471:
2467:
2463:
2459:
2455:
2450:
2446:
2441:
2437:
2432:
2428:
2423:
2419:
2413:
2409:
2404:
2400:
2393:
2388:
2387:
2366:
2362:
2355:
2340:
2336:
2330:
2315:
2311:
2305:
2290:
2286:
2280:
2265:
2261:
2255:
2240:
2236:
2230:
2223:
2219:
2216:
2210:
2203:
2199:
2196:
2190:
2183:
2179:
2176:
2170:
2163:
2159:
2156:
2150:
2143:
2139:
2136:
2130:
2123:
2118:
2111:
2106:
2099:
2094:
2086:
2080:
2075:
2068:
2064:
2061:
2055:
2048:
2044:
2041:
2035:
2028:
2024:
2021:
2015:
2000:
1996:
1990:
1986:
1985:
1977:
1970:
1965:
1958:
1953:
1946:
1941:
1934:
1929:
1922:
1917:
1910:
1905:
1898:
1893:
1885:
1881:
1876:
1871:
1867:
1863:
1858:
1853:
1849:
1845:
1838:
1836:
1827:
1823:
1818:
1813:
1809:
1805:
1801:
1794:
1792:
1783:
1779:
1775:
1771:
1767:
1763:
1758:
1753:
1749:
1745:
1738:
1723:
1718:
1713:
1709:
1705:
1698:
1683:
1679:
1675:
1671:
1667:
1663:
1659:
1652:
1637:
1632:
1627:
1622:
1617:
1613:
1609:
1602:
1594:
1590:
1585:
1580:
1575:
1570:
1566:
1562:
1558:
1551:
1543:
1539:
1534:
1529:
1525:
1518:
1509:
1504:
1497:
1495:
1487:
1482:
1475:
1470:
1463:
1458:
1443:
1439:
1433:
1429:
1425:
1418:
1411:
1410:Yarowsky 1995
1406:
1399:
1394:
1387:
1382:
1375:
1370:
1363:
1358:
1350:
1346:
1342:
1336:
1332:
1328:
1324:
1317:
1310:
1305:
1298:
1293:
1286:
1281:
1274:
1269:
1261:
1255:
1250:
1243:
1238:
1231:
1226:
1219:
1214:
1207:
1202:
1195:
1190:
1183:
1182:Fellbaum 1997
1178:
1163:
1159:
1155:
1151:
1147:
1143:
1139:
1135:
1128:
1121:
1117:
1113:
1109:
1103:
1096:
1095:Mihalcea 2007
1091:
1084:
1083:Yarowsky 1992
1079:
1072:
1067:
1065:
1057:
1052:
1050:
1048:
1040:
1035:
1028:
1023:
1019:
1009:
1006:
1004:
1001:
999:
996:
994:
991:
989:
986:
984:
981:
980:
976:
970:
965:
955:
952:
949:
947:
943:
940:
939:
930:
926:
923:from a fixed
922:
918:
915:
913:translations.
911:
907:
904:
900:
897:
893:
888:
885:
881:
880:
878:
874:
870:
867:
866:
865:
857:
853:
851:
848:, gloss WSD,
847:
843:
839:
835:
831:
827:
824:(now renamed
823:
818:
816:
802:
799:
796:
792:
788:
786:
783:
782:
781:
776:
773:
771:
768:
765:
762:
761:
760:
757:
748:
746:
741:
737:
733:
729:
724:
722:
718:
714:
701:
700:Hindi WordNet
697:
692:
688:
687:
683:
682:
674:
670:
667:
664:
660:
657:
654:
651:
650:
649:
641:
639:
635:
631:
627:
623:
619:
615:
611:
606:
604:
599:
594:
590:
586:
582:
577:
567:
565:
560:
558:
557:co-occurrence
553:
551:
546:
545:bootstrapping
541:
538:
534:
524:
521:
517:
513:
509:
505:
501:
497:
488:
484:
482:
478:
474:
470:
465:
461:
457:
453:
449: