Unsupervised measure of word similarity: How to outperform co-occurrence and vector cosine in VSMs

Enrico Santus, Tin Shing Chiu, Qin Lu, Alessandro Lenci, Chu-ren Huang

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

10 Citations (Scopus)

Abstract

In this paper, we claim that vector cosine, which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words. To prove this claim, we describe and evaluate APSyn, a variant of Average Precision that, without any optimization, outperforms vector cosine and co-occurrence on the standard ESL test set, with an improvement ranging between +9.00% and +17.98%, depending on the number of top contexts considered.
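The abstract does not spell out the computation, but its description, scoring two words by the intersection of their most mutually dependent contexts, can be illustrated with a short sketch. The Python code below is an illustrative approximation rather than the paper's exact formulation: the function names top_contexts and apsyn_like, the PPMI-style association weights, and the choice of N (the number of top contexts) are assumptions introduced here for clarity.

```python
from typing import Dict

def top_contexts(weights: Dict[str, float], n: int) -> Dict[str, int]:
    """Rank a word's contexts by association weight (e.g., PPMI or LMI)
    and return the top-n contexts mapped to their 1-based rank."""
    ranked = sorted(weights, key=weights.get, reverse=True)[:n]
    return {ctx: rank for rank, ctx in enumerate(ranked, start=1)}

def apsyn_like(weights_1: Dict[str, float],
               weights_2: Dict[str, float],
               n: int = 1000) -> float:
    """Intersection-based similarity in the spirit of APSyn (sketch):
    each context shared by the two top-n lists contributes the inverse
    of its average rank, so contexts that are highly salient for both
    target words dominate the score."""
    ranks_1 = top_contexts(weights_1, n)
    ranks_2 = top_contexts(weights_2, n)
    shared = ranks_1.keys() & ranks_2.keys()
    return sum(1.0 / ((ranks_1[c] + ranks_2[c]) / 2.0) for c in shared)

# Toy example with hypothetical association-weighted context vectors.
dog = {"bark": 5.1, "pet": 4.7, "tail": 3.9, "walk": 2.2}
cat = {"pet": 4.9, "tail": 4.1, "purr": 3.8, "walk": 1.9}
print(apsyn_like(dog, cat, n=3))
```

Under this sketch, the score grows with the number of shared top contexts and with how highly both words rank them, which is consistent with the abstract's note that the improvement over vector cosine and co-occurrence varies with the number of top contexts chosen.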
Original language: English
Title of host publication: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Publisher: AAAI Press
Pages: 4260-4261
Number of pages: 2
ISBN (Electronic): 9781577357605
Publication status: Published - 1 Jan 2016
Event: 30th AAAI Conference on Artificial Intelligence, AAAI 2016 - Phoenix Convention Center, Phoenix, United States
Duration: 12 Feb 2016 - 17 Feb 2016

Conference

Conference: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Country/Territory: United States
City: Phoenix
Period: 12/02/16 - 17/02/16

ASJC Scopus subject areas

  • Artificial Intelligence
