Representations of multiword expressions (MWEs) are currently learned either from the context external to the MWE, based on the distributional hypothesis, or from the representations of its component words via composition functions, based on the compositionality hypothesis. However, distributional methods treat an MWE as an indivisible unit, ignoring its component words, and also suffer from data sparseness, which is especially severe for MWEs. Compositional methods, on the other hand, can fail when an MWE is non-compositional. In this paper, we propose a hybrid method that learns MWE representations from both external context and component words under a compositionality constraint, thereby exploiting both sources of information. Instead of simply combining the two, we use a compositionality measure from lexical semantics as the constraint. The main idea is to learn each MWE representation as a weighted linear combination of its external-context representation and its component-word representation, where the weight reflects the compositionality of the MWE. Evaluation on three datasets shows that this hybrid method is more robust and improves the learned representations.
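The weighted linear combination described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `hybrid_mwe_embedding`, the use of vector averaging as the composition function, and the assumption that the compositionality score `alpha` lies in [0, 1] are all assumptions made for the example.

```python
import numpy as np

def hybrid_mwe_embedding(context_vec, component_vecs, alpha):
    """Combine a context-based MWE vector with a composed vector of its
    component words, weighted by a compositionality score alpha in [0, 1].

    A highly compositional MWE (alpha near 1) relies mostly on its
    component words; a non-compositional one (alpha near 0) relies
    mostly on its external context.
    """
    # Additive (averaging) composition of component-word vectors
    # (an assumed composition function for this sketch).
    composed = np.mean(component_vecs, axis=0)
    return alpha * composed + (1.0 - alpha) * context_vec

# Toy usage: 3-dimensional vectors for illustration.
context = np.array([0.0, 0.0, 0.0])
components = [np.array([1.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0])]
vec = hybrid_mwe_embedding(context, components, alpha=0.5)
```

For a fully compositional MWE such as "traffic light", alpha would be high and the composed vector dominates; for an idiomatic MWE such as "kick the bucket", alpha would be low and the context-based vector dominates.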
Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Conference: 10th International Conference on Knowledge Science, Engineering and Management, KSEM 2017
Period: 19/08/17 → 20/08/17
- Theoretical Computer Science
- Computer Science (all)