Journal article in Natural Language Engineering, Year: 2010

A Non-negative Tensor Factorization Model for Selectional Preference Induction

Abstract

Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. Until now, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise: one can easily imagine situations where it is desirable to investigate co-occurrence frequencies of three or more modes. This paper investigates tensor factorization methods to build a model of three-way co-occurrences. The approach is applied to the problem of selectional preference induction and automatically evaluated in a pseudo-disambiguation task. The results show that tensor factorization, and non-negative tensor factorization in particular, is a promising tool for NLP.
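As a rough illustration of the kind of model the abstract describes, the sketch below factorizes a small three-way (verb, subject, object) co-occurrence tensor with a non-negative CP (PARAFAC) decomposition, fitted by least-squares multiplicative updates in NumPy. This is a minimal sketch under stated assumptions: the function names (ntf, khatri_rao, unfold), the toy counts, the rank, and the least-squares objective are illustrative choices and need not match the exact algorithm or objective used in the paper.

import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product: column r is kron(U[:, r], V[:, r])."""
    I, R = U.shape
    J, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def unfold(X, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix (C order)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def ntf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Non-negative CP factorization X ~ sum_r a_r o b_r o c_r,
    fitted with least-squares multiplicative updates."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = rng.random((I, rank)), rng.random((J, rank)), rng.random((K, rank))
    X1, X2, X3 = unfold(X, 0), unfold(X, 1), unfold(X, 2)
    for _ in range(n_iter):
        # Multiplicative updates keep every factor non-negative,
        # since numerator and denominator are products of non-negative terms.
        A *= (X1 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= (X2 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= (X3 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C

# Hypothetical verb x subject x object count tensor.
verbs, subjects, objects = ["eat", "drive"], ["man", "woman", "dog"], ["apple", "bone", "car"]
X = np.zeros((2, 3, 3))
X[0, 0, 0] = 4; X[0, 1, 0] = 3; X[0, 2, 1] = 5   # people eat apples, dogs eat bones
X[1, 0, 2] = 6; X[1, 1, 2] = 4                   # people drive cars
A, B, C = ntf(X, rank=2)
# The reconstructed score (A[v] * B[s] * C[o]).sum() estimates the plausibility
# of a (verb, subject, object) triple, even if it was unseen in the counts.
print((A[0] * B[2] * C[1]).sum())   # score for ("eat", "dog", "bone")

Each factor matrix holds non-negative loadings of verbs, subjects, and objects on a small number of latent dimensions, and the sum over dimensions of the element-wise product of the three loading vectors gives a smoothed plausibility score for a triple. A pseudo-disambiguation evaluation of the kind mentioned in the abstract compares such scores for an attested argument and a randomly substituted one.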

Dates and versions

inria-00546045, version 1 (13-12-2010)

Identifiers

Cite

Tim van de Cruys. A Non-negative Tensor Factorization Model for Selectional Preference Induction. Natural Language Engineering, 2010, 16 (4), pp. 417-437. ⟨10.1017/S1351324910000148⟩. ⟨inria-00546045⟩