
Journal of Shandong University (Natural Science) ›› 2019, Vol. 54 ›› Issue (7): 100-105. doi: 10.6040/j.issn.1671-9352.1.2018.104


A global word vector model based on pointwise mutual information

LI Wan-li1, TANG Jing-yao1, XUE Yun1,2*, HU Xiao-hui1, ZHANG Tao3

  1. School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou 510006, Guangdong, China;
  2. Guangdong Provincial Engineering Technology Research Center for Data Science, Guangzhou 510006, Guangdong, China;
  3. Guangdong CON-COM Technology CO., LTD, Guangzhou 510640, Guangdong, China
  • Published: 2019-06-27
  • About the authors: LI Wan-li (1993— ), male, master's student; research interests: natural language processing, sentiment analysis and information retrieval. E-mail: wanli.li@m.scnu.edu.cn. *Corresponding author: XUE Yun (1975— ), male, Ph.D., professor; research interests: natural language processing, sentiment analysis and personalized recommendation. E-mail: xueyun@scnu.edu.cn
  • Supported by: the National Statistical Science Research Project (2016LY98); the Science and Technology Program of Guangdong Province (2016A010101020, 2016A010101021, 2016A010101022); the Basic Research Program of the Shenzhen Science and Technology Innovation Committee (JCYJ20160527172144272); the Guangdong Provincial Engineering Technology Research Center for Data Science Project (2016KF09, 2016KF10); the Research Project of Guangdong Polytechnic of Science and Technology (XJSC2016206); the Science and Technology Program of Guangzhou (201802010033)

Abstract: A global word vector training model based on pointwise mutual information is presented. To avoid the shortcomings that arise when the GloVe model characterizes word-word relations with conditional probabilities, the proposed model uses a different correlation measure, the ratio of the joint probability to the product of the marginal probabilities, to describe the relationship between words. To verify the validity of the model, word vectors were trained with GloVe and with the proposed model under identical conditions, and the two sets of vectors were then compared on word analogy and word similarity tasks. On the semantic questions of the word analogy task, the proposed model improves accuracy over GloVe by 10.50%, 4.43% and 1.02% for 100-, 200- and 300-dimensional vectors respectively, and in the word similarity experiments the accuracy gain reaches 5%-6%. These results indicate that the model captures word semantics more effectively.

Key words: pointwise mutual information, word vector, GloVe
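
A minimal Python sketch of the correlation measure described in the abstract: pointwise mutual information, log(P(i,j)/(P(i)P(j))), computed from a word-context co-occurrence matrix. This is only an illustration of the quantity the paper uses in place of GloVe's conditional-probability ratio; the function name, toy counts and smoothing constant are assumptions, not taken from the paper, which trains word vectors against this measure rather than computing it in closed form.

import numpy as np

def pmi_matrix(cooc, eps=1e-12):
    # cooc: (V, V) array of co-occurrence counts X_ij.
    # Returns log( P(i, j) / (P(i) * P(j)) ) for every (word, context) pair.
    total = cooc.sum()
    p_joint = cooc / total                 # P(i, j)
    p_word = cooc.sum(axis=1) / total      # P(i), marginal over contexts
    p_ctx = cooc.sum(axis=0) / total       # P(j), marginal over words
    ratio = p_joint / (np.outer(p_word, p_ctx) + eps)
    return np.log(ratio + eps)

# Toy co-occurrence counts for a 3-word vocabulary (illustrative only).
X = np.array([[0.0, 4.0, 1.0],
              [4.0, 0.0, 2.0],
              [1.0, 2.0, 0.0]])
print(pmi_matrix(X))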

CLC number: TP391
[1] HINTON G E. Learning distributed representations of concepts[C] //Proceedings of the Eighth Annual Conference of the Cognitive Science Society. Amherst, Mass: Erlbaum, 1986:1-12.
[2] BENGIO Y, DUCHARME R, VINCENT P, et al. A neural probabilistic language model[J]. Journal of Machine Learning Research, 2003, 3: 1137-1155.
[3] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[J]. arXiv preprint arXiv:1301.3781, 2013.
[4] PENNINGTON J, SOCHER R, MANNING C. GloVe: global vectors for word representation[C] //Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing(EMNLP). USA: ACL, 2014: 1532-1543.
[5] DEERWESTER S, DUMAIS S T, FURNAS G W, et al. Indexing by latent semantic analysis[J]. Journal of the American Society for Information Science, 1990, 41(6): 391-407.
[6] LEVY O, GOLDBERG Y. Neural word embedding as implicit matrix factorization[C] //Advances in Neural Information Processing Systems. USA: Curran Associates, Inc., 2014: 2177-2185.
[7] LEVY O, GOLDBERG Y, DAGAN I. Improving distributional similarity with lessons learned from word embeddings[J]. Transactions of the Association for Computational Linguistics, 2015, 3(1): 211-225.
[8] JAMEEL S, BOURAOUI Z, SCHOCKAERT S. Unsupervised learning of distributional relation vectors[C] //Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics(Long Papers).Melbourne: ACL, 2018:23-33.
[9] ARORA S, LI Y, LIANG Y, et al. A latent variable model approach to PMI-based word embeddings[J]. Transactions of the Association for Computational Linguistics, 2016, 4(1): 385-399.
[10] MORIN F, BENGIO Y. Hierarchical probabilistic neural network language model[C] //Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AISTATS). The Society for Artificial Intelligence and Statistics, 2005: 246-252.
[11] HARRIS Z S. Distributional structure[J]. Word, 1954, 10(2/3): 146-162.