山东大学学报(理学版), 2018, Vol. 53, Issue (3): 30-35. doi: 10.6040/j.issn.1671-9352.1.2017.012
庞博*,刘远超
PANG Bo*, LIU Yuan-chao
Abstract: Intelligent question answering is an important way to make information access smarter and more convenient. Within it, passage ranking for intelligent question answering is highly significant for accurately capturing user query intent and for improving both the user experience and the accuracy of answer feedback. Deep learning is used to capture the semantic information of questions and passages and to build a model that maps them to relevance labels; the trained model is then used to predict the relevance between a new question and its candidate passages, and the predicted question-passage relevance scores are finally used to rank the multiple candidate answer passages of the same question. Experiments show that the method achieves a DCG@3 of 3.979 and a DCG@5 of 5.396.
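The abstract only describes this pipeline at a high level, so the following is a minimal sketch rather than the paper's actual model: a shared GRU encoder (an assumption; the abstract does not name the architecture) turns the question and a candidate passage into vectors, a small feed-forward layer maps the pair to a relevance score, the candidate passages of one question are sorted by that score, and the ranking is evaluated with DCG@k. The encoder choice, layer sizes, and the (2^rel - 1)/log2(i + 1) gain used in DCG are all illustrative assumptions.

```python
# Minimal sketch of question-passage relevance scoring, passage ranking, and DCG@k.
# Not the paper's exact architecture; all hyperparameters here are illustrative.
import math
import torch
import torch.nn as nn


class RelevanceScorer(nn.Module):
    """Encode a question and a passage, then predict a relevance score for the pair."""

    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def encode(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Use the final GRU hidden state as the question/passage representation.
        _, h = self.encoder(self.embedding(token_ids))
        return h.squeeze(0)

    def forward(self, question_ids: torch.Tensor, passage_ids: torch.Tensor) -> torch.Tensor:
        q_vec = self.encode(question_ids)
        p_vec = self.encode(passage_ids)
        return self.mlp(torch.cat([q_vec, p_vec], dim=-1)).squeeze(-1)


def rank_passages(model: RelevanceScorer, question: torch.Tensor,
                  passages: list) -> list:
    """Return passage indices sorted by predicted relevance, best first."""
    model.eval()
    with torch.no_grad():
        scores = [model(question.unsqueeze(0), p.unsqueeze(0)).item() for p in passages]
    return sorted(range(len(passages)), key=lambda i: scores[i], reverse=True)


def dcg_at_k(relevance_in_ranked_order: list, k: int) -> float:
    """DCG@k with the gain (2^rel - 1) / log2(i + 1); other formulations exist."""
    return sum((2 ** rel - 1) / math.log2(i + 1)
               for i, rel in enumerate(relevance_in_ranked_order[:k], start=1))
```

Under this sketch, training would fit the scorer to labelled question-passage pairs, and DCG@3/DCG@5 would be computed on the re-ranked candidate passages of each test question, matching the evaluation metrics reported above.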