JOURNAL OF SHANDONG UNIVERSITY (NATURAL SCIENCE) ›› 2021, Vol. 56 ›› Issue (11): 24-30. DOI: 10.6040/j.issn.1671-9352.1.2020.043
TANG Guang-yuan1,2, GUO Jun-jun1,2, YU Zheng-tao1,2, ZHANG Ya-fei1,2, GAO Sheng-xiang1,2