JOURNAL OF SHANDONG UNIVERSITY(NATURAL SCIENCE) ›› 2024, Vol. 59 ›› Issue (7): 1-26. doi: 10.6040/j.issn.1671-9352.1.2023.043

• Review •

Research on self-supervised pre-training for recommender systems

Jiyuan YANG1, Muyang MA1, Pengjie REN1,*, Zhumin CHEN1, Zhaochun REN1, Xin XIN1, Fei CAI2, Jun MA1

  1. School of Computer Science and Technology, Shandong University, Qingdao 266237, Shandong, China
  2. School of Systems Engineering, National University of Defense Technology, Changsha 410015, Hunan, China
  • Received: 2023-10-18  Online: 2024-07-20  Published: 2024-07-15
  • Contact: Pengjie REN  E-mail: jiyuan.yang@mail.sdu.edu.cn; renpengjie@sdu.edu.cn

Abstract:

Many recent studies have explored applying pre-training techniques to recommendation scenarios and designing pre-training tasks that improve overall recommendation performance. This paper systematically reviews the research progress of recommendation models based on pre-training, classifies and compares different pre-training methods, and conducts extensive experiments and analyses of several representative models on three benchmark recommendation datasets. The datasets and code have been made open source. Finally, the future development trends of pre-training-based recommendation models are summarized and discussed.

Key words: recommendation system, survey, pre-training model, self-supervised learning

CLC Number: TP391

Table 1

Rating prediction recommendation task (predicting a user's ratings for items they have not interacted with, based on their previous ratings of other items)

User  Item 1  Item 2  Item 3  Item 4
User 1  3.0  4.0  5.0
User 2  1.0  2.0  3.0
User 3  3.0  4.0
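To make the task in Table 1 concrete, the following minimal sketch factorizes a toy user-item rating matrix with plain SGD and fills in the unobserved cells. The cell placement, factor dimension and learning rate are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Observed (user, item) -> rating entries; the exact cell placement is illustrative,
# the point is that some cells of the 3 x 4 matrix are missing and must be predicted.
ratings = {
    (0, 0): 3.0, (0, 1): 4.0, (0, 2): 5.0,
    (1, 0): 1.0, (1, 1): 2.0, (1, 3): 3.0,
    (2, 1): 3.0, (2, 2): 4.0,
}
n_users, n_items, k = 3, 4, 2
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors

lr, reg = 0.05, 0.01
for _ in range(500):                           # plain SGD over the observed entries only
    for (u, i), r in ratings.items():
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(np.round(P @ Q.T, 2))                    # dense matrix; unseen cells are the predictions
```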

Fig.1

Social recommendation task (using user social relationships for item recommendation)

Fig.2

Sequential recommendation task (predicting the items that a user may click next based on their historical item interaction records)

Fig.3

Self-supervised learning process
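Fig.3 depicts the general self-supervised learning process, in which the supervision signal is derived from the interaction data itself rather than from manual labels. As one hedged illustration under that assumption, the sketch below corrupts a user sequence by masking items and keeps the original items as reconstruction targets; the token ids and masking ratio are hypothetical.

```python
import random

MASK = 0  # a reserved token id standing for "[mask]" (hypothetical convention)

def make_pretext_sample(seq, p_mask=0.2, seed=None):
    """Turn a raw item-id sequence into (corrupted input, reconstruction targets)."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for item in seq:
        if rng.random() < p_mask:
            inputs.append(MASK)     # corrupted view fed to the encoder
            targets.append(item)    # the model must recover the original item here
        else:
            inputs.append(item)
            targets.append(-100)    # ignored position (no loss computed)
    return inputs, targets

print(make_pretext_sample([12, 7, 33, 5, 19], seed=1))
```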

Table 2

Summary of pre-trained recommendation models

No.  Self-supervised method  Recommendation task  Model  Encoder  Self-supervised auxiliary task  Side information
1  Generative  Sequential recommendation  SASRec[49]  Transformer  Next-item prediction
2  Generative  Sequential recommendation  ASReP[51]  Transformer  Reversed sequence generation
3  Generative  Sequential recommendation  GC-SAN[93]  GCN & self-attention layer  Next-item prediction
4  Generative  Sequential recommendation  SQN-SAC[41]  Not restricted  Next-item prediction
5  Generative  User attribute prediction  Conure[109]  TCN  Item recommendation and user attribute classification
6  Generative  Sequential recommendation  BERT4Rec[50]  Transformer  Masked item prediction and next-item prediction
7  Generative  Rating prediction  PMGT[110]  Transformer  Graph structure reconstruction and masked node feature prediction
8  Generative  Sequential recommendation  UPRec[111]  BERT  Masked item prediction, user attribute prediction and social relation detection
9  Generative  Rating prediction  U-BERT[112]  BERT  Masked review prediction and opinion rating prediction
10  Generative  Sequential recommendation  IERT[113]  BERT  Next word/sentence prediction
11  Generative  Sequential recommendation and user attribute prediction  PeterRec[114]  NextItNet  Next-item prediction
12  Generative  User representation learning  ShopperBERT[28]  BERT  Next-item prediction
13  Generative  Item recommendation  CHEST[33]  Heterogeneous subgraph Transformer  Masked node/edge prediction and meta-graph type prediction
14  Generative  Sequential recommendation  MrTransformer (PE)[25]  Transformer  Interest separation and reconstruction
15  Contrastive  Item recommendation  MSSL[115]  DNN  Cropping/masking item features
16  Contrastive  Item recommendation  CLRec[116]  Not restricted
17  Contrastive  Sequential recommendation  CL4SRec[52]  Transformer  Cropping/masking/reordering item features
18  Contrastive  Sequential recommendation  DuoRec[43]  Transformer  Sampling sequences with the same target item as positive examples
19  Contrastive  Item recommendation  SGL[53]  GCN  Random edge/node dropout and random walk
20  Contrastive  Social recommendation  SEPT[84]  GCN  Constructing graphs from different views
21  Contrastive  Sequential recommendation  S3-Rec[42]  Transformer  Learning relations among item attributes, items, sub-sequences and sequences
22  Contrastive  Sequential recommendation  DHCN[54]  GCN  Constructing multi-view hypergraphs
23  Contrastive  Sequential recommendation  Disentangled[117]  Transformer  Learning the relation between the first and second halves of a sequence
24  Contrastive  Social recommendation  S2-MHCN[85]  GCN  Constructing multi-view hypergraphs
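Several contrastive entries in Table 2 (e.g., CL4SRec) list cropping/masking/reordering operations as auxiliary tasks. The sketch below shows what such sequence augmentation operators could look like; the ratios and mask id are illustrative and are not taken from the cited papers.

```python
import random

def crop(seq, ratio=0.6, rng=random):
    """Keep a random contiguous sub-sequence."""
    n = max(1, int(len(seq) * ratio))
    start = rng.randrange(0, len(seq) - n + 1)
    return seq[start:start + n]

def mask(seq, ratio=0.3, mask_id=0, rng=random):
    """Replace a random subset of items with a mask token."""
    return [mask_id if rng.random() < ratio else x for x in seq]

def reorder(seq, ratio=0.3, rng=random):
    """Shuffle a random contiguous span in place."""
    seq = list(seq)
    n = max(1, int(len(seq) * ratio))
    start = rng.randrange(0, len(seq) - n + 1)
    span = seq[start:start + n]
    rng.shuffle(span)
    seq[start:start + n] = span
    return seq

s = [3, 8, 15, 4, 23, 42, 7]
print(crop(s), mask(s), reorder(s))
```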

Fig.4

Autoregressive model (using SASRec[49] as an example)
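As a rough PyTorch illustration of the autoregressive objective in Fig.4, the sketch below applies a causal attention mask so that each position only attends to earlier items and is trained to predict the next item. The model sizes and the weight-tying choice are assumptions, not SASRec's published configuration.

```python
import torch
import torch.nn as nn

n_items, d, max_len = 1000, 64, 50
emb = nn.Embedding(n_items + 1, d, padding_idx=0)             # id 0 reserved for padding
layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

seq = torch.randint(1, n_items + 1, (8, max_len))             # fake batch of item-id sequences
causal = torch.triu(torch.full((max_len, max_len), float("-inf")), diagonal=1)

h = encoder(emb(seq), mask=causal)                            # each position sees only the past
logits = h @ emb.weight.T                                     # score every item (tied weights)
targets = torch.roll(seq, shifts=-1, dims=1)                  # next item at each position
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, n_items + 1),                  # last position has no next item
    targets[:, :-1].reshape(-1),
)
print(loss.item())
```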

Fig.5

Auto-encoding model (using BERT4Rec[50] as an example)
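For the auto-encoding objective in Fig.5, the counterpart sketch below masks random positions, lets the encoder attend bidirectionally, and computes the loss only on the masked positions. All sizes are again illustrative rather than BERT4Rec's actual settings.

```python
import torch
import torch.nn as nn

n_items, d, max_len, MASK = 1000, 64, 50, 1001                # extra id reserved for [mask]
emb = nn.Embedding(n_items + 2, d, padding_idx=0)
layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

seq = torch.randint(1, n_items + 1, (8, max_len))
is_masked = torch.rand(seq.shape) < 0.2                       # mask roughly 20% of positions
inputs = torch.where(is_masked, torch.full_like(seq, MASK), seq)

h = encoder(emb(inputs))                                      # bidirectional attention, no mask
logits = h @ emb.weight.T
loss = nn.functional.cross_entropy(logits[is_masked], seq[is_masked])
print(loss.item())
```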

Fig.6

Contrasting the relationships between different users' interaction records

Fig.7

Contrasting local and global relationships by maximizing the mutual information of user interaction records
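A common way to realize the mutual-information-maximization view of Fig.6 and Fig.7 is an InfoNCE loss over two views of the same user's interaction record. The sketch below is one such formulation under that assumption, using in-batch negatives and a hypothetical temperature; it is not the exact loss of any single surveyed model.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) encodings of two views of the same users' records."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature          # pairwise similarities within the batch
    labels = torch.arange(z1.size(0))         # diagonal entries are the positive pairs
    return F.cross_entropy(logits, labels)

z_view1, z_view2 = torch.randn(32, 64), torch.randn(32, 64)   # stand-in encoder outputs
print(info_nce(z_view1, z_view2).item())
```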

Table 3

Statistical characteristics of the datasets

Dataset  #Users  #Items  #Attributes  #Interactions  Min. length  Max. length  Avg. length  Avg. attributes  Density/%
Amazon-Beauty 22 364 12 102 2 230 194 687 5 50 8.7 3.93 0.07
MovieLens-1M 6 040 3 352 18 269 721 17 50 44.6 1.70 1.33
Yelp 22 845 16 552 1 158 237 004 5 50 10.3 4.92 0.06
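The derived columns of Table 3 follow directly from the raw interaction logs: the average length is interactions per user, and the density is interactions divided by users times items. The sketch below reproduces these statistics; the helper operates on a hypothetical list of (user, item) events rather than the paper's released preprocessing scripts, and the sanity check uses the Amazon-Beauty row.

```python
def dataset_stats(interactions):
    """interactions: list of (user_id, item_id) pairs."""
    users = {u for u, _ in interactions}
    items = {i for _, i in interactions}
    n = len(interactions)
    return {
        "users": len(users),
        "items": len(items),
        "interactions": n,
        "avg_length": n / len(users),                          # average sequence length per user
        "density_pct": 100.0 * n / (len(users) * len(items)),  # filled fraction of the matrix
    }

print(dataset_stats([(1, 10), (1, 11), (2, 10), (2, 12), (3, 11)]))

# Sanity check against the Amazon-Beauty row of Table 3:
# 194687 / 22364 ≈ 8.7 (avg. length) and 100 * 194687 / (22364 * 12102) ≈ 0.07 (density %)
print(round(194687 / 22364, 1), round(100 * 194687 / (22364 * 12102), 2))
```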

Table 4

The experimental results on the Amazon-Beauty dataset

Models Recall@5 Recall@10 Recall@20 NDCG@5 NDCG@10 NDCG@20 MRR@5 MRR@10 MRR@20
SASRec 36.58 46.54 58.67 27.91 31.12 34.17 25.79 26.36 27.19
BERT4Rec 39.85 49.20 61.02 30.49 33.50 36.48 27.39 28.62 29.44
S3-Rec 46.15 56.98 68.76 34.53 38.03 41.00 31.94 32.13 32.94
ASReP 37.71 46.97 58.42 29.05 32.04 34.92 26.18 27.41 28.19
CL4SRec 41.45 51.10 62.52 32.34 35.45 38.32 28.48 28.70 30.49
MrTransformer (PE) 42.28 52.10 63.86 32.68 35.85 38.82 29.50 30.81 31.62
MrTransformer 40.15 50.19 62.25 30.80 34.04 37.08 27.70 29.04 29.86
SGL 38.14 49.31 60.63 27.34 30.97 33.82 23.79 25.29 26.07
SGL-p 36.10 47.00 59.09 25.99 29.52 32.57 22.66 24.12 24.95
DHCN 43.62 52.83 63.33 34.26 37.24 39.88 31.17 32.29 33.12
DHCN-p 43.45 52.53 62.82 34.06 37.02 39.59 30.98 32.20 32.90
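Tables 4-6 report Recall@K, NDCG@K and MRR@K. The sketch below computes these metrics for a single test interaction given the rank of the ground-truth item in the candidate list; it assumes the common one-positive-per-test-case protocol and is not the benchmark's exact evaluation code.

```python
import math

def metrics_at_k(rank, k):
    """rank: 1-based position of the ground-truth item in the ranked candidate list."""
    hit = rank <= k
    return {
        f"Recall@{k}": 1.0 if hit else 0.0,                    # item appears in the top-k
        f"NDCG@{k}": 1.0 / math.log2(rank + 1) if hit else 0.0,
        f"MRR@{k}": 1.0 / rank if hit else 0.0,                # reciprocal rank, cut off at k
    }

# Averaging these per-interaction scores over all test users gives the table entries.
print(metrics_at_k(rank=3, k=10))
```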

Table 5

The experimental results on the Yelp dataset

Models Recall@5 Recall@10 Recall@20 NDCG@5 NDCG@10 NDCG@20 MRR@5 MRR@10 MRR@20
SASRec 57.66 78.59 94.12 40.57 47.34 51.33 35.19 37.74 38.88
BERT4Rec 63.32 81.81 93.68 45.65 51.64 54.69 39.82 42.30 43.17
S3-Rec 64.24 83.51 96.60 46.15 52.40 55.77 42.54 42.78 43.74
ASReP 63.55 81.17 91.28 45.05 50.77 53.37 38.95 41.33 42.06
CL4SRec 63.78 81.14 92.22 46.44 52.08 54.93 40.71 43.06 43.86
MrTransformer (PE) 64.80 81.23 92.21 47.74 53.07 55.78 42.11 44.31 45.07
MrTransformer 63.97 82.18 93.90 45.99 51.90 54.92 40.05 42.51 43.36
SGL 66.17 82.60 92.23 48.67 54.01 56.49 42.89 45.11 45.82
SGL-p 62.57 80.39 92.20 45.64 51.43 54.46 42.89 45.11 45.82
DHCN 64.04 80.00 94.86 48.15 53.69 57.20 42.93 45.21 46.20
DHCN-p 64.28 82.35 94.68 48.48 54.01 57.42 43.29 45.57 46.53

Table 6

The experimental results on the MovieLens-1M dataset

Models Recall@5 Recall@10 Recall@20 NDCG@5 NDCG@10 NDCG@20 MRR@5 MRR@10 MRR@20
SASRec 77.14 87.24 93.26 59.50 62.81 64.36 54.86 55.00 55.45
BERT4Rec 77.83 87.31 93.05 61.12 64.23 65.68 55.55 56.85 57.25
S3-Rec 75.28 86.21 93.03 56.15 59.73 61.46 49.78 51.29 51.76
ASReP 77.98 87.53 93.36 60.87 64.00 64.48 55.16 56.47 56.89
CL4SRec 73.84 84.16 91.54 56.61 59.98 61.86 50.87 52.28 52.80
MrTransformer (PE) 78.34 87.22 93.03 62.05 64.94 66.42 56.62 57.83 58.24
MrTransformer 77.72 85.98 92.28 61.67 64.50 66.19 56.60 57.78 58.26
SGL 61.61 76.82 87.62 43.01 47.92 50.68 36.86 38.89 39.66
SGL-p 59.06 76.29 88.21 40.60 46.19 49.23 34.52 36.83 37.68
DHCN 72.89 84.10 91.69 55.61 59.18 61.13 49.95 51.45 51.99
DHCN-p 72.39 83.37 91.70 55.30 58.97 60.98 49.63 51.16 51.72

Table 7

Statistics on the proportion of sequences with different lengths in different datasets

Length Amazon-Beauty MovieLens-1M Yelp
(0, 20] 21 228(94.92%) 177(2.93%) 20 744(90.80%)
(20, 30] 655(2.92%) 684(11.32%) 1 094(4.78%)
(30, 40] 231(1.03%) 543(8.99%) 511(2.23%)
(40, 50] 250(1.11%) 4 636(76.75%) 496(2.17%)
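Table 7 buckets users by the length of their interaction sequences. A small sketch of how such proportions can be tallied from per-user lengths is given below; the example lengths are hypothetical, while the real counts come from the three datasets.

```python
from collections import Counter

def bucket(length):
    """Map a sequence length to the (0,20], (20,30], (30,40], (40,50] bins of Table 7."""
    if length <= 20:
        return "(0, 20]"
    for lo, hi in ((20, 30), (30, 40), (40, 50)):
        if length <= hi:
            return f"({lo}, {hi}]"
    return ">50"

lengths = [5, 8, 17, 22, 35, 48, 50, 12]          # hypothetical per-user sequence lengths
counts = Counter(bucket(n) for n in lengths)
for b in ["(0, 20]", "(20, 30]", "(30, 40]", "(40, 50]"]:
    c = counts.get(b, 0)
    print(f"{b}: {c} ({100 * c / len(lengths):.2f}%)")
```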

Fig.8

Recall@10 performance with different length sequences on the Amazon-Beauty dataset

Fig.9

NDCG@10 performance with different length sequences on the Amazon-Beauty dataset

Fig.10

Recall@10 performance with different length sequences on the MovieLens-1M dataset

Fig.11

NDCG@10 performance with different length sequences on the MovieLens-1M dataset

Fig.12

Recall@10 performance with different length sequences on the Yelp dataset

Fig.13

NDCG@10 performance with different length sequences on the Yelp dataset

Table 8

Parameter tuning analysis of the SGL model on the MovieLens-1M dataset

Models Recall@5 Recall@10 Recall@20 NDCG@5 NDCG@10 NDCG@20 MRR@5 MRR@10 MRR@20
lr=0.001, layers=3, ssl_weight=0.02 61.51 76.87 88.28 39.66 47.92 50.83 36.77 38.85 39.66
lr=0.001, layers=4, ssl_weight=0.02 62.05 76.89 88.03 43.61 48.44 51.27 37.52 39.53 40.31
lr=0.001, layers=5, ssl_weight=0.02 61.71 77.45 87.81 43.17 48.31 50.94 37.06 39.21 39.47
lr=0.001, layers=4, ssl_weight=0.03 61.77 77.15 88.36 43.11 48.12 50.98 36.94 39.03 39.83
lr=0.001, layers=4, ssl_weight=0.04 61.23 77.05 87.67 43.08 48.23 50.95 37.07 39.22 39.98
lr=0.001, layers=4, ssl_weight=0.05 61.18 76.71 87.24 43.19 48.25 50.93 37.24 39.34 40.09
lr=0.002, layers=4, ssl_weight=0.02 61.29 76.77 87.37 43.22 48.27 50.97 37.23 39.34 40.10
1 刘建伟, 刘媛, 罗雄麟. 深度学习研究进展[J]. 计算机应用研究, 2014, 31 (7): 1921-1930, 1942.
LIU Jianwei , LIU Yuan , LUO Xionglin . Research and development on deep learning[J]. Application Research of Computers, 2014, 31 (7): 1921-1930, 1942.
2 FANG Hui , ZHANG Danning , SHU Yiheng , et al. Deep learning for sequential recommendation: algorithms, influential factors, and evaluations[J]. ACM Transactions on Information Systems, 2020, 39 (1): 10.
3 WANG Shoujin , CAO Longbing , WANG Yan , et al. A survey on session-based recommender systems[J]. ACM Computing Surveys, 2021, 54 (7): 154.
4 ZHENG Lin, GUO Naicheng, CHEN Weihao, et al. Sentiment-guided sequential recommendation[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 1957-1960.
5 KANG W C, CHENG D Z, YAO T S, et al. Learning to embed categorical features without embedding tables for recommendation[C]//Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. New York: ACM, 2021: 840-850.
6 GUO Huifeng, CHEN Bo, TANG Ruiming, et al. An embedding learning framework for numerical features in CTR prediction[C]//Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. New York: ACM, 2021: 2910-2918.
7 WANG Pengfei, FAN Yu, XIA Long, et al. KERL: a knowledge-guided reinforcement learning model for sequential recommendation[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 209-218.
8 CHO S M, PARK E, YOO S. MEANTIME: mixture of attention mechanisms with multi-temporal embeddings for sequential recommendation[C]//Proceedings of the ACM Conference on Recommender Systems, New York: ACM, 2020: 515-520.
9 YUAN Fajie, HE Xiangnan, JIANG Haochuan, et al. Future data helps training: modeling future contexts for session-based recommendation[C]//Proceedings of the Web Conference 2020. Taipei: ACM, 2020: 303-313.
10 刘睿珩, 叶霞, 岳增营. 面向自然语言处理任务的预训练模型综述[J]. 计算机应用, 2021, 41 (5): 1236- 1246.
LIU Ruiheng , YE Xia , YUE Zengying . Review of pre-trained models for natural language processing tasks[J]. Journal of Computer Applications, 2021, 41 (5): 1236- 1246.
11 WANG Meng , FU Weijia , HE Xiangnan , et al. A survey on large-scale machine learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2022, 34 (6): 2574- 2594.
12 QIU Xipeng , SUN Tianxiang , XU Yige , et al. Pre-trained models for natural language processing: a survey[J]. Science China Technological Sciences, 2020, 63 (10): 1872- 1897.
doi: 10.1007/s11431-020-1647-3
13 LAN Z Z, CHEN M D, GOODMAN S, et al. ALBERT: a lite BERT for self-supervised learning of language representations[C/OL]//Proceedings of the International Conference on Learning Representations, 2020: 1-17. https://openreview.net/pdf?id=H1eA7AEtvS.
14 LEWIS M, LIU Y H, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: Association for Computational Linguistics, 2020: 7871-7880.
15 岳增营, 叶霞, 刘睿珩. 基于语言模型的预训练技术研究综述[J]. 中文信息学报, 2021, 35 (9): 15- 29.
YUE Zengying , YE Xia , LIU Ruiheng . A survey of language model based pre-training technology[J]. Journal of Chinese Information Processing, 2021, 35 (9): 15- 29.
16 HAN Xu , ZHANG Zhengyan , DING Ning , et al. Pre-trained models: past, present and future[J]. AI Open, 2021, 2, 225- 250.
doi: 10.1016/j.aiopen.2021.08.002
17 DEVLIN J, CHANG M W, LEE K, et al. Bert: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the North American Chapter of the Association for Computational Linguistics. New Orleans: Association for Computational Linguistics, 2018: 4171-4186.
18 GILLIOZ A, CASAS J, MUGELLINI E, et al. Overview of the transformer-based models for NLP tasks[C]//2020 15th Conference on Computer Science and Information Systems (FedCSIS). Sofia: IEEE, 2020: 179-183.
19 ZHANG J Q, ZHAO Y, MOHAMMAD S, et al. Pegasus: pre-training with extracted gap-sentences for abstractive summarization[C/OL]//Proceedings of the 37th International Conference on Machine Learning, 2020: 11328-11339. https://dl.acm.org/doi/pdf/10.5555/3524938.3525989.
20 LIU Y H , GU J T , GOYAL N , et al. Multilingual denoising pre-training for neural machine translation[J]. Transactions of the Association for Computational Linguistics, 2020, 8, 726- 742.
doi: 10.1162/tacl_a_00343
21 VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach: ACM, 2017: 6000-6010.
22 SUN Peijie, WU Le, ZHANG Kun, et al. Dual learning for explainable recommendation: towards unifying user preference prediction and review generation[C]//Proceedings of the Web Conference 2020. Taipei: ACM, 2020: 837-847.
23 LI Chenliang, NIU Xichuan, LUO Xiangyang, et al. A review-driven neural model for sequential recommendation[C]//Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. Macao: International Joint Conferences on Artificial Intelligence, 2019: 2866-2872.
24 CHEN Xusong, LIU Dong, LEI Chenyi, et al. BERT4SessRec: content-based video relevance prediction with bidirectional encoder representations from transformer[C]//Proceedings of the 27th ACM International Conference on Multimedia. Nice: ACM, 2019: 2597-2601.
25 MA Muyang, REN Pengjie, CHEN Zhumin, et al. Improving transformer-based sequential recommenders through preference editing[EB/OL]. (2021-06-23)[2023-10-18]. http://arxiv.org/abs/2106.12120.
26 GUO Qingyu , ZHUANG Fuzhen , QIN Chuan , et al. A survey on knowledge graph-based recommender systems[J]. Scientia Sinica Informationis, 2020, 50 (7): 937- 956.
doi: 10.1360/SSI-2019-0274
27 LAKE T, WILLIAMSON S A, HAWK A T, et al. Large-scale collaborative filtering with product embeddings[EB/OL]. (2019-01-11)[2023-10-18]. http://arxiv.org/abs/1901.04321.
28 SHIN K, KWAK H, KIM K M, et al. One4all user representation for recommender systems in E-commerce[EB/OL]. (2021-05-24)[2023-10-18]. http://arxiv.org/abs/2106.00573.
29 ZENG Zheni , XIAO Chaojun , YAO Yuan , et al. Knowledge transfer via pre-training for recommendation: a review and prospect[J]. Frontiers in Big Data, 2021, 4, 602071.
doi: 10.3389/fdata.2021.602071
30 DE SOUZA PEREIRA MOREIRA G, RABHI S, LEE J M, et al. Transformers4Rec: bridging the gap between NLP and sequential/session-based recommendation[C]//Proceedings of the 15th ACM Conference on Recommender Systems. Amsterdam: ACM, 2021: 143-153.
31 GUO Qingyu, ZHUANG Fuzhen, QIN Chuan, et al. A survey on knowledge graph-based recommender systems[EB/OL]. (2020-02-28)[2023-10-18]. https://arxiv.org/abs/2003.00911.
32 GUO Lei , WEN Yufei , WANG Xinhua . Exploiting pre-trained network embeddings for recommendations in social networks[J]. Journal of Computer Science and Technology, 2018, 33 (4): 682- 696.
doi: 10.1007/s11390-018-1849-9
33 WANG H, ZHOU K, ZHAO W X, et al. Curriculum pre-training heterogeneous subgraph transformer for top-N recommendation[EB/OL]. (2021-06-12)[2023-10-18]. http://arxiv.org/abs/2106.06722.
34 YUAN Xu, CHEN Hongshen, SONG Yonghao, et al. Improving sequential recommendation consistency with self-supervised imitation[C]//Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence. Montreal: International Joint Conferences on Artificial Intelligence, 2021: 3321-3327.
35 ZHOU Xin, SUN Aixin, LIU Yong, et al. SelfCF: a simple framework for self-supervised collaborative filtering[EB/OL]. (2021-07-07)[2023-10-18]. http://arxiv.org/abs/2107.03019.
36 HUANG J, ZHAO W X, DOU H J, et al. Improving sequential recommendation with knowledge-enhanced memory networks[C]//The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. Ann Arbor: ACM, 2018: 505-514.
37 WANG Hongwei, ZHANG Fuzheng, HOU Min, et al. SHINE: signed heterogeneous information network embedding for sentiment link prediction[C]//Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. Marina Del Rey: ACM, 2018: 592-600.
38 CAO Yixin, WANG Xiang, HE Xiangnan, et al. Unifying knowledge graph learning and recommendation: towards a better understanding of user preferences[C]//The World Wide Web Conference. San Francisco: ACM, 2019: 151-161.
39 ZHENG L, NOROOZI V, YU P S. Joint deep modeling of users and items using reviews for recommendation[C]//Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. Cambridge: ACM, 2017: 425-434.
40 NATARAJAN S , VAIRAVASUNDARAM S , NATARAJAN S , et al. Resolving data sparsity and cold start problem in collaborative filtering recommender system using linked open data[J]. Expert Systems with Applications, 2020, 149, 113248.
doi: 10.1016/j.eswa.2020.113248
41 XIN X, KARATZOGLOU A, ARAPAKIS I, et al. Self-supervised reinforcement learning for recommender systems[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 931-940.
42 ZHOU K, WANG H, ZHAO W X, et al. S3-rec: self-supervised learning for sequential recommendation with mutual information maximization[C]//Proceedings of the 29th ACM International Conference on Information & Knowledge Management. New York: ACM, 2020: 1893-1902.
43 QIU Ruihong, HUANG Zi, YIN Hongzhi, et al. Contrastive learning for representation degeneration problem in sequential recommendation[C]//Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining. New York: ACM, 2022: 813-823.
44 LIU Zhuang, MA Yunpu, OUYANG Yuanxin, et al. Contrastive learning for recommender system[EB/OL]. (2021-01-05)[2023-10-18]. http://arxiv.org/abs/2101.01317.
45 LIU Zhiwei, CHEN Yongjun, LI Jia, et al. Contrastive self-supervised sequential recommendation with robust augmentation[EB/OL]. (2021-08-14)[2023-10-18]. http://arxiv.org/abs/2108.06479.
46 ZHANG Junwei, GAO Min, YU Junliang, et al. Double-scale self-supervised hypergraph learning for group recommendation[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Queensland: ACM, 2021: 2557-2567.
47 WEI Yinwei, WANG Xiang, LI Qi, et al. Contrastive learning for cold-start recommendation[C]//Proceedings of the 29th ACM International Conference on Multimedia. New York: ACM, 2021: 5382-5390.
48 LIU Xiao , ZHANG Fanjin , HOU Zhenyu , et al. Self-supervised learning: generative or contrastive[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35 (1): 857- 876.
49 KANG W C, MCAULEY J. Self-attentive sequential recommendation[C]//2018 IEEE International Conference on Data Mining (ICDM). Singapore: IEEE, 2018: 197-206.
50 SUN Fei, LIU Jun, WU Jian, et al. BERT4Rec: sequential recommendation with bidirectional encoder representations from transformer[C]//Proceedings of the 28th ACM International Conference on Information and Knowledge Management. Beijing: ACM, 2019: 1441-1450.
51 LIU Zhiwei, FAN Ziwei, WANG Yu, et al. Augmenting sequential recommendation with pseudo-prior items via reversely pre-training transformer[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 1608-1612.
52 XIE Xu, SUN Fei, LIU Zhaoyang, et al. Contrastive learning for sequential recommendation[EB/OL]. (2020-10-27)[2023-10-18]. http://arxiv.org/abs/2010.14395.
53 WU Jiancan, WANG Xiang, FENG Fuli, et al. Self-supervised graph learning for recommendation[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 726-735.
54 XIA Xin , YIN Hongzhi , YU Junliang , et al. Self-supervised hypergraph convolutional networks for session-based recommendation[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35 (5): 4503- 4511.
doi: 10.1609/aaai.v35i5.16578
55 ZENG Zhe'ni, XIAO Chaojun, YAO Yuan, et al. Knowledge transfer via pre-training for recommendation: a review and prospect[EB/OL]. (2020-09-19)[2023-10-18]. https://arxiv.org/abs/2009.09226.
56 YU Junliang, YIN Hongzhi, XIA Xin, et al. Self-supervised learning for recommender systems: a survey[EB/OL]. (2022-03-29)[2023-10-18]. http://arxiv.org/abs/2203.15876.
57 王国霞, 刘贺平. 个性化推荐系统综述[J]. 计算机工程与应用, 2012, 48 (7): 66- 76.
WANG Guoxia , LIU Heping . Survey of personalized recommendation system[J]. Computer Engineering and Applications, 2012, 48 (7): 66- 76.
58 刘建国, 周涛, 汪秉宏. 个性化推荐系统的研究进展[J]. 自然科学进展, 2009, 19 (1): 1- 15.
LIU Jianguo , ZHOU Tao , WANG Binghong . Research progress of personalized recommendation system[J]. Progress in Natural Science, 2009, 19 (1): 1- 15.
59 黎星星, 黄小琴, 朱庆生. 电子商务推荐系统研究[J]. 计算机工程与科学, 2004, 26 (5): 7- 10.
LI Xingxing , HUANG Xiaoqin , ZHU Qingsheng . An exploration of the recommender systems in E-commerce[J]. Computer Engineering & Science, 2004, 26 (5): 7- 10.
60 BOCK J R , MAEWAL A . Adversarial learning for product recommendation[J]. AI, 2020, 1 (3): 376- 388.
doi: 10.3390/ai1030025
61 孟祥武, 胡勋, 王立才, 等. 移动推荐系统及其应用[J]. 软件学报, 2013, 24 (1): 91- 108.
MENG Xiangwu , HU Xun , WANG Licai , et al. Mobile recommender systems and their applications[J]. Journal of Software, 2013, 24 (1): 91- 108.
62 项亮. 推荐系统实践[M]. 北京: 人民邮电出版社, 2012.
XIANG Liang . Recommended system practice[M]. Beijing: The People's Posts and Telecommunications Press, 2012.
63 KOREN Y. The BellKor solution to the netflix grand prize[EB/OL]. Netflix prize documentation. 2009(2023-10-18). https://www.asc.ohio-state.edu/statistics/dmsl/GrandPrize2009_BPC_BellKor.pdf.
64 TÖSCHER A, JAHRER M, BELL R M. The bigchaos solution to the netflix grand prize[EB/OL]. Netflix prize documentation, 2009(2023-10-18). https://studylib.net/doc/18905717/the-bigchaos-solution-to-the-netflix-grand-prize.
65 MARTIN P, CHABBERT M. The pragmatic theory solution to the netflix grand prize[EB/OL]. Netflix prize documentation, 2009(2023-10-18). https://www.asc.ohio-state.edu/statistics/statgen/joul_aut2009/PragmaticTheory.pdf.
66 ÇANO E , MORISIO M . Hybrid recommender systems: a systematic literature review[J]. Intelligent Data Analysis, 2017, 21 (6): 1487- 1524.
doi: 10.3233/IDA-163209
67 BURKE R . Hybrid recommender systems: survey and experiments[J]. User Modeling and User-Adapted Interaction, 2002, 12 (4): 331- 370.
doi: 10.1023/A:1021240730564
68 TANG Jiaxi, WANG Ke. Personalized top-N sequential recommendation via convolutional sequence embedding[C]//Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. Marina Del Rey: ACM, 2018: 565-573.
69 LI Jing, REN Pengjie, CHEN Zhumin, et al. Neural attentive session-based recommendation[C]//Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. Singapore: ACM, 2017: 1419-1428.
70 SEO S, HUANG J, YANG H, et al. Interpretable convolutional neural networks with dual local and global attention for review rating prediction[C]//Proceedings of the Eleventh ACM Conference on Recommender Systems. Como: ACM, 2017: 297-305.
71 HUANG Chunli , JIANG Wenjun , WU Jie , et al. Personalized review recommendation based on users' aspect sentiment[J]. ACM Transactions on Internet Technology, 2020, 20 (4): 42.
72 LIU Donghua, LI Jing, DU Bo, et al. DAML: dual attention mutual learning between ratings and reviews for item recommendation[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Anchorage: ACM, 2019: 344-352.
73 WU Shu , TANG Yuyuan , ZHU Yanqiao , et al. Session-based recommendation with graph neural networks[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33 (1): 346- 353.
doi: 10.1609/aaai.v33i01.3301346
74 WANG Hongwei, ZHANG Fuzheng, WANG Jialin, et al. Ripplenet: propagating user preferences on the knowledge graph for recommender systems[C]//Proceedings of the ACM International Conference on Information and Knowledge Management, Turin: ACM, 2018: 417-426.
75 WANG Hongwei, ZHAO Miao, XIE Xing, et al. Knowledge graph convolutional networks for recommender systems[C]//The World Wide Web Conference. San Francisco: ACM, 2019: 3307-3313.
76 HUANG J, REN Z C, ZHAO W X, et al. Taxonomy-aware multi-hop reasoning networks for sequential recommendation[C]//Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. Melbourne: ACM, 2019: 573-581.
77 HAO Bowen, ZHANG Jing, YIN Hongzhi, et al. Pre-training graph neural networks for cold-start users and items representation[C]//Proceedings of the 14th ACM International Conference on Web Search and Data Mining. New York: ACM, 2021: 265-273.
78 ABEL F , HERDER E , HOUBEN G J , et al. Cross-system user modeling and personalization on the social web[J]. User Modeling and User-Adapted Interaction, 2013, 23 (2/3): 169- 209.
79 HERLOCKER J L, KONSTAN J A, RIEDL J. Explaining collaborative filtering recommendations[C]//Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work. Philadelphia: ACM, 2000: 241-250.
80 QUIJANO-SANCHEZ L , SAUER C , RECIO-GARCIA J A , et al. Make it personal: a social explanation system applied to group recommendations[J]. Expert Systems with Applications, 2017, 76, 36- 48.
doi: 10.1016/j.eswa.2017.01.045
81 MA H, KING I, LYU M R. Learning to recommend with social trust ensemble[C]//Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Boston: ACM, 2009: 203-210.
82 MA H, LYU M R, KING I. Learning to recommend with trust and distrust relationships[C]//Proceedings of the Third ACM Conference on Recommender Systems. New York: ACM, 2009: 189-196.
83 张永锋. 个性化推荐的可解释性研究[M]. 北京: 清华大学出版社, 2019.
ZHANG Yongfeng . Research on the interpretability of personalized recommendation[M]. Beijing: Tsinghua University Press, 2019.
84 YU Junliang, YIN Hongzhi, GAO Min, et al. Socially-aware self-supervised tri-training for recommendation[C]//Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. New York: ACM, 2021: 2084-2092.
85 YU Junliang, YIN Hongzhi, LI Jundong, et al. Self-supervised multi-channel hypergraph convolutional network for social recommendation[C]//Proceedings of the Web Conference 2021. Ljubljana: ACM, 2021: 413-424.
86 WANG Xiang, HE Xiangnan, NIE Liqiang, et al. Item silk road: recommending items from information domains to social users[C]//Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. Tokyo: ACM, 2017: 185-194.
87 SONG Weiping, XIAO Zhiping, WANG Yifan, et al. Session-based social recommendation via dynamic graph attention networks[C]//Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. Melbourne: ACM, 2019: 555-563.
88 CHEN T W, WONG R C W. An efficient and effective framework for session-based social recommendation[C]//Proceedings of the 14th ACM International Conference on Web Search and Data Mining. New York: ACM, 2021: 400-408.
89 PAN Zhiqiang, CAI Fei, LING Yanxiang, et al. Rethinking item importance in session-based recommendation[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 1837-1840.
90 ZIMDARS A, CHICKERING D M, MEEK C. Using temporal data for making recommendations[C]//Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence. Seattle: ACM, 2001: 580-588.
91 SHANI G , HECKERMAN D , BRAFMAN R I . An MDP-based recommender system[J]. Journal of Machine Learning Research, 2005, (6): 1265- 1295.
92 HIDASI B, KARATZOGLOU A, BALTRUNAS L, et al. Session-based recommendations with recurrent neural networks[C/OL]//Proceedings of the International Conference on Learning Representations, 2015: 1-10. https://arxiv.org/pdf/1511.06939v4.
93 XU Chengfeng, ZHAO Pengpeng, LIU Yanchi, et al. Graph contextualized self-attention network for session-based recommendation[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao: ACM, 2019: 3940-3946.
94 LIU Qiang, WU Shu, WANG Diyi, et al. Context-aware sequential recommendation[C]//2016 IEEE 16th International Conference on Data Mining (ICDM). Barcelona: IEEE, 2016: 1053-1058.
95 QUADRANA M, KARATZOGLOU A, HIDASI B, et al. Personalizing session-based recommendations with hierarchical recurrent neural networks[C]//Proceedings of the Eleventh ACM Conference on Recommender Systems. Como: ACM, 2017: 130-137.
96 HIDASI B, QUADRANA M, KARATZOGLOU A, et al. Parallel recurrent neural network architectures for feature-rich session-based recommendations[C]//Proceedings of the 10th ACM Conference on Recommender Systems. Boston: ACM, 2016: 241-248.
97 BOGINA V, KUFLIK T. Incorporating dwell time in session-based recommendations with recurrent neural networks[C]//Proceedings of the ACM Conference on Recommender Systems, Como: ACM, 2017: 57-59.
98 SUN Shiming , TANG Yuanhe , DAI Zemei , et al. Self-attention network for session-based recommendation with streaming data input[J]. IEEE Access, 2019, 7, 110499- 110509.
doi: 10.1109/ACCESS.2019.2931945
99 ZHAI X H, OLIVER A, KOLESNIKOV A, et al. S4L: self-supervised semi-supervised learning[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul: IEEE, 2019: 1476-1485.
100 JING Longlong , TIAN Yingli . Self-supervised visual feature learning with deep neural networks: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 43 (11): 4037- 4058.
101 CHEN T, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual representations[EB/OL]. (2020-02-13)[2023-10-18]. http://arxiv.org/abs/2002.05709.
102 JAISWAL A , BABU A R , ZADEH M Z , et al. A survey on contrastive self-supervised learning[J]. Technologies, 2020, 9 (1): 2.
doi: 10.3390/technologies9010002
103 MA Jianxin, ZHOU Chang, CUI Peng, et al. Learning disentangled representations for recommendation[EB/OL]. (2019-10-31)[2023-10-18]. https://arxiv.org/abs/1910.14238.
104 CHEN X L, FAN H Q, GIRSHICK R, et al. Improved baselines with momentum contrastive learning[EB/OL]. (2020-03-09)[2023-10-18]. http://arxiv.org/abs/2003.04297.
105 VAN DEN OORD A, LI Y Z, VINYALS O. Representation learning with contrastive predictive coding[EB/OL]. (2018-07-10)[2023-10-18]. http://arxiv.org/abs/1807.03748.
106 DOERSCH C, GUPTA A, EFROS A A. Unsupervised visual representation learning by context prediction[C]//2015 IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE, 2015: 1422-1430.
107 WEI Chen, XIE Lingxi, REN Xutong, et al. Iterative reorganization with weak spatial constraints: solving arbitrary jigsaw puzzles for unsupervised representation learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 1910-1919.
108 KONG L P, DE MASSON D'AUTUME C, LING W, et al. A mutual information maximization perspective of language representation learning[C/OL]//Proceedings of the International Conference on Learning Representations, 2019: 1-11. https://openreview.net/forum?id=Syx79eBKwr.
109 YUAN F J, ZHANG G X, KARATZOGLOU A, et al. One person, one model, one world: learning continual user representation without forgetting[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 696-705.
110 LIU Yong, YANG Susen, LEI Chenyi, et al. Pre-training graph transformer with multimodal side information for recommendation[C]//Proceedings of the 29th ACM International Conference on Multimedia. New York: ACM, 2021: 2853-2861.
111 XIAO Chaojun, XIE Ruobing, YAO Yuan, et al. UPRec: user-aware pre-training for recommender systems[EB/OL]. (2021-02-22)[2023-10-18]. http://arxiv.org/abs/2102.10989.
112 QIU Zhaopeng , WU Xian , GAO Jingyue , et al. U-BERT: pre-training user representations for improved recommendation[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35 (5): 4320- 4327.
doi: 10.1609/aaai.v35i5.16557
113 YANG Jingxuan, XU Jun, TONG Jianzhuo, et al. Pre-training of context-aware item representation for next basket recommendation[EB/OL]. (2019-04-14)[2023-10-18]. http://arxiv.org/abs/1904.12604.
114 YUAN F J, HE X N, KARATZOGLOU A, et al. Parameter-efficient transfer from sequential behaviors for user modeling and recommendation[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 1469-1478.
115 YAO T S, YI X Y, CHENG D Z, et al. Self-supervised learning for large-scale item recommendations[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Queensland: ACM, 2021: 4321-4330.
116 ZHOU Chang, MA Jianxin, ZHANG Jianwei, et al. Contrastive learning for debiased candidate generation in large-scale recommender systems[C]//Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. New York: ACM, 2021: 3985-3995.
117 MA Jianxin, ZHOU Chang, YANG Hongxia, et al. Disentangled self-supervision in sequential recommenders[C]//Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: ACM, 2020: 483-491.
118 ZHANG Qi, LI Jingjie, JIA Qinglin, et al. UNBERT: user-news matching BERT for news recommendation[C]//Proceedings of the International Joint Conference on Artificial Intelligence. Montreal: Curran Associates, Inc., 2021: 3356-3362.
119 SHANG Junyuan, MA Tengfei, XIAO Cao, et al. Pre-training of graph augmented transformers for medication recommendation[C]//Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. Macao: International Joint Conferences on Artificial Intelligence, 2019: 5953-5959.
120 JIANG JY, LUO Y T, BOUM J B, et al. Sequential recommendation with bidirectional chronological augmentation of transformer[EB/OL]. (2021-12-13)[2023-10-18]. http://arxiv.org/abs/2112.06460.
121 WU Chunhan, WU Fangzhao, QI Tao, et al. PTUM: pre-training user model from unlabeled user behaviors via self-supervision[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing, Stroudsburg: Association for Computational Linguistics, 2020: 1939-1944.
122 WANG H, ZHOU K, ZHAO W X, et al. Curriculum pre-training heterogeneous subgraph transformer for top-N recommendation[EB/OL]. (2021-06-12)[2023-10-18]. http://arxiv.org/abs/2106.06722.
123 LEE D H, KANG S, JU H, et al. Bootstrapping user and item representations for one-class collaborative filtering[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 1513-1522.
124 CHENG Mingyue, YUAN Fajie, LIU Qi, et al. Learning transferable user representations with sequential behaviors via contrastive pre-training[C]//2021 IEEE International Conference on Data Mining (ICDM). Auckland: IEEE, 2021: 51-60.
125 LIU Haochen, TANG Da, YANG Ji, et al. Self-supervised learning for alleviating selection bias in recommendation systems[C]//IRS 2021. New York: ACM, 2021.
126 TAO Yinghui, GAO Min, YU Junliang, et al. Predictive and contrastive: dual-auxiliary learning for recommendation[EB/OL]. (2022-03-08)[2023-10-18]. http://arxiv.org/abs/2203.03982.
127 BIAN S Q, ZHAO W X, ZHOU K, et al. Contrastive curriculum learning for sequential user behavior modeling via data augmentation[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Queensland: ACM, 2021: 3737-3746.
128 XIA Xin, YIN Hongzhi, YU Junliang, et al. Self-supervised graph co-training for session-based recommendation[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Queensland: ACM, 2021: 2180-2190.
129 HAO Bowen, YIN Hongzhi, ZHANG Jing, et al. A multi-strategy based pre-training method for cold-start recommendation[EB/OL]. (2021-12-04)[2023-10-18]. http://arxiv.org/abs/2112.02275.
130 YANG Yonghui, WU Le, HONG Richang, et al. Enhanced graph learning for collaborative filtering via mutual information maximization[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 71-80.
131 LI Yicong, CHEN Hongxu, SUN Xiangguo, et al. Hyperbolic hypergraphs for sequential recommendation[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Queensland: ACM, 2021: 988-997.
132 WANG Chenyang , MA Weizhi , CHEN Chong , et al. Sequential recommendation with multiple contrast signals[J]. ACM Transactions on Information Systems, 2023, 41 (1): 1- 27.
133 CAO Jiangxia, LIN Xixun, GUO Shu, et al. Bipartite graph embedding via mutual information maximization[C]//Proceedings of the 14th ACM International Conference on Web Search and Data Mining. New York: ACM, 2021: 635-643.
134 CAI Desheng , QIAN Shengsheng , FANG Quan , et al. Heterogeneous graph contrastive learning network for personalized micro-video recommendation[J]. IEEE Transactions on Multimedia, 2023, 25, 2761- 2773.
doi: 10.1109/TMM.2022.3151026
135 LONG Xiaoliang, HUANG Chao, XU Yong, et al. Social recommendation with self-supervised metagraph informax network[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Queensland: ACM, 2021: 1160-1169.
136 XIE Ruobing, LIU Qi, WANG Liangdong, et al. Contrastive cross-domain recommendation in matching[EB/OL]. (2021-12-02)[2023-10-18]. http://arxiv.org/abs/2112.00999.
137 WANG Chen, LIANG Yueqing, LIU Zhiwei, et al. Pre-training graph neural network for cross domain recommendation[C]//2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI). Atlanta: IEEE, 2021: 140-145.
138 LIN Zihan, TIAN Changxin, HOU Yupeng, et al. Improving graph collaborative filtering with neighborhood-enriched contrastive learning[C]//Proceedings of the ACM Web Conference 2022. Lyon: ACM, 2022: 2320-2329.
139 CHEN Yongjun, LIU Zhiwei, LI Jia, et al. Intent contrastive learning for sequential recommendation[EB/OL]. (2022-02-05)[2023-10-18]. http://arxiv.org/abs/2202.02519.
140 GUO Wei, ZHANG Can, HE Zhicheng, et al. MISS: multi-interest self-supervised learning framework for click-through rate prediction[C]//2022 IEEE 38th International Conference on Data Engineering (ICDE). Kuala Lumpur: IEEE, 2022: 727-740.
141 YU Junliang, YIN Hongzhi, XIA Xin, et al. Graph augmentation-free contrastive learning for recommendation[EB/OL]. (2021-12-26)[2023-10-18]. https://arxiv.org/abs/2112.08679.
142 LIU Zhiwei, CHEN Yongjun, LI Jia, et al. Improving contrastive learning with model augmentation[EB/OL]. (2022-03-25)[2023-10-18]. http://arxiv.org/abs/2203.15508.
143 ZHAO Pengyu, SHUI Tianxiao, ZHANG Yuanxing, et al. Adversarial oracular Seq2seq learning for sequential recommendation[C]//Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. Yokohama: International Joint Conferences on Artificial Intelligence, 2020: 1905-1911.
144 REN Ruiyang, LIU Zhaoyang, LI Yaliang, et al. Sequential recommendation with self-attentive multi-adversarial network[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 89-98.
145 WU Qiong, LIU Yong, MIAO Chunyan, et al. PD-GAN: adversarial learning for personalized diversity-promoting recommendation[C]//Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. Macao: International Joint Conferences on Artificial Intelligence, 2019: 3870-3876.
146 CHEN Xinshi, LI Shuang, LI Hui, et al. Generative adversarial user model for reinforcement learning based recommendation system[EB/OL]. (2018-12-27)[2023-10-18]. http://arxiv.org/abs/1812.10613.
147 BHARADHWAJ H, PARK H, LIM B Y. RecGAN: recurrent generative adversarial networks for recommendation systems[C]//Proceedings of the 12th ACM Conference on Recommender Systems. Vancouver: ACM, 2018: 372-376.
148 MCAULEY J, TARGETT C, SHI Q F, et al. Image-based recommendations on styles and substitutes[C]//Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. Santiago: ACM, 2015: 43-52.
149 ZHANG Tingting, ZHAO Pengpeng, LIU Yanchi, et al. Feature-level deeper self-attention network for sequential recommendation[C]//Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. Macao: International Joint Conferences on Artificial Intelligence, 2019: 4320-4326.
150 MENG Wenjing, YANG Deqing, XIAO Yanghua. Incorporating user micro-behaviors and item knowledge into multi-task learning for session-based recommendation[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 1091-1100.