
《山东大学学报(理学版)》 ›› 2020, Vol. 55 ›› Issue (11): 66-77. doi: 10.6040/j.issn.1671-9352.0.2020.268


Class-specific β distribution reduction in interval-valued decision systems

HAN Shuang-zhi1,2, ZHANG Nan1,2*, ZHANG Zhong-xi1,2

  1. Key Laboratory for Data Science and Intelligence Technology of Shandong Higher Education Institutes, Yantai University, Yantai 264005, Shandong, China;
    2. School of Computer and Control Engineering, Yantai University, Yantai 264005, Shandong, China
  • Published: 2020-11-17
  • About the authors: HAN Shuang-zhi (1996— ), male, master's student; research interests: rough sets and data mining. E-mail: hanshuangzhidemail@163.com. *Corresponding author: ZHANG Nan (1979— ), male, associate professor; research interests: rough sets, artificial intelligence, and cognitive informatics. E-mail: zhangnan0851@163.com
  • Supported by:
    National Natural Science Foundation of China (11801491); Natural Science Foundation of Shandong Province (ZR2018BA004)



Abstract: Attribute reduction is one of the important research topics in rough set theory. β distribution reduction in interval-valued decision systems keeps the β distribution of objects unchanged before and after reduction. In practice, attribute reduction often needs to focus on only a specific decision class rather than on all decision classes. This paper proposes a theoretical framework for class-specific β distribution reduction in interval-valued decision systems. First, the basic concepts of class-specific β distribution reduction are defined; then the discernibility matrix corresponding to the class-specific β distribution is constructed; finally, a class-specific β distribution reduction algorithm based on discernibility matrices is proposed. In the experiments, six UCI data sets are used to compare the reduction results and reduction efficiency of the all-class algorithm (BRADM) and the class-specific algorithm (CSBRADM). The results show that the reducts obtained by the class-specific algorithm keep the β distribution of the specific class unchanged, that the reduct length of the class-specific algorithm is less than or equal to that of the all-class algorithm, and that the CSBRADM algorithm is more efficient than the BRADM algorithm.
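The discernibility-matrix route that the abstract outlines can be sketched on a toy symbolic decision table. This is an illustrative assumption only: the paper's systems are interval-valued and β-parameterized, which this minimal classical sketch omits; the table, helper names, and brute-force reduct search are all made up for illustration.

```python
# Minimal sketch of discernibility-matrix-based attribute reduction
# on a toy symbolic decision table (not the paper's CSBRADM algorithm).
from itertools import combinations

# Toy decision table: each row is (attribute values, decision class).
objects = [
    ((0, 1, 1), 'P'),
    ((1, 1, 0), 'P'),
    ((1, 0, 0), 'N'),
    ((0, 0, 1), 'N'),
]
n_attrs = 3

def discernibility_matrix(objs):
    """Entry (i, j): attributes distinguishing objects with different decisions."""
    matrix = {}
    for i, j in combinations(range(len(objs)), 2):
        (xi, di), (xj, dj) = objs[i], objs[j]
        if di != dj:  # only pairs from different decision classes matter
            matrix[(i, j)] = {a for a in range(n_attrs) if xi[a] != xj[a]}
    return matrix

def is_reduct_candidate(attrs, matrix):
    """A subset of attributes must hit every non-empty matrix entry."""
    return all(entry & attrs for entry in matrix.values() if entry)

def minimal_reducts(matrix):
    """Brute force: all minimal attribute subsets hitting every entry."""
    hits = [frozenset(s) for size in range(1, n_attrs + 1)
            for s in combinations(range(n_attrs), size)
            if is_reduct_candidate(set(s), matrix)]
    return [s for s in hits if not any(t < s for t in hits)]

m = discernibility_matrix(objects)
print(minimal_reducts(m))  # here only attribute 1 is needed: [frozenset({1})]
```

In the class-specific setting, only matrix entries involving objects of the chosen decision class would be kept; fewer entries need to be hit, which is consistent with the abstract's observation that class-specific reducts are no longer than all-class reducts.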

Key words: rough sets, attribute reduction, interval value, class specific, β distribution reduction, discernibility matrix
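Rough-set constructions over interval values need a way to compare two intervals. One common generic choice, shown below as an assumption rather than the measure this paper uses, is the ratio of overlap length to the length of the union:

```python
# A generic similarity measure for interval values: overlap length divided
# by union length (an assumed illustrative choice, not the paper's definition).
def interval_similarity(a, b):
    """Intervals are (low, high) pairs; returns a value in [0, 1]."""
    (al, ah), (bl, bh) = a, b
    overlap = max(0.0, min(ah, bh) - max(al, bl))
    union = max(ah, bh) - min(al, bl)
    return overlap / union if union > 0 else 1.0  # identical point intervals

# Objects whose attribute intervals are similar to degree >= alpha form a
# tolerance relation, replacing the equivalence relation of classical rough sets.
print(interval_similarity((1.0, 3.0), (2.0, 4.0)))  # overlap 1 over union 3
```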

CLC number:

  • TP391
[1] PAWLAK Z. Rough sets[J]. International Journal of Computer & Information Sciences, 1982, 11(5):341-356.
[2] PAWLAK Z. Rough sets: theoretical aspects of reasoning about data[M]. Dordrecht: Kluwer Academic Publishers, 1992.
[3] QIAN Y H, XU H, LIANG J Y, et al. Fusing monotonic decision trees[J]. IEEE Transactions on Knowledge & Data Engineering, 2015, 27(10):2717-2728.
[4] DU W S, HU B Q. A fast heuristic attribute reduction approach to ordered decision systems[J]. European Journal of Operational Research, 2018, 264(2):440-452.
[5] FAN J, JIANG Y L, LIU Y. Quick attribute reduction with generalized indiscernibility models[J]. Information Sciences, 2017, 397:15-36.
[6] SENGUPTA A, PAL T. On comparing interval numbers [J]. European Journal of Operational Research, 2000, 127(1):28-43.
[7] ZHANG Nan, MIAO Duoqian, YUE Xiaodong. Approaches to knowledge reduction in interval-valued information systems[J]. Journal of Computer Research and Development, 2010, 47(8):1362-1371.
[8] YAO Y Y, ZHAO Y. Discernibility matrix simplification for constructing attribute reducts[J]. Information Sciences, 2008, 179(7): 867-882.
[9] SKOWRON A, RAUSZER C. The discernibility matrices and functions in information systems[M] //SLOWONSKI R. Intelligent Decision Support. Dordrecht: Springer, 1992: 331-362.
[10] WEI W, WU X Y, LIANG J Y, et al. Discernibility matrix based incremental attribute reduction for dynamic data[J]. Knowledge-Based Systems, 2018, 140:142-157.
[11] LIU G L, FENG Y B, YANG J T. A common attribute reduction form for information systems[J]. Knowledge-Based Systems, 2020, 193:105446.
[12] LIU Y, ZHENG L D, XIU Y L, et al. Discernibility matrix based incremental feature selection on fused decision tables[J]. International Journal of Approximate Reasoning, 2020, 118:1-26.
[13] ZHENG K, WANG X. Feature selection method with joint maximal information entropy between features and class[J]. Pattern Recognition, 2018, 77:20-29.
[14] QIAN Y H, LIANG J Y, WITOLD P, et al. Positive approximation: an accelerator for attribute reduction in rough set theory[J]. Artificial Intelligence, 2010, 174(9):597-618.
[15] NGUYEN N T, SARTRA W. A new approach for reduction of attributes based on stripped quotient sets[J]. Pattern Recognition, 2020, 97:106999.
[16] MIAO D Q, ZHANG N, YUE X D. Knowledge reduction in interval-valued information systems[C] //Proceedings of the 8th IEEE International Conference on Cognitive Informatics, Hong Kong: IEEE, 2009: 320-327.
[17] DAI J H, HU H, ZHENG G J, et al. Attribute reduction in interval-valued information systems based on information entropies[J]. Frontiers of Information Technology & Electronic Engineering, 2016, 17(9):919-928.
[18] DU W S, HU B Q. Approximate distribution reducts in inconsistent interval-valued ordered decision tables[J]. Information Sciences, 2014, 271:93-114.
[19] DAI J H, WEI B J, ZHANG X H, et al. Uncertainty measurement for incomplete interval-valued information systems based on α-weak similarity[J]. Knowledge-Based Systems, 2017, 136: 159-171.
[20] YANG Wenjing, ZHANG Nan, TONG Xiangrong, et al. Class-specific distribution preservation reduction in interval-valued decision systems[J]. Computer Science, 2020, 47(3):92-97.
[21] MI J S, WU W Z, ZHANG W X. Approaches to knowledge reduction based on variable precision rough set model[J]. Information Sciences, 2004, 159(3):255-272.
[22] LIU G L. Matrix approaches for variable precision rough approximations[C] //International Conference on Rough Sets and Knowledge Technology, Switzerland: Springer, 2015: 214-221.
[23] LIU G L, HUA Z, ZOU J Y. Local attribute reductions for decision tables[J]. Information Sciences, 2018, 422:204-217.
[24] LEUNG Y, FISCHER M, WU W Z, et al. A rough set approach for the discovery of classification rules in interval-valued information systems[J]. International Journal of Approximate Reasoning, 2008, 47(2):233-246.
[25] ZHANG X, MEI C L, CHEN D G, et al. Multi-confidence rule acquisition and confidence-preserved attribute reduction in interval-valued decision systems[J]. International Journal of Approximate Reasoning, 2014, 55(8):1787-1804.