[1] 刘荣珍. 基于逻辑回归和机器学习的个人信用风险研究[D]. 兰州: 兰州大学, 2021. Liu R Z. Research on personal credit risk based on logistic regression and machine learning [D]. Lanzhou: Lanzhou University, 2021. (in Chinese)
[2] 刘荣弟. 基于Logistic回归的信用评分模型研究[D]. 大连: 大连理工大学, 2018. Liu R D. Research on credit scoring models based on logistic regression [D]. Dalian: Dalian University of Technology, 2018. (in Chinese)
[3] Gang D, Lai K K, Yen J. Credit scorecard based on logistic regression with random coefficients [J]. Procedia Computer Science, 2010, 1(1): 2463-2468.
[4] Bian Y, Yang C, Zhao J L, et al. Good drivers pay less: a study of usage-based vehicle insurance models [J]. Transportation Research Part A: Policy and Practice, 2018, 107: 20-34.
[5] Huang Y, Meng S. Automobile insurance classification ratemaking based on telematics driving data [J]. Decision Support Systems, 2019, 127(12): 1-11.
[6] 边俊源. 基于驾驶行为数据分析的UBI车险定价策略研究[D]. 南京: 南京邮电大学, 2020. Bian J Y. Research on UBI vehicle insurance pricing strategies based on driving behavior data analysis [D]. Nanjing: Nanjing University of Posts and Telecommunications, 2020. (in Chinese)
[7] McMahan H B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data [C]//Artificial Intelligence and Statistics. PMLR, 2017: 1273-1282.
[8] 杨强. 联邦学习: 人工智能的最后一公里[J]. 智能系统学报, 2020, 15(1): 183-186. Yang Q. Federated learning: the last kilometer of artificial intelligence [J]. CAAI Transactions on Intelligent Systems, 2020, 15(1): 183-186. (in Chinese)
[9] 谭作文, 张连福. 机器学习的隐私保护研究综述[J]. 软件学报, 2020, 31(7): 2127-2156. Tan Z W, Zhang L F. Survey on privacy preserving techniques for machine learning [J]. Journal of Software, 2020, 31(7): 2127-2156. (in Chinese)
[10] 刘艺璇, 陈红, 刘宇涵, 等. 联邦学习中的隐私保护技术[J]. 软件学报, 2022, 33(3): 1057-1092. Liu Y X, Chen H, Liu Y H, et al. Privacy-preserving techniques in federated learning [J]. Journal of Software, 2022, 33(3): 1057-1092. (in Chinese)
[11] Le T P, Aono Y, Hayashi T, et al. Privacy-preserving deep learning via additively homomorphic encryption [J]. IEEE Transactions on Information Forensics and Security, 2018, 13(5): 1333-1345.
[12] Liu Y, Kang Y, Xing C, et al. A secure federated transfer learning framework [J]. IEEE Intelligent Systems, 2020, 35(4): 70-82.
[13] Bonawitz K, Ivanov V, Kreuter B, et al.
Practical secure aggregation for privacy-preserving machine learning [C]//Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017: 1175-1191.
[14] Wei K, Li J, Ding M, et al. Federated learning with differential privacy: algorithms and performance analysis [J]. IEEE Transactions on Information Forensics and Security, 2020, 15: 3454-3469.
[15] Naehrig M, Lauter K E, Vaikuntanathan V. Can homomorphic encryption be practical? [C]//Proceedings of the 3rd ACM Cloud Computing Security Workshop. ACM, 2011: 113-124.
[16] Melis L, Song C, De Cristofaro E, et al. Exploiting unintended feature leakage in collaborative learning [C]//2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019: 691-706.
[17] Bagdasaryan E, Poursaeed O, Shmatikov V. Differential privacy has disparate impact on model accuracy [J]. Advances in Neural Information Processing Systems, 2019, 32: 1-10.
[18] McKeen F, Alexandrovich I, Anati I, et al. Intel Software Guard Extensions (Intel SGX) support for dynamic memory management inside an enclave [C]//Hardware and Architectural Support for Security and Privacy. ACM, 2016: 1-9.
[19] 郑显义, 李文, 孟丹. TrustZone技术的分析与研究[J]. 计算机学报, 2016, 39(9): 1912-1928. Zheng X Y, Li W, Meng D. Analysis and research on TrustZone technology [J]. Chinese Journal of Computers, 2016, 39(9): 1912-1928. (in Chinese)
[20] 董春涛, 沈晴霓, 罗武, 等. SGX应用支持技术研究进展[J]. 软件学报, 2021, 32(1): 137-166. Dong C T, Shen Q N, Luo W, et al. Research progress of SGX application supporting techniques [J]. Journal of Software, 2021, 32(1): 137-166. (in Chinese)
[21] 舒俊宜. 一种基于可信执行环境的机密计算框架设计与实现[D]. 北京: 北京大学, 2021. Shu J Y. Design and implementation of a confidential computing framework based on trusted execution environment [D]. Beijing: Peking University, 2021. (in Chinese)
[22] 张英骏, 冯登国, 秦宇, 等. 基于TrustZone的强安全需求环境下可信代码执行方案[J]. 计算机研究与发展, 2015, 52(10): 2224-2238. Zhang Y J, Feng D G, Qin Y, et al. A TrustZone-based trusted code execution with strong security requirements [J]. Journal of Computer Research and Development, 2015, 52(10): 2224-2238. (in Chinese)
[23] 张英骏, 冯登国, 秦宇, 等. 基于TrustZone的开放环境中敏感应用防护方案[J].
计算机研究与发展, 2017, 54(10): 2268-2283. Zhang Y J, Feng D G, Qin Y, et al. A TrustZone based application protection scheme in highly open scenarios [J]. Journal of Computer Research and Development, 2017, 54(10): 2268-2283. (in Chinese)
[24] Ding Y, Duan R, Li L, et al. POSTER: Rust SGX SDK: towards memory safety in Intel SGX enclave [C]//Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017: 2491-2493.
[25] Chen Y, Luo F, Li T, et al. A training-integrity privacy-preserving federated learning scheme with trusted execution environment [J]. Information Sciences, 2020, 522: 69-79.
[26] Zhang X, Li F, Zhang Z, et al. Enabling execution assurance of federated learning at untrusted participants [C]//IEEE INFOCOM 2020 - IEEE Conference on Computer Communications. IEEE, 2020: 1877-1886.
[27] Mo F, Haddadi H, Katevas K, et al. PPFL: privacy-preserving federated learning with trusted execution environments [C]//Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, 2021: 94-108.
[28] 姜建林. 基于可信执行环境的联邦学习模型安全聚合技术研究[D]. 武汉: 武汉大学, 2021. Jiang J L. Research on secure aggregation for federated learning models based on trusted execution environment [D]. Wuhan: Wuhan University, 2021. (in Chinese)
[29] 宋蕾, 马春光, 段广晗, 等. 基于数据纵向分布的隐私保护逻辑回归[J]. 计算机研究与发展, 2019, 56(10): 2243-2249. Song L, Ma C G, Duan G H, et al. Privacy-preserving logistic regression on vertically partitioned data [J]. Journal of Computer Research and Development, 2019, 56(10): 2243-2249. (in Chinese)
[30] Krawczyk B. Learning from imbalanced data: open challenges and future directions [J]. Progress in Artificial Intelligence, 2016, 5(4): 221-232.