The Fall 2024 "FM meets AI" 3+X seminar is offered to undergraduate students.
Time: Fridays, 13:00-15:00, starting October 18.
Artificial intelligence, in particular the large language models exemplified in recent years by ChatGPT, has become the core driving force of a new round of scientific, technological, and industrial transformation, with a profound impact on economic development and social progress. Unlike traditional software, AI software is typically built from large-scale statistical learning over data, and the trained models lack the clear behavioral logic of conventional programs: they are hard to understand and verify, so their safety and reliability are difficult to guarantee in theory. In recent years, applying formal methods to the safety and trustworthiness of AI systems has become a hot topic in both academia and industry; providing safety and reliability guarantees for AI systems through rigorous reasoning and automated mathematical proof has become a central research goal of this field. Conversely, although formal methods have made remarkable progress, with many successful applications in chip design, OS microkernel verification, compiler verification, and smart-contract verification on blockchains, they still face the major challenges of high cost and poor scalability. Using AI techniques to raise the degree of automation in logical reasoning and formal verification, and thereby lower verification costs, has become a research trend that promises to transform formal methods profoundly.
Combining AI with formal methods will provide new theoretical foundations and technical support for the safety and trustworthiness of AI systems, offer trustworthy guarantees for the large-scale deployment of AI in safety-critical domains, and lay the groundwork for safe collaboration between humans and intelligent systems. This seminar centers on the intersection of AI and formal methods, covering both AI for FM and FM for AI and exploring how the two fields empower each other; students will read and present papers from the relevant literature.
Date | Topic | Presenter |
---|---|---|
2024.10.18 | What Will Happen When AI Embraces Formal Methods? | 孙猛 |
2024.10.25 | 1. Semantic Robustness of Deep Neural Networks 2. Automata Extraction | 孙猛, 魏泽明 |
2024.11.1 | Verification of Neural Networks (1): 1. Towards Verifying the Geometric Robustness of Large-Scale Neural Networks 2. Fairify: Fairness Verification of Neural Networks | 邬程灿, 岳关璋 |
2024.11.22 | Verification of Neural Networks (2): 1. Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks 2. Verification of Neural Networks' Global Robustness | 邬程灿, 张知辛 |
2024.11.29 | Testing for AI Systems | 2-3 presenters |
2024.12.20 | Repairing Techniques | 2-3 presenters |
2024.12.27 | AI Techniques for Theorem Proving | 2-3 presenters |
2025.1.3 | Large Language Models in Formal Verification | 2-3 presenters |
0. What Will Happen When AI Embraces Formal Methods?
Reading materials:
[1] Ji Wang, Naijun Zhan, Xinyu Feng, Zhiming Liu. Overview of Formal Methods (形式化方法概貌). Journal of Software (软件学报), vol. 30(1), pages 33-61, 2019. (in Chinese)
[2] Saddek Bensalem, Xiaowei Huang, Wenjie Ruan, Qiyi Tang, Changshun Wu, Xingyu Zhao: Bridging formal methods and machine learning with model checking and global optimisation. J. Log. Algebraic Methods Program. 137: 100941 (2024)
[3] Xiyue Zhang, Xiaoning Du, Xiaofei Xie, Lei Ma, Yang Liu, Meng Sun. Decision-Guided Weighted Automata Extraction from Recurrent Neural Networks. Proceedings of AAAI 2021, pages 11699-11707, 2021.
[4] Xiyue Zhang, Xiaohong Chen, and Meng Sun. Towards a Unifying Logical Framework for Neural Networks. in Proceedings of ICTAC 2022, LNCS 13572, pages 442-461, Springer, 2022.
[5] Weidi Sun, Yuteng Lu, Xiyue Zhang, and Meng Sun. DeepGlobal: a Framework for Global Robustness Verification of Feedforward Neural Networks. Journal of Systems Architecture, vol. 128, 102582, 2022.
[6] Xiyue Zhang, Xiaofei Xie, Lei Ma, Xiaoning Du, Qiang Hu, Yang Liu, Jianjun Zhao, Meng Sun. Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty. in Proceedings of ICSE 2020, pages 739-751, ACM, 2020.
[7] Yuteng Lu, Weidi Sun, and Meng Sun. Towards Mutation Testing of Reinforcement Learning Systems. Journal of Systems Architecture, vol. 131, 102701, 2022.
1. Semantic Robustness of Deep Neural Networks:
Papers related to the presentation:
[1] Hao Bu and Meng Sun. Clopper-Pearson Algorithms for Efficient Statistical Model Checking Estimation. IEEE Transactions on Software Engineering, vol. 50(7), pages 1726-1746, 2024.
[2] Hao Bu and Meng Sun. Measuring Robustness of Deep Neural Networks from the Lens of Statistical Model Checking. in Proceedings of IJCNN 2023, pages 1-8, IEEE, 2023.
[3] Hao Bu and Meng Sun. Certifying Semantic Robustness of Deep Neural Networks. in Proceedings of ICECCS 2023, pages 51-60, IEEE, 2023.
[4] Hao Bu and Meng Sun. Guiding the Comparison of Neural Network Local Robustness: An Empirical Study. In Proceedings of ICANN 2023, LNCS 14258, pages 312-323, Springer, 2023.
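The statistical model checking approach in [1] and [2] above estimates a network's robustness probability from random samples and attaches a rigorous confidence interval. The following is a minimal self-contained sketch of that idea: it uses the simpler Hoeffding bound rather than the papers' Clopper-Pearson construction, and `toy_net` is a made-up two-neuron network used purely for illustration.

```python
import math
import random

def relu(x):
    return [max(0.0, v) for v in x]

def toy_net(x):
    # Hypothetical 2-2-1 network, for illustration only.
    h = relu([x[0] - x[1], -x[0] + x[1]])
    return h[0] - h[1]

def estimate_robustness(net, x0, eps, n, delta=0.01, seed=0):
    """Estimate P(sign(net(x)) == sign(net(x0))) for x drawn uniformly
    from the eps-box around x0, with a Hoeffding confidence radius:
    with probability >= 1 - delta, the true probability lies within
    `radius` of the returned estimate."""
    rng = random.Random(seed)
    ref = net(x0) >= 0
    hits = 0
    for _ in range(n):
        x = [xi + rng.uniform(-eps, eps) for xi in x0]
        if (net(x) >= 0) == ref:
            hits += 1
    p_hat = hits / n
    radius = math.sqrt(math.log(2 / delta) / (2 * n))  # Hoeffding bound
    return p_hat, radius
```

For example, `estimate_robustness(toy_net, [1.0, 0.0], 0.1, 1000)` reports an empirical robustness estimate together with a confidence radius that shrinks like 1/sqrt(n); Clopper-Pearson intervals, as studied in [1], are tighter when the estimate is near 0 or 1.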
2. Automata Extraction:
Suggested papers for presentation:
[1] Zeming Wei, Xiyue Zhang, Yihao Zhang and Meng Sun. Weighted Automata Extraction and Explanation of Recurrent Neural Networks for Natural Language Tasks. Journal of Logical and Algebraic Methods in Programming, vol. 136, 100907, 2024.
Supplementary reading materials:
[1] Zeming Wei, Xiyue Zhang, and Meng Sun. Extracting Weighted Finite Automata from Recurrent Neural Networks for Natural Languages. in Proceedings of ICFEM 2022, LNCS 13478, pages 370-385, Springer, 2022. Preprint version at https://arxiv.org/abs/2206.14621.
[2] Mohammadhosein Hasanbeig, Natasha Yogananda Jeppu, Alessandro Abate, Tom Melham, Daniel Kroening. DeepSynth: Automata Synthesis for Automatic Task Segmentation in Deep Reinforcement Learning. Proceedings of AAAI 2021, pages 7647-7656, 2021.
[3] Zhiwu Xu, Cheng Wen, Shengchao Qin, Mengda He. Extracting automata from neural networks using active learning. PeerJ Computer Science, vol. 7: e436, 2021.
[4] Gail Weiss, Yoav Goldberg, Eran Yahav. Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples. Proceedings of ICML 2018, pages 5244-5253.
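For orientation before reading the extraction papers above: the weight that a weighted finite automaton (WFA) assigns to a word is simply a product of per-symbol transition matrices sandwiched between an initial and a final vector. A minimal sketch of that semantics, with a two-state example automaton invented for illustration (it counts occurrences of 'a'):

```python
def mat_vec(M, v):
    # Multiply matrix M (a list of rows) by column vector v.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def wfa_weight(initial, transitions, final, word):
    """Weight of `word` in a WFA: initial^T * A_{w1} * ... * A_{wn} * final.
    Computed right-to-left so only matrix-vector products are needed."""
    v = final
    for symbol in reversed(word):
        v = mat_vec(transitions[symbol], v)
    return sum(i * x for i, x in zip(initial, v))

# Example WFA whose word weight equals the number of 'a's in the word.
count_a = {
    'a': [[1, 1], [0, 1]],
    'b': [[1, 0], [0, 1]],
}
```

Extraction methods such as those in [1] above learn the vectors and matrices so that these word weights approximate the RNN's outputs.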
3. Verification of Neural Networks:
Suggested papers for presentation:
[1] Anan Kabaha, Dana Drachsler-Cohen. Verification of Neural Networks' Global Robustness. Proc. ACM Program. Lang. 8(OOPSLA1), pages 1010-1039, 2024.
[2] Debangshu Banerjee, Changming Xu, Gagandeep Singh. Input-Relational Verification of Deep Neural Networks. Proc. ACM Program. Lang. 8(PLDI): pages 1-27, 2024.
[3] Jiaxiang Liu, Yunhan Xing, Xiaomu Shi, Fu Song, Zhiwu Xu, Zhong Ming. Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks. ACM Transactions on Software Engineering and Methodology, vol. 33(5): 129:1-129:35, 2024.
[4] Yedi Zhang, Fu Song, Jun Sun. QEBVerif: Quantization Error Bound Verification of Neural Networks. CAV (2) 2023: 413-437.
[5] Alessandro Abate, Alec Edwards, Mirco Giacobbe, Hashan Punchihewa, Diptarko Roy. Quantitative Verification with Neural Networks. CONCUR 2023: 22:1-22:18
[6] Eric Goubault, Sylvie Putot. A Zonotopic Dempster-Shafer Approach to the Quantitative Verification of Neural Networks. FM (1) 2024: 324-342.
[7] Sumon Biswas, Hridesh Rajan. Fairify: Fairness Verification of Neural Networks. ICSE 2023: 1546-1558, 2023.
[8] Fu Wang, Peipei Xu, Wenjie Ruan, Xiaowei Huang. Towards Verifying the Geometric Robustness of Large-Scale Neural Networks. AAAI 2023: 15197-15205, 2023.
[9] Guy Amir, Haoze Wu, Clark W. Barrett, Guy Katz. An SMT-Based Approach for Verifying Binarized Neural Networks. Proceedings of TACAS 2021 (2), LNCS 12652, pages 203-222, Springer, 2021.
Supplementary reading materials:
[1] Matthias König, Annelot W. Bosman, Holger H. Hoos, Jan N. van Rijn. Critically Assessing the State of the Art in Neural Network Verification. J. Mach. Learn. Res. 25: 12:1-12:53, 2024.
[2] Thomas A. Henzinger, Mathias Lechner, Dorde Zikelic. Scalable Verification of Quantized Neural Networks. Proceedings of AAAI 2021, pages 3787-3795, AAAI Press, 2021.
[3] Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Marian Dan, Martin T. Vechev. Scalable Polyhedral Verification of Recurrent Neural Networks. Proceedings of CAV 2021, LNCS 12759, pages 225-248, Springer, 2021.
[4] Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel, Prateek Saxena. Scalable Quantitative Verification For Deep Neural Networks. Proceedings of ICSE 2021, pages 312-323, IEEE, 2021.
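Many of the verifiers listed above build on propagating bounds through the network layer by layer. A minimal sketch of the simplest instance, interval bound propagation through one affine layer followed by ReLU; the example weights are made up, and real tools use much tighter relaxations (zonotopes, polyhedra, linear bounds):

```python
def interval_affine(W, b, lo, hi):
    """Propagate the input box [lo, hi] through y = W x + b.
    Each output bound picks, per weight, whichever input endpoint
    minimizes (resp. maximizes) the sum; exact for a single affine map."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it can be applied to each endpoint directly.
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]
```

Composing these over all layers yields sound but loose output bounds; much of the literature above (abstraction/refinement, spurious-region guidance, SMT-based exact methods) is about tightening or complementing exactly this kind of over-approximation.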
4. Testing for AI Systems:
Suggested papers for presentation:
[1] Yuhan Zhi, Xiaofei Xie, Chao Shen, Jun Sun, Xiaoyu Zhang, Xiaohong Guan. Seed Selection for Testing Deep Neural Networks. ACM Trans. Softw. Eng. Methodol. 33(1): 23:1-23:33, 2024.
[2] Florian Tambon, Vahid Majdinasab, Amin Nikanjam, Foutse Khomh, Giuliano Antoniol. Mutation Testing of Deep Reinforcement Learning Based on Real Faults. ICST 2023: 188-198, 2023.
[3] Youcheng Sun, Xiaowei Huang, Daniel Kroening, James Sharp, Matthew Hill, Rob Ashmore. Structural Test Coverage Criteria for Deep Neural Networks. ACM Transactions on Embedded Computing Systems, volume 18, 94:1-94:23, 2019.
Supplementary reading materials:
[1] Yuteng Lu, Kaicheng Shao, Jia Zhao, Weidi Sun, Meng Sun. Mutation testing of unsupervised learning systems. Journal of Systems Architecture, vol. 146, 103050, 2024.
[2] Yuteng Lu, Weidi Sun, and Meng Sun. Towards Mutation Testing of Reinforcement Learning Systems. Journal of Systems Architecture, vol. 131, 102701, 2022.
[3] Fabrice Harel-Canada, Lingxiao Wang, Muhammad Ali Gulzar, Quanquan Gu, Miryung Kim. Is neuron coverage a meaningful measure for testing deep neural networks? Proceedings of ESEC/SIGSOFT FSE 2020, pages 851-862, ACM, 2020.
[4] Shenao Yan, Guanhong Tao, Xuwei Liu, Juan Zhai, Shiqing Ma, Lei Xu, Xiangyu Zhang. Correlations between deep neural network model coverage criteria and model quality. Proceedings of ESEC/SIGSOFT FSE 2020, pages 775-787, ACM, 2020.
[5] Lorenz Klampfl, Nour Chetouane, Franz Wotawa. Mutation Testing for Artificial Neural Networks: An Empirical Evaluation. Proceedings of QRS 2020, pages 356-365, IEEE, 2020.
[6] Jingyi Wang, Guoliang Dong, Jun Sun, Xinyu Wang, Peixin Zhang. Adversarial sample detection for deep neural network through model mutation testing. Proceedings of ICSE 2019, pages 1245-1256, IEEE/ACM, 2019.
[7] Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening. Concolic testing for deep neural networks. Proceedings of ASE 2018, pages 109-119, ACM, 2018.
[8] Augustus Odena, Catherine Olsson, David G. Andersen, Ian J. Goodfellow. TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing. Proceedings of ICML 2019, pages 4901-4911, 2019.
[9] Xiaofei Xie, Hongxu Chen, Yi Li, Lei Ma, Yang Liu, Jianjun Zhao. Coverage-Guided Fuzzing for Feedforward Neural Networks. Proceedings of ASE 2019, pages 1162-1165, 2019. (complete version on arxiv.org)
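Several of the papers above study neuron coverage as a test-adequacy criterion for DNNs (and supplementary items [3] and [4] question how meaningful it is). As a rough sketch of what is actually being measured, assuming per-input activation vectors have already been recorded:

```python
def neuron_coverage(activation_records, threshold=0.0):
    """Fraction of neurons whose activation exceeds `threshold` on at
    least one test input. `activation_records` is a list of activation
    vectors, one per test input, all of the same length."""
    n = len(activation_records[0])
    covered = [False] * n
    for acts in activation_records:
        for i, a in enumerate(acts):
            if a > threshold:
                covered[i] = True
    return sum(covered) / n
```

Coverage-guided fuzzers like TensorFuzz and the tools in [8] and [9] use such metrics as feedback: an input that flips some `covered[i]` from False to True is kept as a new seed.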
5. Repairing Techniques:
Suggested papers for presentation:
[1] Martin Tappler, Andrea Pferscher, Bernhard K. Aichernig, Bettina Könighofer. Learning and Repair of Deep Reinforcement Learning Policies from Fuzz-Testing Data. ICSE 2024, 6:1-6:13, 2024.
[2] Jianan Ma, Pengfei Yang, Jingyi Wang, Youcheng Sun, Cheng-Chao Huang, Zhen Wang. VeRe: Verification Guided Synthesis for Repairing Deep Neural Networks. ICSE 2024: 8:1-8:13, 2024.
[3] Jinhan Kim, Nargiz Humbatova, Gunel Jahangirova, Paolo Tonella, Shin Yoo. Repairing DNN Architecture: Are We There Yet? ICST 2023: 234-245, 2023.
Supplementary reading materials:
[1] Yuchu Fang, Wenzhong Li, Yao Zeng, Yang Zheng, Zheng Hu, Sanglu Lu. PatchNAS: Repairing DNNs in Deployment with Patched Network Architecture Search. AAAI 2023: 14811-14819, 2023.
[2] Hao Bu, Meng Sun. DeepPatch: A Patching-Based Method for Repairing Deep Neural Networks. DeepTest 2023: 25-32, 2023.
6. AI Techniques for Theorem Proving:
Suggested papers for presentation:
[1] Maciej Mikula, Szymon Tworkowski, Szymon Antoniak, Bartosz Piotrowski, Albert Q. Jiang, Jin Peng Zhou, Christian Szegedy, Lukasz Kucinski, Piotr Milos, Yuhuai Wu. Magnushammer: A Transformer-Based Approach to Premise Selection. ICLR 2024.
[2] Simon Frieder, Luca Pinchetti, Alexis Chevalier, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Petersen, Julius Berner. Mathematical Capabilities of ChatGPT. NeurIPS 2023.
[3] Liao Zhang, Lasse Blaauwbroek, Cezary Kaliszyk, Josef Urban. Learning Proof Transformations and Its Applications in Interactive Theorem Proving. FroCoS 2023: 236-254, 2023.
[4] Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, Michael Norrish. TacticToe: Learning to Prove with Tactics. Journal of Automated Reasoning, vol. 65(2): 257-286, 2021.
Supplementary reading materials:
[1] Lasse Blaauwbroek, David M. Cerna, Thibault Gauthier, Jan Jakubuv, Cezary Kaliszyk, Martin Suda, Josef Urban. Learning Guided Automated Reasoning: A Brief Survey. Logics and Type Systems in Theory and Practice 2024: 54-83.
[2] Xueliang Zhao, Lin Zheng, Haige Bo, Changran Hu, Urmish Thakker, Lingpeng Kong. SubgoalXL: Subgoal-based Expert Learning for Theorem Proving. CoRR abs/2408.11172, 2024.
[3] Emily First, Markus N. Rabe, Talia Ringer, Yuriy Brun: Baldur: Whole-Proof Generation and Repair with Large Language Models. ESEC/SIGSOFT FSE 2023: 1229-1241
[4] Xiyue Zhang, Yi Li, Weijiang Hong, Meng Sun. Using Recurrent Neural Network to Predict Tactics for Proving Component Connector Properties in Coq. TASE 2019: 107-112, 2019.
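Premise selection, the task addressed by Magnushammer [1] above, means ranking library facts by their relevance to the current proof goal. A deliberately naive sketch of the task using bag-of-words cosine similarity in place of a learned model; the example premises in the test are invented:

```python
import math
from collections import Counter

def bag_of_words(text):
    # Whitespace tokenization; real systems use learned embeddings.
    return Counter(text.lower().split())

def cosine(c1, c2):
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def select_premises(goal, premises, k=2):
    """Return the k candidate premises most similar to the goal statement."""
    g = bag_of_words(goal)
    ranked = sorted(premises, key=lambda p: cosine(g, bag_of_words(p)), reverse=True)
    return ranked[:k]
```

Transformer-based approaches like Magnushammer replace the bag-of-words similarity with a learned relevance score, but the interface, goal in, ranked premises out, is the same.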
7. Large Language Models for Formal Verification:
Suggested papers for presentation:
[1] Cheng Wen, Jialun Cao, Jie Su, Zhiwu Xu, Shengchao Qin, Mengda He, Haokun Li, Shing-Chi Cheung, Cong Tian. Enchanting Program Specification Synthesis by Large Language Models Using Static Analysis and Program Verification. Proceedings of CAV 2024, pages 302-328, LNCS 14682, 2024.
[2] Haoze Wu, Clark W. Barrett, Nina Narodytska. Lemur: Integrating Large Language Models in Automated Program Verification. ICLR 2024.
[3] Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan J. Prenger, Animashree Anandkumar. LeanDojo: Theorem Proving with Retrieval-Augmented Language Models. NeurIPS 2023.
Supplementary reading materials:
[1] Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian. Assisting Static Analysis with Large Language Models: A ChatGPT Experiment. Proceedings of ESEC/SIGSOFT FSE 2023, pages 2107-2111, 2023.
[2] Matthias Cosler, Christopher Hahn, Daniel Mendoza, Frederik Schmitt, Caroline Trippel. nl2spec: Interactively Translating Unstructured Natural Language to Temporal Logics with Large Language Models. Proceedings of CAV 2023, LNCS 13965, pages 383-396, Springer, 2023.
[3] Kai Chen, Chunwei Wang, Kuo Yang, Jianhua Han, Lanqing HONG, Fei Mi, Hang Xu, Zhengying Liu, Wenyong Huang, Zhenguo Li, Dit-Yan Yeung, Lifeng Shang. Gaining Wisdom from Setbacks: Aligning Large Language Models via Mistake Analysis, ICLR 2024.
[4] Yiwen Sun, Xianyin Zhang, Shiyu Huang, Shaowei Cai, Bing-Zhen Zhang, Ke Wei. AutoSAT: Automatically Optimize SAT Solvers via Large Language Models. CoRR abs/2402.10705 (2024)
8. More Reading Materials:
[1] Yizhak Yisrael Elboher, Justin Gottschlich, Guy Katz. An Abstraction-Based Framework for Neural Network Verification. Proceedings of CAV 2020, LNCS 12224, pages 43-65, Springer, 2020.
[2] Guoliang Dong, Jingyi Wang, Jun Sun, Yang Zhang, Xinyu Wang, Ting Dai, Jin Song Dong, Xingen Wang. Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction. Proceedings of ASE 2020, pages 499-510, IEEE, 2020.
[3] Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang, Jun Sun, Bai Xue, Lijun Zhang. Improving Neural Network Verification through Spurious Region Guided Refinement. Proceedings of TACAS 2021 (1), LNCS 12651, pages 389-408, Springer, 2021.
[4] Luca Pulina, Armando Tacchella. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. Proceedings of CAV 2010, LNCS 6174, pages 243-257, Springer, 2010.
[5] Nicolas Berthier, Amany Alshareef, James Sharp, Sven Schewe, Xiaowei Huang. Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian Approximation of Hidden Features. CoRR abs/2103.03704, 2021.
[6] Jianlin Li, Jiangchao Liu, Pengfei Yang, Liqian Chen, Xiaowei Huang, Lijun Zhang. Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification. Proceedings of SAS 2019, LNCS 11822, pages 296-319, Springer, 2019.
[7] Nina Narodytska, Shiva Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv, Toby Walsh. Verifying Properties of Binarized Deep Neural Networks. Proceedings of AAAI 2018, pages 6615-6624. AAAI Press, 2018.
[8] Guy Katz, Clark W. Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. Proceedings of CAV 2017, LNCS 10426, pages 97–117. Springer, 2017.
[9] Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin T. Vechev. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. Proceedings of S&P 2018, pages 3-18, IEEE Computer Society, 2018.
[10] Matthew Mirman, Timon Gehr, Martin T. Vechev. Differentiable Abstract Interpretation for Provably Robust Neural Networks. Proceedings of ICML 2018, pages 3575-3583, PMLR 80, 2018.
[11] Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska. A game-based approximate verification of deep neural networks with provable guarantees. Theoretical Computer Science, vol. 807, pages 298-329, 2020.
[12] Yiting Wu, Min Zhang. Tightening Robustness Verification of Convolutional Neural Networks with Fine-Grained Linear Approximation. Proceedings of AAAI 2021, pages 11674-11681, AAAI Press, 2021.
[13] Cecilia Summers, Michael J. Dinneen. Nondeterminism and Instability in Neural Network Optimization. Proceedings of ICML 2021, PMLR 139:9913-9922, 2021.
[14] János D. Pintér. Calibrating artificial neural networks by global optimization. Expert Systems with Applications, vol. 39(1), pages 25-32, 2012.