The Safe And secure Foundation mOdel systems Lab (SaFoLab) at the University of Wisconsin-Madison, led by Professor Chaowei Xiao, is dedicated to pioneering research in trustworthy (MultiModal) Large Language Model Systems.
Our mission is to develop robust and secure AI systems that can be trusted across various application domains.
Recent News
Check out our GitHub organization for our latest projects and publications.
2024-08:
We received the USENIX Security Distinguished Paper Award.
2024-07:
4/4 papers are accepted to ECCV on the topics of trustworthy VLMs and driving. Two of them are from interns in my group.
2024-06:
Prof. Chaowei Xiao will give a talk at CVPR discussing recent progress on security in the era of Vision Large Language Models.
2024-06:
Prof. Chaowei Xiao will give a talk at NAACL discussing recent progress on security in the era of Large Language Models.
2024-05:
Prof. Chaowei Xiao will give a talk at ICLR discussing recent progress on security in the era of Large Language Models.
2024-05:
Our jailbreak paper is accepted to USENIX Security. Congratulations, Zhiyuan!
2024-03:
Five papers at NAACL on LLM security (4 main and 1 findings): two on backdoor attacks, one on backdoor defense, one on jailbreak attacks, and one on model fingerprinting. Stay tuned for more in these exciting areas.
2024-03:
PreDa for personalized federated learning is accepted at CVPR 2024.
2024-01:
Three papers at ICLR.
2024-01:
Two papers at TMLR.
2023-12:
Invited talk at the NeurIPS TDW workshop.
2023-10:
Our paper MoleculeSTM has been accepted to Nature Machine Intelligence. MoleculeSTM aims to align natural language and molecule representations into the same representation space.
2023-10:
Three papers at EMNLP and one paper at NeurIPS. Our NeurIPS paper studies a new threat to the instruction tuning of LLMs by injecting ads. This is the first work that views LLMs as generative models and aims to attack their generative property.
2023-10:
Our tutorial on Security and Privacy in the Era of Large Language Models is accepted to NAACL.
2023-05:
One paper at ACL. Congratulations to Zhuofeng and Jiazhao. We propose an attention-based method to defend against NLP backdoor attacks.
2023-04:
Two papers at ICML. Congratulations to Jiachen and Zhiyuan. We propose the first benchmark for code copyright in code generation models.
2023-02:
Two papers at CVPR. Congratulations to Yiming and Xiaogeng. Xiaogeng is an intern from my group at ASU.
2023-02:
I will give a tutorial at CVPR 2023 on the topic of trustworthiness in the era of Foundation Models. Stay tuned!
2023-01:
Impact Award from Argonne National Laboratory.
2023-01:
One paper got accepted to USENIX Security 2023.
2023-01:
Three papers are accepted to ICLR 2023. [a] We explain why and how to use diffusion models to improve adversarial robustness, and we design DensePure, which leverages a pretrained diffusion model and classifier to provide state-of-the-art certified robustness. [b] This is our first attempt at a retrieval-based framework for AI-driven drug discovery. We will release more work along this research line soon. Stay tuned!
2022-12:
Our team won the ACM Gordon Bell Special Prize for COVID-19 Research.
2022-09:
One paper got accepted to USENIX Security 2023.
2022-09:
Two papers got accepted to NeurIPS 2022.
2022-09:
Our paper RobustTraj has been accepted to CoRL for an oral presentation. We explore how to train a robust trajectory prediction model against adversarial attacks.