Speaker: Han Xu
Date: Feb 12, 11:45am–12:45pm
Abstract: Recently, with the rapid development of AI and machine learning, the corresponding safety problems, especially vulnerability to adversarial attacks, have become increasingly important. To enhance ML safety, it is essential to find sound solutions for (1) identifying adversarial examples to uncover model weaknesses, and (2) building robust models that can resist adversarial examples. In this talk, we will introduce some of our recent research findings in each direction. On the attack side, we will delve into attack algorithms that achieve high efficiency and optimality, especially in discrete data domains such as text. On the defense side, we will introduce a frequently overlooked limitation of adversarial training (one of the most popular strategies for improving model robustness), known as the bias issue of adversarial training. Motivated by these new findings and methodologies, we will also discuss potential future research directions.
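As background for the attack side of the talk: a standard way to generate an adversarial example for a differentiable model is the fast gradient sign method (FGSM), which perturbs the input in the direction of the sign of the loss gradient. The sketch below is illustrative only and is not the speaker's algorithm; the toy logistic-regression model, its weights, and the epsilon value are all assumptions chosen for demonstration.

```python
import numpy as np

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, grad, eps):
    """FGSM: shift x by eps along the sign of the loss gradient w.r.t. x."""
    return x + eps * np.sign(grad)

# Toy logistic-regression model p(y=1|x) = sigmoid(w.x + b)  (hypothetical values)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])   # clean input
y = 1                      # true label

# Gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# A small, sign-aligned perturbation lowers the model's confidence on the true class
x_adv = fgsm_attack(x, grad_x, eps=0.1)
p_adv = sigmoid(w @ x_adv + b)
print(p, p_adv)
```

For discrete data such as text (the focus of the talk), gradients cannot be applied to inputs directly, so attacks instead search over token substitutions; the continuous case above only conveys the core idea of loss-increasing perturbations.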
Biographical Sketch: Han Xu is a PhD candidate in the Department of Computer Science at Michigan State University. He focuses on developing innovative solutions to enhance the trustworthiness of artificial intelligence by improving the robustness of machine learning models and eliminating model biases. His research is highly regarded in the scientific community and has resulted in high-quality publications in top machine learning and data mining conferences, including ICML, ICLR, NeurIPS, AAAI, and KDD.
Location and Zoom link: 307 Love, or https://fsu.zoom.us/j/92306202998