
ICML 2025: Dr. Yushun Dong’s team presented CEGA (Cost-Effective Graph Acquisition) at the International Conference on Machine Learning, one of the most prestigious venues in machine learning research. The work addresses a critical gap in the study of graph-based model extraction attacks under realistic budget constraints, where bulk querying of the victim model is infeasible. CEGA introduces a node querying strategy that iteratively refines its selection over multiple learning cycles, outperforming prior approaches in accuracy, fidelity, and F1 score. This research has significant implications for both cybersecurity and low-resource research environments.
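The summary above describes CEGA only at a high level, so the following Python sketch illustrates what an iterative, budget-constrained node-querying loop of this kind might look like; the entropy-based scoring, the even per-cycle budget split, and the `query_victim`/`train_surrogate` interfaces are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def query_victim(node_ids):
    """Hypothetical stand-in for the victim model's prediction API;
    returns labels for the requested nodes."""
    rng = np.random.default_rng(0)
    return rng.integers(0, 3, size=len(node_ids)).tolist()

def uncertainty_scores(probs):
    """Entropy of the surrogate's predicted class distribution per node."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def budgeted_node_acquisition(num_nodes, total_budget, cycles, train_surrogate):
    """Iterative, budget-constrained node acquisition loop.

    Each cycle spends an equal slice of the query budget on the currently
    most uncertain unlabeled nodes, then retrains the surrogate model.
    """
    labeled, labels = [], []
    unlabeled = list(range(num_nodes))
    per_cycle = total_budget // cycles

    for _ in range(cycles):
        probs = train_surrogate(labeled, labels)      # shape: (num_nodes, n_classes)
        scores = uncertainty_scores(probs[unlabeled])
        picked = [unlabeled[i] for i in np.argsort(-scores)[:per_cycle]]
        labels.extend(query_victim(picked))
        labeled.extend(picked)
        unlabeled = [n for n in unlabeled if n not in set(picked)]
    return labeled, labels

# Toy usage with a placeholder surrogate that predicts a uniform distribution.
nodes, labels = budgeted_node_acquisition(
    num_nodes=100, total_budget=20, cycles=4,
    train_surrogate=lambda idx, y: np.full((100, 3), 1 / 3),
)
print(len(nodes))  # 20 queried nodes in total
```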
KDD 2025 (1): At the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, the premier venue for data science research, Dr. Dong’s team introduced a comprehensive fairness-aware graph learning benchmark. This groundbreaking work evaluates ten representative fairness-aware methods across seven real-world datasets, revealing key insights into the trade-offs among group fairness, individual fairness, and computational efficiency. The benchmark addresses a critical need in the field by providing systematic evaluation protocols and practical guidance for deploying fair graph learning systems. The research establishes new standards for ethical AI development in graph-based applications.
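To make the benchmark's fairness axes more concrete, here is a minimal sketch of two standard group fairness metrics that an evaluation of this kind typically reports (statistical parity difference and equal opportunity difference); the function names and toy data are illustrative and are not taken from the benchmark's actual code.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Gap in positive prediction rates between the two sensitive groups."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true, y_pred, sensitive):
    """Gap in true positive rates between the two sensitive groups."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    tpr = lambda g: y_pred[(sensitive == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy example: binary predictions and a binary sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
sens   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(y_pred, sens))        # 0.25
print(equal_opportunity_difference(y_true, y_pred, sens)) # ~0.33
```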
KDD 2025 (2): Dr. Dong’s research team presented ATOM, the first framework for real-time detection of graph-based model extraction attacks in Machine Learning as a Service environments, at KDD 2025. ATOM integrates sequential modeling with reinforcement learning to dynamically detect evolving attack patterns while leveraging k-core embedding for enhanced structural understanding. The framework demonstrates superior detection performance across multiple datasets and remains stable in real-time scenarios. This work fills a critical security gap in protecting graph neural networks deployed in cloud services.
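As one concrete ingredient of the approach described above, the snippet below sketches how k-core decomposition (via networkx) could summarize the structure of the subgraph induced by a client's query sequence; the specific features and the random toy graph are assumptions made for illustration, not ATOM's actual implementation, which combines such structural signals with sequential modeling and reinforcement learning.

```python
import networkx as nx
import numpy as np

def kcore_features(graph, queried_nodes):
    """Summarize the subgraph induced by the nodes a client has queried
    so far, using its k-core decomposition. Dense, high-core query
    patterns can hint at systematic extraction behavior."""
    sub = graph.subgraph(queried_nodes).copy()
    sub.remove_edges_from(nx.selfloop_edges(sub))
    if sub.number_of_nodes() == 0:
        return np.zeros(3)
    core = nx.core_number(sub)                       # node -> core index
    values = np.array(list(core.values()), dtype=float)
    return np.array([values.max(), values.mean(), float(sub.number_of_edges())])

# Toy example: features of a growing query sequence on a random graph.
g = nx.erdos_renyi_graph(200, 0.05, seed=0)
queried_so_far = list(range(40))
print(kcore_features(g, queried_so_far))
```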
KDD 2025 (3): Dr. Dong’s team presented the first comprehensive survey on model extraction attacks and defenses specifically for Large Language Models, alongside an accompanying tutorial, helping position FSU as a leading institution in this research community. The survey provides a novel taxonomy categorizing attacks into functionality extraction, training data extraction, and prompt-targeted attacks, while organizing defenses into model protection, data privacy, and prompt protection strategies. This systematic analysis addresses the urgent need for LLM security frameworks as commercial AI models face increasing extraction threats. The work serves as an essential reference for researchers, engineers, and security professionals protecting AI intellectual property.