Speaker: Xueqi Cheng

Date: Oct 28, 11:45 AM – 12:45 PM

Abstract: Large Language Models (LLMs) are increasingly deployed in multi-turn dialogue settings where preserving conversational context across turns is essential. A standard serving practice concatenates the full dialogue history at every turn, which reliably maintains coherence but incurs substantial cost in latency, memory, and API expenditure, especially when queries are routed to large proprietary models. Existing approaches often struggle to balance response quality against efficiency. We propose SOMA, a framework that exploits the early turns of a session to estimate a local response manifold and then adapts a smaller surrogate model to this local region for the remainder of the conversation. Concretely, we learn soft prompts that maximize semantic divergence between the responses of the large model and the surrogate small language model, surfacing the least-aligned local directions; we stabilize training with anti-degeneration control; and we distill the mined cases into localized LoRA fine-tuning, so that the surrogate runs without soft prompts at inference. A simple gate enables a one-time switch to the surrogate, with rollback on drift. We further provide a theoretical analysis of SOMA's key components, and extensive experiments demonstrate its effectiveness.

Biographical Sketch: Xueqi Cheng is a Ph.D. student in Computer Science at Florida State University, advised by Dr. Yushun Dong in the Responsible AI (RAI) Lab. His research aims to enhance the utility, security, and efficiency of Machine Learning as a Service (MLaaS), with applications ranging from graph-based learning to natural language processing and computer vision. He is also broadly interested in social network analysis and AI for social good, focusing on how AI can help address key societal challenges.

Location (In-Person Only): LOV 307
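The soft-prompt mining step described in the abstract, maximizing semantic divergence between the large model and the surrogate while an anti-degeneration term keeps the prompt from collapsing, can be sketched on toy linear stand-ins. Everything below (the linear "models", the cosine-distance divergence proxy, the finite-difference optimizer, and all names) is an illustrative assumption, not SOMA's actual implementation:

```python
# Illustrative sketch only: mining "least-aligned" local directions by optimizing
# a soft prompt to maximize divergence between a large model and a surrogate,
# with an anti-degeneration penalty anchoring the prompt near its initialization.
import numpy as np

rng = np.random.default_rng(0)
d = 16                                  # dimension of the toy soft-prompt space
W_big = rng.normal(size=(d, d))         # stand-in for the large model's response map
W_small = rng.normal(size=(d, d))       # stand-in for the surrogate's response map

def responses(prompt):
    """Toy linear 'responses' of both models to a soft-prompt vector."""
    return W_big @ prompt, W_small @ prompt

def divergence(prompt):
    """1 - cosine similarity between the two responses (semantic-divergence proxy)."""
    a, b = responses(prompt)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

prompt = rng.normal(size=d)
anchor = prompt.copy()                  # anti-degeneration: stay near the init
lr, lam, eps = 0.05, 0.1, 1e-4

for _ in range(200):
    # Finite-difference gradient ascent on divergence, penalized toward the anchor.
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        grad[i] = (divergence(prompt + e) - divergence(prompt - e)) / (2 * eps)
    prompt += lr * (grad - lam * (prompt - anchor))

print(divergence(anchor), divergence(prompt))
```

In SOMA the mined high-divergence cases would then feed a localized LoRA fine-tune of the surrogate; here the sketch stops at the mining objective itself.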