We are redefining the landscape of information retrieval by moving from generic search algorithms to personalized, agent-driven experiences powered by large foundation models (LFMs).
- Large Foundation Models for Recommendation: We aim to develop foundation models tuned for personalized recommendation along five main research directions. The first is LLM4Rec, which focuses on generative recommendation and reasoning-aware personalization. The second is the Recommendation Foundation Model, which studies foundation models that integrate tokenization, a unified recommender, and memory for scalable, lifelong personalization. The third is Domain-Aware Continual Training, which incorporates newly observed behaviors while preserving long-term user preferences. The fourth is Large Model Efficiency, in which we study data- and parameter-efficient fine-tuning strategies during training, and collaboration between large and small models at inference time. The last is Proactive Recommendation, which proactively identifies user uncertainty, latent intent, or suboptimal preference states, and decides when and how the system should intervene. The overall goal is to improve long-term user satisfaction, decision quality, and system effectiveness by optimizing recommendation policies over interaction sequences rather than single-step responses.
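To make the parameter-efficient fine-tuning idea concrete, here is a minimal low-rank-adapter sketch in NumPy. All names, sizes, and the rank/scale values are illustrative assumptions, not our actual training setup; the point is only that the frozen weight stays fixed while a tiny pair of low-rank matrices carries the adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 512, 512, 8   # hypothetical layer sizes and low-rank adapter rank

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight (not updated)
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init, so the
                                            # adapter starts as a no-op)
alpha = 16.0                                # assumed scaling factor

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)  # fraction of parameters that are trainable
```

With these sizes the adapter trains roughly 3% of the layer's parameters, which is the sense in which such fine-tuning is parameter-efficient.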
- Personalization of LFMs and Agents: This research treats personalization as a foundational capability of LFMs and LFM-based agents, advancing from population-level adaptation toward individual-level personal intelligence. We pursue it along four aspects. First, we study memory as the foundation of lifelong personalization, focusing on structured, efficient user memory built from heterogeneous, multimodal interaction signals; we further investigate learning-based memory management, including updating, consolidation, and retrieval, to support continuously evolving user representations beyond static profiles. Second, we explore how to align LFM and agent behavior with individual users’ preferences, values, and long-term objectives, developing causality- and reasoning-augmented alignment methods for precise, safe, and trustworthy personalization. Third, we examine how multi-turn and multimodal interactions refine internal user models and downstream performance, while optimizing interaction efficiency and enabling proactive agent behaviors. Fourth, we study the mechanisms by which language models represent and express multiple personas and preferences, and analyze the associated scaling laws to understand how personalization emerges and scales across users.
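The memory-management loop above (write, consolidate, retrieve) can be sketched as a toy store over embedded notes. Everything here is a stand-in assumption: the `UserMemory` class, the hash-seeded stub embeddings (in place of a real LFM encoder), and the fixed merge threshold (in place of a learned consolidation policy).

```python
import hashlib
import numpy as np

class UserMemory:
    """Toy lifelong user memory: write, consolidate, and retrieve text notes."""

    def __init__(self, dim=32, merge_threshold=0.9):
        self.dim = dim
        self.merge_threshold = merge_threshold
        self.notes, self.vecs = [], []

    def _embed(self, text):
        # Stub embedding: deterministic vector seeded by a hash of the text.
        # A real system would use a learned encoder here.
        seed = int.from_bytes(hashlib.sha1(text.encode()).digest()[:4], "big")
        v = np.random.default_rng(seed).normal(size=self.dim)
        return v / np.linalg.norm(v)

    def write(self, note):
        v = self._embed(note)
        # Consolidation: merge into the closest existing entry if similar enough.
        for i, u in enumerate(self.vecs):
            if float(u @ v) > self.merge_threshold:
                merged = u + v
                self.vecs[i] = merged / np.linalg.norm(merged)
                self.notes[i] += "; " + note
                return
        self.notes.append(note)
        self.vecs.append(v)

    def retrieve(self, query, k=2):
        # Retrieval: rank stored notes by cosine similarity to the query.
        q = self._embed(query)
        scores = [float(u @ q) for u in self.vecs]
        order = np.argsort(scores)[::-1][:k]
        return [self.notes[i] for i in order]

mem = UserMemory()
mem.write("prefers sci-fi movies")
mem.write("listens to jazz in the evening")
print(mem.retrieve("prefers sci-fi movies", k=1))
```

The learning-based version we study replaces the fixed threshold and cosine retrieval with policies trained end to end, but the interface (update, consolidate, retrieve) is the same.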
- Personas and Personalization: We study personas as a fundamental abstraction for modeling user behaviors and for enabling privacy-aware, controllable, and generalizable personalization, through five main research directions. The first is Persona Abstraction, where we learn personas that group user behaviors at a fine-grained level across diverse scenarios and generalize by separating common, scenario-specific, and user-specific behavior patterns. The second is Persona-Compositional User Modeling, where we model user behaviors as mixtures of personas to support privacy-preserving personalization, and study how personas are combined and updated across contexts and applications. The third is Persona-Based User Simulation, where we use personas to construct user simulators that serve as verifiable environments for reward generation, as evaluation protocols for model validation, and as tools for analyzing the long-term impact of deployed models. The fourth is Evaluation of Persona Models, where we define evaluation protocols and benchmarks to measure real-to-sim persona consistency. The last is Persona Discovery from Emergent Behaviors, where we study rapid persona discovery in new domains and adaptive persona modeling over a lifelong stream of experience. The overall goal is to establish personas as a reusable, privacy-aware abstraction that bridges raw user interactions, personalization, and long-term system optimization across scenarios.
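A minimal sketch of the persona-compositional idea: users are represented only as mixture weights over a shared persona basis, so per-user state is a handful of numbers rather than raw behavior logs, which is what makes the abstraction privacy-friendly. The persona matrix, item embeddings, and softmax mixing below are illustrative assumptions, not a specific model of ours.

```python
import numpy as np

rng = np.random.default_rng(7)
K, d = 4, 16  # assumed number of shared personas and embedding size

personas = rng.normal(size=(K, d))      # shared persona basis, common to all users
item_embeds = rng.normal(size=(10, d))  # toy catalog of 10 items

def user_embedding(logits):
    # A user is a softmax mixture over the shared personas; only these K
    # mixture logits are user-specific.
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ personas

def recommend(logits, k=3):
    # Score items against the composed user embedding and return the top-k.
    u = user_embedding(logits)
    scores = item_embeds @ u
    return np.argsort(scores)[::-1][:k].tolist()

# Two users with different persona mixtures generally get different rankings.
print(recommend(np.array([3.0, 0.0, 0.0, 0.0])))
print(recommend(np.array([0.0, 0.0, 0.0, 3.0])))
```

Updating a user under a new context then amounts to adjusting the K mixture logits, while the shared persona basis is learned once across the population.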