Research

I am captivated by the challenge of enabling machines to move beyond mere information retrieval—to truly understand, reason about, and interact with knowledge. My research sits at the dynamic intersection of Information Retrieval (IR) and Natural Language Processing (NLP), with a focus on advancing web search and conversational search. I like to think of my work as "NLP for IR," where the goal is to create intelligent systems that adapt to users' complex and evolving information needs. By blending foundational IR principles with cutting-edge neural architectures, I strive to transform retrieval systems into adaptive tools that go beyond finding information to enhancing understanding and relevance.

A central theme in my research is improving how retrieval systems represent and process text to better align with user intent. Relevance is not static; it evolves with the interplay of the user's query, the document content, and the context of the interaction. To address this, I explore neural IR techniques that dynamically adapt document representations to the nuances of the query. This flexibility lets retrieval systems act not just as static repositories but as active participants in delivering precise and meaningful results.
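
As a minimal illustration of this idea (a sketch only, not a description of a specific system of mine; the module name, dimensions, and pooling choice below are placeholders), a document's contextual token vectors can be re-weighted by cross-attention to the query before pooling, so the resulting document vector changes with every query:

```python
import torch
import torch.nn as nn

class QueryAdaptiveDocEncoder(nn.Module):
    """Illustrative module: condition a document representation on the query
    by letting document token vectors attend to query token vectors."""

    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, doc_tokens: torch.Tensor, query_tokens: torch.Tensor) -> torch.Tensor:
        # doc_tokens:   (batch, doc_len, dim)   contextual token vectors of the document
        # query_tokens: (batch, query_len, dim) contextual token vectors of the query
        adapted, _ = self.cross_attn(doc_tokens, query_tokens, query_tokens)
        # Pool the query-conditioned token vectors into one document vector for scoring.
        return adapted.mean(dim=1)

# Usage sketch: score a candidate by comparing a pooled query vector against
# encoder(doc_tokens, query_tokens), recomputed per query rather than fixed at index time.
```

The design point this sketch makes is simply that the document side of the match is no longer a single frozen vector: the same document can surface different aspects of itself for different queries.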

Another core aspect of my work leverages the power of Large Language Models (LLMs) to elevate retrieval and recommendation systems. I am particularly intrigued by how LLMs can generate representations that enable retrieval systems to better comprehend and meet user needs. These representations not only improve result accuracy but also provide richer contextual understanding, making search systems more responsive to complex queries involving entities, relationships, and evolving user goals. Beyond representation, I am exploring how LLMs can drive adaptive, self-refining retrieval systems, mirroring the iterative refinement processes of human experts. By dynamically responding to user feedback and task complexity, such systems can provide a more tailored and intelligent search experience.
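
One simple way such LLM-generated representations can enter a pipeline is by enriching documents before indexing, in the spirit of query- and entity-oriented expansion. The sketch below is purely hypothetical: `llm_generate` is a stand-in for any prompt-to-text callable, not a particular model or API.

```python
def enrich_for_indexing(doc_text: str, llm_generate) -> str:
    """Hypothetical sketch: ask an LLM to surface entities, relationships, and
    likely questions a passage answers, then index the enrichment with the text."""
    prompt = (
        "Read the passage below. List its key entities, the relationships between them, "
        "and three questions the passage answers.\n\nPassage:\n" + doc_text
    )
    enrichment = llm_generate(prompt)
    # The enriched text gives the retriever extra lexical and semantic hooks
    # for entity-centric and question-style queries.
    return doc_text + "\n\n" + enrichment
```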

In addition to LLMs, I am deeply interested in reinforcement learning (RL) as a framework for optimizing IR and conversational search. One of my key pursuits involves using LLMs as simulators of user behavior, enabling retrieval systems to learn strategies such as asking clarifying questions, re-ranking results, or refining retrieval in response to simulated multi-turn interactions. This approach leverages the generative capabilities of LLMs to simulate rich, context-driven user interactions, providing an effective alternative to the sparse and noisy feedback typical in real-world scenarios.
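
The sketch below illustrates the general shape of such a training loop, with a simulated user standing in for live feedback. Every interface in it (`policy`, `llm_user`, `retriever`) is a hypothetical placeholder used only to make the idea concrete, not an existing library.

```python
def run_simulated_session(policy, llm_user, retriever, query: str, max_turns: int = 3) -> float:
    """Illustrative training episode: an LLM plays the role of the user, and the
    retrieval policy learns when to ask a clarifying question and when to answer."""
    context = [query]
    reward = 0.0
    for _ in range(max_turns):
        action = policy.choose_action(context)            # e.g. "clarify" or "retrieve"
        if action == "clarify":
            question = policy.generate_question(context)  # clarifying question to ask
            answer = llm_user.respond(question, context)  # simulated user reply
            context.extend([question, answer])
        else:
            results = retriever.search(" ".join(context))
            reward = llm_user.rate(results, context)      # simulated satisfaction signal
            break
    policy.update(context, reward)                        # any policy-gradient style update
    return reward
```

The appeal of this setup is that the reward signal is dense and controllable: the simulated user can be queried at every turn, rather than waiting for sparse and noisy clicks from real traffic.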

The theoretical underpinnings of neural IR also fascinate me, particularly the design of loss functions that go beyond optimizing surrogates of traditional metrics such as nDCG or recall. I am interested in crafting loss formulations that prioritize diversity, contextual relevance, and user-centric outcomes. These theoretical advances aim to align model optimization with the diverse and often subtle needs of real-world search scenarios.
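
As one illustrative shape such a loss might take (a sketch rather than a finished formulation: the ListNet-style term, the redundancy penalty, and the weight `alpha` are all placeholder choices), a listwise objective can be paired with a penalty for concentrating probability mass on near-duplicate documents:

```python
import torch
import torch.nn.functional as F

def diversity_aware_listwise_loss(scores, labels, doc_embs, alpha=0.1):
    """Listwise softmax cross-entropy (ListNet-style) plus a redundancy penalty.

    scores:   (n_docs,) model scores for one query's candidate list
    labels:   (n_docs,) graded relevance labels for the same list
    doc_embs: (n_docs, dim) document embeddings used to measure redundancy
    alpha:    weight of the anti-redundancy term (illustrative value)
    """
    # Listwise term: match the score distribution to the label distribution.
    listwise = -(F.softmax(labels.float(), dim=-1) * F.log_softmax(scores, dim=-1)).sum()

    # Redundancy term: penalize giving high probability to pairs of similar documents.
    sims = F.cosine_similarity(doc_embs.unsqueeze(1), doc_embs.unsqueeze(0), dim=-1)
    probs = F.softmax(scores, dim=-1)
    redundancy = (probs.unsqueeze(1) * probs.unsqueeze(0) * sims).sum()

    return listwise + alpha * redundancy
```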

Broadly, my vision is to bridge retrieval systems with reasoning and decision-making capabilities, creating systems that do more than retrieve—they synthesize, interpret, and explain information in ways that mirror human cognition. By integrating advancements in representation learning, reinforcement learning, and large-scale generative models, I aim to redefine the possibilities of search, enabling personalized, conversational, and adaptive information access. This vision positions IR not just as a tool but as a critical enabler of intelligent, human-centered information systems.

We are at a transformative moment in the field, where breakthroughs in LLMs, neural IR, and reinforcement learning are converging to solve challenges that once seemed insurmountable. This is an exhilarating era, and I am driven by the opportunity to build systems that make information access smarter, more intuitive, and profoundly impactful across diverse applications—from conversational assistants and search engines to personalized education and beyond.

Research Interests: Neural IR, Conversational IR, Entity-Oriented Search, Representation Learning for IR, LLMs for Search and Recommendation