Mon 04/07

Seminar @ Cornell Tech: Tianyu Gao

Enabling Language Models to Process Information at Scale

Language models (LMs) can effectively internalize knowledge from vast amounts of pre-training data, enabling them to achieve remarkable performance on exam-style benchmarks. Expanding their ability to compile, synthesize, and reason over large volumes of information on the fly will further unlock transformative applications, ranging from AI literature assistants to generative search engines. In this talk, Tianyu Gao will present his research on advancing LMs for processing information at scale. (1) Gao will introduce his evaluation framework for LM-based information-seeking systems, emphasizing the importance of providing citations for verifying model-generated answers. His evaluation highlights shortcomings in LMs' ability to reliably process long-form text (e.g., dozens of webpages), which he addresses by developing state-of-the-art long-context LMs that outperform leading industry efforts while using a small fraction of the computational budget. (2) Gao will then present his foundational work on using contrastive learning to produce high-performing text embeddings, which form the cornerstone of effective and scalable search. (3) Beyond building systems that can process large-scale information, Gao will discuss his contributions to efficient pre-training and customization methods for LMs, which enable scalable deployment of LM-powered applications across diverse settings. Finally, Gao will share his vision for the next generation of autonomous information processing systems and outline the foundational challenges that must be addressed to realize it.

Speaker Bio

Tianyu Gao is a fifth-year PhD student in the Department of Computer Science at Princeton University, advised by Danqi Chen. His research focuses on developing principled methods for training and adapting language models, many of which have been widely adopted across academia and industry. Driven by transformative applications, such as using language models as information-seeking tools, his work also advances robust evaluation and fosters a deeper understanding of language models to guide their future development. He led the first workshop on long-context foundation models at ICML 2024, won an Outstanding Paper Award at ACL 2022, and received an IBM PhD Fellowship in 2023. Before Princeton, he received his BEng from Tsinghua University in 2020.