From Greener AI to Richer 3D Worlds: 23 Papers Debuted at NeurIPS Conference
By Grace Stanley
Cornell Tech faculty made a strong showing at the 2025 Conference on Neural Information Processing Systems (NeurIPS), held Dec. 2–7 in San Diego, presenting 23 research papers at one of the world’s premier gatherings for artificial intelligence and machine learning. NeurIPS draws thousands of scholars and industry leaders each year and is widely recognized as a leading forum for breakthroughs in AI, computational neuroscience, statistics, and large-scale modeling.
This year, Cornell Tech researchers pushed the boundaries of AI on multiple fronts — from safeguarding data privacy and strengthening AI evaluation standards to boosting the speed and efficiency of large language models.
Other contributions unveiled tools for analyzing environmental and health interventions, matching images to architectural plans, and generating realistic 3D scenes with unprecedented efficiency — innovations with far-reaching implications for public health, robotics, urban planning, and immersive media.
“NeurIPS brings together the brightest minds in AI, and we are proud that our researchers are among them,” said Greg Morrisett, Jack and Rilla Neafsey Dean and Vice Provost of Cornell Tech. “These contributions reflect Cornell Tech’s commitment to advancing AI creatively, responsibly, and at scale across diverse industries.”
Together, these advances highlight the depth of Cornell Tech’s research portfolio. Faculty from the Cornell Ann S. Bowers College of Computing and Information Science and Cornell Engineering also presented papers, reflecting the university’s collaborative strength across campuses. Explore the full list of papers by Cornell Tech faculty below.
Cornell Tech Papers at NeurIPS 2025
The following papers featuring Cornell Tech authors were accepted to NeurIPS 2025:
(Note: Only authors who are Cornell Tech faculty are listed below. Please refer to the individual papers for the full list of contributors.)
- Accelerating RL for LLM Reasoning with Optimal Advantage Regression (Faculty: Wen Sun)
- Analog In-memory Training on General Non-ideal Resistive Elements: The Impact of Response Functions (Faculty: Tianyi Chen)
- Avoiding exp(R) Scaling in RLHF Through Preference-based Exploration (Faculty: Wen Sun)
- Beyond Value Functions: Single-loop Bilevel Optimization Under Flatness Conditions (Faculty: Tianyi Chen)
- C3Po: Cross-View Cross-Modality Correspondence by Pointmap Prediction (Faculty: Noah Snavely)
- Efficient Adaptive Experimentation with Noncompliance (Faculty: Nathan Kallus)
- Encoder-Decoder Diffusion Language Models for Efficient Training and Inference (Faculty: Volodymyr Kuleshov)
- GST-UNet: A Neural Framework for Spatiotemporal Causal Inference with Time-Varying Confounding (Faculty: Nathan Kallus)
- Harnessing the Universal Geometry of Embeddings (Faculty: Vitaly Shmatikov)
- Knot So Simple: A Minimalistic Environment for Spatial Reasoning (Faculty: Yoav Artzi)
- Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice (Faculty: James Grimmelmann, Vitaly Shmatikov)
- Objective Soups: Multilingual Multi-Task Modeling for Speech Processing (Faculty: Tianyi Chen)
- Optimal Adjustment Sets for Nonparametric Estimation of Weighted Controlled Direct Effect (Faculty: Kyra Gan)
- Provably Optimal Distributional RL for LLM Post-Training (Faculty: Nathan Kallus, Wen Sun)
- Remasking Discrete Diffusion Models with Inference-Time Scaling (Faculty: Volodymyr Kuleshov)
- Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor (Faculty: Angelina Wang)
- Scaling Offline RL via Efficient and Expressive Shortcut Models (Faculty: Wen Sun)
- Simulation-Based Inference for Adaptive Experiments (Faculty: Nathan Kallus)
- Speculate Deep and Accurate: Lossless and Training-Free Acceleration for Offloaded LLMs via Substitute Speculative Decoding (Faculty: Mohamed Abdelfattah)
- Targeted Maximum Likelihood Learning: An Optimization Perspective (Faculty: Kyra Gan)
- Value-Guided Search for Efficient Chain-of-Thought Reasoning (Faculty: Nathan Kallus, Wen Sun)
- When Additive Noise Meets Unobserved Mediators: Bivariate Denoising Diffusion for Causal Discovery (Faculty: Kyra Gan)
- WildCAT3D: Appearance-Aware Multi-View Diffusion in the Wild (Faculty: Hadar Averbuch-Elor)
Grace Stanley is the staff writer-editor for Cornell Tech.
Media Highlights
Tech Policy Press
Content Moderation, Encryption, and the Law