Frontiers of AI Summit 2026 Program Details
Wednesday, May 27, 2026
Verizon Executive Education Center | Cornell Tech
WELCOME | 9:00 a.m.
Welcome and Agenda
Yoav Artzi — Associate Professor of Computer Science, Cornell Tech
Greg Morrisett — Jack and Rilla Neafsey Dean and Vice Provost, Cornell Tech
KEYNOTE
Think Before You Speak: Next Gen LLMs with Global Reasoning and External Memory
Kilian Q. Weinberger — Professor of Computer Science, Cornell University
Moderator: Yoav Artzi — Associate Professor of Computer Science, Cornell Tech
The dominant paradigm in language modeling — scaling next-token prediction with parametric knowledge storage — delivers impressive capabilities but suffers from fundamental limitations: brittle factual memory, inefficient use of parameters, and myopic reasoning. Progress requires a shift toward external memory and architectures that reason globally before committing to tokens.
In this talk, Weinberger will present two recent directions supporting this claim: Limited-Memory Language Models, which externalize factual knowledge during pre-training to yield more controllable, verifiable, and parameter-efficient models; and latent diffusion–augmented language models, which demonstrate how planning in continuous latent space overcomes the foresight limitations of next-token prediction, improving reasoning and coherence.
LIGHTNING TALKS
Moderator: Yoav Artzi — Associate Professor of Computer Science, Cornell Tech
Beyond Language Models: Toward Multimodal World Modeling
Saining Xie — Assistant Professor of Computer Science, NYU; Co-Founder and Chief Science Officer, AMI Labs
Xie will present recent work on vision-centric multimodal learning, aiming to move beyond purely linguistic intelligence toward more grounded, robust world modeling. He will highlight three directions shaping this future: learning scalable visual representations for grounding without heavy reliance on language supervision; rethinking benchmarks and using video as a rich medium for spatial sensing and next-generation multimodal applications; and exploring the convergence of multimodal learning with generative models. Together, these directions point toward a multimodal future that is more grounded, scalable, and increasingly connected to real-world applications with broad and tangible impact.
Building Cursor Composer
Sasha Rush — Researcher, Cursor
Rush will describe the technical challenges in building Composer, the specialized coding model designed by Cursor. Composer is a large-scale agentic model post-trained with reinforcement learning that scores highly on public and proprietary coding benchmarks. Composer's development illustrates the systems and evaluation challenges in building large-scale generative models for specific domains.
MORNING BREAK
LIGHTNING TALKS | 11:00 a.m.
Moderator: Yoav Artzi — Associate Professor of Computer Science, Cornell Tech
Illuminating Unknown Areas of Protein Biology With AI
Roshan Rao — Research Scientist, Biohub
Protein biology has long been organized around what has been experimentally observed and formally characterized — but this foundation is inherently incomplete. Rao will present ESMC, a model that learns directly from evolutionary-scale protein sequences to reveal hidden structure in protein space, surfacing relationships and functions beyond what traditional experiments alone can uncover.
By treating biology as a computable space, ESMC enables millions of hypotheses to be generated, tested, and refined in silico before a single experiment is run — opening new possibilities across understudied organisms and metagenomic data where annotation remains sparse.
Making Sense of the Multimodal World
Carl Vondrick — Associate Professor of Computer Science, Columbia University
People experience the world through the modalities of sight, sound, words, touch, and more. By leveraging the natural relationships among these modalities and developing multimodal learning methods, Vondrick’s research creates artificial perception systems with diverse skills, including spatial, physical, logical, and cognitive abilities, for flexibly analyzing visual data. This multimodal approach provides versatile representations for tasks like 3D reconstruction, visual question answering, and object recognition, while offering inherent explainability and excellent zero-shot generalization across tasks. By closely integrating diverse modalities, this approach can overcome key challenges in machine learning and enable new capabilities in computer vision, especially for the many upcoming applications that require physical interaction.
KEYNOTE CONVERSATION
Open Source AI
Soumith Chintala — Chief Technology Officer, Thinking Labs
Moderator: Yoav Artzi — Associate Professor of Computer Science, Cornell Tech
LUNCH | 12:00 p.m.
KEYNOTE | 1:00 p.m.
How Could a Superhuman AI Mathematician Come About?
Sanjeev Arora — Director, Princeton Language and Intelligence, Charles C. Fitzmorris Professor of Computer Science, Princeton University
Moderator: Robert Kleinberg — Professor of Computer Science, Cornell University
Can AI systems exceed the capabilities of the human experts who provided their training data? Arora will examine the hypothesis of AI self-improvement through mechanisms such as synthetic data generation, reinforcement learning, and tool-augmented reasoning with formal verification loops.
He will present recent work from Princeton, including the Gödel Prover V2 for Lean-based theorem proving and a new inference pipeline that achieved state-of-the-art performance on IMO-ProofBench at moderate inference costs. These examples will illustrate how AI systems can escape “cognitive wells” — local optima in reasoning capabilities — while also highlighting the important hurdles and open questions that remain on the path to superhuman mathematical reasoning.
STARTUP SPOTLIGHTS
Accelerating Agentic Inference with Heterogeneous Hardware
Natalie Serrino — Co-Founder, Gimlet Labs
As AI agents become the next application layer for software, latency and GPU underutilization have emerged as critical infrastructure challenges. Serrino will argue that single-model inference is already a heterogeneous workload, and that agents compound the problem by chaining models, tools, code execution, and retrieval — each with very different compute, memory, and bandwidth needs.
She will discuss her work at Gimlet Labs, the first multi-silicon inference cloud built for agents, which deploys workloads across GPUs, CPUs, and specialized accelerators to deliver dramatically better performance per watt. The talk will provide an overview of the technical challenges unique to heterogeneous environments — including workload disaggregation, scheduling, and kernel generation — along with results from frontier-scale models.
Modal — AI Infrastructure That Developers Love
Erik Bernhardsson — Chief Executive Officer, Modal
Modal builds cloud infrastructure that powers a wide range of AI applications, including Suno, Lovable, and Cognition, running at large scale across thousands of GPUs for use cases spanning generative video to protein folding. Bernhardsson will share why Modal chose to build its own infrastructure stack from the ground up — including a custom file system and container system — and what that decision has enabled at scale.
The talk will offer a practitioner’s perspective on the architectural choices behind a high-performance AI infrastructure platform and will explore what comes next for Modal as AI workloads continue to grow in complexity and demand.
Automatic GPU Performance Engineering in the Age of Code Generation
Mohamed Abdelfattah — Assistant Professor of Electrical and Computer Engineering, Cornell Tech; Co-Founder and Chief Science Officer, Makora
AI performance engineering is growing rapidly in complexity, driven by scaling model sizes and the increasing heterogeneity of modern hardware platforms. Abdelfattah will present Makora, a code generation framework that automatically synthesizes low-level, hardware-specific kernels to fully exploit AI accelerators.
He will then outline a vision for self-evolving AI performance systems that continuously adapt to evolving models, workloads, and data center architectures — leveraging advances in agentic code generation to rethink how performance engineering is performed. The talk will explore how automation can close the gap between rapidly advancing hardware capabilities and the software stacks needed to fully utilize them.
AFTERNOON BREAK
KEYNOTE | 3:00 p.m.
Building an AI Polymath
Shirley Ho — Group Leader, Simons Foundation; Chief Executive Officer, Polymathic Technologies; Professor of Physics, NYU
Moderator: Kilian Q. Weinberger — Professor of Computer Science, Cornell University
AI has made remarkable strides in natural language processing and image recognition, yet its impact on the natural sciences is fragmented. While specialized models like AlphaFold have revolutionized biology, the scientific enterprise remains siloed, with most foundational models narrowly tailored to specific domains or modalities. In this keynote, Ho advocates for a new class of scientific AI: the polymathic foundation model. Inspired by the intellectual versatility of human polymaths, such a model would integrate diverse data types and disciplinary knowledge across the natural sciences. She argues that building such a model is not only technically feasible but epistemologically necessary. She draws on lessons from existing interdisciplinary successes and outlines key challenges: scientific dataset curation, multimodal and multitask learning, verifiable knowledge transfer, and interpretability. She will close with a cautiously optimistic roadmap for how such models could transform scientific discovery in the next decade.
LIGHTNING TALKS AND SPOTLIGHTS
Moderator: Kilian Q. Weinberger — Professor of Computer Science, Cornell University
Accelerated Materials Discovery Through Self-Driving Labs
Jorge Colindres — Co-Founder, Radical AI
Every major technological leap has been gated by the discovery of new materials — yet materials science remains one of the slowest experimental disciplines, constrained by manual processes and sparse, inconsistent data. Colindres will present Radical AI’s approach: self-driving labs that combine physical AI with scientific reasoning in a continuous experimental loop capable of generating high-quality proprietary data and compounding learning over time.
The talk will explore how multi-agent systems, machine-learned force fields, robotic labs, and a self-improving data flywheel are working together toward a future where the pace of scientific discovery is no longer limited by human bandwidth.
Interoperability Isn’t Enough: Building Trustworthy AI Infrastructure with MCP
Ania Musial — Head of AI Platforms Product, Chief Technology Office, Bloomberg
Since adopting the Model Context Protocol (MCP) in 2024 as a foundation for interoperable AI agents and tools, Bloomberg has learned that interoperability is the easy part — building trustworthy AI infrastructure is the harder, more important challenge. Musial will share how MCP now powers Bloomberg’s flagship agentic AI solution, its internal AI development platform, and production workflows across the firm.
She will outline the architectural decisions that made this scale possible: interceptors to constrain and guide agentic behavior, tool variants to support different models and surfaces, and governance strong enough to let teams innovate without compromising control. The talk will make the case that interoperability connects AI agents, but trustworthiness is what enables them to scale.
The Outer Loop: How AI Agents Are Taking Over the Code Review, the Bug Ticket, and the 3 a.m. Incident
Robert Brennan — Chief Executive Officer, OpenHands
While AI coding tools have focused heavily on the inner loop — autocomplete and chat-in-the-IDE — the outer loop of engineering work, including code reviews, dependency upgrades, production incidents, and security debt, is where teams actually lose time. Drawing on data from thousands of real engineering workflows, Brennan will share what agents can own end-to-end versus where human oversight remains essential, along with the architecture required to run hundreds of parallel agents effectively.
Attendees will come away with a framework for classifying their team’s outer-loop tasks by agent-readiness, a reference architecture for outer-loop automation, and a grounded view of what same-day bug resolution actually requires.
KEYNOTE CONVERSATION
The AI Bubble
Andrew Ross Sorkin — Award-Winning Journalist; Founder, Editor at Large, The New York Times DealBook; Co-Anchor, CNBC’s Squawk Box; Author, “1929” and “Too Big to Fail”; Co-Creator, Showtime’s “Billions”
Moderator: Kavita Bala — Provost, Cornell University
CLOSING RECEPTION