By Grace Stanley

Researchers at Cornell Tech have released a dataset extracted from more than 300,000 public Reddit communities, and a report detailing how Reddit communities are changing their policies to address a surge in AI-generated content.

The team collected metadata and community rules from the online communities, known as subreddits, during two periods in July 2023 and November 2024. The researchers will present a paper with their findings at the Association for Computing Machinery’s CHI conference on Human Factors in Computing Systems, being held April 26 to May 1 in Yokohama, Japan.

One of the researchers’ most striking discoveries is the rapid increase in subreddits with rules governing AI use. According to the research, the number of subreddits with AI rules more than doubled in 16 months, from July 2023 to November 2024.

Read more on the Cornell Chronicle.

Grace Stanley is a staff writer-editor for Cornell Tech.


By Grace Stanley

Daniel D. Lee, Tisch University Professor of Electrical and Computer Engineering at Cornell Tech, has won the FlyWire Ventral Nerve Cord (VNC) Matching Challenge award for a competition that tasked researchers with creating a method to align the connectomes — aka neural connection maps — of male and female fruit flies, represented as large graphs.

To find the best way to match the “nodes” (neurons) of the graphs, Lee’s team, named “Old School,” used matrix representations: ways of organizing data as matrices, rectangular arrays of numbers arranged in rows and columns. Specifically, Lee analyzed doubly stochastic and permutation matrices, square, nonnegative matrices whose rows and columns each sum to one (a permutation matrix has exactly one 1 in each row and column), which allowed his team to develop a winning solution.
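As a loose illustration of this idea (not the team’s actual method): a hard node-to-node matching can be relaxed to a doubly stochastic matrix, the relaxed objective optimized, and the result rounded back to a permutation. Everything below — the function names, the softassign-style update, and the hyperparameters — is my own sketch under those assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(M, iters=50):
    # Alternately normalize rows and columns so M approaches a doubly
    # stochastic matrix (every row and every column sums to 1).
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M

def match_graphs(A, B, beta=10.0, outer=30):
    # Match nodes of two weighted graphs given as n x n adjacency matrices.
    # Relax the unknown permutation to a doubly stochastic matrix Q, ascend
    # the objective trace(A Q B Q^T), then round Q back to a permutation.
    n = A.shape[0]
    Q = np.full((n, n), 1.0 / n)             # uniform doubly stochastic start
    for _ in range(outer):
        G = A @ Q @ B                        # ascent direction for symmetric A, B
        Q = sinkhorn(np.exp(beta * G / np.abs(G).max()))
    row, col = linear_sum_assignment(-Q)     # nearest hard assignment
    return col                               # col[i] = node of B matched to node i of A
```

On a pair of isomorphic weighted graphs (B an exact relabeled copy of A), this relaxation recovers the relabeling; on real connectome graphs the same idea is only the starting point for much more careful optimization.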

Lee and his teammate, senior researcher Lawrence Saul of the Flatiron Institute, presented their solution to the FlyWire challenge at the Princeton Neuroscience Institute on March 7. FlyWire is a global initiative that combines human expertise and AI to construct a detailed map of the neural pathways and connections in fruit fly brains, striving to advance research in neurobiology.

“This competition seeks to better understand recently measured connectomes in brains using computational methods,” Lee said. “Our method can potentially help discover gender and individual differences in connectome data in the future.”

Professor Daniel D. Lee and Researcher Lawrence Saul accepting an award for the FlyWire Ventral Nerve Cord Matching Challenge.

Lee and his teammate are both members of the Flatiron Institute Center for Computational Mathematics. Their work in the competition showcased the innovative research conducted at both the Flatiron Institute and Cornell Tech, where Lee performs research on machine learning, robotics, and computational neuroscience.

“This work is a good example of research at Cornell Tech that can facilitate further scientific discovery by others,” Lee said. “The open collaborative nature of Cornell was instrumental in nurturing this work.”

Grace Stanley is a staff writer-editor for Cornell Tech.


By Jennifer Wholey

For 24 hours, donors rallied to help Cornell “reach for the stars” on the 11th Giving Day, held March 13.

This year’s space-themed event raised $11,206,717 from 17,591 donors, with 25,929 gifts in total making a tangible show of support for causes across the university.

“Cornellians everywhere demonstrated their continued commitment to our founding principles and mission ‘to do the greatest good,’” said Fred Van Sickle, vice president for alumni affairs and development (AAD). “In these times of great uncertainty for higher education, the results of Giving Day 2025 help the university provide an exceptional educational experience for its students and bolster our impact around the world.”

Seventeen Giving Day events across Cornell’s Ithaca campus and at Cornell Tech in New York City drew in 1,620 students with giveaways, snacks, postcard-writing and games.

Read more in the Cornell Chronicle.

Jennifer Wholey is a marketing writer in Alumni Affairs and Development.


By Grace Stanley

A team of researchers from Cornell Tech has developed a new tool designed to revolutionize hardware troubleshooting, with the help of 3D phone scans.

SplatOverflow – inspired by StackOverflow, a widely used platform for tackling software issues – brings a similar approach to hardware support, enabling users to diagnose and fix hardware issues asynchronously with the help of remote experts.

A paper about the new tool will be presented April 30 at the Association for Computing Machinery’s CHI conference on Human Factors in Computing Systems, taking place April 26-May 1 in Yokohama, Japan.

SplatOverflow was developed in the Matter of Tech Lab at Cornell Tech, directed by Thijs Roumen, assistant professor at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS).

Read more on the Cornell Chronicle.

Grace Stanley is a staff writer-editor for Cornell Tech.


By Grace Stanley

One might wonder why a CEO would talk about a divisive political topic – especially when it’s unrelated to their core business model. After all, why would you want to hear what the guy selling beans has to say about politics?

New research coming out of Cornell Tech suggests you might be on to something.

In a paper titled “When (Not) To Talk Politics in Business,” published Feb. 25 in Strategic Management Journal, researchers illuminate circumstances under which it is more or less beneficial for CEOs to talk about politics.

“We’re not looking here at people who go out in the streets and protest because of their own convictions. We’re looking at multi-billion-dollar firms. If they’re doing something, there’s usually a business reason,” said Tommaso Bondi, assistant professor of marketing at Cornell Tech and at the Samuel Curtis Johnson Graduate School of Management.

Read more at the Cornell Chronicle.

Grace Stanley is a staff writer-editor for Cornell Tech.


By David Nutt

There is more to photovoltaic panels than the materials that compose them: The design itself can also drive – or potentially diminish – the widespread adoption of solar technology.

Put bluntly: Most solar panels are not much to look at. And their flat, nonflexible composition means they can only be affixed to similarly flat structures. But what if photovoltaic panels were instead a hinged, lightweight fabric that was aesthetically attractive and could wrap around complex shapes, even contorting its form to better absorb sunlight?

Thus was born the idea for HelioSkin, an interdisciplinary project led by Jenny Sabin, the Arthur L. and Isabel B. Weisenberger Professor in Architecture in the College of Architecture, Art and Planning, in collaboration with Itai Cohen, professor of physics in the College of Arts and Sciences, and Adrienne Roeder, professor in the Section of Plant Biology in the School of Integrative Plant Science, in the College of Agriculture and Life Sciences and at the Weill Institute for Cell and Molecular Biology.

“What we’re really passionate about is how the system could not only produce energy in a passive way, but create transformational environments in urban or urban-rural settings,” Sabin said.

Read more in the Cornell Chronicle.

David Nutt is a senior staff writer for the Cornell Chronicle.


Backslash at Cornell Tech, dedicated to advancing new works of art and technology that escape convention, has announced Nigerian-American artist Mimi Ọnụọha as its first Backslash Fellow.

Ọnụọha is an artist-in-residence at Cornell Tech, embedding her creative practice in the new Backslash Studio at the Tata Innovation Center, where she fosters collaboration with researchers and students on the Roosevelt Island campus. Ọnụọha was selected because she uses technology to take her practice in bold, unconventional directions.

“We are honored to recognize Mimi as the first Backslash Fellow,” said Greg Pass, founder of Backslash and former Chief Entrepreneurial Officer at Cornell Tech. “Mimi’s unorthodox applications of technology perfectly represent the nonlinear artistic practices we champion with Backslash.”

As a Backslash Fellow, Ọnụọha receives a grant valued at $60,000. This includes an honorarium, travel stipend, project materials, and support for collaboration with Ph.D. and master’s students at Cornell Tech.

Ọnụọha is developing a docufiction film about a custom predictive machine learning model that she teamed up with the Human Rights Data Analysis Group to build. “This work feels like a marked evolution for me personally,” said Ọnụọha. “In this work — and aided by the support and resources of Backslash — I want to push my practice a step further and create a work that talks about history, land, and machine learning in a way that isn’t typically seen.”

Ọnụọha will collaborate with Cornell Tech Computer Science Professor Noah Snavely and his research group to reconstruct 3D scenes from 2D photography, combining archival research footage and materials with footage generated using computer vision techniques.

“The rapid advancements in computer vision have the ability to push boundaries across many creative sectors, and we are already inspired by the collaborations we have had with Backslash artists,” said Snavely. “Mimi’s documentary work presents our research group with a unique opportunity to integrate bleeding-edge computer vision and graphics technology with art.”

Ọnụọha has historically worked at the intersection of art and technology to question and expose contradictory logics of technological progress. Using technological mediums such as code, data, and video, her pieces offer new orientations for making sense of absences in the systems of labor, ecology, and relations.

Since 2016, Backslash has supported artists and Cornell University students across the Ithaca and Cornell Tech campuses whose practices are unconventional, adventurous, intense, and primed for engagement with new technologies. Backslash is inspired by the \ keyboard character, known as an escape character in computer programming, indicating that the characters after the \ should be interpreted outside the normal mode of input and output to do something special.


By Patricia Waldron

Deciphering some people’s writing can be a major challenge – especially when that writing consists of cuneiform characters imprinted into 3,000-year-old tablets.

Now, Middle East scholars can use artificial intelligence (AI) to identify and copy over cuneiform characters from photos of tablets, letting them read complicated scripts with ease.

Along with Egyptian hieroglyphs, cuneiform is one of the oldest known forms of writing, and consists of more than 1,000 unique characters. The appearance of these characters can vary across eras, cultures, geography and even individual writers, making them difficult to interpret. Researchers from Cornell and Tel Aviv University (TAU) have developed an approach called ProtoSnap that “snaps” into place a prototype of a character to fit the individual variations imprinted on a tablet.

With the new approach, they can make an accurate copy of any character and reproduce whole tablets.

“When you go back to the ancient world, there’s a huge variability in the character forms,” said Hadar Averbuch-Elor, assistant professor of computer science at Cornell Tech and in the Cornell Ann S. Bowers College of Computing and Information Science, who led the research. “Even with the same character, the appearance changes across time, and so it’s a very challenging problem to be able to automatically decipher what the character actually means.”

Rachel Mikulinsky, a master’s student and co-first author from TAU, will present “ProtoSnap: Prototype Alignment for Cuneiform Signs” in April at the International Conference on Learning Representations (ICLR).

An estimated 500,000 cuneiform tablets sit in museums, but only a fraction have been translated and published. “There’s an endless amount of 2D scans of these cuneiforms, but the amount of labeled data is very scarce,” Averbuch-Elor said.

To see if they could automatically decipher these scans, the team applied a diffusion model – a type of generative AI model often used for computer vision tasks, such as image generation – to calculate the similarity between each pixel in an image of a character on a tablet and a general prototype of the character. Then they aligned the two versions and snapped the template to match the strokes of the actual character.
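The heavy lifting here — the per-pixel similarity computed by a diffusion model — can’t be reproduced in a few lines, but the “snapping” step can be caricatured as classic template alignment: slide a character prototype over the image and keep the placement with the highest normalized correlation. This is a toy stand-in, not the ProtoSnap method; the function name and the translation-only alignment are my simplifications (ProtoSnap also deforms individual strokes to fit the character):

```python
import numpy as np

def snap_template(image, template):
    # Slide a character template over a grayscale image and return the
    # offset (dy, dx) with the highest normalized cross-correlation,
    # plus that best score (close to 1.0 for a near-perfect fit).
    H, W = image.shape
    h, w = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best, best_off = -np.inf, (0, 0)
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            patch = image[dy:dy + h, dx:dx + w]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = (p * t).mean()          # correlation of standardized pixels
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off, best
```

In the real system, the pixel intensities in this sketch would be replaced by the deep diffusion-model features described above, which is what lets the alignment survive the huge variability in ancient character forms.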

The snapped characters also can be used to train downstream AI models that perform optical character recognition – essentially turning images of the tablets into machine-readable text. The researchers showed that, when trained with this data, the downstream models perform far better at recognizing cuneiform characters – even ones that are rare or that show a lot of variation – compared to previous efforts using AI.

This advance could help automate the tablet-copying process, saving experts countless hours, and allowing for large-scale comparisons of characters between different times, cities and writers.

“At the base of our research is the aim to increase the ancient sources available to us by tenfold,” said co-author Yoram Cohen, professor of archaeology at TAU. “This will allow us, for the first time, the manipulation of big data, leading to new measurable insights about ancient societies – their religion, economy, social and legal life.”

Additional researchers on the study include co-first-author Morris Alper, a graduate student at TAU; Enrique Jimenez, professor of ancient oriental languages at Ludwig-Maximilians-Universität München; and Shai Gordin, senior lecturer for ancient Near Eastern history and digital humanities at Ariel University.

This research received funding from the TAU Center for Artificial Intelligence & Data Science and the LMU-TAU Research Cooperation Program.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.


New York, N.Y. – Bloomberg L.P. co-founder and philanthropist Tom Secunda today announced a landmark donation to Cornell Tech, Cornell University’s tech campus in New York City, totaling $10.5 million to create the Cornell Empire AI Post-Doctoral Fellowship Program Fund. This donation will advance the mission of Empire AI, providing researchers and faculty with unparalleled access to computing power and opportunities to engage in groundbreaking AI research for the public good.

“I’m proud to be supporting Cornell’s top AI researchers in their pursuit of new, groundbreaking AI developments that will benefit the people of New York. By giving these researchers the freedom to explore all the opportunities that this nascent technology has to offer, we are opening up a new world of technological development,” said Secunda, who is a member of the steering committee for the Jacobs Technion-Cornell Institute at Cornell Tech. “I’m also honored to be contributing to the advancement of Empire AI, which is shaping up to be one of the most impactful public-private technology partnerships in recent memory.”

Secunda’s gift, which totals $10.5 million over five years, will establish the Cornell Empire AI Post-Doctoral Fellowship Program Fund at Cornell Tech in partnership with the Jacobs Technion-Cornell Institute. The gift will benefit researchers at both the Cornell Tech campus in New York City and at Cornell’s Bowers College of Computing and Information Science in Ithaca. It will also support an annual public conference to be held at Cornell Tech to showcase the research work being conducted by the Cornell fellows and faculty utilizing Empire AI.

“Mr. Secunda’s generous support represents a remarkable opportunity for Cornell University, which is already a national leader in AI research, to bring more top AI scholars from around the world to both its Cornell Tech campus in New York City and to the Bowers College of Computing and Information Science on our Ithaca campus,” said Kavita Bala, Provost of Cornell University. “The transformational potential of their work will be facilitated by the computing resources now available through the Empire AI consortium and will also showcase New York State’s leadership in AI.”

Empire AI is a new resource for high-performance computing power that enables responsible AI research and development. The public-private technology partnership was anchored by key support from New York State and Governor Kathy Hochul in 2024. The Empire AI Consortium connects Cornell and several other collaborating research universities and institutions across New York to a central computing infrastructure.

“This computing resource is a partnership in a research instrument – one that can be matched in compute capability with these researchers’ combination of leading-edge discovery and thoughtfulness about how AI is harnessed to address many real-world challenges,” said Krystyn Van Vliet, Cornell Vice President for Innovation and External Engagement Strategy.

“The Cornell Tech community is grateful to Tom Secunda for his wisdom and generosity in creating this opportunity for our campus and for the AI research community,” added Greg Morrisett, Jack and Rilla Neafsey Dean and Vice Provost of Cornell Tech. “Tom’s support allows us to significantly expand our research in AI here in New York City by attracting an elite cohort of the best researchers in the world. Their work, which will now progress much more quickly thanks to Empire AI, will allow our campus to further increase its significant economic impact on New York City and its growing tech sector.”

Faculty associated with the Jacobs Technion-Cornell Institute at Cornell Tech will also be key participants. The fellowship program will drastically increase capacity for faculty doing AI research by allowing them to recruit an elite cohort of up-and-coming researchers. The highly selective cohort will additionally benefit from Empire AI’s large computing resources. The first of these systems, called Alpha, launched in October 2024 and debuted among the 250 highest-performing computing systems in the world, at the leading edge of processor capabilities for AI-enabled research.

About Empire AI

Launched in April 2024 by Governor Kathy Hochul, Empire AI is a bold partnership of academic research institutions that have come together to establish a state-of-the-art artificial intelligence academic research computing resource, housed at SUNY’s University at Buffalo. The consortium comprises higher education institutions including Columbia University, Cornell University, The City University of New York (CUNY), New York University (NYU), Rensselaer Polytechnic Institute (RPI), and The State University of New York (SUNY), as well as philanthropic backers such as Tom Secunda and the Simons Foundation, whose Flatiron Institute works to advance research through computational methods. Intended to promote responsible academic research and development and to unlock AI opportunities focused on the public good in New York, Empire AI bridges the gap between private funding and public interest, accelerating the development of public-interest AI for the state. Additional information can be found here.

About Cornell Tech

Cornell Tech is Cornell University’s state-of-the-art campus in New York City that develops leaders and technologies for the AI era through foundational and applied research, graduate education, and new ventures. Located on Roosevelt Island, the growing campus was founded in partnership with the Technion-Israel Institute of Technology and in close collaboration with the NYC Economic Development Corporation after Cornell won a worldwide competition initiated by Mayor Michael R. Bloomberg’s administration to create an applied sciences campus in New York City. More than 1,000 Cornell students are now educated annually on the campus, including 700 in Cornell Tech programs. Since opening in 2012, nearly 120 new companies have spun out from startup programs at Cornell Tech and 95 percent of them are based in New York City. Cornell Tech continues to have a transformative economic impact on the region’s tech sector.


By Jim Schnabel

A new AI-based system for analyzing images taken over time can accurately detect changes and predict outcomes, according to a study led by investigators at Weill Cornell Medicine, Cornell’s Ithaca campus and Cornell Tech. The system’s sensitivity and flexibility could make it useful across a wide range of medical and scientific applications.

The new system, termed LILAC (Learning-based Inference of Longitudinal imAge Changes), is based on an AI approach called machine learning. In the study, which appears Feb. 20 in the Proceedings of the National Academy of Sciences, the researchers developed the system and demonstrated it on diverse time-series of images—also called “longitudinal” image series—covering developing IVF embryos, healing tissue after wounds and aging brains. The researchers showed that LILAC has a broad ability to identify even very subtle differences between images taken at different times, and to predict related outcome measures such as cognitive scores from brain scans.

“This new tool will allow us to detect and quantify clinically relevant changes over time in ways that weren’t possible before, and its flexibility means that it can be applied off-the-shelf to virtually any longitudinal imaging dataset,” said study senior author Dr. Mert Sabuncu, vice chair of research and a professor of electrical engineering in radiology at Weill Cornell Medicine and professor in the School of Electrical and Computer Engineering at Cornell University’s Ithaca campus and Cornell Tech.

The study’s first author is Dr. Heejong Kim, an instructor of artificial intelligence in radiology at Weill Cornell Medicine and a member of the Sabuncu Laboratory.

Traditional methods for analyzing longitudinal image datasets tend to require extensive customization and pre-processing. For example, researchers studying the brain may take raw brain MRI data and pre-process the image data to focus on just one brain area, also correcting for different view angles, sizing differences and other artifacts in the data—all before performing the main analysis.

The researchers designed LILAC to work much more flexibly, in effect automatically performing such corrections and finding relevant changes.

“This enables LILAC to be useful not just across different imaging contexts but also in situations where you aren’t sure what kind of change to expect,” said Dr. Kim, LILAC’s principal designer.

In one proof-of-concept demonstration, the researchers trained LILAC on hundreds of sequences of microscope images showing in-vitro-fertilized embryos as they develop, and then tested it against new embryo image sequences. LILAC had to determine, for randomized pairs of images from a given sequence, which image was taken earlier—a task that cannot be done reliably unless the image data contain a true “signal” indicating time-related change. LILAC performed this task with about 99% accuracy, the few errors occurring in image pairs with relatively short time intervals.
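The pairwise task itself is easy to make concrete. In the sketch below, a hand-written scoring function stands in for LILAC’s learned comparator (which is trained end-to-end on pairs of images); `ordering_accuracy`, the synthetic data in the usage example, and the scoring function are all illustrative assumptions, not the study’s code:

```python
import numpy as np

def ordering_accuracy(sequences, score_fn):
    # Evaluate a change-scoring function on the pairwise ordering task:
    # for random pairs drawn from each time-ordered image sequence, how
    # often does the score correctly say which image was taken earlier?
    rng = np.random.default_rng(0)
    correct = total = 0
    for seq in sequences:
        for _ in range(20):
            i, j = sorted(rng.choice(len(seq), size=2, replace=False))
            earlier, later = seq[i], seq[j]
            # Present the pair in randomized order, as in LILAC's setup.
            if rng.random() < 0.5:
                a, b, label = earlier, later, 0   # label 0: first image is earlier
            else:
                a, b, label = later, earlier, 1   # label 1: second image is earlier
            pred = 0 if score_fn(a) < score_fn(b) else 1
            correct += (pred == label)
            total += 1
    return correct / total
```

On synthetic sequences where, say, overall intensity grows monotonically over time, even a crude score like the mean pixel value orders every pair correctly; the point of LILAC is that it learns such a score from data when no one knows in advance what kind of change to look for.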

LILAC also was highly accurate in ordering pairs of images of healing tissue from the same sequences, and in detecting group-level differences in healing rates between untreated tissue and tissue that received an experimental treatment.

Similarly, LILAC predicted the time intervals between MRI images of healthy older adults’ brains, as well as individual cognitive scores from MRIs of patients with mild cognitive impairment—in both cases with much less error compared with baseline methods.

The researchers showed in all these cases that LILAC can be adapted easily to highlight the image features that are most relevant for detecting changes in individuals or differences between groups—which could provide new clinical and even scientific insights.

“We expect this tool to be useful especially in cases where we lack knowledge about the process being studied, and where there is a lot of variability across individuals,” Dr. Sabuncu said.

The researchers now plan to demonstrate LILAC in a real-world setting to predict treatment responses from MRI scans of prostate cancer patients.

The LILAC source code is freely available at https://github.com/heejong-kim/LILAC.

Many Weill Cornell Medicine physicians and scientists maintain relationships and collaborate with external organizations to foster scientific innovation and provide expert guidance. The institution makes these disclosures public to ensure transparency. For this information, see profile for Dr. Mert Sabuncu.

Funding for this project was provided in part by grants from the National Cancer Institute and the National Institute on Aging, both part of the National Institutes of Health, through grant numbers K25CA283145, R01AG053949, R01AG064027 and R01AG070988. For aging brain experiments, data were provided by OASIS-3: Longitudinal Multimodal Neuroimaging: Principal Investigators: T. Benzinger, D. Marcus, and J. Morris; NIH P30 AG066444, P50 AG00561, P30 NS09857781, P01 AG026276, P01 AG003991, R01 AG043434, UL1 TR000448, and R01 EB009352.

Jim Schnabel is a freelance writer for Weill Cornell Medicine.