
With an increased focus on mental health and growing understanding of its complexities, new research led by Cornell Tech Ph.D. candidate Dan Adler finds that there’s no one-size-fits-all for how we experience mental health symptoms in everyday life. Using artificial intelligence, Adler is identifying trends that advance our understanding of the field to make symptom detection and treatment more effective.

New research led by Cornell Tech highlights the complex challenges and opportunities of using artificial intelligence to support mental health tracking and precision medicine. While the study found AI currently unreliable for such tracking, it raised important questions for future research, including the potential for bespoke, tailored solutions for targeted populations and the inherent challenges of applying broad-stroke diagnoses and solutions to large and diverse groups of people.

Adler’s research paper, published in npj Mental Health Research, looked at how technology, such as smartphone data, can aid in measuring behaviors related to mental health. For instance, smartphones can track GPS data to monitor mobility, which is closely associated with depression – prior research shows that people who are more mobile throughout the day are less prone to depression symptoms than those who are more sedentary.
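As a rough illustration of that kind of measurement, the sketch below computes a simple daily mobility metric from a few invented GPS fixes; the coordinates, the distance-based proxy, and the single-participant setup are assumptions for illustration, not the study’s actual pipeline.

```python
# Minimal sketch (not the study's actual pipeline): deriving a daily
# mobility metric from hypothetical smartphone GPS samples.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical day of (latitude, longitude) samples for one participant.
gps_trace = [(40.7411, -73.9897), (40.7484, -73.9857), (40.7527, -73.9772)]

# Total distance traveled is one simple proxy for daily mobility.
daily_mobility_km = sum(
    haversine_km(*gps_trace[i], *gps_trace[i + 1]) for i in range(len(gps_trace) - 1)
)
print(f"Estimated daily mobility: {daily_mobility_km:.2f} km")
```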

Adler’s research also uses AI to find correlations between behaviors and mental health. He explains that while some studies argue for the consistency of such measurements, his team, which includes faculty advisor Tanzeem Choudhury, Professor in Computing and Information Sciences and the Roger and Joelle Burnell Chair in Integrated Health and Technology, focuses on a larger, diverse population. Their research reveals that no single set of behaviors uniformly measures mental health across all individuals, a finding that emphasizes the importance of personalized measurement in mental health care.

Despite the dedication of mental health clinicians who strive to support their patients, Adler points out significant challenges within care, particularly with regard to measurement. Traditionally, mental health diagnoses and assessments rely heavily on self-reported information, clinician observations and collateral information from family and friends. This approach often complicates accurate diagnosis and treatment evaluation.

Mental health measurement is inherently complex and often lacks objective tools for clinicians to utilize because patient progress looks different to everyone. Adler notes the limitations of the historical pursuit of more objective measures, such as biomarkers in the brain, or the smartphone measurements he researched. “Research continues to emphasize that mental health isn’t that simple,” he said, emphasizing that while data-driven methods are promising, mental health remains a deeply personal and subjective experience.

“We used AI tools to find associations between behaviors and mental health, and we found that these tools are not very accurate,” Adler says of the paper. His research suggests conflicting signals in the data, indicating that a one-size-fits-all approach to mental health measurement is ineffective. Instead, Adler advocates for precision medicine and personalized tools, which can tailor care to individual triggers or needs.

For example, his paper shows that high phone use might be associated with depression for older adults, while low phone use might be associated with depression for younger adults – evidence that additional context is needed to understand precisely how behavior relates to mental health.
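The toy simulation below, using entirely synthetic numbers, illustrates that statistical point: the same behavior can correlate in opposite directions in two subgroups, and pooling everyone into a single model hides both signals.

```python
# Minimal sketch with synthetic data: the same feature (daily phone use)
# can correlate positively with symptoms in one subgroup and negatively
# in another, which a single pooled model would blur together.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical older-adult subgroup: more phone use, higher symptom score.
phone_old = rng.normal(3, 1, n)
symptoms_old = 0.8 * phone_old + rng.normal(0, 1, n)

# Hypothetical younger-adult subgroup: more phone use, lower symptom score.
phone_young = rng.normal(5, 1, n)
symptoms_young = -0.8 * phone_young + rng.normal(0, 1, n)

print("older adults r =", np.corrcoef(phone_old, symptoms_old)[0, 1].round(2))
print("younger adults r =", np.corrcoef(phone_young, symptoms_young)[0, 1].round(2))

# Pooling both groups masks the opposite-direction associations.
phone_all = np.concatenate([phone_old, phone_young])
symptoms_all = np.concatenate([symptoms_old, symptoms_young])
print("pooled r =", np.corrcoef(phone_all, symptoms_all)[0, 1].round(2))
```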

Choudhury says that “the promise of wearable sensors and smartphones may lie in their ability to account for differences, track symptoms, and support precision treatment for individualized symptom trajectories.”

Adler’s engineering background and the interdisciplinary environment at Cornell Tech allow solutions to be explored through multiple disciplines and perspectives. His work, influenced by personal experiences with the mental health care system, is driven by a passion to advance technological solutions to these challenges and create a more effective care system for patients and providers alike.

He stresses the importance of real-world impact in academic research, a principle deeply ingrained at Cornell Tech and in Choudhury’s People-Aware Computing group, which focuses on advancing the future of technology-assisted well-being.

For future research, Adler still sees significant potential in using AI to address access to care challenges. For example, Adler mentioned that new large language model tools could bridge gaps in mental health services. However, he cautions against the uncritical adoption of such technologies. Technologists, he argues, must implement guardrails to ensure these systems offer helpful, not harmful, guidance.

Adler envisions a balanced approach to AI in mental health care, where AI serves both as a way to fill known gaps in the health care system and as a way to supplement existing care practices. He believes that using AI to handle administrative tasks or summarize information can improve efficiency, but that it’s crucial to evaluate these tools to ensure they genuinely enhance care delivery.


By Kate Blackwood

Using experiments with COVID-19 related queries, Cornell sociology and information science researchers found that in a public health emergency, most people pick out and click on accurate information.

Although higher-ranked results are clicked more often, they are not more trusted, and misinformation does not damage trust in accurate results that appear on the same page. In fact, banners warning about misinformation decrease trust in misinformation somewhat but decrease trust in accurate information even more, according to “Misinformation Does Not Reduce Trust in Accurate Search Results, But Warning Banners May Backfire” published in Scientific Reports on May 14.

Internet users searching for medical advice might be vulnerable to believing, incorrectly, that the rank of the search result indicates authority, said co-author Michael Macy, Distinguished Professor of Arts and Sciences in Sociology and director of the Social Dynamics Laboratory in the College of Arts and Sciences (A&S). “When COVID hit, we thought this problem was worth investigating.”

The relationship between search result rank and misinformation is particularly important during a global pandemic because medical misinformation could be fatal, said Sterling Williams-Ceci ’21, a doctoral student in information science and the paper’s first author.

“Misinformation has been found to be highly ranked in audit studies of health searches, meaning accurate information inevitably gets pushed below it. So we tested whether exposure to highly ranked misinformation taints people’s trust in accurate information on the page, and especially in accurate results when they are ranked below the misinformation,” Williams-Ceci said. “Our study provided hopeful evidence that people do not lose faith in everything else they see in searches when they see misinformation at the very top of the list.”

Mor Naaman, professor of information science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science, also contributed to the study.

Williams-Ceci designed a series of online experiments to measure how the rank of results, the presence of misinformation, and the use of warning banners affect people’s trust in search results related to COVID-19.

The researchers built an online interface that showed participants a search engine results page with a question about COVID-19. The researchers randomized the rank of results that contained accurate information and manipulated whether one of the top three results contained misinformation. Participants were asked to choose one result that they would click, then to rate some of the individual results they had seen on a trustworthiness scale.
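A hypothetical sketch of that kind of condition assignment is shown below; the variable names and condition set are illustrative assumptions, not the authors’ experimental code (the warning banner belongs to a follow-up experiment described later).

```python
# Hypothetical condition assignment for this kind of experiment
# (illustrative only, not the authors' code).
import random

def assign_condition():
    return {
        "misinfo_present": random.choice([True, False]),   # is one of the top 3 results inaccurate?
        "misinfo_rank": random.choice([1, 2, 3]),           # where it appears, if present
        "accurate_result_rank": random.randint(1, 10),      # randomized rank of a focal accurate result
        "warning_banner": random.choice([True, False]),     # banner condition (a follow-up experiment)
    }

print(assign_condition())
```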

The experiments showed that misinformation was highly distrusted in comparison with accurate information, even when shown at or near the top of the results list. In fact, contrary to assumptions in prior work, there was no general relationship between search results’ ranking on the page and how trustworthy people considered them to be.

“Misinformation was rarely clicked and highly distrusted: Only 2.6% of participants who were exposed to inaccurate results clicked on these results,” the researchers wrote.

Further, the presence of misinformation, even when it showed up near the top of the results, did not cause people to distrust the accurate information they had seen below it.

Another experiment introduced warning banners on the search pages. These banners appeared at the top of the page for some participants and warned that unreliable information may be present in the results without identifying what this information said.

Google currently uses banners like these, but few studies have explored how they affect decisions about what information to trust in online searches, Williams-Ceci said.

The researchers found that one of these banners had an unanticipated backfire effect: It significantly decreased people’s trust in accurate results, while failing to decrease their trust in misinformation results to the same degree.

Overall, the results assuage fears that search engines diminish people’s trust in authoritative sources, such as the Centers for Disease Control and Prevention, even if these sources’ information is not at the top of the page, the researchers concluded. Macy said this is among the first studies to show that combatting misinformation with warning banners in search engines has mixed outcomes, which could hinder getting accurate results in front of internet users.

“The backfire effect of warning labels is very alarming, and further research is needed to learn more about why the labels backfire and how misinformation can be more effectively combatted, not only on Google but on other platforms as well,” Macy said.

Kate Blackwood is a writer for the Cornell University College of Arts and Sciences.


By Louis DiPietro

Amid the unpredictability and occasional chaos of emergency rooms, a robot has the potential to assist health care workers and support clinical teamwork, Cornell and Michigan State University researchers found.

The research team’s robotic crash cart prototype highlights the potential for robots to assist health care workers in bedside patient care and offers designers a framework to develop and test robots in other unconventional areas.

“When you’re trying to integrate a robot into a new environment, especially a high stakes, time-sensitive environment, you can’t go straight to a fully autonomous system,” said Angelique Taylor, assistant professor in information science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science. “We first need to understand how a robot can help. What are the mechanisms in which the robot embodiment can be useful?”

Taylor is the lead author of “Towards Collaborative Crash Cart Robots that Support Clinical Teamwork,” which received a best paper honorable mention in the design category at the Association for Computing Machinery (ACM)/Institute of Electrical and Electronics Engineers (IEEE) International Conference on Human-Robot Interaction in March.

The paper builds on Taylor’s ongoing research exploring robotics and team dynamics in unpredictable health care settings, like emergency and operating rooms.

Within the medical field, robots are used in surgery and other health care operations with clear, standardized procedures. The Cornell-Michigan State team, however, set out to learn how a robot can support health care workers in fluid and sometimes chaotic bedside situations, like resuscitating a patient who has gone into cardiac arrest.

The challenges of deploying robots in such unpredictable environments are immense, said Taylor, who has been researching the use of robotics in bedside care since her days as a doctoral student. For starters, patient rooms are often too small to accommodate a stand-alone robot, and current robotics are not yet robust enough to perceive, let alone assist within, the flurry of activity amid emergency situations. Furthermore, beyond the robot’s technical abilities, there remain critical questions concerning its impact on team dynamics, Taylor said.

But the potential for robotics in medicine is huge, particularly in relieving workloads for health care workers, and the team’s research is a solid step in understanding how robotics can help, Taylor said.

The team developed a robotic version of a crash cart, which is a rolling storage cabinet stocked with medical supplies that health care workers use when making their rounds. The robot is equipped with a camera, automated drawers, and – continuing Cornell Bowers CIS researchers’ practice of “garbatrage” – a repurposed hoverboard for maneuvering around.

Through a collaborative design process, researchers worked with 10 health care workers and learned that a robot could benefit teams during bedside care by providing guidance on medical procedures, offering feedback, tracking tasks, and managing medications, equipment, and medical supplies. Participants favored a robot with “shared control,” in which health care workers maintain their autonomy over decision-making while the robot serves as a kind of safeguard, monitoring for possible mistakes in procedures, the researchers found.
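The sketch below is a hypothetical illustration of that shared-control idea rather than the team’s prototype software: the robot only tracks protocol steps and surfaces advisory reminders, leaving all decisions to clinicians. The protocol steps shown are invented.

```python
# Hypothetical "shared control" sketch (not the team's software): the
# robot tracks protocol steps and flags possible omissions; it never
# takes over from the clinical team.
CPR_PROTOCOL = ["attach defibrillator pads", "begin compressions", "administer epinephrine"]

def monitor(completed_steps):
    """Return protocol steps the team has not yet marked complete."""
    return [step for step in CPR_PROTOCOL if step not in completed_steps]

observed = ["begin compressions"]            # steps the robot has seen so far
missing = monitor(observed)
if missing:
    print("Reminder (advisory only):", ", ".join(missing))
```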

“Sometimes, fully autonomous robots aren’t necessary,” said Taylor, who directs the Artificial Intelligence and Robotics Lab (AIRLab) at Cornell Tech. “They can cause more harm than good.”

As with similar human-robot studies she has conducted, Taylor said participants expressed concern over job displacement. But she doesn’t foresee it happening.

“Health care workers are highly skilled,” she said. “These environments can be chaotic, and there are too many technical challenges to consider.”

Paper coauthors are Tauhid Tanjim, a doctoral student in the field of information science at Cornell, and Huajie Cao and Hee Rin Lee, both of Michigan State University.

Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.


Intimate partner violence is notoriously underreported and correctly diagnosed at hospitals only around a quarter of the time, but a new method provides a more realistic picture of which groups of women are most affected, even when their cases go unrecorded.

PURPLE, an algorithm developed by researchers at Cornell and the Massachusetts Institute of Technology, estimates how often underreported health conditions occur in different demographic groups. Using hospital data, the researchers showed that PURPLE can better quantify which groups of women are most likely to experience intimate partner violence compared with methods that do not correct for underreporting.

The new method was developed by Divya Shanmugam, formerly a doctoral student at MIT who will join Cornell Tech as a postdoctoral researcher this fall, and Emma Pierson, the Andrew H. and Ann R. Tisch Assistant Professor of computer science at the Jacobs Technion-Cornell Institute at Cornell Tech and in the Cornell Ann S. Bowers College of Computing and Information Science. They describe their approach in “Quantifying Disparities in Intimate Partner Violence: a Machine Learning Method to Correct for Underreporting,” published May 15 in the journal npj Women’s Health.

“Often we care about how commonly a disease occurs in one population versus another, because it can help us target resources to the groups who need it most,” Pierson said. “The challenge is, many diseases are underdiagnosed. Underreporting is intimately bound up with societal inequality, because often it tends to affect groups more if they have worse access to health services.”
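A toy calculation makes the problem concrete; the prevalence and diagnosis rates below are invented, not drawn from the paper’s data.

```python
# Toy arithmetic (not PURPLE itself): why observed diagnosis rates can
# mislead when underreporting differs across groups. Numbers are made up.
true_prevalence = {"group_A": 0.04, "group_B": 0.08}   # group B truly twice as affected
diagnosis_rate  = {"group_A": 0.40, "group_B": 0.20}   # but group B is diagnosed half as often

observed = {g: true_prevalence[g] * diagnosis_rate[g] for g in true_prevalence}
print("observed rates:", observed)                                          # both appear equal at 1.6%
print("observed ratio B/A:", observed["group_B"] / observed["group_A"])     # 1.0, misleading
print("true ratio B/A:", true_prevalence["group_B"] / true_prevalence["group_A"])  # 2.0
```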

Shanmugam became interested in intimate partner violence after Pierson recommended the book “No Visible Bruises: What We Don’t Know About Domestic Violence Can Kill Us” by Rachel Louise Snyder. She realized that the pervasive issue of underreporting was something statistical methods could help address. The result was PURPLE (Positive Unlabeled Relative PrevaLence Estimator), a machine learning technique that estimates the relative prevalence of a condition when the true numbers of affected people in different groups are unknown.

The researchers applied PURPLE to two real-life datasets, one that included 293,297 emergency department visits to a hospital in the Boston area, and a second with 33.1 million emergency department visits to hospitals nationwide. PURPLE used demographic data along with actual diagnoses of intimate partner violence and associated symptoms, like a broken wrist or bruising, which could indicate the condition even when the patient was not actually diagnosed.

“These broad datasets, describing millions of emergency department visits, can produce relative prevalences that are misleading using only the observed diagnoses,” Shanmugam said. “PURPLE’s adjustments can bring us closer to the truth.”

PURPLE indicated that patients who are nonwhite, not legally married, on Medicaid or who live in lower-income or metropolitan areas are all more likely to experience intimate partner violence. These results match up with previous findings in the literature, demonstrating the plausibility of PURPLE’s results.

The results also show that correcting for underreporting is important to produce accurate estimates. Without this correction, the hospital datasets do not show a straightforward relationship between income level and rates of victimization. But PURPLE clearly shows that rates of violence are higher for women in lower income brackets, a finding that agrees with the literature.

Next, the researchers hope to see PURPLE applied to other often-underreported women’s health issues, such as endometriosis or polycystic ovarian syndrome.

“There’s still a lot more work to be done to measure the extent to which these outcomes are underdiagnosed, and I think PURPLE could be one tool to help answer that question,” Shanmugam said.

The new technique also has potential applications beyond health conditions. PURPLE could be used to reveal the relative prevalence of underreported police misconduct across precincts or the amounts of hate speech directed at different demographic groups.

Kaihua Hou, a doctoral student at the University of California, Berkeley, contributed to the study. Pierson also has an appointment with Weill Cornell Medicine.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.


With artificial intelligence increasingly integrated into our daily lives, one of the most pressing concerns about this emerging technology is ensuring that the new innovations being developed consider their impact on individuals from different backgrounds and communities. The work of researchers like Cornell Tech PhD student Ben Laufer is critical for understanding the social and ethical implications of algorithmic decision-making.

Laufer was recently named a 2024 Stanford Rising Star in Management Science and Engineering. Those named “rising stars” attend a workshop focused on celebrating and fast-tracking the careers of exceptional young scholars across relevant interdisciplinary fields at a critical inflection point in their academic careers.

Since starting his doctorate in 2021, Laufer has been pursuing research at the intersection of tech and ethics through Cornell Tech’s Information Science program, which drives the crucial work of studying the interactions between people and technology and how technology is shaping individual lives and social groups.

Through examining information systems like machine learning, artificial intelligence, and human-computer interaction in their social, cultural, economic, historical, legal, and political contexts, the program has supported Laufer in developing both the technical skills and the analytic tools needed to evaluate the use of technology with an eye toward social good.

“Interdisciplinary programs like Information Science at Cornell Tech are an acknowledgement that a scholarly understanding of technology requires human perspectives in addition to understanding modeling, networks, complex systems, and the more technical aspects of things,” Laufer explained. “My work aims to establish algorithmic accountability and bring an ethical lens to our technical tools in light of some of the corrosive effects that technology can have on society.”

After receiving his undergraduate degree in Operations Research and Financial Engineering from Princeton University, Laufer worked as a data scientist in the Bay Area. The “move fast and break things” ideology in the tech industry led him to be more curious about ethics and accountability.

Most recently, Laufer’s research has focused on general-purpose AI, with an emphasis on capturing the interaction between the general technology and its downstream users, and on whether innovation, new capabilities, and product features create trade-offs with other attributes like safety, performance, and bias.

With AI and Machine Learning technologies being implemented and used across both private and public sectors, Laufer’s research models the various actors and stakeholders in the field, specifies their interests, and uses a game-theory lens to capture their interplay and observe how these factors could affect society and specific communities.

“Beneficial innovation isn’t in conflict with ethics or regulation; on the contrary, technology needs ethics and regulation to benefit us and earn our trust,” Laufer explained. “We need to continue to empower academic institutions and research centers to ensure that those most harmed by technology have their views represented and that technology is developed in a way that benefits everybody.”

This will only become more important with initiatives like the Empire AI Consortium, which is providing $400 million in funding for artificial intelligence research to New York State’s leading research institutions, including Cornell Tech, to bridge the gap between profit-driven development and the public interest of New Yorkers – helping to ensure AI safety and a sustainable, ethical impact for the state and beyond as the technology continues its rapid growth.


The Google Cyber NYC Institutional Research Program has awarded funding to seven new Cornell projects aimed at improving online privacy, safety, and security.

Additionally, as part of this broader program, Cornell Tech has also launched the Security, Trust, and Safety (SETS) Initiative to advance education and research on cybersecurity, privacy, and trust and safety.

Cornell is one of four New York institutions participating in the Google Cyber NYC program, which is designed to provide solutions to cybersecurity issues in society, while also developing New York City as a worldwide hub for cybersecurity research.

“The threats to our digital safety are big and complex,” said Greg Morrisett, the Jack and Rilla Neafsey Dean and Vice Provost of Cornell Tech and principal investigator on the program. “We need pioneering, cross-disciplinary methods, a pipeline of new talent, and novel technologies to safeguard our digital infrastructure now and for the future. This collaboration will yield new directions to ensure the development of safer, more trustworthy systems.”

The seven newly selected research projects from Cornell are:

  • Protecting Embeddings, Vitaly Shmatikov, professor of computer science at Cornell Tech.

Embeddings are numerical representations of inputs, such as words and images, fed into modern machine learning (ML) models. They are a fundamental building block of generative ML and knowledge retrieval systems, such as vector databases. Shmatikov aims to study security and privacy issues in embeddings, including their vulnerability to malicious inputs and unintended leakage of sensitive information, and to develop new solutions to protect embeddings from attacks.

  • Improving Account Security for At-Risk Users (renewal), Thomas Ristenpart, professor of computer science at Cornell Tech, with co-PI Nicola Dell, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech.

Online services often employ account security interfaces (ASIs) to communicate security information to users, such as recent logins and connected devices. ASIs can be useful for survivors of intimate partner violence, journalists, and others whose accounts are more likely to be attacked, but bad actors can spoof devices on many ASIs. Through this project, the researchers will build new cryptographic protocols for identifying devices securely and privately, to prevent spoofing attacks on ASIs, and will investigate how to make ASIs more effective and improve their user interfaces.

  • From Blind Faith to Cryptographic Certification in ML, Michael P. Kim, assistant professor of computer science.

Generative language models, like ChatGPT and Gemini, demonstrate great promise, but also pose new risks to users by producing misinformation and abusive content. In existing AI frameworks, individuals must blindly trust that platforms implement their models responsibly to address such risks. Kim proposes to borrow tools from cryptography to build a new framework for trust in modern prediction systems. He will explore techniques to enable platforms to earn users’ trust by proving that their models mitigate serious risks.

  • Making Hardware Comprehensively Secure Against Spectre — by Construction (renewal), Andrew Myers, professor of computer science.

In this renewed project, Myers will continue his work to design secure and efficient hardware systems that are safe from Spectre and other “timing attacks.” This type of attack can steal sensitive information, such as passwords, from hardware by analyzing the time required to perform computations. Myers is developing new hardware description languages, which are programming languages that describe the behavior or structure of digital circuits, that will successfully prevent timing attacks.

  • Safe and Trustworthy AI in Home Health Care Work, Nicola Dell, with co-PIs, Deborah Estrin, professor of computer science at Cornell Tech, Madeline Sterling, associate professor of medicine at Weill Cornell Medicine, and Ariel Avgar, the David M. Cohen Professor of Labor Relations at the ILR School.

This team will investigate the trust, safety, and privacy challenges related to implementing artificial intelligence (AI) in home health care. AI has the potential to automate many aspects of home health services, such as patient–care worker matching, shift scheduling, and tracking of care worker performance, but the technology carries risks for both patients and care workers. Researchers will identify areas where the use of AI may require new oversight or regulation, and explore how AI systems can be designed, implemented, and regulated to ensure they are safe, trustworthy, and privacy-preserving for patients, care workers, and other stakeholders.

  • AI for Online Safety of Disabled People, Aditya Vashistha, assistant professor of information science.

Vashistha will evaluate how AI technologies can be leveraged to protect people with disabilities from receiving ableist hate online. In particular, he will analyze the effectiveness of platform-mediated moderation, which primarily uses toxicity classifiers and language models to filter out hate speech.

  • DEFNET: Defending Networks With Reinforcement Learning, Nate Foster, professor of computer science, with co-PI Wen Sun, assistant professor of computer science.

Traditionally, security has been seen as a cat-and-mouse game, where attackers exploit vulnerabilities in computer networks and defenders respond by shoring up weaknesses. Instead, Foster and Sun propose new, automated approaches that will use reinforcement learning – an ML technique in which a model learns to make decisions that achieve the best results – to continuously defend the network. They will focus their work at the network level, training and deploying defensive agents that can monitor network events and configure devices such as routers and firewalls to protect data and prevent disruptions in essential services.
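As a loose illustration of the underlying idea (and not DEFNET itself), the sketch below trains a bandit-style defender, a simplified form of reinforcement learning, to choose a response for each observed network event; the states, actions, and rewards are all made up.

```python
# Toy bandit-style learning sketch (hypothetical, not DEFNET): a defender
# agent learns which response to take for each observed network state.
import random

states = ["normal", "port_scan", "suspicious_login"]
actions = ["allow", "rate_limit", "block_source"]

# Made-up rewards: blocking during an attack is good; blocking benign traffic is costly.
reward = {
    ("normal", "allow"): 1.0, ("normal", "rate_limit"): -0.2, ("normal", "block_source"): -1.0,
    ("port_scan", "allow"): -1.0, ("port_scan", "rate_limit"): 0.3, ("port_scan", "block_source"): 1.0,
    ("suspicious_login", "allow"): -1.0, ("suspicious_login", "rate_limit"): 0.2, ("suspicious_login", "block_source"): 1.0,
}

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, epsilon = 0.1, 0.2

for _ in range(5000):
    s = random.choice(states)                                  # observed network event
    if random.random() < epsilon:                              # explore
        a = random.choice(actions)
    else:                                                      # exploit current estimate
        a = max(actions, key=lambda act: Q[(s, act)])
    Q[(s, a)] += alpha * (reward[(s, a)] - Q[(s, a)])          # incremental value update

for s in states:
    print(s, "->", max(actions, key=lambda act: Q[(s, act)]))
```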

Under director Alexios Mantzarlis, formerly a principal at Google’s Trust and Safety Intelligence team, the newly formed SETS Initiative at Cornell Tech will focus on threats ranging from ransomware and phishing of government officials to breaches of personal information and digital harassment.

“There are new vectors of abuse every day,” said Mantzarlis. He emphasizes that the same vulnerabilities exploited by state actors that threaten national security can also be used by small-time scammers. “If a system is unsafe and your data is leaky, that same system will be a locus of harassment for users.”

Additionally, SETS will serve as a physical and virtual hub for academia, government, and industry to tackle emerging online threats.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.


Written by: Sarah Marquart

The U.S. National Science Foundation (NSF) has awarded $12 million to a multi-institutional team of researchers that includes Cornell Tech Assistant Professor Udit Gupta for an initiative to establish new standards for carbon accounting in the computing industry.

The multi-institutional team is led by researchers at Harvard University and the University of Pennsylvania, and consists of distinguished researchers at California Institute of Technology, Carnegie Mellon University, The Ohio State University, and Yale University, in addition to Cornell.

The project, called “NSF Expeditions in Computing: Carbon Connect — An Ecosystem for Sustainable Computing,” comes at a critical time. Computing currently generates two to four percent of global emissions, a figure poised to climb as the demand for digital solutions surges. This rise is driven by the proliferation of consumer devices — wearables, AR/VR headsets, mobile phones — advanced communication systems like 4G, 5G, and soon 6G, and the expanding infrastructure of data centers.

“Simultaneously, realizing pervasive efficiency improvements is becoming increasingly more challenging due to the slowing of Moore’s Law,” said Gupta, underlining the urgency of this initiative. “If left unchecked, computing’s energy and environmental footprint will grow tremendously in the coming decade.”

Over the next five years, this grant will provide crucial support to the team as they embark on a mission to redefine the approach of computer scientists to environmental sustainability. They plan to achieve this through three key strategies, each designed to address a specific aspect of the challenge.

First, the team will investigate new models, frameworks, and tools to help engineers accurately measure and report the environmental impact of computing systems across their lifetimes. This is important because there is currently a lack of easily accessible tools and data on the carbon footprint of different software and hardware systems, which leads to a lack of standardization in data collection and reporting methods.

“We hope to develop tools to allow hardware and software engineers to elevate sustainability as a first-order design consideration alongside performance, efficiency, and quality of service,” said Gupta. “This will allow developers to carefully weigh the environmental impact of new technologies.”

Next, the researchers will create new methodologies to develop sustainable computing systems in the future, with the goal of reducing computing’s carbon footprint by 45 percent within the next decade. Achieving this will require a combination of solutions to mitigate operational carbon (from using computing chips and their energy consumption) and embodied carbon (from manufacturing computing chips).
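A back-of-the-envelope sketch of how those two components combine appears below; every number in it is illustrative rather than project data.

```python
# Back-of-the-envelope sketch (illustrative numbers, not project data):
# a system's lifecycle footprint combines operational carbon (energy use
# times grid carbon intensity) and embodied carbon (manufacturing,
# amortized over the hardware's service life).
embodied_kgco2 = 1500.0        # hypothetical manufacturing footprint of a server
lifetime_years = 5             # amortization period
power_kw = 0.4                 # average power draw
grid_intensity = 0.4           # kg CO2e per kWh, varies by region and hour

hours_per_year = 24 * 365
operational_per_year = power_kw * hours_per_year * grid_intensity
embodied_per_year = embodied_kgco2 / lifetime_years

print(f"operational: {operational_per_year:.0f} kg CO2e/yr")
print(f"embodied (amortized): {embodied_per_year:.0f} kg CO2e/yr")
print(f"total: {operational_per_year + embodied_per_year:.0f} kg CO2e/yr")
```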

“Some ideas we are particularly excited about are considering emerging technologies at the circuit and VLSI level, extending the lifetime of our servers to amortize manufacturing emissions, and creating new algorithmic techniques to mitigate the footprint of AI training and inference,” said Gupta.

Finally, the team will build educational materials to help bring sustainable computing to the participating universities and beyond.

The researchers hope their efforts will contribute to a more sustainable future while influencing future energy policy and legislation, which can have significant ripple effects. “Policy can have a large impact in encouraging technology companies to transparently report the carbon footprint of computing systems,” says Gupta. “This is crucial not only to foster new research and innovation in high-impact areas of sustainable computing but also to ensure our methods of carbon modeling and optimizations are being translated to real-world use cases to mitigate the carbon footprint of systems.”

Carbon Connect – An Ecosystem for Sustainable Computing is one of three NSF-funded computing projects. The foundation awarded $36 million in total through its Expeditions in Computing program to support initiatives that have the potential to revolutionize computing and significantly reduce the carbon footprint of computers’ lifecycles.


Portobello, a new driving simulator developed by researchers at Cornell Tech, blends virtual and mixed realities, enabling both drivers and passengers to see virtual objects overlaid in the real world.

This technology opens up new possibilities for researchers to conduct the same user studies both in the lab and on the road – a novel concept the team calls “platform portability.”

The research team, led by Wendy Ju, associate professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, presented their paper, “Portobello: Extended Driving Simulation from the Lab to the Road,” at the ACM Conference on Human Factors in Computing Systems (CHI) in May. The paper earned honorable mention at the conference.

Co-authors included doctoral students Fanjun Bu, Stacey Li, David Goedicke, and Mark Colley; and Gyanendra Sharma, an industrial adviser from Woven by Toyota.

Portobello is an on-road driving simulation system that enables both drivers and passengers to use mixed-reality (XR) headsets. The team’s motivation for developing Portobello stemmed from its work on XR-OOM, an XR driving simulator system. The tool could merge aspects of the physical and digital worlds, but it had limitations.

“While we could stage virtual objects in and around the car – such as in-car virtual displays and virtual dashboards – we had problems staging virtual events relative to objects in the real world, such as a virtual pedestrian crossing on a real crosswalk or having a virtual car stop at real stop signs,” Bu said.

This posed a significant obstacle to conducting meaningful studies, particularly for autonomous driving experiments that require precise staging of objects and events in fixed locations within the environment.

Portobello was conceived to overcome these limitations and anchor on-road driving simulations in the physical world. During the design phase, researchers utilize the Portobello system to generate a precise map of the study environment. Within this map, they can strategically position virtual objects based on real-world elements (placing virtual pedestrians near stop signs, for example). The vehicle operates within the same mapped environment, seamlessly blending simulation and reality.
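A minimal, hypothetical sketch of that anchoring idea follows; the landmark names, coordinates, and event format are invented and do not come from the Portobello codebase.

```python
# Hypothetical sketch (not the Portobello codebase): once the study route
# is mapped, virtual events can be anchored to real-world landmarks so
# they appear at the same physical locations on every drive.
real_landmarks = {
    "stop_sign_1": (42.4440, -76.5019),     # made-up map coordinates
    "crosswalk_3": (42.4452, -76.4990),
}

virtual_events = [
    {"event": "virtual_pedestrian_crossing", "anchor": "crosswalk_3", "offset_m": (0.0, 1.5)},
    {"event": "virtual_car_stops", "anchor": "stop_sign_1", "offset_m": (-2.0, 0.0)},
]

for ev in virtual_events:
    lat, lon = real_landmarks[ev["anchor"]]
    print(f'{ev["event"]} staged near {ev["anchor"]} at ({lat}, {lon}), offset {ev["offset_m"]} m')
```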

With the successful integration of Portobello, the team has not only addressed the limitations of XR-OOM but has also introduced platform portability. This innovation enables researchers to conduct identical studies in both controlled laboratory settings and real-world driving scenarios, enhancing the precision and applicability of their findings.

“Participants treat in-lab simulators as visual approximations of real-world scenarios, almost a performative experience,” Bu said. “However, participants treat on-road simulators as functional approximations. [They] felt more stress in on-road simulators and felt their decisions carried more weight.”

Bu said Portobello could facilitate the “twinning of studies” – running the same study across different environments. This, he said, not only makes findings more realistic, but also helps uncover how other factors might affect the results.

Said Ju: “We believe that by going beyond running pristine studies and allowing some variability from real-world to bleed through, research results will be more applicable to real-world settings.”

Hiroshi Yasuda, a human-machine interaction researcher at Toyota Research Institute (TRI), also contributed to the research, which was supported by TRI and Woven by Toyota.


In the past few years, Machine Learning and Large Language Models have taken the world by storm, with ChatGPT having over 180 million users and openai.com receiving approximately 1.6 billion visits per month in 2024. But this rapid growth raises the question: How do we maintain the accessibility and efficiency of Large Language Models at a pace that keeps up with the rapid growth of new programs?

A large part of the current struggle to advance the efficiency of these programs lies in the fact that the hardware supporting them has not been developed as quickly as the software applications built on top of it. Every time someone types a question into ChatGPT, five computers work to return an answer – a task that consumes a substantial amount of resources and will only grow more demanding if trends continue.

If we want to increase the accessibility, efficiency, and performance of Machine Learning, we need to improve the hardware it runs on. This is the exact focus for Mohamed Abdelfattah, an assistant professor at Cornell Tech, who has received a prestigious U.S. National Science Foundation (NSF) Faculty Early Career Development (CAREER) Award to develop specialized computer chips and software programs that enhance AI performance.

The award supports his research proposal, “Efficient Large Language Model Inference Through Codesign: Adaptable Software Partitioning and FPGA-based Distributed Hardware,” for a five-year period from 2024 through 2029, with total funding of $883,082.

“The key challenge is still scaling; we need to make these models bigger and add more data to make them capable, but we don’t yet have the right computing platforms,” said Abdelfattah. “Rethinking hardware architecture together with software and algorithms is crucial for unleashing the generative tasks. The NSF project proposes optimizing the entire computing stack, composed of three main areas of algorithms, software, and hardware, to make distributed and large-scale language models run more efficiently.”
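As a simplified illustration of one piece of that stack, the sketch below partitions a model’s layers into contiguous stages that separate accelerators, such as FPGAs, could each serve; the function and the 32-layer example are assumptions for illustration, not the funded project’s system.

```python
# Simplified sketch of the partitioning idea (assumed setup, not the
# funded project's system): split a model's layers into contiguous
# stages so several devices, e.g. FPGAs, each serve part of the network.
def partition_layers(num_layers, num_devices):
    """Assign contiguous blocks of layers to devices as evenly as possible."""
    base, extra = divmod(num_layers, num_devices)
    assignment, start = {}, 0
    for d in range(num_devices):
        count = base + (1 if d < extra else 0)
        assignment[f"device_{d}"] = list(range(start, start + count))
        start += count
    return assignment

# e.g. a hypothetical 32-layer model split across 4 accelerators
for device, layers in partition_layers(32, 4).items():
    print(device, "runs layers", layers[0], "to", layers[-1])
```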

Before becoming a professor at Cornell Tech, Abdelfattah spent six years as a principal scientist at Samsung Electronics working on hardware. He realized during his time in industry that Machine Learning was the future, so he began exploring ways to combine the two fields in his work.

When Machine Learning and Large Language Models took off and the scale of Artificial Intelligence grew far beyond what it had been even a few years earlier, the research and engineering industry faced a dilemma: new technology that worked remarkably well but was developing so rapidly that it was unsustainable. Abdelfattah saw the dilemma as an opportunity to dramatically push the limits of running the novel technology at larger scale.

“Large Language Models had all the makings of a challenging research project to tackle,” he said. “Getting these systems to work efficiently at the rate they’re developing is a massive feat that we can’t overcome by making our systems 10% better; the level of impact requires us to work toward a solution that makes them 100% better.”

Abdelfattah’s work in making Machine Learning more efficient and accessible is crucial for its financial sustainability and growth. Presently, Large Language Models struggle with economic profitability – platforms end up losing money because of the energy consumption required to deploy them.

Decreasing energy consumption, paired with an increase in efficiency, will make the technology viable to deploy on a large scale, which presents endless opportunities for what Machine Learning and Large Language Models have the potential to achieve. With increased accessibility allowing for more people to use the models in everything from coding to the legal space, Abdelfattah’s work will help shape and change the future of our productivity and efficiency.


What does it mean to use academic research for social justice and good, and what are the best methods for connecting the two? This complex but important question is what Sera Linardi is attempting to tackle through her recent appointment as Siegel Public Interest Tech (PiTech) Faculty Impact Fellow for Cornell Tech.

The Faculty PiTech fellowship, awarded annually with terms ranging from six to twelve months, provides a platform for established faculty to explore public interest technology ventures or initiatives in their teaching and research. In the four months since the start of her fellowship, Linardi has supported Cornell Tech by helping connect the campus’s academic research with the needs of the communities it serves.

Introduced in 2021 and central to Cornell Tech’s mission of incorporating social considerations into all aspects of its research, the PiTech initiative is at the forefront of a movement to build a commitment to responsible tech and public interest technology. The program, funded by David Siegel and brought to fruition by Associate Dean and Robert V. Tishman ’37 Professor Deborah Estrin, was established in recognition of the need to imbue a public interest orientation in students that they can carry into their professional lives as they pursue careers in tech.

“There are so many conversations currently about creating smart cities and utilizing tech for social good, but because so much of the focus is on academic innovation, it can be difficult for those conversations to translate to community needs,” said Linardi. “We’re attempting to answer the question of what it means to use tech for good and social justice in a university setting, connecting tech research with communities in a way that creates a practical impact.”

Linardi’s journey into the intersection of research and community began after she received her Ph.D. in social science from Caltech and became a professor at the University of Pittsburgh. As a researcher in social science, she worked with nonprofit organizations like School on Wheels that didn’t have the funding to conduct statistical analytical research experiments, and she conducted direct outreach to communities to learn how academia could best meet their needs.

In her position as associate professor at the University of Pittsburgh, she founded the Center for Analytical Approaches to Social Innovation (CAASI), which, amid the devastation of George Floyd’s murder in 2020 and the social reckoning that followed, offered a community for healing. CAASI expanded rapidly, starting as a place for meditative reflection and gradually becoming an incubator for practical web and data projects driven by students and created in collaboration with communities.

Linardi’s extensive work growing and developing CAASI over five years made her well-equipped to take on the role of Executive Director of Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO). A network of diverse researchers with various orientations to the intersection of academia and equity, EAAMO is a nonprofit organization focused on using interdisciplinary research to improve equity and access to opportunity in historically underserved communities. As Executive Director, Linardi took her work bridging the divide between student learning and community and translated it into connecting academic researchers with communities.

“We take people who are already in the research network and help them build, understand, and integrate the perspectives of historically underserved populations,” Linardi explained. “Reaching out to people who are academically invested in their respective fields and exposing them to what communities actually need helps to create greater equity for historically underserved communities.”

Linardi’s tireless efforts in her role as Executive Director of EAAMO are helping to ensure that math, computing, and technology research support the efforts of underserved communities and increase overall equity. Cornell Tech’s PiTech fellowship has allowed her to expand her crucial work, making it more accessible to additional universities and communities. Linardi will be working within the Cornell Tech community through December 2024.