By David Nutt, Cornell Chronicle

A Cornell-led collaboration used machine learning to pinpoint the most accurate means, and timelines, for anticipating the advancement of Alzheimer’s disease in people who are either cognitively normal or experiencing mild cognitive impairment.

The modeling showed that predicting future decline into dementia is easier and more accurate for individuals with mild cognitive impairment than for cognitively normal, or asymptomatic, individuals. The researchers also found that predictions for cognitively normal subjects become less accurate over longer time horizons, while for individuals with mild cognitive impairment the opposite is true.

The modeling also demonstrated that magnetic resonance imaging (MRI) is a useful prognostic tool for people in both stages, whereas tools that track molecular biomarkers, such as positron emission tomography (PET) scans, are more useful for people experiencing mild cognitive impairment.

The team’s paper, “Machine Learning Based Multi-Modal Prediction of Future Decline Toward Alzheimer’s Disease: An Empirical Study,” published Nov. 16 in PLOS ONE. The lead author is Batuhan Karaman, a doctoral student in the field of electrical and computer engineering.

Alzheimer’s disease can take years, sometimes decades, to progress before a person exhibits symptoms. Once diagnosed, some individuals decline rapidly but others can live with mild symptoms for years, which makes forecasting the rate of the disease’s advancement a challenge.

“When we can confidently say someone has dementia, it is too late. A lot of damage has already happened to the brain, and it’s irreversible damage,” said senior author Mert Sabuncu, associate professor of electrical and computer engineering in the College of Engineering and Cornell Tech, and of electrical engineering in radiology at Weill Cornell Medicine.

“We really need to be able to catch Alzheimer’s disease early on,” Sabuncu said, “and be able to tell who’s going to progress fast and who’s going to progress slower, so that we can stratify the different risk groups and be able to deploy whatever treatment options we have.”

Clinicians often focus on a single “time horizon” – usually three or five years – to predict Alzheimer’s progression in a patient. The timeframe can seem arbitrary, according to Sabuncu, whose lab specializes in analysis of biomedical data – particularly imaging data, with an emphasis on neuroscience and neurology.

Sabuncu and Karaman partnered with longtime collaborator and co-author Elizabeth Mormino of Stanford University to build neural-network machine learning models that analyzed five years’ worth of data on individuals who were either cognitively normal or had mild cognitive impairment. The data, captured in a study by the Alzheimer’s Disease Neuroimaging Initiative, encompassed everything from an individual’s genetic history to PET and MRI scans.

“What we were really interested in is, can we look at these data and tell whether a person will progress in upcoming years?” Sabuncu said. “And importantly, can we do a better job in forecasting when we combine all the follow-up datapoints we have on individual subjects?”
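
The paper’s exact architecture isn’t detailed here, but a minimal sketch of the general idea – a model that fuses features from multiple follow-up visits and conditions its prediction on a chosen time horizon – might look like the following (PyTorch; all names, dimensions and the pooling scheme are illustrative assumptions, not the authors’ implementation):

```python
import torch
import torch.nn as nn

class ProgressionPredictor(nn.Module):
    """Toy model: encode each follow-up visit's features, pool across
    visits, and predict progression within a given time horizon."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.visit_encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        # +1 input dimension for the prediction horizon (in years)
        self.classifier = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, visits: torch.Tensor, horizon: torch.Tensor) -> torch.Tensor:
        # visits: (batch, n_visits, n_features); horizon: (batch, 1)
        pooled = self.visit_encoder(visits).mean(dim=1)  # combine follow-ups
        logit = self.classifier(torch.cat([pooled, horizon], dim=-1))
        return torch.sigmoid(logit)  # P(progression within the horizon)

# Illustrative usage: 8 subjects, 3 follow-up visits, 20 features per visit
model = ProgressionPredictor(n_features=20)
probs = model(torch.randn(8, 3, 20), torch.full((8, 1), 4.0))  # 4-year horizon
print(probs.shape)  # torch.Size([8, 1])
```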

The researchers discovered several notable patterns. For example, predicting a person will move from being asymptomatic to exhibiting mild symptoms is much easier for a time horizon of one year, compared to five years. However, predicting if someone will decline from mild cognitive impairment into Alzheimer’s dementia is most accurate on a longer timeline, with the “sweet spot” being about four years.

“This could tell us something about the underlying disease mechanism, and how temporally it is evolving, but that’s something we haven’t probed yet,” Sabuncu said.

Regarding the effectiveness of different types of data, the modeling showed that MRI scans are most informative for asymptomatic cases and are particularly helpful for predicting if someone’s going to develop symptoms over the next three years, but less helpful for forecasting for people with mild cognitive impairment. Once a patient has developed mild cognitive impairment, PET scans, which measure certain molecular markers such as the proteins amyloid and tau, appear to be more effective.

One advantage of the machine learning approach is that neural networks are flexible enough to function despite missing data, such as when a patient skipped an MRI or PET scan.
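
As an illustration of how such flexibility can be achieved (the paper’s actual mechanism may differ), one common approach is to encode each modality separately and average only over the modalities a subject actually has, using an explicit presence mask:

```python
import torch
import torch.nn as nn

class MaskedFusion(nn.Module):
    """Toy fusion layer that averages embeddings over available modalities,
    so a skipped MRI or PET scan simply contributes nothing."""

    def __init__(self, dims: dict, hidden: int = 32):
        super().__init__()
        # one encoder per modality, e.g. {"mri": 128, "pet": 64}
        self.encoders = nn.ModuleDict(
            {name: nn.Linear(d, hidden) for name, d in dims.items()}
        )

    def forward(self, inputs: dict, present: dict) -> torch.Tensor:
        encoded = [enc(inputs[name]) for name, enc in self.encoders.items()]
        masks = [present[name].float().unsqueeze(-1) for name in self.encoders]
        h, m = torch.stack(encoded), torch.stack(masks)
        # sum only the present modalities, then divide by how many there were
        return (h * m).sum(0) / m.sum(0).clamp(min=1.0)

fusion = MaskedFusion({"mri": 128, "pet": 64})
x = {"mri": torch.randn(4, 128), "pet": torch.randn(4, 64)}
present = {"mri": torch.ones(4), "pet": torch.tensor([1.0, 0.0, 1.0, 0.0])}
print(fusion(x, present).shape)  # torch.Size([4, 32]); subjects 2 and 4 use MRI only
```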

In future work, Sabuncu plans to modify the modeling further so that it can process complete imaging or genomic data, rather than just summary measurements, to harvest more information that will boost predictive accuracy.

The research was supported by the National Institutes of Health National Library of Medicine and National Institute on Aging, and the National Science Foundation.

Many Weill Cornell Medicine physicians and scientists maintain relationships and collaborate with external organizations to foster scientific innovation and provide expert guidance. The institution makes these disclosures public to ensure transparency. For this information, see the profile for Dr. Sabuncu.

This story originally appeared in the Cornell Chronicle.


By Louis DiPietro, Cornell Ann S. Bowers College of Computing and Information Science

A Cornell team has created an interface that allows users to handwrite and sketch within computer code – a challenge to conventional coding, which typically relies on typing.

The pen-based interface, called Notate, lets users of computational, digital notebooks – such as Jupyter notebooks, which are web-based and interactive – open drawing canvases and handwrite diagrams within lines of traditional, digitized computer code.

Powered by a deep learning model, the interface bridges handwritten and textual programming contexts: Notation in the handwritten diagram can reference textual code and vice versa. For instance, Notate recognizes handwritten programming symbols, like “n,” and then links them up to their typewritten equivalents. In a case study, users drew quantum circuit diagrams inside of Jupyter notebook code cells.
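
As a rough illustration of one small piece of that bridging – matching a symbol recognized in a handwritten diagram to the identifiers in the cell’s typewritten code – consider the following sketch (the function and example are hypothetical; Notate’s real pipeline uses a deep learning recognizer and is not reproduced here):

```python
import re

def link_symbols(recognized: list[str], code: str) -> dict[str, list[int]]:
    """Map symbols recognized in a handwritten diagram (e.g. 'n') to the
    character offsets where they occur as whole identifiers in the code."""
    links: dict[str, list[int]] = {}
    for sym in recognized:
        pattern = re.compile(rf"\b{re.escape(sym)}\b")  # whole tokens only
        links[sym] = [m.start() for m in pattern.finditer(code)]
    return links

cell = "n = 3\ncircuit = make_circuit(n)\n"
print(link_symbols(["n"], cell))  # {'n': [0, 29]}
```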

The tool was described in “Notational Programming for Notebook Environments: A Case Study with Quantum Circuits,” presented at the ACM Symposium on User Interface Software and Technology, held Oct. 29 through Nov. 2 in Bend, Oregon. The paper, whose lead author is Ian Arawjo, doctoral student in the field of information science, won an honorable mention at the conference.

“A system like this would be great for data science, specifically with sketching plots and charts that then inter-operate with textual code,” Arawjo said. “Our work shows that the current infrastructure of programming is actually holding us back. People are ready for this type of feature, but developers of interfaces for typing code need to take note of this and support images and graphical interfaces inside code.”

Arawjo said the work demonstrates a new path forward by introducing artificial intelligence-powered, pen-based coding at a time when drawing tablets are becoming more widely used.

“Tools like Notate are important because they open us up to new ways to think about what programming is, and how different tools and representational practices can change that perspective,” said Tapan Parikh, associate professor of information science at Cornell Tech and a paper co-author.

Other co-authors are: Anthony DeArmas ’22; Michael Roberts, a doctoral student in the field of computer science; and Shrutarshi Basu, Ph.D. ’18, currently a visiting assistant professor of computer science at Middlebury College.

Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story originally appeared in the Cornell Chronicle.


Recent innovations in artificial intelligence, machine learning, mobile sensing technology, and virtual reality are creating opportunities to improve patient care. Optum Labs, the research and development arm of UnitedHealth Group, and Cornell Tech have created a collaborative research hub to identify new, industry-disrupting ways to deliver better, more equitable health care.

Optum Labs is providing funding in 2022-’23, which will drive innovative research in precision behavioral health, extended reality for aging in place, and equitable human and algorithmic decision-making.

The partnership will be led by Deborah Estrin, Associate Dean and Robert V. Tishman ’37 Professor at Cornell University, and Tanzeem Choudhury, Ph.D., Senior Vice President at Optum Labs and the Roger and Joelle Burnell Professor in Integrated Health and Technology at Cornell University. Its specific intent is to transform patient health outcomes and care by incorporating new types of health data from wearables and IoT devices, and by creating new types of remote intervention and care delivery that combine augmented-reality and virtual-reality actuation technologies with computational techniques.

“The Digital Health Research Hub is creating closer collaborations between world-renowned health technology researchers from academia and Optum Labs scientists,” said Tanzeem Choudhury, Ph.D., Senior Vice President at Optum Labs. “The new algorithms and computational systems resulting from this partnership have the potential to shape the future of digital health care solutions.”

“This is an exciting opportunity to work with Optum Labs experts to shape technical research in ways that will advance health care in real-world settings,” said Deborah Estrin, Associate Dean and Robert V. Tishman ’37 Professor at Cornell University. “Aligned with Cornell Tech’s mission for deep engagements with the health care ecosystem, this new research initiative will accelerate the translation of novel research into scalable tools for improving individual and community health.”

“The Optum Labs Digital Health Research Hub, in partnership with Cornell Tech, supports UnitedHealth Group’s mission – helping people live healthier lives,” said Ranju Das, CEO of Optum Labs. “By bringing academia and research institutions together, we are creating a collaborative environment where research outcomes are achieved efficiently with an immediate opportunity for equitable impact.”


By Tom Fleischman, Cornell Chronicle

Personal sensing data could help monitor and alleviate stress among resident physicians, although privacy concerns over who sees the information and for what purposes must be addressed, according to collaborative research from Cornell Tech.

Burnout in all types of workplaces is on the rise in the U.S., where the “Great Resignation” and “quiet quitting” have entered the lexicon in recent years. This is especially true in the health care industry, which has been strained beyond measure by the COVID-19 pandemic.

Stress is physical as well as mental, and evidence of stress can be measured through the use of smartphones, wearables and personal computers. But data collection and analysis – and the larger questions of who should have access to that information, and for what purpose – raise myriad sociotechnical questions.

“We’ve looked at whether we can measure stress in workplaces using these types of devices, but do these individuals actually want this kind of system? That was the motivation for us to talk to those actual workers,” said Daniel Adler, co-lead author with fellow doctoral student Emily Tseng of “Burnout and the Quantified Workplace: Tensions Around Personal Sensing Interventions for Stress in Resident Physicians,” published Nov. 11 in Proceedings of the ACM on Human-Computer Interaction.

The paper is being presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), taking place virtually Nov. 8-22.

Adler and Tseng worked with senior author Tanzeem Choudhury, the Roger and Joelle Burnell Professor in Integrated Health and Technology at the Jacobs Technion-Cornell Institute at Cornell Tech. Contributors came from Zucker School of Medicine at Hofstra/Northwell Health and Zucker Hillside Hospital.

The resident physician’s work environment differs a bit from a traditional apprenticeship in that the supervisor, the attending physician, is also the resident’s mentor, which can blur the lines between the two roles.

“That’s a new context,” Tseng said. “We don’t really know what the actual boundaries are there, or what it looks like when you introduce these new technologies, either. So you need to try and decide what those norms might be to determine whether this information flow is appropriate in the first place.”

Choudhury and her group addressed these issues through a study involving resident physicians at an urban hospital in New York City. After hourlong interviews with residents on Zoom, the residents and their attendings were given mockups of a Resident Wellbeing Tracker, a dashboard with behavioral data on residents’ sleep, activity and time working; self-reported data on residents’ levels of burnout; and a text box where residents could characterize their well-being.
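
A record behind such a dashboard might be structured roughly like this (field names are invented for illustration; the study’s mockups are not public code):

```python
from dataclasses import dataclass

@dataclass
class WellbeingRecord:
    """One resident's entry in a mocked-up wellbeing tracker."""
    resident_id: str
    sleep_hours: float       # passively sensed behavioral data
    activity_minutes: float
    hours_working: float
    burnout_score: int       # self-reported burnout level
    note: str                # free-text self-characterization of well-being

record = WellbeingRecord("r-042", 6.5, 35.0, 71.0, 3, "Stretched thin this week.")
```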

Tseng said the residents were open to the idea of using technology to enhance well-being. “They were also very interested in the privacy question,” she said, “and how we could use technologies like this to achieve those positive ends while still balancing privacy concerns.”

The study featured two intersecting use cases: self-reflection, in which the residents view their behavioral data, and data sharing, in which the same information is shared with their attendings and program directors for purposes of intervention.

Among the key findings: Residents were hesitant to share their data without the assurance that supervisors would use it to enhance their well-being. Anonymity was also a concern: the more residents participated, the more anonymous any individual’s data became, but that anonymity would limit the program’s usefulness, since supervisors could not identify which residents were struggling.
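
One common way to navigate that tension, offered here purely as an illustration rather than anything the study prescribes, is a minimum-group-size rule: aggregates are released only when enough residents contribute to keep individuals unidentifiable.

```python
def report_average(values: list[float], k: int = 5) -> float | None:
    """Release a group average only when at least k residents contribute,
    so no individual's data can be singled out."""
    if len(values) < k:
        return None  # too few participants to preserve anonymity
    return sum(values) / len(values)

print(report_average([6.1, 7.2, 5.8]))             # None: group too small
print(report_average([6.1, 7.2, 5.8, 6.9, 7.5]))   # 6.7
```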

“This process of sharing personal data is somewhat complicated,” Adler said. “There is a lot of interesting continuing work that we’re involved in that looks at this question of privacy, and how you present yourself through your data in more-traditional mental health care settings. It’s not as simple as, ‘They’re my doctor, therefore I’m comfortable sharing this data.’”

The authors conclude by referring to the “urgent need for further work establishing new norms around data-driven workplace well-being management solutions that better center workers’ needs, and provide protections for the workers they intend to support.”

Other contributors included Emanuel Moss, a postdoctoral researcher at Cornell Tech; David Mohr, a professor in the Feinberg School of Medicine at Northwestern University; as well as Dr. John Kane, Dr. John Young and Dr. Khatiya Moon from Zucker Hillside Hospital.

The research was supported by grants from the National Institute of Mental Health, the National Science Foundation and the Digital Life Initiative at Cornell Tech.

This story originally appeared in the Cornell Chronicle.


