
From consumer wellness apps to clinical decision-making tools to care operations, digital technologies are transforming today’s healthcare landscape. With guidance from clinical and industry experts, students participating in Cornell Tech’s Product Studio program have developed innovative products to support patients, healthcare providers, and people with disabilities.

Memorial Sloan Kettering Cancer Center asked: How might we free clinicians from computer workstations, allowing them to focus directly on their patients during clinical encounters?

Eshann Toteja, Master in Computer Science ’19, said he was inspired to take on this challenge in part by his and his teammates’ personal experiences in clinical settings: “As patients at hospitals, our whole team has felt ignored or unimportant at some point in the exam room.”

In addition to improving patients’ experiences, Toteja said their goal was to stay “valuable and true to the biggest problems we saw providers facing in exam rooms.”

The team started by learning as much as they could from as many people as possible in the healthcare industry about the problems providers faced in the exam room. They also researched the kinds of solutions already being explored to address those problems.

“We did quite a few on-site days with Memorial Sloan Kettering,” said Skyler Erickson, Master in Computer Science ’19, “and we were working with physicians there and observing patient encounters and really trying to dig into, ‘Where is the computer workstation a problem and when does that distract from the patient interaction?’”

Team Memorial Sloan Kettering

For Erickson, one of the key moments was “this realization that it’s not so much about what you do see, it’s equally important what you don’t see.” For example, he said, they never saw a physician pull up exam results during an appointment because doing so via their workstation was too difficult, time-consuming, and distracting. The goal, said Erickson, was to allow doctors to order tests “naturally and seamlessly and in one conversation.”

Ultimately the team, which also included Benjamin Yellin, Technion-Cornell Dual Master’s Degrees in Health Tech ’20, and Kriti Shah, Johnson Cornell Tech MBA ’19, developed an efficient new voice assistant, similar to Alexa or Google Home. With the aid of this tool, providers can easily order a test for a patient by saying, “Order a test for Patient X,” rather than interrupting the conversation to navigate various screens on their computer workstations.
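
The article doesn’t detail how the assistant interprets spoken commands, but a small sketch can illustrate the kind of intent parsing such a tool performs. The command grammar, pattern, and field names below are illustrative assumptions, not the team’s implementation:

```python
import re

# Toy grammar for utterances like "Order a CBC panel for Patient X";
# a stand-in for whatever speech pipeline the team actually used.
ORDER_PATTERN = re.compile(
    r"order (?:a |an )?(?P<test>.+?) for (?P<patient>.+)",
    re.IGNORECASE,
)

def parse_order(utterance: str) -> dict | None:
    """Extract a test-ordering intent from a spoken command, if present."""
    match = ORDER_PATTERN.match(utterance.strip())
    if match is None:
        return None
    return {
        "intent": "order_test",
        "test": match.group("test"),
        "patient": match.group("patient"),
    }

print(parse_order("Order a CBC panel for Patient X"))
# {'intent': 'order_test', 'test': 'CBC panel', 'patient': 'Patient X'}
```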

The students credit their faculty advisor, Deborah Estrin, Cornell Tech’s Robert V. Tishman ’37 Founder’s Chair, with providing valuable real-world feedback. Toteja described her as “an immense help, constantly offering up the exact criticism and advice we needed to hear.” Her ties to the healthcare field, he said, “kept her current and connected, making her input extremely valuable.”

“We met with her many, many times to make sure we were on the right track and to pivot when we ran into some roadblocks,” said Yellin.

Microsoft asked: How might we use technology to help people with disabilities perform everyday tasks?

As the team from Cornell Tech began thinking about this question, Nabeel Seedat, Master in Electrical and Computer Engineering ’19, said, “We all had an association of people that we knew that had visual disabilities, and we felt that this was something that, as a team, we were all passionate about.” Based on this experience, and with assistance from their advisors at Microsoft, the team decided to focus on developing a solution for people with visual impairment.

“We wanted to figure out which specific task is a big problem faced by those with vision impairment,” said Raga Kolli, Johnson Cornell Tech MBA ’19. “After doing research online, we realized that a common theme is social engagement—people with vision impairment face a lot of isolation, and a lack of social interaction causes further anxiety. A lot of what we do today is online and there’s a big information gap online for those with visual impairment vs. sighted users.”

That insight led them to tackle the problem of social media engagement, which poses a particular challenge to people with visual disabilities, given the prevalence of image-based content.

Existing solutions address this problem via screen-reader technologies, including programs that describe images simply and literally. Seedat said they wanted to go further, asking themselves, “How could we combine both the image content and the textual content around the image, in order to create a narrative which would allow people to better engage with the content in the posts?”

Team Microsoft

They were inspired to develop a web browser extension for people with visual disabilities to use when browsing social media sites. The browser extension “translates” images surrounded by text into a brief description that captures the post’s meaning, not just its literal content. For example, it would translate a photo of a toddler with chocolate smeared all over her face and the caption, “Enjoying her food!” as something like, “Child making a mess with her food” rather than “Child eating food.” This allows people with impaired vision to better understand and meaningfully interact with posts.

Throughout the product development process, the team benefited from feedback from Estrin, their faculty advisor, who encouraged them to develop “not just purely a technological product but also something that can have an impact on people.”

Kolli said Estrin “really helped us figure out our narrative (and) develop an impactful product that has meaning to our users.” Boning Hou, Master in Computer Science ’19, added that Estrin “helped us narrow down the idea and helped us to think about why we are doing what we are doing so that our product will be more helpful to the people that we are trying to help.”

The team has made enormous strides in just one semester. Hou said they have finished a prototype that can parse web content and translate an image into a complete sentence. For now, it works with websites like Instagram and Pinterest; they are in the process of trying to make it compatible with Facebook as well. “Right now, we are planning to make this project open source,” said Hou, “so that more people can contribute to it and make this have a larger impact.”


Because nearly everything we do today involves tech, it has an enormous impact on an ever-growing number of people. But although tech affects everyone, not everyone is represented when new products are launched and new companies are formed.

Cornell Tech believes that increasing diversity and practicing inclusion is key to improving technology—and people’s lives. Meet two dynamic executives and entrepreneurs who are helping the school to do just that.

Aaron Holiday, Managing Entrepreneurial Officer at Cornell Tech and co-founder of the early-stage venture capital firm 645 Ventures, has played a key role in helping launch Cornell Tech’s Startup Studio program in addition to promoting social responsibility on campus. Denise Young Smith, formerly the Chief Human Resources Officer and Vice President of Inclusion and Diversity at Apple, joined the Cornell Tech community as Executive-in-Residence in January 2018.

“I was working with Aaron and I thought: we’ve got two prominent African-American leaders and professionals now focusing on the Startup Studio program at Cornell Tech—this is unprecedented,” said Smith, who, in addition to leading Apple’s Human Resources division, built its retail teams and oversaw its global efforts to create a diverse and inclusive culture and workforce during her 20 years with the company. “It’s an incredible differentiator in and of itself when you have professionals and entrepreneurs who bring a diverse perspective to help influence and enhance this already world-class program. I hope our work here will not only add to the quality of the program but will say to existing and prospective students that Cornell Tech welcomes and embraces everyone’s perspective.”

What does Smith hope to accomplish at Cornell Tech? “My objective has been to help Cornell Tech fulfill its mission of being a premier institution where you not only experience diversity here in the community, but more importantly learn about its critical importance if you’re going into the world as an entrepreneur, technologist, or leader,” Smith said. “The objective is our students will have learned in their experience here that being diverse and inclusive is non-negotiable in the world we live in today and tomorrow.”

In Smith’s view, it’s not only important to value diversity for its own sake, but also to have a keen understanding of why it’s so crucial in tech. “The world is rapidly changing,” she said. “[Diversity] is important in the critical niches of technology because tech has to keep up with and lead in this rapidly changing world.”

Dionna McPhatter, Aaron Holiday and Denise Young Smith sitting at table talking
Left to right: Dionna McPhatter, Aaron Holiday, and Denise Young Smith are working together to expand diversity and inclusion at Cornell Tech.

Holiday, too, wants to ensure that Cornell Tech students understand why it’s so urgent for the tech world to recruit diverse talent and promote inclusive leadership. Having studied computer science as an undergraduate, he knows it is a “hyper-collaborative” field in which “it’s almost impossible to keep up with the content” via self-study alone. Diversity is key, he said, because “you need to have inclusion of all people to grasp the content and learn and grow.”

In a world where “software and engineering have become the primary mechanism to invent and create new things,” Holiday said, “in order for people of color and women to be able to participate in the creation of the new world, we need more women and people of color building things, so that everybody is a part of building the new world.”

Because Holiday believes that diversity is “not only a matter of gender and race but also of skill sets,” he encourages students to consider whether a team has the right ratio of engineers, MBAs, and people with legal expertise and advises them to ask, “What will a person contribute to a team based on their total experience?”

Smith also seeks to instill in students the idea that by increasing representation, they can gain valuable perspectives that will enhance their project, product, or company.

Smith and Holiday have invited other prominent figures to the campus to enrich the studio experience and add perspective. Dionna McPhatter, a West Point graduate and data scientist, has visited the Studio several times. 

There are, said Smith, “many different examples every day in the commercial world” of how expanding our understanding of the “inclusion imperative,” as she phrases it with staff at Cornell Tech, has improved technology and its effectiveness. In her view, tech leaders should ask, “How dated is this item or concept against a dynamically changing world demographic? Who designed it? And how does it need to evolve now?” So many common products that people use every day—“toys, cosmetics, emoticons,” to name only a few—need updating, she said, and the people who use those products need representation.

Like Holiday, Smith wants to ensure that the brave new world we are building doesn’t replicate and reinforce old biases. “Everyone is talking about AI,” she said, “but if we aren’t extremely urgent and deliberate about who is in the room when we design code for the challenges of the future, we are going to come up with ineffective and limited products. How are we deciding who is getting loans with banks? Or who should be screened in and screened out with new security systems?”

After all, Smith said, “there’s nothing terribly artificial about AI. It’s coming from the minds of human beings and it will replicate human biases.”

The flip side of this risk, according to both Smith and Holiday, is that making sure a wide variety of people are represented when launching a new product or company can improve the product and expand its customer base. It can mean significantly enhanced success for a company.


The Data Incubator, an alumnus company of the Runway Startup Postdoc Program at the Jacobs Technion-Cornell Institute, announced a merger with Pragmatic Marketing, the authority on product management and marketing training, to form the Pragmatic Institute.

The Data Incubator, founded by Michael Li, is a data science education company that provides real-world, hands-on experience and training to data scientists looking to transition from academia to industry or to improve their skills.

A release issued by the company said: “The new Pragmatic Institute will combine the expertise of these two companies to create a single source for comprehensive, hands-on training in product management, product marketing and data science.”

“The Runway Startup Postdoc Program at the Jacobs Technion-Cornell Institute provided a great opportunity for scientists to commercialize their knowledge,” said Li, part of the inaugural January 2014 cohort of the Runway Program. “The program was instrumental in helping launch The Data Incubator and the lessons I learned and the connections I have made will last a lifetime. I am excited about this new phase of our journey with Pragmatic Institute as we combine our respective expertise in product marketing and data science to reach new audiences and train the workforce to be ready for the digital and AI economy.”




Look up at the Bloomberg Center facade and, in the multitude of tilted disks, part of Cornell Tech’s DNA unfolds. The facade sits at the intersection of technology, art, and sustainability, and it represents our commitment to our institutional roots in Ithaca and our location in New York City.

Since Cornell Tech’s campus on Roosevelt Island opened in 2017, visitors often ask, “What do the circles on the Bloomberg Center mean?”

The so-called “circles” are actually three-dimensional disks cut from the metal panels that surround the second, third, and fourth floors of the building. Each disk tilts in a different direction. The eye-catching design often evokes additional questions like, “Do the circles move?”

In short, the answer is no, but there is a reason each circle is tilted the way it is.

close up of Bloomberg Center facade
Close up view of a metal panel on the Bloomberg Center. Provided by Morphosis.

Like many of the design decisions made for the campus, the origins of the Emma and Georgina Bloomberg Center facade began with sustainability.

The Bloomberg Center aspires to net-zero energy consumption. In addition to the solar panels mounted on the building’s roof, the exterior walls needed to be designed to help regulate and maintain the building’s internal temperature. The building’s architecture firm, Morphosis, designed the facade so that stainless steel panels would trap air between the building’s exterior and the elements, helping regulate the internal temperature. The question then became which pattern might make the panels more interesting. After some trial and error, the answer came from research in computer vision and image statistics, an area in which Dean Dan Huttenlocher and several other faculty members have expertise.

To illustrate the connection between the Cornell University campus in Ithaca and the new campus in New York City, Morphosis chose an image of a waterfall in Ithaca as inspiration for the eastern side of the building and the Manhattan skyline across the East River from campus for the western side of the building.

diagram of images used on the east and west side of the building
Provided by Morphosis

Then, drawing on the resident expertise in computer vision, the team used an algorithm to collect spatially varying statistics by shifting the image horizontally and vertically over a small range and computing the average of all those images. Anything that persisted over that region would show in the image average and minor differences would not.
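
In code, that averaging step might look like the following minimal sketch; the shift range and the wrap-around border handling are assumptions for illustration, not Morphosis’s actual procedure:

```python
import numpy as np

def shifted_average(image: np.ndarray, max_shift: int = 8) -> np.ndarray:
    """Average an image over small horizontal and vertical shifts.

    Structure that persists across shifts (long vertical or horizontal
    lines) survives the averaging; fine, isolated detail washes out.
    """
    accum = np.zeros_like(image, dtype=np.float64)
    count = 0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps at the borders -- adequate for a sketch,
            # though a real pipeline might pad the image instead.
            accum += np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            count += 1
    return accum / count
```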

Images of an urban skyline show many vertical and horizontal lines and repeated elements (like the edges of buildings and windows). The resulting image average evokes a busy, slightly chaotic feeling, much like the city.

west side of Bloomberg Center
The west side of the Bloomberg Center

An image of a rural or natural landscape contains elements with many different orientations. This results in a more fluid and smooth-looking design.

East side of the Bloomberg Center.
The east side of the Bloomberg Center.

After determining the image statistics, the architects at Morphosis used this data to determine the tilt of the disks so they would catch the sunlight in interesting ways. This is why the building can appear to be a different color based on the time of day and the season.

Collage of the Bloomberg Center in different lighting.
The Bloomberg Center appears to be different colors depending on the season, time of day, and light.



NEW YORK, NY— Today, the Mayor’s Office to End Domestic and Gender-Based Violence (ENDGBV) announced the expansion of its partnership with Cornell Tech and the NYU Tandon School of Engineering to strengthen New York City’s supportive services for victims of stalking. The expansion includes the development of a diagnostic tool designed to identify applications on cell phones that can be used for cyberstalking. Cornell Tech is piloting the use of this groundbreaking tool with clients at New York City’s Family Justice Centers to conduct digital privacy checkups that include scanning for spyware or malware and having an informative discussion about privacy settings to educate clients on how to maximize safety when using technology.

ENDGBV initially partnered with Cornell Tech in 2016 on a research project titled Digital Safety and Security in Intimate Partner Violence. Through the partnership, Cornell Tech researchers conducted interviews with Family Justice Center providers and clients to understand how technology can be used as a tool of control in abusive relationships. The study also explored whether abusive partners had access to clients’ online and personal accounts and technology hardware, as well as clients’ knowledge of mobile applications and the safe use of technology. Following the study, Cornell Tech created screening questions for staffers at the Family Justice Centers to use as part of the overall safety assessment that is done with clients when they come into the centers. Through the expansion of the partnership, clients can be referred to onsite Cornell Tech staff to have the privacy checkup performed on their device using the new technology.

The ENDGBV Healthy Relationship Training Academy (the Academy) is also assisting Cornell Tech with research assessing how teens use apps and other social media, in order to understand teens’ behavior around digital privacy and disclosure, identify the ways teens experience technology abuse, and learn how they seek help. The research will be used to develop tools tailored to teens, parents, and educators that address privacy protection strategies, risk behavior, and resources that help teens navigate technologies safely.

“The City is committed to finding cutting-edge solutions to the growing problem of cyber-stalking,” said Deputy Mayor for Health and Human Services Dr. Herminia Palacio. “Through this key collaboration, ENDGBV, Cornell Tech, and the NYU Tandon School of Engineering are working together to ensure survivors of intimate partner violence feel safe not just in their homes and on the streets, but in the digital world in which we all spend so much time.”

“So many of our clients at the Family Justice Centers are simply unaware of the vulnerabilities that can lead to cyberstalking or stalking in general,” said ENDGBV Commissioner Cecile Noel. “That’s why our team thought it was extremely important to continue this groundbreaking work with Cornell Tech. I’d like to thank them and the NYU Tandon School of Engineering for joining with us in our mission to create a safer New York for survivors and their families.”

“Cornell Tech is proud to partner with the Mayor’s Office to End Domestic and Gender-Based Violence and the NYU Tandon School of Engineering to advance this critical work to protect survivors. There are so many security and privacy risks online that can put survivors in danger, and not nearly enough awareness or resources for them. Through this collaborative study, we are working to ensure that survivors can get the technology help they need. Cornell Tech is committed to developing cutting-edge research and technology that will have a positive impact on New York communities, and this is a profound example of this important mission,” said Nicola Dell, assistant professor at the Jacobs Technion-Cornell Institute at Cornell Tech, and Thomas Ristenpart, associate professor at Cornell Tech, who co-lead the research team working on this issue.

“I couldn’t be happier that ENDGBV is expanding this project to include a diagnostic tool for clients at New York City’s Family Justice Centers,” said Damon McCoy, professor of computer science and engineering at the NYU Tandon School of Engineering, who helped design the screening software. “I’m proud to have worked with Cornell Tech to develop this technology, especially since it helps victims of domestic abuse and stalking in such an immediate, tangible way.”

“This groundbreaking technology stands to become an empowering and invaluable resource for victims of stalking,” said Samir Saini, Commissioner of the Department of Information Technology and Telecommunications. “I thank Commissioner Noel and the teams at Cornell Tech and NYU Tandon School of Engineering for working together to expand this important program that helps New Yorkers protect their privacy and safeguard their digital lives.”



As people grow increasingly dependent on technology to manage daily life, from ordering coffee on the go to tracking circadian rhythms at night, cybersecurity has become paramount for large organizations and individual users alike in protecting the data they create.

Leading companies and organizations in New York City tasked Cornell Tech students with developing innovative technology solutions to their respective cybersecurity challenges. Interdisciplinary Product Studio teams collaborated throughout the fall semester and demonstrated their final prototypes, backed by user research and strategy, to key stakeholders.

Here are three solutions Cornell Tech students built:

Blockchain Banking & Marketing Advisors asked: How might we understand cryptocurrencies to help startups launch initial coin offerings that are compliant, valuable, and relevant to the business and its customers’ needs?

The initial public offering (IPO) process has existed for centuries, but its cryptocurrency counterpart is still in its infancy. An initial coin offering (ICO) is an unregulated way of crowdfunding using cryptocurrencies. The amount of money raised through ICOs has exploded in recent years, but there is still a lot of distrust in the process.

A team of Cornell Tech masters students worked with Blockchain Banking & Marketing Advisors (BBMA), a firm that helps companies manage the initial coin offering process, to try to improve trust in the ICO process.

Cyrus Ghazanfar, Master in Computer Science ’19, said, “ICOs aren’t subject to the same laws and regulations as IPOs. IPO buyers usually have a vesting period for the shares so they don’t take advantage of the market. It is pre-regulated. That doesn’t exist in the ICO market.”

Some companies adopt an ICO lock-up period, similar to the traditional IPO lock-up period that prevents insiders who purchased stock before the company went public from liquidating their holdings right away and, in doing so, destabilizing the market. However, Ghazanfar explained that because ICOs are unregulated, there was no way to ensure that funds actually stayed locked during an ICO lock-up period.

The cross-disciplinary team from Cornell Tech built Vestvault, a client-facing user interface that leverages smart-contract technology to let founders and other executives enter the specifications of a shareholder distribution without needing to code. The information they submit is automatically used to generate and upload a smart contract to the blockchain, which vests the tokens to the specified shareholders over time, automatically and without the need for a third party. All users can transparently view activity in real time to verify that the lock-up period is maintained, and the funds are released instantly when it ends.
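
Vestvault’s contract code isn’t published in the article; as a hedged illustration of the vesting logic such a smart contract encodes, here is a minimal Python model. The Grant fields and the linear-after-cliff schedule are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Grant:
    shareholder: str
    total_tokens: int
    start: int  # vesting start (Unix seconds)
    cliff: int  # nothing is released before this time
    end: int    # everything is released by this time

def vested_amount(grant: Grant, now: int) -> int:
    """Tokens releasable at time `now` under linear vesting after a
    cliff -- the kind of schedule a lock-up contract enforces on-chain."""
    if now < grant.cliff:
        return 0
    if now >= grant.end:
        return grant.total_tokens
    elapsed = now - grant.start
    duration = grant.end - grant.start
    return grant.total_tokens * elapsed // duration
```

On-chain, the same logic would run inside the contract itself, so no party can alter the schedule once it is deployed.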

“This solution is completely decentralized. You don’t need to trust a third party,” said Ghazanfar. “It is the rule of the code. It is a completely autonomous end-to-end solution.”

The most difficult part of building the technology — which relies on the programming languages Solidity and Python — was making sure that the smart contract securely and correctly transfers value between “wallets,” said Pooja Kale, Master in Computer Science ’19. In the past, companies have lost millions of dollars because of buggy code, according to Ghazanfar.

The team, which also includes Jim Campbell, Johnson Cornell Tech MBA ‘19, and Kibum “George” Byun, Master of Laws in Law, Technology and Entrepreneurship ‘19, may continue to work on their project after graduation and they have already generated interest from meetups they’ve attended and introductions to industry leaders.

Citigroup asked: How might we realize the benefits of shared data and computational models amongst untrusted parties while maintaining the security and privacy of each party and their data?

Cornell Tech students worked with Citigroup to create a prototype supporting anti-money-laundering efforts in which financial institutions pool insights from bank clients’ transaction data. The data itself is never sold or disclosed between institutions; instead, it is used to train a model that all of the institutions can access, and as soon as the model is built, the data is erased from local memory. Their product, datawall, enables institutions to share machine-learning models securely.
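
The article doesn’t specify datawall’s training protocol. One standard way for untrusted parties to share a model without sharing raw data is federated averaging, sketched below; the function name and the contribution-weighted scheme are assumptions, not necessarily what the team built:

```python
import numpy as np

def federated_average(local_weights: list[np.ndarray],
                      sample_counts: list[int]) -> np.ndarray:
    """Combine models trained locally at each institution into one
    shared model, weighting each by how much data that party
    contributed. Raw transaction data never leaves an institution;
    only model parameters are exchanged."""
    total = sum(sample_counts)
    return sum(w * (n / total)
               for w, n in zip(local_weights, sample_counts))
```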

They built a privately distributed machine-learning platform, enabling the marketization of data that allows multiple financial institutions to work together to train and use machine-learning models. “Central to this framework is the use of smart contracts that executes payment and compensation transactions between the parties per the terms of the contract for any inferences generated by a party querying the model,” said Peng.

“Through this business model innovation, datawall becomes a platform that enables the monetization of data assets and the creation of an inference-as-a-service marketplace,” Peng said. “In addition to a flexible architecture that allows for easy integration of newer security technologies, datawall includes security features such as data validation to protect against security threats such as data poisoning and breaches.”

“The price [financial institutions] pay to access the data and the payout every time a query is used is based on the amount of data [the firm] contributes and the value of the data,” said Daniel Nissani, Technion-Cornell Dual Master’s Degrees in Connective Media ‘20.

The team, which also includes Anthony Bisulco, Master in Electrical and Computer Engineering ’19, and Wei Duan, Master in Computer Science ’19, plans to continue working on the project and has already had one venture capital meeting. Datawall would generate revenue as a percentage fee on all transactions.

Roku asked: How might we gain the trust, not just consent, of consumers to use their data to personalize their ad-viewing experience?

Roku tasked Cornell Tech students with helping consumers feel more comfortable sharing their data with the company. At first, the team had trouble assessing if viewers trusted Roku. But then they realized, “You don’t need to say how much you trust a company, but you will show your trust based upon how much you use the product,” said Sergio Campos, Master of Laws in Law, Technology and Entrepreneurship ’19. The team, advised by Ben Biddle and Michael Gladstone from Roku, decided to create a unique ad experience that provides users with more relevant and valuable ads.

They built KonnActAd. When viewers see a product they like during a movie or television show, they press pause and are served ads for products in the scene, which they can click to shop directly. For their minimum viable product, the team manually selected the product options and displayed them in a web application written in JavaScript. They think media, product, and service companies would be interested in tagging their products in specific scenes, and eventually they want to automate the entire process by watermarking products in each scene using computer vision.
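
To make the manual-tagging approach concrete, here is a minimal sketch of the scene-to-product lookup an MVP like this might perform. The data layout, field names, and URL are illustrative assumptions, and the sketch is in Python even though the team’s web app was written in JavaScript:

```python
# Hypothetical tag store: products tagged to time ranges in a title.
SCENE_TAGS = {
    "movie-123": [
        {"start": 312.0, "end": 330.0,
         "product": "blue denim jacket",
         "shop_url": "https://example.com/jacket"},  # illustrative only
    ],
}

def products_at(title_id: str, paused_at: float) -> list[dict]:
    """Return the products tagged to the moment the viewer paused on."""
    return [tag for tag in SCENE_TAGS.get(title_id, [])
            if tag["start"] <= paused_at <= tag["end"]]
```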

Professor Tom Ristenpart advises Team Roku in the Tata Innovation Center.

“Our theory is that if we give people control of when they engage to see ads and make those ads relevant and personalized, they’ll get more of what they want, our algorithm will get better at giving them what they want, and as a result, trust will be built,” said Ryan Sydnor, Johnson Cornell Tech MBA ‘19.

KonnActAd was different from other products in the clinic because it did not exclusively address technical security. “There was an element of ‘social security.’ If there is a group of people in my living room watching my TV and it personalizes embarrassing ads for me, is that acceptable? No, of course not! That would erode trust,” said Sydnor. “Professor Ristenpart was the first to pick up on this and was happy to dive into both technical and social implications of security. His ability to see multiple perspectives on the problem is something that stood out to me about him.”

The team, which also includes Zhenwei Zhang, Technion-Cornell Dual Master’s Degrees in Connective Media ‘20, and Roger Wang, Master in Computer Science ‘19, may continue to work on the project. Campos says that the interdisciplinary team and collaborative experience taught them all new skills — he even submitted his first line of code to GitHub. Instead of sticking to their individual skill sets, they made all decisions together, taught each other, and provided regular feedback.


 

When it comes to news, we believe what we want to believe – even though deep down we may know better.

Cornell Tech researchers and colleagues have found that people are far more likely to say news stories are true if the stories align with their own political views, regardless of the outlet. But when offered a cash bonus for correctly evaluating the stories’ accuracy, participants were more likely to say they believed news stories that countered their views.

“There’s an issue of expressive responding, where people say what they want to be true rather than what they actually believe to be true,” said Mor Naaman, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech and senior author of “The Role of Source, Headline and Expressive Responding in Political News Evaluation,” which will be presented Feb. 1 at the Computation + Journalism Symposium in Miami.

In the study, researchers told participants that they’d receive a bonus if they guessed the accuracy of all the headlines correctly, “motivating them to say what they truly believe,” Naaman said. “People were suddenly more willing to admit that claims aligned with the other side were true.”

This effect was more pronounced for right-leaning participants than for those with left-leaning politics; Naaman said future research will explore why.

Maurice Jakesch, a Cornell Tech doctoral student in the field of information science, is the paper’s first author. Also contributing were Cornell information science doctoral student Anna Evtushenko and Moran Koren of the Technion-Israel Institute of Technology.

The study is the first to test for expressive responding, as well as to examine trust in news outlets independently of trust in story content. Participants were asked to rate stories associated with The New York Times and Fox News, but the outlet where the story was said to appear did not affect their trust level, whatever their political orientation.

“The results are pretty clear: It’s not about people believing the Times vs. Fox News; it’s about whether the claim in the headline agrees with their view of the world,” Naaman said.

The researchers recruited a diverse group of around 400 participants, evenly divided between right- and left-leaning in their political views. Each participant was shown two political headlines aligned with Democratic views and two aligned with Republican views, randomly attributed to either Fox News or the Times. They were also shown 12 other headlines that were not part of the experiment.

The political headlines – including “Trump lashes out at Vanity Fair, one day after it lambastes his restaurant,” and “Companies are already canceling plans to move U.S. jobs abroad” – were all true, but none actually came from the Times or Fox News. They were chosen based on previous studies that showed they were right- or left-leaning, and that readers had trouble ascertaining their accuracy.

Participants were given 15 seconds to rate each headline as true or false. The headlines could not be copied, making it impossible to plug them into a search engine. They were each paid $1 for about five minutes of work.

To determine whether they truly believed their answers, half the participants were offered a bonus of $1.60 if they correctly answered 12 out of 16 questions. (All participants in that group received the bonus.) The other half were in a control group.
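
One simple way to surface the effect the study reports is to compare the share of headlines rated true, split by whether a headline matched the participant’s politics and by incentive condition. The sketch below assumes a tidy data layout with hypothetical column names, not the authors’ actual analysis code:

```python
import pandas as pd

def congruence_effect(df: pd.DataFrame) -> pd.Series:
    """Mean 'rated true' rate by incentive condition and by whether
    the headline's lean matched the participant's lean. Expects
    columns participant_lean, headline_lean, condition, rated_true."""
    df = df.assign(congruent=df["participant_lean"] == df["headline_lean"])
    return df.groupby(["condition", "congruent"])["rated_true"].mean()
```

If expressive responding is at work, the gap between congruent and incongruent ratings should shrink in the bonus condition, which is the pattern the study found.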

Though further study is needed, Naaman said the findings have potential applications for news aggregators, which might focus on balancing news feeds politically by content instead of merely by news outlet, or social media sites, which could incentivize people to only share stories they trust.

“We’re living in an age of misinformation, where it’s very hard for people to distinguish between established and trustworthy and credible news organizations,” Naaman said. “Understanding how people make decisions in online news when it comes to the stories they read and how they react to them is important, so that we can design information systems and presentation systems that support trustworthy sources above others.”

The research was supported by Yahoo Research and Oath, which is part of Verizon Media Group, through the Cornell Tech Connected Experiences Lab.


By law, credit and loan decisions cannot discriminate on the basis of race or lead to outcomes that differ substantially by race. But to ensure that they don’t discriminate, banks and other lenders aren’t allowed to ask about race on most applications. This makes it challenging for auditors to make sure credit decisions are fair.

To evaluate racial disparities in lending decisions, lenders or auditors have to infer applicants’ races, generally using a system – known as a proxy – that guesses applicants’ races based on what they do know, such as their neighborhoods and surnames.

But these proxies – including a method used by the Consumer Financial Protection Bureau to audit lenders – can yield very different results depending on tiny changes in how they guess applicants’ races, according to a new Cornell-led study.

“It’s worrisome that these models are being used to determine whether financial institutions comply with the law,” said Madeleine Udell, the Richard and Sybil Smith Sesquicentennial Fellow and assistant professor in the School of Operations Research and Information Engineering. “They’re clearly not assessing what they’re supposed to.”

Their paper, “Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved,” will be presented at the ACM Conference on Fairness, Accountability and Transparency, Jan. 29-31 in Atlanta. Cornell Tech doctoral student Xiaojie Mao is the lead author. Co-authors included Udell; Nathan Kallus, assistant professor of operations research and information engineering at Cornell Tech; and financial industry data scientists Jiahao Chen and Geoffry Svacha.

Understanding the risks of discrimination when using artificial intelligence is especially important as financial institutions increasingly rely on machine learning for lending decisions. Machine learning models can analyze reams of data to arrive at relatively accurate predictions, but their operations are opaque, making it difficult to ensure fairness.

“How can a computer be racist if you’re not inputting race? Well, it can, and one of the biggest challenges we’re going to face in the coming years is humans using machine learning with unintentional bad consequences that might lead us to increased polarization and inequality,” Kallus said. “There have been a lot of advances in machine learning and artificial intelligence, and we have to be really responsible in our use of it.”

Race is one of several characteristics protected by state and federal law; others include age, gender and disability status.

The researchers used data from mortgages – the one type of consumer loan that includes race on applications – to test the accuracy of the Bayesian Improved Surname Geocoding (BISG) auditing system. They found its results often either underestimated or overestimated racial discrepancies, depending on several factors. Assuming race based on the census tracts where applicants live erases black applicants who live in mostly white neighborhoods and white applicants who live in mostly black neighborhoods.

The BISG model estimates the probability that someone is a certain race, and in performing calculations a user can set a minimum probability – for example, choosing to use any examples in which the probability of a given race is 80 percent or more. But differences in that minimum probability yielded unexpectedly large variations in the results, the researchers found.
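
Mechanically, the thresholding step looks roughly like the sketch below; the function name and the -1 “unassigned” convention are assumptions for illustration rather than BISG’s actual interface:

```python
import numpy as np

def proxy_race(probabilities: np.ndarray, threshold: float) -> np.ndarray:
    """Assign a proxy group label only when the estimated probability
    clears the threshold; everyone else is left out of the audit.

    probabilities: (n_applicants, n_groups) race-probability estimates.
    Returns group indices, with -1 marking 'unassigned'.
    """
    best = probabilities.argmax(axis=1)
    confident = probabilities.max(axis=1) >= threshold
    return np.where(confident, best, -1)
```

Because raising or lowering the threshold changes who gets counted in each group, the measured disparity moves with it, which is exactly the sensitivity the researchers document.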

“Depending on what threshold you picked, you would get wildly different answers for how fair your credit procedure was,” Udell said.

The researchers’ findings not only shed light on BISG’s accuracy, they could help developers improve the machine learning models that make credit decisions. Better models could help banks make more informed decisions when they approve or reject loans, which may lead them to give credit to qualified but lower-income applicants.

“You can figure out who will actually default or not in ways that are fair,” Kallus said. “What we want to do is make sure we put these constraints on the machine learning systems that we build and train, so we understand what it means to be fair and how we can make sure it’s fair from the outset.”

This article originally appeared in the Cornell Chronicle.