By Grace Stanley

Before big tech engineers can improve the fairness of recommendation systems, such as social media feeds and online shopping results, they need to define what “fairness” even means.

Should an app show people only the content it predicts they will like most, or should it boost newer creators, small businesses or historically underrepresented groups? Should an online store rank products purely by past clicks and sales, or make sure independent sellers can compete with dominant brands?

“Recommendation systems are particularly prone to the ‘rich get richer’ effect,” said Allison Koenecke, assistant professor of information science at Cornell Tech. “Top-ranked items often get more clicks, which can lead to disproportionately inflated metrics for those items, cementing their place at the top of a search feed or social media page – and perhaps unfairly penalizing slightly-lower-ranked items that could be higher quality.”

Koenecke, along with co-lead authors Emma Harvey, a doctoral student in information science based at Cornell Tech, and Jing Nathan Yan, Ph.D. ’24, is an author of “Fairness-in-the-Workflow: How Machine Learning Practitioners at Big Tech Companies Approach Fairness in Recommender Systems,” which was presented at the 2025 Association for Computing Machinery CHI Conference on Human Factors in Computing Systems. The research was also authored by Junxiong Wang, Ph.D. ’24, and Jeffrey Rzeszotarski, assistant professor of information science at Cornell.

Read more in the Cornell Chronicle.