By Grace Stanley

As generative AI reshapes how we communicate, work, and make decisions, Angelina Wang is making sure these systems serve everyone — not just a privileged few.

As a new assistant professor of information science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science, Wang brings a sharp sociotechnical lens to the fast-evolving world of responsible AI. Her work is driven by a mission to make AI more equitable, accountable, and beneficial for all.

Wang’s research tackles some of the most pressing questions in the field: How do we evaluate generative AI systems when their behavior shifts depending on who’s interacting with them? How can we build AI systems that reflect the complexity of the societies they serve? What does fairness look like in practice — not just in theory?

With research featured in MIT Technology Review, Vice, and the Washington Post, as well as honors including an NSF Graduate Research Fellowship, selection as an EECS Rising Star, a Siebel Scholarship, a Microsoft AI & Society Fellowship, and an ACL Best Paper Award, Wang is one of the rising voices shaping the future of responsible AI.

Previously a postdoctoral researcher at Stanford University, Wang earned her Ph.D. in computer science from Princeton University and her B.S. in electrical engineering and computer science from the University of California, Berkeley. Now at Cornell Tech, she’s excited to engage with New York City’s thriving responsible AI community, one that spans academia, industry, and public interest organizations.

Read a Q&A with Wang about her work below.

What is your academic and research focus?

My research focus is on responsible AI. I’m interested in the ways that we can better build artificial intelligence technologies that benefit all of us. For me, this has largely been operationalized by focusing on issues of fairness, evaluation, and societal impacts as they manifest in AI. Methodologically, I’m interested in how we can apply a more sociotechnical lens to these kinds of problems.

What motivated you to come to Cornell Tech?

Beyond the possibility of taking an aerial tram to work? I was excited to come to Cornell Tech because Cornell has such a phenomenally strong and warm interdisciplinary community working in the space of responsible AI, and being in New York City on top of that is a large benefit. Cornell Tech itself has a fantastic community of faculty and students, with a lot of momentum and initiatives aimed at building technology beneficially.

What are you most looking forward to about working in New York City?

I hope to engage with the city in various capacities, including but not limited to its robust responsible AI community, which spans industry, non-profits, and many academic institutions. In general, I’m excited to get to know the city more and understand more concretely what engagement can look like. NYC is such a thriving area, and I certainly hope to take full advantage of being located here.

What inspired you to pursue a career in this field?

In 2018, I went to the Grace Hopper Celebration and watched Joy Buolamwini give a talk on bias against Black women in facial recognition technologies. It was incredibly emotional and inspiring, and it resonated deeply with the issues I care about. After that conference, I reoriented away from what I was working on at the time and began looking into the burgeoning field of machine learning fairness.

Which courses are you most looking forward to teaching?

The first course I’ll be teaching is a Ph.D. seminar called “Non-Ideal Algorithmic Fairness.” In this course, rather than imagining what algorithmic fairness should look like in an ideal world (e.g., where humans are perfectly rational and informed, and any created law will be exactly followed), we will explore what the research tells us it actually looks like today, and further, what can be done given the incentives and practical constraints of the world we are in.

For instance, the “silver bullets” of algorithmic fairness are often abstract calls to increase participation, transparency, and regulation. However, each of these approaches faces practical difficulties in implementation. I’m excited to dig into these issues and strategies with the students and learn together about how we can make progress on these critical problems in today’s world.

What scientific questions are you looking to answer next?

One area I’m really excited about at the moment is the evaluation of generative AI systems. Their open-ended outputs set them apart from the well-defined tasks of the predictive AI systems we have often considered in the past.

I’m especially interested in what it means to evaluate generative AI when the exact same system can behave so differently depending on the context of the interaction and who is doing the interacting. For instance, when I interact with ChatGPT, I get very different behavior and a very different persona from what you get when you interact with ChatGPT. Given that, what kinds of evaluations can be meaningful to us both? This has important implications for understanding what it means to deem a system safe and desirable for use by a diverse set of users.
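To make that evaluation question concrete, here is a minimal, purely illustrative Python sketch. It is not Wang’s methodology: the stub model fake_chat_model, the personas, and the word-overlap metric are all hypothetical stand-ins for a real generative system and real behavioral tests. The sketch simply scores how much a system’s responses to the same prompt diverge across different users.

```python
# A minimal, purely illustrative sketch of persona-dependent evaluation.
# Everything here is hypothetical: fake_chat_model stands in for a real
# generative system, and the word-overlap metric is a crude stand-in for
# real behavioral tests.

from itertools import combinations

def fake_chat_model(persona: str, prompt: str) -> str:
    """Stand-in for a chat model whose behavior shifts with user context."""
    canned = {
        "teenager": "Sure!! Quick take: just rest and drink some water.",
        "physician": "Clinically, the usual guidance applies, but check contraindications first.",
        "lawyer": "I can't give medical advice; general guidance suggests consulting a professional.",
    }
    return canned.get(persona, "Here is a general answer.") + f" [re: {prompt}]"

def divergence(a: str, b: str) -> float:
    """1 minus Jaccard word overlap: a rough proxy for how differently two responses read."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(wa & wb) / len(wa | wb)

def persona_sensitivity(prompt: str, personas: list[str]) -> float:
    """Average pairwise divergence of responses to one prompt across user personas."""
    responses = {p: fake_chat_model(p, prompt) for p in personas}
    pairs = list(combinations(personas, 2))
    return sum(divergence(responses[a], responses[b]) for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    prompt = "Should I take ibuprofen for a headache?"
    score = persona_sensitivity(prompt, ["teenager", "physician", "lawyer"])
    # A high score means an evaluation run under one persona may say little
    # about how the system behaves for a different user.
    print(f"persona sensitivity: {score:.2f}")
```

In a real evaluation, the stub would be replaced by calls to an actual system and the overlap metric by behavioral tests that matter for safety and usefulness; the point of the sketch is only that a single aggregate score can hide exactly this kind of per-user variation.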

Grace Stanley is the staff writer-editor for Cornell Tech.