Security, Trust, and Safety (SETS)

Hosting Discussions That Matter
SETS convenes practitioners and researchers on Roosevelt Island for high-level conversations with leaders in the field. Watch a summary video of our first Fireside Chat with Australia’s eSafety Commissioner, Julie Inman Grant.
SETS 2025 Summer Fellows
The inaugural cohort of SETS Fellows will research defenses against the manipulation of AI agents, abuse of offline trackers, and unwanted interactions on Bluesky, as well as democratized moderation on messaging apps.
The selected fellows are:
- Akshaya Kumar (Georgia Tech) will work on cryptographic accountability mechanisms to combat the abuse of Bluetooth-based trackers such as Apple’s AirTags, mitigating risks that include stalking and theft.
- Hal Triedman (Cornell Tech) will investigate how AI agent systems that take semi-autonomous actions can be subverted, probing the security, privacy, and safety vulnerabilities they introduce into our information landscape.
- Joey Schafer (University of Washington) will investigate the user- and community-driven beliefs and behaviors that shape moderation and safety on Bluesky when users identify and engage with unfamiliar, hostile, or possibly LLM-enabled accounts on the platform.
- Sudhamshu Hosamane (Rutgers University) will study community moderation interventions on WhatsApp by developing a group-configurable moderation bot and a fact-checking dashboard.
Fellows will spend 10 weeks at Cornell Tech this summer working with faculty hosts Mor Naaman, Thomas Ristenpart, Vitaly Shmatikov, and Aditya Vashistha, as well as engaging with the rest of the campus research community.
SETS 2025 Summer Fellows: Findings
Akshaya Kumar (Georgia Tech)
Over the last decade, a revolution in location-tracking technology has changed how we keep track of our belongings. These networks, known as Offline Finding (OF) systems, enable users to locate a device even when it is neither connected to the internet nor anywhere near its owner. Read More.
Hal Triedman (Cornell Tech)
What makes a machine learning model “change its mind”? This summer I spent my time trying to understand the foundations of that question — thinking critically about the methodological pitfalls of asking LLMs what they think and trying to design better baselines. I’ll get into the specifics of experimental design and findings in a moment, but first I want to take a step back and explain why it’s critical to better understand how LLMs respond to evidence. Read More.
Joey Schafer (University of Washington)
Social media systems are incredibly important to daily life, supporting political discussion, disaster relief, and community-building. However, especially in the age of large language models that can plausibly sound human, determining who one is engaging with online is a critical challenge for protecting users from information operations, scams, and harassment, among other harms. Read More.
Sudhamshu Hosamane (Rutgers University)
Rules are the quiet infrastructure of online groups: they set expectations, reduce friction, and help communities scale. On public platforms such as Reddit, Discord, and Twitch, visible, collaboratively authored rule sets are linked to lower harassment and toxicity, healthier newcomer participation, and more sustainable moderator workloads. Read More.

Security, Trust, and Safety (SETS) Newsletter
Explore the latest insights, updates, and innovations from Cornell Tech’s Security, Trust, and Safety (SETS) initiative.