‘Red Team’ Students Stress-Test NYC Health Department’s AI
By Tom Fleischman, Cornell Chronicle
People usually strive to be their true, authentic selves, but this fall, five master’s students at Cornell Tech adopted not only alter egos but also “bad intent,” in an effort to make AI safer for health workers serving people with diabetes.
The students – Divya Bhanushali, Bhavya Gopal, Ali Hasan, Nikhil Jain and Om Kamath – were in the first cohort of the new Security, Trust, and Safety Initiative (SETS) AI Content Red Team Clinic, which debuted this fall on the Roosevelt Island campus in New York City. The free service assists public-service organizations that lack the resources or capacity to stress-test their artificial intelligence assets through “red-teaming.”
In the digital realm, a red team is a group that simulates outside attacks on an organization’s AI tools.
“Red-teaming is an exercise – originating from cybersecurity, but now popular also in digital safety – of trying to break a tool by exposing its vulnerabilities,” said Alexios Mantzarlis, director of the SETS Initiative, “and helping to create the appropriate guardrails and safety measures to avoid that happening in real life.”