By Tom Fleischman, Cornell Chronicle

Conversational AI tools denied blunt requests for harmful content from researchers posing as intimate partner abusers, but those guardrails were easily circumvented when the researchers requested the same content under false pretenses, a new Cornell Tech study has found.

Investigating whether Gemini and ChatGPT can be weaponized in intimate partner violence (IPV), the researchers conducted chat sessions that combined current AI capabilities with established tactics of “coercive control” – behavior aimed at exerting power over another.

“Until now, we’ve mostly seen other kinds of tech-facilitated IPV. But with the emergence of AI, we’re seeing a need to figure out how to help survivors who are experiencing AI-facilitated abuse,” said Nicola Dell, associate professor of information science at Cornell Tech, the Jacobs Technion-Cornell Institute and the Cornell Ann S. Bowers College of Computing and Information Science.

Read more in the Cornell Chronicle.

