By Tom Fleischman, Cornell Chronicle
Artificial intelligence-powered writing tools, such as autocomplete suggestions, can change the way people express themselves. But can they also change how they think? Cornell Tech researchers think so.
In two large-scale experiments, participants were exposed to a biased AI writing assistant that provided autocomplete suggestions as they wrote about societal issues like whether the death penalty should be abolished or whether fracking should be allowed. Using pre- and post-experiment surveys, the researchers found that participants who used the biased AI had their views gravitate toward the AI’s positions.
What’s more, participants were unaware of the shifts in their opinions – and explaining the AI’s bias to the participants, either before or after the exercise, didn’t mitigate the AI’s influence.