By Grace Stanley

Ultra-personalized artificial intelligence for assisted communication risks muting aspects of the user’s identity and occasionally breaches privacy, according to a new study from a Cornell Tech doctoral student who trained the technology on himself.

Doctoral student Tobias Weinberg, who uses augmentative and alternative communication (AAC), conceived the research when he realized he could train a model on his own speech data.

“Since I’m typing it anyway, I might as well see what I can do with it,” he said.

Rather than relying on hypothetical users or lab-based simulations, or asking others to take on privacy and identity risks, Weinberg used his own speech to ask questions like: “What does it mean to train a machine to be you?”

That question became the foundation of “I, Robot?,” a paper presented in April at the 2026 CHI Conference on Human Factors in Computing Systems that explores the promises and risks of ultra-personalized AI in AAC. The work emerged from Cornell Tech’s Matter of Tech Lab and was co-authored by Weinberg; Thijs Roumen, assistant professor at Cornell Tech; Ricardo Gonzalez Penuela, a doctoral student in information science based at Cornell Tech; and Stephanie Valencia, an assistant professor at the University of Maryland.

Read more in the Cornell Chronicle.