Professor Vitaly Shmatikov Wins Test of Time Award for Deep Learning Research
By Andrew Clark
Vitaly Shmatikov, professor of computer science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science, has received the Association for Computing Machinery Conference on Computer and Communications Security (ACM CCS) Test of Time Award for his influential 2015 paper, “Privacy-Preserving Deep Learning.”
Co-authored with Reza Shokri, associate professor at the National University of Singapore, the paper pioneered methods for training accurate deep learning models without exposing sensitive data — a breakthrough that has shaped the evolution of privacy-preserving machine learning over the past decade.
The award honors research that has had a lasting impact on computer security and privacy. Shmatikov and Shokri’s paper was among the first to demonstrate that large-scale neural network models could be collaboratively trained by multiple participants — such as hospitals, research institutions, or companies — without any of them having to share raw data. Instead, the system enabled distributed learning through selective sharing of model parameters during training, balancing accuracy with robust privacy protection.
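The core idea is straightforward to sketch. The Python snippet below is a minimal, simplified illustration of selective parameter sharing, using a toy linear model as a stand-in for a deep network: each simulated participant computes a gradient on its own private data and contributes only the largest-magnitude fraction of that gradient to the shared parameters. The function names, the synchronous round-robin loop, and all constants are illustrative assumptions; the paper's actual protocol is asynchronous and includes additional privacy safeguards.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(params, X, y):
    """Gradient of mean squared error for a linear model (a stand-in
    for a deep network's backpropagation step)."""
    preds = X @ params
    return X.T @ (preds - y) / len(y)

def top_fraction(grad, fraction):
    """Keep only the largest-magnitude entries of the gradient and zero
    the rest, mimicking the idea of selectively sharing a small subset
    of parameter updates rather than the full gradient."""
    k = max(1, int(fraction * grad.size))
    mask = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    mask[idx] = 1.0
    return grad * mask

# Toy federation of three participants, each holding private data.
d = 20
true_w = rng.normal(size=d)
participants = []
for _ in range(3):
    X = rng.normal(size=(100, d))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    participants.append((X, y))

# Shared global parameters held by a coordination server.
global_w = np.zeros(d)
lr, share_fraction = 0.1, 0.1  # share only 10% of gradient entries

for step in range(200):
    for X, y in participants:
        # Each participant trains on its own data; only a sparse,
        # selected slice of the update ever leaves the device.
        g = local_gradient(global_w, X, y)
        global_w -= lr * top_fraction(g, share_fraction)

print("parameter error:", np.linalg.norm(global_w - true_w))
```

The accuracy-privacy trade-off lives in the `share_fraction` knob: sharing fewer gradient entries reveals less about each participant's data, at some cost in convergence speed.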
“This work was an early demonstration that it is possible to train large neural networks without gathering and stockpiling all training data in one place,” said Shmatikov. “The past ten years saw many abuses of personal data, and we now understand why it can be dangerous to share information from users’ devices, medical records, financial transactions, etc. Our paper provided a roadmap for building AI systems without requiring that users give up their data to companies that create these systems.”
Their research anticipated many of the challenges now central to AI governance and data protection, including minimizing information leakage and applying differential privacy during model training. The paper continues to inform contemporary research on secure and ethical AI development, showing how careful technical design can reconcile data utility with the fundamental right to privacy.
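One such technique can be sketched briefly: before an update leaves a participant's device, clip it to a fixed norm and add calibrated noise. The snippet below is a simplified illustration of that idea, not the paper's exact mechanism (which selects and perturbs individual gradient values); the function name, the Laplace noise, and the epsilon value are assumptions for illustration, and a real deployment would track a privacy budget across all shared updates.

```python
import numpy as np

rng = np.random.default_rng(1)

def privatize_update(grad, clip_norm=1.0, epsilon=0.5):
    """Clip an update to a fixed L2 norm, then add Laplace noise scaled
    to the clipping bound before sharing. Clipping bounds each
    participant's influence; the noise masks individual contributions."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Laplace noise with scale proportional to sensitivity / epsilon.
    noise = rng.laplace(scale=clip_norm / epsilon, size=grad.shape)
    return clipped + noise

# Example: a raw local update versus what actually leaves the device.
raw = np.array([0.8, -2.5, 0.1, 1.7])
print("shared update:", privatize_update(raw))
```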
“I hope that the main impact of our paper will be a new generation of AI systems that respect privacy and confidentiality of their training data, and that can be applied in sensitive domains such as biomedical data and people’s personal communications and transactions,” he said.
Andrew Clark is a freelance writer for Cornell Tech.