Nick (Hengrui) Jia
I’m a PhD student at the CleverHans Lab at the Vector Institute and University of Toronto, advised by Prof. Nicolas Papernot.
My research interests lie at the intersection of security and machine learning, an area often called trustworthy machine learning. In particular, I aim to answer the following questions: what risks accompany the benefits brought by machine learning, who or what is responsible for those risks, and how can we mitigate them? If you would like to learn more about my research, I recommend reading my publications listed below, or the corresponding blog posts such as those on proof-of-learning and machine unlearning.
Publications
Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD. Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot. Proceedings of the 33rd USENIX Security Symposium.
LLM Dataset Inference: Did you train on my dataset? Pratyush Maini, Hengrui Jia, Nicolas Papernot, Adam Dziedzic. arXiv preprint.
Proof-of-Learning is Currently More Broken Than You Think. Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot. Proceedings of the 8th IEEE European Symposium on Security and Privacy.
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot. Proceedings of the 31st USENIX Security Symposium.
A Zest of LIME: Towards Architecture-Independent Model Distances. Hengrui Jia, Hongyu Chen, Jonas Guan, Ali Shahin Shamsabadi, Nicolas Papernot. Proceedings of the 10th International Conference on Learning Representations.
SoK: Machine Learning Governance. Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot. arXiv preprint.
Proof-of-Learning: Definitions and Practice. Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot. Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA.
Entangled Watermarks as a Defense against Model Extraction. Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot. Proceedings of the 30th USENIX Security Symposium.
Machine Unlearning. Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot. Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA.
Awards
Ontario Graduate Scholarship, Province of Ontario and University of Toronto (2024-2025)
Ontario Graduate Scholarship, Province of Ontario and University of Toronto (2023-2024)
Mary H. Beatty Fellowship, University of Toronto (2022-2023)
Vector Scholarship in Artificial Intelligence, Vector Institute (2020-2021)
Dean’s List, University of Toronto (2016-2020)
Invited Talks
Ownership Resolution in ML, Purdue University (2024)
Ownership Resolution in ML, Northwestern University (2024)
Ownership Resolution in ML, University of Wisconsin–Madison (2024)
Ownership of ML Models, Mila - Quebec AI Institute (2024)
Entangled Watermarks as a Defense against Model Extraction, DeepMind (2023)
A Zest of LIME: Towards Architecture-Independent Model Distances, Workshop on Algorithmic Audits of Algorithms (2023)
Entangled Watermarks as a Defense against Model Extraction, Intel (2022)
Machine Unlearning, Vector Institute (2021)