Zero-shot Machine Unlearning on Tabular Data
DUIRI - Discovery Undergraduate Interdisciplinary Research Internship
Spring 2026
Accepted
Global Security
Machine unlearning has gained much popularity in recent years with the introduction of international regulations, such as the European Union's General Data Protection Regulation (GDPR), that center on users' right to be forgotten. The need for machine unlearning also stems from concerns about the security and privacy of individuals whose data is used to train a machine learning model. Malicious actors might access individuals' sensitive data by exploiting system security vulnerabilities or infer it from a model's predictions. These dangers mandate that when an individual requests removal of their data, it is not enough to delete it from the training set; its effect on the model must also be unlearned.
Existing machine unlearning methods often rely on the assumption that the entire training dataset is available during the unlearning process. However, this assumption may not hold in practical scenarios where the original training data is no longer accessible. Recently, source-free unlearning has been shown to be applicable to image data by estimating the Hessian of the unknown remaining training data. We plan to validate this method on tabular data.
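To make the Hessian-based idea concrete, the sketch below illustrates one common form of such an update on a toy, strongly convex model: after training L2-regularized logistic regression on synthetic tabular data, a single Newton step with respect to the remaining data approximately removes one sample's influence. This is a minimal illustration, not the project's specific algorithm; in the source-free setting the remaining data (and hence its Hessian) is unavailable and must be estimated, whereas here it is computed directly. All names and the synthetic setup are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy tabular data: n samples, d features, binary labels.
n, d, lam = 200, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (sigmoid(X @ w_true) > 0.5).astype(float)

def grad(w, X, y):
    # Gradient of the mean L2-regularized logistic loss.
    return X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w

def hessian(w, X):
    # Hessian of the same loss; lam * I keeps it strongly convex.
    p = sigmoid(X @ w)
    return (X.T * (p * (1 - p))) @ X / len(X) + lam * np.eye(X.shape[1])

# Train on the full dataset with plain gradient descent.
w = np.zeros(d)
for _ in range(2000):
    w -= 0.5 * grad(w, X, y)

# "Unlearn" the first sample with one Newton step toward the optimum
# of the loss on the remaining data. A zero-shot method would have to
# estimate H without access to X[keep]; here we compute it exactly.
keep = slice(1, n)
H = hessian(w, X[keep])
g = grad(w, X[keep], y[keep])
w_unlearned = w - np.linalg.solve(H, g)
```

Because the regularized loss is strongly convex and the full-data optimum is already close to the remaining-data optimum, this single step moves the weights much nearer to what retraining without the forgotten sample would produce.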
Romila Pradhan
This project will implement an existing source-free machine unlearning algorithm, tailoring it to tabular data.
Enrolled as an undergraduate student in Computer Science, Data Science, or a related field. Interest in technical details of artificial intelligence and machine learning. Proficient in Python.
0
10 (estimated)
Home