Vancouver, Canada - July 2025
Generative AI models are trained on internet-scale datasets, yielding powerful capabilities but also introducing risks such as copyright infringement, leakage of personally identifiable information (PII), and harmful knowledge. Targeted removal, or unlearning, of sensitive data is challenging: retraining on curated datasets is computationally expensive, which has driven research into machine unlearning and model editing. Yet approaches such as reinforcement learning from human feedback (RLHF) only suppress undesirable outputs, leaving the underlying knowledge vulnerable to adversarial extraction. This raises urgent privacy, security, and legal concerns, especially under the EU GDPR's "right to be forgotten". Because neural networks distribute information across millions or billions of parameters, precise deletion without degrading model performance is difficult, and adversarial or white-box attacks can recover ostensibly erased data.
This workshop brings together experts in AI safety, privacy, and policy to advance robust, verifiable unlearning methods, standardized evaluation frameworks, and theoretical foundations. By working toward true erasure, we aim to ensure AI systems can ethically and legally forget sensitive data while preserving their broader utility.
We invite contributions exploring key challenges and advancements at the intersection of machine unlearning and generative AI.
Coming soon!
| Time | Event |
|---|---|
| 09:00 AM - 09:10 AM | Opening Remarks |
| 09:10 AM - 09:40 AM | Invited Talk 1 |
| 09:40 AM - 10:25 AM | Live Poster Session 1 |
| 10:25 AM - 10:45 AM | Coffee Break |
| 10:45 AM - 11:15 AM | Invited Talk 2 |
| 11:15 AM - 11:30 AM | Contributed Talk 1 |
| 11:30 AM - 11:45 AM | Contributed Talk 2 |
| 11:45 AM - 12:00 PM | Contributed Talk 3 |
| 12:00 PM - 12:30 PM | Invited Talk 3 |
| 12:30 PM - 01:30 PM | Lunch Break |
| 01:30 PM - 02:00 PM | Invited Talk 4 |
| 02:00 PM - 02:30 PM | Invited Talk 5 |
| 02:30 PM - 03:00 PM | Invited Talk 6 |
| 03:00 PM - 03:45 PM | Live Poster Session 2 |
| 03:45 PM - 04:00 PM | Coffee Break |
| 04:00 PM - 04:55 PM | Live Panel Discussion with Speakers and Panelists |
| 04:55 PM - 05:00 PM | Closing Remarks |