ICML 2025 Workshop on Machine Unlearning for Generative AI

MUGen @ ICML'25

Vancouver, Canada - July 2025


About the Workshop

Generative AI models are trained on internet-scale datasets, yielding powerful capabilities but also introducing risks such as copyright infringement, PII leakage, and harmful knowledge. Targeted removal, or unlearning, of sensitive data is challenging: retraining on curated datasets is computationally expensive, which has driven research into machine unlearning and model editing. Yet approaches like RLHF only suppress undesirable outputs, leaving the underlying knowledge vulnerable to adversarial extraction. This raises urgent privacy, security, and legal concerns, especially under the EU GDPR's "right to be forgotten". Because neural networks encode information diffusely across their parameters, precise deletion without degrading overall performance is difficult, and adversarial or white-box attacks can recover ostensibly erased data.

This workshop brings together experts in AI safety, privacy, and policy to advance robust, verifiable unlearning methods, standardized evaluation frameworks, and theoretical foundations. By achieving true erasure, we aim to ensure AI can ethically and legally forget sensitive data while preserving broader utility.

News
  • March 19, 2025
    Workshop proposal accepted at ICML 2025!
  • March 31, 2025
    Call for Papers published!

Important Dates
  • Paper Submission Deadline: 11:59 p.m. AoE, May 19, 2025
  • Decision Notification: June 9, 2025
  • Workshop Date: July 18 or July 19, 2025

Call for Papers

We invite contributions exploring key challenges and advancements at the intersection of machine unlearning and generative AI.

  • Problem formulations for machine unlearning
  • Standardized evaluation frameworks for robust unlearning
  • Unlearning copyrighted data, PII, or hazardous knowledge in LLMs, VLMs, and diffusion models
  • Theoretical foundations supporting large-scale unlearning
  • Irreversible unlearning resistant to fine-tuning attacks
  • Model editing for altering or removing specific learned concepts
  • Benchmarks and datasets for unlearning and model editing
  • Differential privacy, exact unlearning, and provable guarantees

Submission Guidelines

Speakers and Panelists

Nicholas Carlini
Anthropic, USA
Ling Liu
Georgia Institute of Technology, USA
Shagufta Mehnaz
Pennsylvania State University, USA
Sijia Liu
Michigan State University, USA
Shiqiang Wang
Eleni Triantafillou
Google DeepMind, UK
Peter Hase
Anthropic, USA
A. Feder Cooper
Microsoft Research, (Incoming) Yale University, USA
Amy Cyphert
West Virginia University, USA

Organizing Team

Vaidehi Patil
UNC Chapel Hill, USA
Mantas Mazeika
Center for AI Safety, USA
Yang Liu
University of California, Santa Cruz, USA
Katherine Lee
Google DeepMind, USA
Mohit Bansal
UNC Chapel Hill, USA
Bo Li
University of Chicago, USA

Program Committee

Coming soon!

Schedule

Time                  Event
09:00 AM - 09:10 AM   Opening Remarks
09:10 AM - 09:40 AM   Invited Talk 1
09:40 AM - 10:25 AM   Live Poster Session 1
10:25 AM - 10:45 AM   Coffee Break
10:45 AM - 11:15 AM   Invited Talk 2
11:15 AM - 11:30 AM   Contributed Talk 1
11:30 AM - 11:45 AM   Contributed Talk 2
11:45 AM - 12:00 PM   Contributed Talk 3
12:00 PM - 12:30 PM   Invited Talk 3
12:30 PM - 01:30 PM   Lunch Break
01:30 PM - 02:00 PM   Invited Talk 4
02:00 PM - 02:30 PM   Invited Talk 5
02:30 PM - 03:00 PM   Invited Talk 6
03:00 PM - 03:45 PM   Live Poster Session 2
03:45 PM - 04:00 PM   Coffee Break
04:00 PM - 04:55 PM   Live Panel Discussion with Speakers and Panelists
04:55 PM - 05:00 PM   Closing Remarks