Watermarking Makes Language Models Radioactive - École Normale Supérieure Paris-Saclay
Conference paper, Year: 2024

Watermarking Makes Language Models Radioactive

Abstract

We investigate the radioactivity of text generated by large language models (LLMs), i.e., whether it is possible to detect that such synthetic input was used to train a subsequent LLM. Current methods, such as membership inference or active IP protection, either only work in settings where the suspected text is known or do not provide reliable statistical guarantees. We discover that, on the contrary, it is possible to reliably determine whether a language model was trained on synthetic data if that data was output by a watermarked LLM. Our new methods, specialized for radioactivity, detect with provable confidence the weak residuals of the watermark signal in the fine-tuned LLM. We link the radioactivity contamination level to the following properties: the robustness of the watermark, its proportion in the training set, and the fine-tuning process. For instance, if the suspect model is open-weight, we demonstrate that training on watermarked instructions can be detected with high confidence (p-value < 10⁻⁵) even when as little as 5% of the training text is watermarked.
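The detection described in the abstract rests on scoring how often a suspect model emits tokens from the watermark's pseudo-random "green" partition of the vocabulary and turning that count into a p-value. The sketch below illustrates this idea with a toy green-list scorer and a binomial test; the constants (GREEN_FRACTION, VOCAB_SIZE), the hash-based partition, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal illustrative sketch (assumptions only, not the paper's code): score text
# produced by a suspect model for residual watermark signal, in the spirit of
# green-list watermarking, and compute a p-value under the no-watermark null.

import hashlib
from scipy.stats import binomtest

GREEN_FRACTION = 0.25   # assumed fraction of the vocabulary marked "green" per context
VOCAB_SIZE = 32_000     # assumed vocabulary size

def is_green(context_token: int, token: int) -> bool:
    """Pseudo-randomly partition the vocabulary, keyed on the previous token."""
    digest = hashlib.sha256(str(context_token).encode()).digest()
    seed = int.from_bytes(digest[:8], "big")
    # A token is "green" if its keyed hash falls in the first GREEN_FRACTION of the range.
    return ((seed ^ token) * 2654435761) % VOCAB_SIZE < GREEN_FRACTION * VOCAB_SIZE

def radioactivity_pvalue(token_ids: list[int]) -> float:
    """Count green tokens in the suspect model's output and test whether their
    proportion exceeds what chance (GREEN_FRACTION) would predict."""
    green = sum(is_green(prev, tok) for prev, tok in zip(token_ids, token_ids[1:]))
    total = len(token_ids) - 1
    return binomtest(green, total, GREEN_FRACTION, alternative="greater").pvalue
```

A small p-value from such a test indicates that the suspect model emits green-list tokens more often than chance, which is the kind of weak residual signal the paper's detectors aggregate with provable confidence; the actual methods additionally handle open-weight versus closed-model access and deduplication of scored tokens.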
Main file
10211_Watermarking_Makes_Langu.pdf (971.3 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04766621, version 1 (05-11-2024)

Identifiers

  • HAL Id: hal-04766621, version 1

Cite

Tom Sander, Pierre Fernandez, Alain Durmus, Matthijs Douze, Teddy Furon. Watermarking Makes Language Models Radioactive. NeurIPS 2024 - 38th Conference on Neural Information Processing Systems, Dec 2024, Vancouver, Canada. pp.1-35. ⟨hal-04766621⟩