A trio of scientists from the University of North Carolina, Chapel Hill recently published preprint artificial intelligence (AI) research showcasing how difficult it is to remove sensitive data from large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard.
According to the researchers' paper, the task of “deleting” information from LLMs is possible, but it’s just as difficult to verify the information has been removed as it is to actually remove it.
The reason for this has to do with how LLMs are engineered and trained. The models are pretrained on databases and then fine-tuned to generate coherent outputs (GPT stands for “generative pretrained transformer”).
Once a model is trained, its creators cannot, for example, go back into the database and delete specific files in order to prohibit the model from outputting related results. Essentially, all the information a model is trained on exists somewhere inside its weights and parameters where they’re undefinable without actually generating outputs. This is the “black box” of AI.
A problem arises when LLMs trained on massive datasets output sensitive information such as personally identifiable information, financial records, or other potentially harmful and unwanted outputs.
In a hypothetical situation where an LLM was trained on sensitive banking information, for example, there’s typically no way for the AI’s creator to find those files and delete them. Instead, AI devs use guardrails such as hard-coded prompts that inhibit specific behaviors or reinforcement learning from human feedback (RLHF).
In an RLHF paradigm, human assessors engage models with the purpose of eliciting both wanted and unwanted behaviors. When the models’ outputs are desirable, they receive feedback that tunes the model toward that behavior. And when outputs demonstrate unwanted behavior, they receive feedback designed to limit such behavior in future outputs.
However, as the UNC researchers point out, this method relies on humans finding all the flaws a model might exhibit, and even when successful, it still doesn’t “delete” the information from the model.
Per the team’s research paper:
“A possibly deeper shortcoming of RLHF is that a model may still know the sensitive information. While there is much debate about what models truly ‘know’ it seems problematic for a model to, e.g., be able to describe how to make a bioweapon but merely refrain from answering questions about how to do this.”
Ultimately, the UNC researchers concluded that even state-of-the-art model editing methods, such as Rank-One Model Editing “fail to fully delete factual information from LLMs, as facts can still be extracted 38% of the time by whitebox attacks and 29% of the time by blackbox attacks.”
The model the team used to conduct their research is called GPT-J. While GPT-3.5, one of the base models that power ChatGPT, was fine-tuned with 170 billion parameters, GPT-J only has 6 billion.
Ostensibly, this means the problem of finding and eliminating unwanted data in an LLM such as GPT-3.5 is exponentially more difficult than doing so in a smaller model.
The researchers were able to develop new defense methods to protect LLMs from some “extraction attacks” — purposeful attempts by bad actors to use prompting to circumvent a model’s guardrails in order to make it output sensitive information
However, as the researchers write, “the problem of deleting sensitive information may be one where defense methods are always playing catch-up to new attack methods.”