The official implementation of the paper "Data Contamination Calibration for Black-box LLMs" (ACL 2024)

Data Contamination Calibration for Black-box LLMs

Wentao Ye1,   Jiaqi Hu1,   Liyao Li1,   Haobo Wang1,   Gang Chen1,   Junbo Zhao1

1Zhejiang University

Paper | StackMIA Dataset | StackMIAsub Benchmark | Polarized Augment Calibration Method (i.e., this repo)

News 🔥

  • [2024/05/21] We release our paper on arXiv.
  • [2024/05/19] We release our code and benchmark.
  • [2024/05/16] Our paper is accepted by ACL 2024! 🎉

Overview

The rapid advancement of Large Language Models is tightly coupled with the expansion of training data. However, unchecked ultra-large-scale training sets introduce a series of potential risks, such as data contamination. To tackle this challenge, we propose a holistic method named Polarized Augment Calibration (PAC), along with a brand-new dataset named StackMIA, to detect contaminated data and diminish the contamination effect. Remarkably, PAC is plug-and-play and can be integrated with most current white- and black-box models.

StackMIAsub benchmark

The StackMIAsub dataset serves as a benchmark for evaluating membership inference attack (MIA) methods, and supports most white- and black-box models:

  • Black-box OpenAI models:
    • Davinci-002
    • Babbage-002
    • ...
  • White-box models:
    • LLaMA and LLaMA2
    • Pythia
    • OPT
    • ...

Access our Hugging Face repo for more details.

Detect data contamination with PAC

Data preparation

📌 Please ensure the data to be detected is formatted as a jsonlines file in the following manner:

{"snippet": "SNIPPET1", "label": 1 or 0}
{"snippet": "SNIPPET2", "label": 1 or 0}
...
  • label is an optional field for labeled detection.
  • label 1 denotes members, while label 0 denotes non-members.
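
The expected jsonlines file can be produced with a few lines of standard-library Python; a minimal sketch (the snippet texts below are placeholder values, not real data):

```python
import json

# Placeholder records: "snippet" is the text to test for contamination,
# and the optional "label" marks ground truth (1 = member, 0 = non-member).
records = [
    {"snippet": "SNIPPET1", "label": 1},
    {"snippet": "SNIPPET2", "label": 0},
]

# Write one JSON object per line (jsonlines format).
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read it back to confirm the format round-trips.
with open("dataset.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
```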

Run PAC using black-box OpenAI models

Set your API key and target model engine to run PAC on OpenAI models (increase num_threads for batch processing):

python attack.py --dataset_path "DATASET_PATH" --api_key "YOUR_API_KEY" --model_engine "TARGET_MODEL_ENGINE" 

Note: an extra probabilistic tracking step is performed for GPT-3.5 and GPT-4 models.

Run PAC using white-box models

Use the following command to run PAC on local white-box models:

python attack.py --dataset_path "DATASET_PATH" --model_path "MODEL_PATH"
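
The two invocation modes above differ only in their flags, so they can be wrapped in one small helper; a minimal sketch (the helper name `pac_command` is ours and simply mirrors the CLI flags shown above):

```python
import shlex

# Hypothetical helper: build the attack.py command line for either mode.
def pac_command(dataset_path, api_key=None, model_engine=None, model_path=None):
    cmd = ["python", "attack.py", "--dataset_path", dataset_path]
    if model_path is not None:
        # White-box: point at local model weights.
        cmd += ["--model_path", model_path]
    else:
        # Black-box: OpenAI API key plus target model engine.
        cmd += ["--api_key", api_key, "--model_engine", model_engine]
    return cmd

print(shlex.join(pac_command("dataset.jsonl", model_path="MODEL_PATH")))
# → python attack.py --dataset_path dataset.jsonl --model_path MODEL_PATH
```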

Acknowledgement

Thanks to the following repos:

Cite our work

⭐️ If you find our implementation and paper helpful, please kindly cite our work:

@misc{ye2024data,
      title={Data Contamination Calibration for Black-box LLMs}, 
      author={Wentao Ye and Jiaqi Hu and Liyao Li and Haobo Wang and Gang Chen and Junbo Zhao},
      year={2024},
      eprint={2405.11930},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
