DP-MLM: Differentially Private Text Rewriting Using Masked Language Models

Project description

DP-MLM

This is the code and package repository for the ACL 2024 Findings paper: DP-MLM: Differentially Private Text Rewriting Using Masked Language Models

Setup

Installation

You can install the package directly using:

pip install dpmlm

Alternatively, you can install from source, as sketched below. The repository's requirements.txt file lists all necessary Python dependencies.
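For example, a minimal source installation (with a placeholder for the repository URL) might look like:

git clone <repository-url> dpmlm-src
cd dpmlm-src
pip install -r requirements.txt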

Resource Bootstrapping

Before running the mechanism, you need to download the necessary NLTK resources:

from dpmlm.utils import setup_resources

setup_resources()
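The downloaded NLTK data is cached on disk, so this step typically only needs to be run once per environment.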

Usage of DP-MLM

The core logic resides in the DPMLM class. You can initialize it with custom calibration bounds (and a bounding strategy) to ensure that the DP privatization is tuned to your specific model.

from dpmlm import DPMLM
from dpmlm.utils import calculate_logit_bounds, cleanup

# 1. (Optional) Calibrate bounds for your specific model (e.g., RoBERTa)
bounds = calculate_logit_bounds("FacebookAI/roberta-base")

# 2. Instantiate the mechanism
M = DPMLM(MODEL="FacebookAI/roberta-base", calibration=bounds, bound_strategy=None)

# 3. Rewrite text
private_text = M.dpmlm_rewrite("Hello world, this is a private text.", epsilon=25)

# 4. Cleanup (optional, when finished)
cleanup(model_instances=[M])
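Here, epsilon is the differential privacy budget: lower values yield stronger privacy guarantees, at the cost of noisier rewrites.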

If you want to set a bounding strategy for the clip bounds (beyond simple min/max selection), you can do so by passing a lambda function:

# strategy as used in the paper
strategy = lambda mean, std, low, high: (mean, mean + 4*std)
M = DPMLM(MODEL="FacebookAI/roberta-base", calibration=bounds, bound_strategy=strategy)
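The strategy receives the calibration statistics (mean, std, low, high) and returns a (lower, upper) clipping pair; the strategy above clips logits to [mean, mean + 4*std], as used in the paper.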

DP-MLM Batched Mode

For longer documents, the batched mode provides significant performance increases by parallelizing masked token predictions on the GPU.

To use batching, simply run:

M.dpmlm_rewrite_batch("Large document text...", epsilon=25, batch_size=16)

Depending on your setup, you may need to tune the batch_size parameter to get the best performance, as in the sketch below.
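As a minimal illustration (the document text is a placeholder, and M is the DPMLM instance constructed above), you could time a few candidate batch sizes to find a good setting for your hardware:

import time

# Illustrative sweep: time the batched rewrite at several batch sizes.
document = "Large document text... " * 200

for batch_size in (8, 16, 32, 64):
    start = time.perf_counter()
    M.dpmlm_rewrite_batch(document, epsilon=25, batch_size=batch_size)
    print(f"batch_size={batch_size}: {time.perf_counter() - start:.2f}s")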

Input Document Length

As of the 2025 release, DP-MLM is no longer limited by the 512-token context window of the underlying MLM (256 tokens with concatenation).

Instead, DP-MLM operates with a sliding window, where the maximum available context is used, centered around the target word to be privatized. Thus, DP-MLM now works on arbitrarily long documents!
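As a quick sanity check (with purely illustrative input text), a document well beyond the old 512-token limit can be rewritten directly:

# Illustrative long input, far exceeding a single MLM context window
long_document = " ".join(["This sentence pads the document."] * 500)
private_long = M.dpmlm_rewrite(long_document, epsilon=25)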

Usage of other evaluated models

One other file is included for replicating the paper's other evaluated mechanisms; it is easily importable and reusable:

  • LLMDP.py: implementations of both DP-Paraphrase and DP-Prompt. Note that for DP-Prompt, you will need to download the corresponding LMs, e.g., from Hugging Face.

from dpmlm import LLMDP  # assumed import path; adjust if using LLMDP.py directly

M = LLMDP.DPPrompt()
M.privatize("hello world", epsilon=100)

Important note

In order to use LLMDP.DPParaphrase, you must download the fine-tuned model directory. This can be found at the following link: Model

Citation

Please consider citing the original work that introduced DP-MLM. Thank you!

@inproceedings{meisenbacher-etal-2024-dp,
    title = "{DP}-{MLM}: Differentially Private Text Rewriting Using Masked Language Models",
    author = "Meisenbacher, Stephen  and
      Chevli, Maulik  and
      Vladika, Juraj  and
      Matthes, Florian",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-acl.554/",
    doi = "10.18653/v1/2024.findings-acl.554",
    pages = "9314--9328"
}

Download files

Download the file for your platform.

Source Distribution

dpmlm-1.0.6.tar.gz (265.5 kB view details)

Uploaded Source

Built Distribution

dpmlm-1.0.6-py3-none-any.whl (265.8 kB view details)

Uploaded Python 3

File details

Details for the file dpmlm-1.0.6.tar.gz.

File metadata

  • Download URL: dpmlm-1.0.6.tar.gz
  • Upload date:
  • Size: 265.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.20

File hashes

Hashes for dpmlm-1.0.6.tar.gz:

  • SHA256: ec352cdac4e8c96dabd07b7f56191972461f6ff2a14ac1bbdc2e3a432b031686
  • MD5: 565050f049027fcccf900b56a7dced1f
  • BLAKE2b-256: b961cb8e74133516eb727fe20b2f041acf6d8ed2e72a642ca2105f8e120b6c7b

File details

Details for the file dpmlm-1.0.6-py3-none-any.whl.

File metadata

  • Download URL: dpmlm-1.0.6-py3-none-any.whl
  • Upload date:
  • Size: 265.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.20

File hashes

Hashes for dpmlm-1.0.6-py3-none-any.whl:

  • SHA256: 2d057884fc0d95cc36a64f227228aa01f1fae278664deaeb8aefc10897d4ac5d
  • MD5: 93205f5856dedbaa6b701b4d76f0f358
  • BLAKE2b-256: bebbe966adef9f3ae674d91cb1048bb72f9a136b9779c71386759034a1ea7bc0
