PetHarbor

PetHarbor is a Python package designed for anonymizing datasets using either a pre-trained model or a hash-based approach. It provides two main classes for anonymization: lite and advance.

We introduce two anonymisation models to address the critical need for privacy protection in veterinary EHRs: PetHarbor Advanced and PetHarbor Lite. These models minimise the risk of re-identification in free-text clinical notes by identifying and pseudonymising sensitive information using custom-built private lists. The models differ by:

PetHarbor-Advanced: A state-of-the-art solution for clinical note anonymisation, leveraging an ensemble of two specialised large language models (LLMs). Each model is tailored to detect and process distinct types of identifiers within the text. Trained extensively on a diverse corpus of authentic veterinary EHR notes, these models are adept at parsing and understanding the unique language and structure of veterinary documentation. Due to its high performance and comprehensive approach, PetHarbor Advanced is our recommended solution for data sharing beyond controlled laboratory environments.

(Figure: model overview)

PetHarbor-Lite: A lightweight alternative to accommodate organisations with limited computational resources. This solution employs a two-step pipeline: first, trusted partners use a shared lookup hash list derived from the SAVSNET dataset to remove common identifiers. These hash lists utilise a one-way cryptographic hashing algorithm (SHA-256) with an additional protected salt. Therefore, this hash list can be made available and shared with approved research groups without the need for raw text to be transferred or viewed by end users. Second, a spaCy-based model identifies and anonymises any remaining sensitive information. This approach drastically reduces computational requirements while maintaining effective anonymisation.
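The hash-list lookup step can be sketched as follows. This is an illustration only, not the package's actual implementation: the salt value and token list below are hypothetical (the real salt and hash list are kept private).

```python
import hashlib

SALT = "example_salt"  # hypothetical; the real salt is kept protected

def hash_token(token: str, salt: str = SALT) -> str:
    # One-way salted SHA-256 hash, so the shared list never exposes raw names
    return hashlib.sha256((salt + token.lower()).encode("utf-8")).hexdigest()

# A shared hash list holds salted hashes of known identifiers (e.g. pet names)
hash_list = {hash_token("rex"), hash_token("bella")}

def redact(text: str) -> str:
    # Replace any token whose salted hash appears in the shared list
    return " ".join(
        "[REDACTED]" if hash_token(tok) in hash_list else tok
        for tok in text.split()
    )

print(redact("Rex was seen today"))  # → [REDACTED] was seen today
```

Because the hashing is one-way, a recipient holding the list can check whether a token is a known identifier but cannot recover the identifiers themselves.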

Installation

To install PetHarbor and its dependencies, run:

git clone https://github.com/seanfarr788/petharbor

pip install -r requirements.txt

pip install .

Models

Lite Anonymization

The lite anonymization class uses a hash-based approach to anonymize text data. Its arguments, methods, and a usage example are given below.

Arguments

dataset_path : (str) The path to the dataset file. Can be an Arrow Dataset (the test split is used) or a .csv file.

hash_table : (str) The path to the hash table file.

salt : (str), [optional] An optional salt value for hashing (default is None).

cache : (bool), [optional] Whether to use caching for the dataset processing (default is True).

use_spacy : (bool), [optional] Whether to use spaCy for additional text processing (default is False).

spacy_model : (str), [optional] The spaCy model to use for text processing (default is "en_core_web_sm").

text_column : (str), [optional] The name of the text column in the dataset (default is "item_text").

output_dir : (str), [optional] The directory where the output files will be saved (default is "testing/out/").

Methods

annonymise(): Anonymizes the dataset by hashing the text data and optionally using spaCy for additional processing.

Usage

from petharbor.lite import Annonymiser

lite = Annonymiser(
    dataset_path="testing/data/test.csv",
    hash_table="petharbor/data/pet_names_hashed.txt",
    salt="savsnet",
    text_column="item_text",
    cache=False,
    use_spacy=False,
    output_dir="testing/data/out/lite.csv",
)
lite.annonymise()

Advanced Anonymization

The advance anonymization class uses a pre-trained model to anonymize text data. Its arguments, methods, and a usage example are given below.

Arguments

dataset_path : (str) The path to the dataset file. Can be an Arrow Dataset (the test split is used) or a .csv file.

model_path : (str) The path to the pre-trained model file. Accepts any Flair model.

text_column : (str), [optional] The name of the text column in the dataset (default is "text").

output_dir : (str), [optional] The directory where the output files will be saved (default is "testing/out/").

cache : (bool), [optional] Whether to use cached data (default is True).

logs : (str), [optional] The directory where logs will be saved (default is None).

device : (str), [optional] The device to run the model on (default is None; options: "gpu", "cuda", "cpu", "none").

Methods

annonymise(): Anonymizes the text data in the dataset.

predict(): Generates predictions on the text data in the dataset.

train(): Placeholder method for training the model (not implemented).

eval(): Placeholder method for evaluating the model (not implemented).

from petharbor.advance import Annonymiser

advance = Annonymiser(
    dataset_path="testing/data/out/predictions.csv",
    model_path="testing/models/best-model.pt",
    text_column="item_text",
    cache=True,
    logs="logs/",
    output_dir="testing/data/out/predictions.csv",
)
advance.annonymise()

Configuration

Device Configuration

The device (CPU or CUDA) can be configured by passing the device parameter to the anonymization classes. If not specified, the package will automatically configure the device.
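The automatic device selection might look something like the sketch below. This is an assumption about the behaviour, not the package's actual code, and `resolve_device` is a hypothetical helper name.

```python
def resolve_device(device=None):
    """Map the `device` argument onto a concrete device string."""
    if device is None or device.lower() == "none":
        try:
            import torch  # used only for auto-detection, if installed
            return "cuda" if torch.cuda.is_available() else "cpu"
        except ImportError:
            return "cpu"
    # Treat "gpu" as an alias for CUDA; anything else falls back to CPU
    return "cuda" if device.lower() in {"gpu", "cuda"} else "cpu"
```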

Caching

Both classes support caching, so records that have already been anonymised are not processed again; after the initial application of the model, downstream anonymisation should therefore be quicker. An 'annonymised' flag is added to the dataset: records marked '1' in this field are skipped and merged back into the complete dataset at the end.
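The caching behaviour can be sketched as follows, assuming the flag column is named "annonymised" as described; `anonymise_with_cache` is a hypothetical helper, not the package's implementation.

```python
def anonymise_with_cache(records, anonymise_fn):
    # Split records into already-anonymised (cached) and still-to-do
    done, todo = [], []
    for rec in records:
        (done if rec.get("annonymised") == 1 else todo).append(rec)
    # Only the unprocessed records pass through the anonymiser
    for rec in todo:
        rec["item_text"] = anonymise_fn(rec["item_text"])
        rec["annonymised"] = 1  # mark so later runs skip this record
    # Merge cached and newly processed records back into one dataset
    return done + todo

records = [
    {"item_text": "Rex seen today", "annonymised": 0},
    {"item_text": "[NAME] seen today", "annonymised": 1},
]
out = anonymise_with_cache(records, lambda t: t.replace("Rex", "[NAME]"))
```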

Logging

Logging is set up using the logging module. Logs will provide information about the progress and status of the anonymization process.

Contributing

Contributions are welcome! Please open an issue or submit a pull request on GitHub.

License

This project is licensed under the MIT License.
