This library provides a set of tools for data augmentation, including text generation, data processing, and integration with multiple AI providers.



Data Augmenter


Data Augmenter was created to take advantage of the potential of foundation models by generating new data from a small sample. With Data Augmenter you can increase the size of your datasets while adding variability to the data, and you can extract structured datasets ready for fine-tuning from unstructured information.

Installation

It is recommended to use conda environments to manage and install dependencies, but if you prefer not to, skip directly to step 3.

  1. Create an environment. You can create a new environment using the conda create command. Replace myenv with your desired environment name and specify the Python version if needed.

    conda create --name myenv python=3.11
    
  2. Activate the environment. After creating the environment, activate it using the following command:

    conda activate myenv  
    

    You should now be working with the activated environment.

  3. Install the library. Install it directly from PyPI using pip:

    pip install python-data-augmenter
    

    At this point Data Augmenter is ready to use.
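
To verify the installation, you can print the installed package metadata with pip:

    pip show python-data-augmenter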

Modules

This library consists of two modules, augmentation and document_chunker.

Document Chunker

This module contains the DocumentChunker class. This utility is designed to load and process specific file types (markdown, txt, pdf and jsonl), chunking their contents and inserting the chunks into a DataFrame.

Usage

  1. Initialize the DocumentChunker:

    from document_chunker import DocumentChunker
    # Illustrative values; see the sketch below for what these parameters mean
    chunker = DocumentChunker(chunk_size=512, chunk_overlap=64, separator="\n")
    
  2. Process a File:

    file_path = "path/to/your/file.txt"  # Can be .txt, .md, .pdf or .jsonl
    dataset = chunker.process_file(file_path)
    

    The output will be an augmentation-ready DataFrame. If you prefer to prepare your own dataset for augmentation, it should be a pandas DataFrame with a column named "document":

    import pandas as pd

    docs = [
        "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity.",
        "Call me Ishmael. Some years ago—never mind how long precisely—having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world.",
        "All human beings should try to learn before they die what they are running from, and to, and why.",
        "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
        "To be, or not to be, that is the question: Whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles and by opposing end them."
    ]
    dataset = pd.DataFrame({"document": docs})
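
For intuition about the chunking parameters: chunk_size sets how large each chunk is, and chunk_overlap sets how much consecutive chunks share. The sketch below is not DocumentChunker's implementation, just a minimal character-based illustration of the sliding-window idea:

    import pandas as pd

    def naive_chunk(text: str, chunk_size: int, chunk_overlap: int) -> pd.DataFrame:
        """Split text into overlapping character windows (illustrative only)."""
        step = chunk_size - chunk_overlap  # how far the window advances each time
        chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)]
        return pd.DataFrame({"document": chunks})

    # Windows of 4 characters advancing by 2: "abcd", "cdef", "efgh", ...
    print(naive_chunk("abcdefghij", chunk_size=4, chunk_overlap=2))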
    

Augmentation

This module consists of two main types of classes: Augmenters and Datasets. Augmenters interface with Large Language Models (LLMs) through specified endpoints, providing the functionality to generate new data based on input documents. Datasets handle the dataset structure and offer methods for augmenting, filtering, and storing query-answer pairs relevant to the provided document.

The input dataset should be in the form of a DataFrame with a single column named "document" that contains chunks of your source document. The output will be a .jsonl file, where each entry includes a generated question-answer pair along with the corresponding document chunk. If filtering is applied, each entry will also include the cosine similarity score between the QA pair and its source chunk.
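
Since each line of the output file is a standalone JSON object, it is easy to inspect after a run. A minimal sketch, assuming the default output file name mentioned below:

    import json

    # Print each record to inspect the schema: the generated question-answer
    # pair, the source chunk and, if filtering was applied, similarity scores.
    with open("augmented_dataset.jsonl") as f:
        for line in f:
            print(json.loads(line))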

Usage

For the following usage example, we have used an Ollama client exposed at localhost:11434 with the tinyllama 1.1b model.

  1. Initialize OllamaAugmenter:

    from augmenter import OllamaAugmenter
    augmenter = OllamaAugmenter("http://localhost:11434/api/generate", model='tinyllama:1.1b')
    
  2. Initialize DatasetAugmenter to generate the question and answer pairs:

    from augmenter import DatasetAugmenter
    dataset_augmenter = DatasetAugmenter(augmenter=augmenter, dataset=dataset)

    After the process is finished, the dataset will be saved in the 'augmented_dataset.jsonl' file by default.

  3. Optionally, filter the augmented dataset:

    dataset_augmenter.filter_dataset(cosine_similarity_threshold=0.45, cross_cosine_similarity_threshold=0.85)
    

    This will automatically compute the embeddings and filter the dataset based on the set thresholds. Alternatively, it can be done manually:

    dataset_augmenter.get_embeddings()
    dataset_augmenter.get_cosine_similarity()
    dataset_augmenter.get_cross_cosine_similarity()
    dataset_augmenter.filter_dataset(cosine_similarity_threshold=0.45, cross_cosine_similarity_threshold=0.85)
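
Both filters rely on cosine similarity between embedding vectors. For reference, the snippet below shows the standard formula rather than the library's internal code, and the threshold semantics described in the comments are an assumption:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine of the angle between two vectors, in [-1, 1]."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Presumably, QA pairs whose similarity to their source chunk falls below
    # cosine_similarity_threshold are dropped, and pairs too similar to each
    # other (above cross_cosine_similarity_threshold) are treated as duplicates.
    print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # ~0.707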
    
