
Text Machina: Seamless Generation of Machine-Generated Text Datasets

Project description

TextMachina


Unifying strategies to build MGT datasets in a single framework

TextMachina is a modular and extensible Python framework designed to aid in the creation of high-quality, unbiased datasets for building robust models for MGT-related tasks such as:

  • 🔎 Detection: detect whether a text has been generated by an LLM.
  • 🕵️‍♂️ Attribution: identify what LLM has generated a text.
  • 🚧 Boundary detection: find the boundary between human and generated text.
  • 🎨 Mixcase: ascertain whether specific text spans are human-written or generated by LLMs.

TextMachina provides a user-friendly pipeline that abstracts away the inherent intricacies of building MGT datasets:

  • 🦜 LLM integrations: easily integrate any LLM provider. Currently, TextMachina supports LLMs from Anthropic, Cohere, OpenAI, Google Vertex AI, Amazon Bedrock, AI21, Azure OpenAI, models deployed on vLLM and TRT inference servers, and any model from HuggingFace deployed either locally or remotely through the Inference API or Inference Endpoints. See models to implement your own LLM provider.

  • ✍️ Prompt templating: just write your prompt template with placeholders and let TextMachina's extractors fill the template and prepare a prompt for an LLM. See extractors to implement your own extractors and learn more about the placeholders for each extractor.

  • 🔒 Constrained decoding: automatically infer LLM decoding hyper-parameters from the human texts to improve the quality and reduce the biases of your MGT datasets. See constrainers to implement your own constrainers.

  • 🛠️ Post-processing: post-processing functions that improve the quality of MGT datasets and remove common biases and artifacts. See postprocessing to add new post-processing functions.

  • 🌈 Bias mitigation: TextMachina is built with bias prevention in mind and helps you across the whole pipeline to avoid introducing spurious correlations into your datasets.

  • 📊 Dataset exploration: explore the generated datasets and quantify their quality with a set of metrics. See metrics and interactive to implement your own metrics and visualizations.
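The prompt-templating idea above can be sketched in plain Python. This is a conceptual illustration, not TextMachina's actual API; the extracted values here are hypothetical stand-ins for what an extractor would pull from a human document:

```python
def fill_template(template: str, extracted: dict) -> str:
    """Fill {placeholder} slots in a prompt template with extracted values."""
    return template.format(**extracted)

# Hypothetical extractor output for one human document.
extracted = {
    "summary": "A storm hit the coast overnight.",
    "entities": "Met Office, Cornwall",
}

template = (
    "Write a news article whose summary is '{summary}' "
    "using the entities: {entities}\n\nArticle:"
)
prompt = fill_template(template, extracted)
print(prompt)
```

In TextMachina itself, extractors implement this filling step for you, one per placeholder type.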

The following diagram depicts TextMachina's pipeline.

TextMachina Pipeline

🔧 Installation


You can install all the dependencies with pip:

pip install text-machina[all]

or with only the dependencies for a specific LLM provider, or development dependencies (see setup.py):

pip install text-machina[anthropic,dev]

You can also install directly from source:

pip install .[all]

If you're planning to modify the code for specific use cases, you can install TextMachina in development mode:

pip install -e .[dev]

👀 Quick Tour


Once installed, you are ready to use TextMachina to build MGT datasets, either through the CLI or programmatically.

📟 Using the CLI

The first step is to define a YAML configuration file or a directory tree containing YAML files. Read the examples/learning files to learn how to define configurations using different providers and extractors for different tasks, and take a look at examples/use_cases to see configurations for specific use cases.
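As a rough sketch, a configuration file mirrors the input, model, and generation sections used in the programmatic example later on this page. The exact schema below is an assumption inferred from that example; treat the files under examples/ as authoritative:

```yaml
# Hypothetical sketch of a TextMachina config; see etc/examples/ for real ones.
input:
  domain: news
  language: en
  quantity: 10
  dataset: xsum
  dataset_text_column: document
  dataset_params:
    split: test
  extractor: combined
  max_input_tokens: 256
model:
  provider: openai
  model_name: gpt-3.5-turbo-instruct
  api_type: COMPLETION
generation:
  temperature: 0.7
  presence_penalty: 1.0
```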

Then, we can call the explore and generate endpoints of TextMachina's CLI. The explore endpoint lets you inspect a small generated dataset for a specific configuration through an interactive interface. For instance, suppose we want to check what an MGT detection dataset generated from XSum news articles with gpt-3.5-turbo-instruct looks like, and compute some metrics:

text-machina explore --config-path etc/examples/xsum_gpt-3-5-turbo-instruct_openai.yaml \
--task-type detection \
--metrics-path etc/metrics.yaml \
--max-generations 10

CLI interface showing generated and human text for detection

Great! Our dataset looks good: no artifacts, no biases, and high-quality text with this configuration. Let's now generate a whole dataset for MGT detection using that config file. The generate endpoint allows you to do that:

text-machina generate --config-path etc/examples/xsum_gpt-3-5-turbo-instruct_openai.yaml \
--task-type detection

A run name will be assigned to your execution and TextMachina will cache results behind the scenes. If your run is interrupted at any point, you can use --run-name <run-name> to recover the progress and continue generating your dataset.

👩‍💻 Programmatically

You can also use TextMachina programmatically. To do that, instantiate a dataset generator by calling get_generator with a Config object, and run its generate method. The Config object must contain the input, model, and generation configs, together with the task type for which the MGT dataset will be generated. Let's replicate the previous experiment programmatically:

from text_machina import get_generator
from text_machina import Config, InputConfig, ModelConfig

input_config = InputConfig(
    domain="news",
    language="en",
    quantity=10,
    random_sample_human=True,
    dataset="xsum",
    dataset_text_column="document",
    dataset_params={"split": "test"},
    template=(
        "Write a news article whose summary is '{summary}' "
        "using the entities: {entities}\n\nArticle:"
    ),
    extractor="combined",
    extractors_list=["auxiliary.Auxiliary", "entity_list.EntityList"],
    max_input_tokens=256,
)

model_config = ModelConfig(
    provider="openai",
    model_name="gpt-3.5-turbo-instruct",
    api_type="COMPLETION",
    threads=8,
    max_retries=5,
    timeout=20,
)

generation_config = {"temperature": 0.7, "presence_penalty": 1.0}

config = Config(
    input=input_config,
    model=model_config,
    generation=generation_config,
    task_type="detection",
)
generator = get_generator(config)
dataset = generator.generate()

🛠️ Supported tasks


TextMachina can generate datasets for MGT detection, attribution, boundary detection, and mixcase detection:

CLI interface showing generated and human text for detection

Example from a detection task.


CLI interface showing generated and human text for attribution

Example from an attribution task.


CLI interface showing generated and human text for boundary

Example from a boundary detection task.


CLI interface showing generated and human text for sentence-based mixcase

Example from a mixcase task (tagging), interleaving generated sentences with human texts.


CLI interface showing generated and human text for word-span-based mixcase

Example from a mixcase task (tagging), interleaving generated word spans with human texts.


However, users can build datasets for tasks not included in TextMachina by leveraging the provided task types. For instance, datasets for mixcase classification can be built from datasets for mixcase tagging, and datasets for mixcase attribution can be built using the generation model name as the label.
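The derivation just described can be sketched in plain Python. The span-level schema here is illustrative, not TextMachina's exact output format; the point is only that span tags collapse into a single classification label:

```python
def to_classification_label(spans):
    """Collapse (text, tag) span pairs into one 'human'/'generated'/'mixed' label."""
    tags = {tag for _, tag in spans}
    if tags == {"human"}:
        return "human"
    if tags == {"generated"}:
        return "generated"
    return "mixed"

# Hypothetical tagging-style example: (text span, tag) pairs.
example = [
    ("The storm made landfall at dawn.", "human"),
    ("Officials praised the swift response.", "generated"),
]
label = to_classification_label(example)
print(label)  # mixed
```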

🔄 Common Use Cases


There is a set of common use cases for TextMachina. Here's how to carry them out using the explore and generate endpoints.

Use case Command
Explore a dataset of 10 samples for MGT detection and show metrics
text-machina explore \
--config-path config.yaml \
--task-type detection \
--max-generations 10 \
--metrics-path metrics.yaml
Explore an existing dataset for MGT detection and show metrics
text-machina explore \
--config-path config.yaml \
--run-name greedy-bear \
--task-type detection \
--metrics-path metrics.yaml
Generate a dataset for MGT detection
text-machina generate \
--config-path config.yaml \
--task-type detection
Generate a dataset for MGT attribution
text-machina generate \
--config-path config.yaml \
--task-type attribution
Generate a dataset for boundary detection
text-machina generate \
--config-path config.yaml \
--task-type boundary
Generate a dataset for mixcase detection
text-machina generate \
--config-path config.yaml \
--task-type mixcase
Generate a dataset for MGT detection using config files in a directory tree
text-machina generate \
--config-path configs/ \
--task-type detection

💾 Caching

TextMachina caches each dataset it generates through the CLI endpoints under a run name. The run name is given as the last message in the logs, and can be passed with --run-name <run-name> to continue from interrupted runs. The default cache directory is /tmp/text_machina_cache; set the TEXT_MACHINA_CACHE_DIR environment variable to use a different path.
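For instance (the cache path here is an arbitrary choice, and greedy-bear stands in for whatever run name your logs reported):

```shell
# Point TextMachina's cache at a persistent location instead of /tmp.
export TEXT_MACHINA_CACHE_DIR="$HOME/.cache/text_machina"

# Resume an interrupted run using the run name printed in the logs:
# text-machina generate --config-path config.yaml \
#     --task-type detection \
#     --run-name greedy-bear
```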

⚠️ Notes and Limitations


  • Although you can use any kind of extractor to build boundary detection datasets, it is highly recommended to use the sentence_prefix or word_prefix extractors with a random number of sentences/words to avoid biases that lead boundary detection models to just count sentences or words.

  • TextMachina attempts to remove disclosure patterns (e.g., "As an AI language model ...") with a limited set of regular expressions, but these patterns depend on the LLM and the language. We strongly recommend first exploring your dataset for these biases, then modifying the post-processing or the prompt template accordingly to remove them.

  • Generating multilingual datasets is not well supported yet. For now, we recommend generating independent datasets for each language and combining them outside TextMachina.

  • Generating machine-generated code datasets is not well supported yet.
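As a minimal illustration of the disclosure-pattern filtering mentioned above (a sketch, not TextMachina's actual post-processing code; the regex list is an assumption you would extend per LLM and language):

```python
import re

# Hypothetical disclosure patterns; real ones depend on the LLM and language.
DISCLOSURE_PATTERNS = [
    re.compile(r"^As an AI language model[^.]*\.\s*", re.IGNORECASE),
    re.compile(r"^I'm sorry, (but )?I cannot[^.]*\.\s*", re.IGNORECASE),
]

def strip_disclosures(text: str) -> str:
    """Remove leading disclosure sentences that reveal a text is machine-generated."""
    for pattern in DISCLOSURE_PATTERNS:
        text = pattern.sub("", text)
    return text

cleaned = strip_disclosures(
    "As an AI language model, I cannot verify this. The storm hit at dawn."
)
print(cleaned)  # The storm hit at dawn.
```

Patterns like these catch only the most formulaic disclosures, which is why the note above recommends exploring the dataset manually as well.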

📖 Citation


@misc{sarvazyan2024textmachina,
      title={TextMachina: Seamless Generation of Machine-Generated Text Datasets}, 
      author={Areg Mikael Sarvazyan and José Ángel González and Marc Franco-Salvador},
      year={2024},
      eprint={2401.03946},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

🤝 Contribute


Feel free to contribute to TextMachina by raising an issue.

Please install and use the dev-tools for correctly formatting the code when contributing to this repo.

🏭 Commercial Purposes


Please, contact stuart.winter-tear@genaios.ai and marc.franco@genaios.ai if you are interested in using TextMachina for commercial purposes.
