
LangChain integration for Tonic Textual PII redaction


langchain-textual


PII detection and transformation tools for LangChain, powered by Tonic Textual.

Detect sensitive data in text, JSON, HTML, and files — then synthesize it with realistic fakes, tokenize it with reversible placeholders, or extract the raw entities. Drop them into any LangChain chain or agent as standard tools.

Installation

pip install langchain-textual

Quick start

Set your API key in the environment:

export TONIC_TEXTUAL_API_KEY="your-api-key"

Then redact a string:

from langchain_textual import TonicTextualRedactText

tool = TonicTextualRedactText()
tool.invoke("My name is John Smith and my email is john@example.com.")
# "My name is [NAME_GIVEN_xxxx] [NAME_FAMILY_xxxx] and my email is [EMAIL_ADDRESS_xxxx]."

Tools

Tool                        | Input             | Use for
----------------------------|-------------------|------------------------------------------------------------------------
TonicTextualRedactText      | Plain text string | Synthesize or tokenize PII in raw text, .txt file contents
TonicTextualRedactJson      | JSON string       | Synthesize or tokenize PII in raw JSON, .json file contents
TonicTextualRedactHtml      | HTML string       | Synthesize or tokenize PII in raw HTML, .html/.htm file contents
TonicTextualRedactFile      | File path         | Synthesize or tokenize PII in PDFs, images (JPG, PNG), CSVs, TSVs
TonicTextualExtractEntities | Plain text string | Extract detected PII entities with type, value, location, and confidence
TonicTextualPiiTypes        | None              | List all supported PII entity types

Text

from langchain_textual import TonicTextualRedactText

tool = TonicTextualRedactText()
tool.invoke("My name is John Smith and my email is john@example.com.")
# "My name is [NAME_GIVEN_xxxx] [NAME_FAMILY_xxxx] and my email is [EMAIL_ADDRESS_xxxx]."

JSON

from langchain_textual import TonicTextualRedactJson

tool = TonicTextualRedactJson()
tool.invoke('{"name": "John Smith", "email": "john@example.com"}')
# '{"name": "[NAME_GIVEN_xxxx] [NAME_FAMILY_xxxx]", "email": "[EMAIL_ADDRESS_xxxx]"}'
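Because tokenization replaces values but preserves the document's structure, the redacted string above is still valid JSON and can be parsed downstream. A quick check using the sample output shown above (no live API call):

```python
import json

# Sample redacted output copied from above; the JSON shape survives,
# only the values are replaced with placeholders.
redacted = '{"name": "[NAME_GIVEN_xxxx] [NAME_FAMILY_xxxx]", "email": "[EMAIL_ADDRESS_xxxx]"}'
data = json.loads(redacted)  # parses cleanly
print(sorted(data))  # ['email', 'name']
```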

HTML

from langchain_textual import TonicTextualRedactHtml

tool = TonicTextualRedactHtml()
tool.invoke("<p>Contact John Smith at john@example.com</p>")
# "<p>Contact [NAME_GIVEN_xxxx] [NAME_FAMILY_xxxx] at [EMAIL_ADDRESS_xxxx]</p>"

Files

from langchain_textual import TonicTextualRedactFile

tool = TonicTextualRedactFile()
tool.invoke({"file_path": "/path/to/scan.pdf"})
# "/path/to/scan_redacted.pdf"

tool.invoke({"file_path": "/path/to/photo.jpg", "output_path": "/tmp/redacted.jpg"})
# "/tmp/redacted.jpg"

For .txt, .json, and .html/.htm files, read the file contents and pass them to the corresponding text, JSON, or HTML tool instead.
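One way to wire that rule up is a small dispatcher that routes by file extension. The tool names below come from the table above; the helper itself is hypothetical, not part of the package:

```python
from pathlib import Path

# Text-like extensions map to the tool that takes the file *contents*;
# everything else (PDF, images, CSV, TSV) goes to the file tool by path.
CONTENT_TOOLS = {
    ".txt": "TonicTextualRedactText",
    ".json": "TonicTextualRedactJson",
    ".html": "TonicTextualRedactHtml",
    ".htm": "TonicTextualRedactHtml",
}

def pick_tool(path: str) -> str:
    """Return the name of the tool to use for a given file path."""
    return CONTENT_TOOLS.get(Path(path).suffix.lower(), "TonicTextualRedactFile")

print(pick_tool("notes.txt"))  # TonicTextualRedactText
print(pick_tool("scan.pdf"))   # TonicTextualRedactFile
```

For the content tools you would read the file first (e.g. `Path(path).read_text()`) and pass the resulting string to `invoke`.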

Entity extraction

from langchain_textual import TonicTextualExtractEntities

tool = TonicTextualExtractEntities()
tool.invoke("My name is John Smith and my email is john@example.com.")
# '[{"label": "NAME_GIVEN", "text": "John", "start": 11, "end": 15, "score": 0.9}, ...]'

Returns a JSON array of detected entities, each with label, text, start, end, and score fields.
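Since the result is a JSON string, it is one `json.loads` away from structured data. This sketch filters names out of a sample payload shaped like the output above (values illustrative, no API call is made):

```python
import json

# Sample payload in the same shape as the tool's return value.
raw = (
    '[{"label": "NAME_GIVEN", "text": "John", "start": 11, "end": 15, "score": 0.9},'
    ' {"label": "EMAIL_ADDRESS", "text": "john@example.com", "start": 38, "end": 54, "score": 0.95}]'
)
entities = json.loads(raw)

# e.g. collect only high-confidence name entities
names = [e["text"] for e in entities if e["label"].startswith("NAME") and e["score"] >= 0.8]
print(names)  # ['John']
```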

Configuration

All tools share the same configuration options.

Synthesis mode — replace PII with realistic fake data instead of placeholders:

tool = TonicTextualRedactText(generator_default="Synthesis")
tool.invoke("Contact Jane Doe at jane.doe@example.com.")
# "Contact Maria Chen at maria.chen@gmail.com."

Per-entity control — set handling per PII type with generator_config:

tool = TonicTextualRedactText(
    generator_default="Off",
    generator_config={
        "NAME_GIVEN": "Synthesis",
        "NAME_FAMILY": "Synthesis",
        "EMAIL_ADDRESS": "Redaction",
    },
)
tool.invoke("Contact Jane Doe at jane.doe@example.com.")
# "Contact Maria Chen at [EMAIL_ADDRESS_xxxx]."

Use TonicTextualPiiTypes to list all supported entity type names:

from langchain_textual import TonicTextualPiiTypes

TonicTextualPiiTypes().invoke("")
# "NUMERIC_VALUE, LANGUAGE, MONEY, ..., EMAIL_ADDRESS, NAME_GIVEN, NAME_FAMILY, ..."
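The comma-separated output pairs naturally with generator_config: split it into a list, then derive a per-entity policy from it. A sketch over a sample slice of the string (the real list is longer):

```python
# A sample slice of the comma-separated string the tool returns.
raw = "NUMERIC_VALUE, LANGUAGE, MONEY, EMAIL_ADDRESS, NAME_GIVEN, NAME_FAMILY"
pii_types = [t.strip() for t in raw.split(",")]

# e.g. synthesize every NAME_* type and leave the rest at generator_default
generator_config = {t: "Synthesis" for t in pii_types if t.startswith("NAME")}
print(generator_config)  # {'NAME_GIVEN': 'Synthesis', 'NAME_FAMILY': 'Synthesis'}
```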

Self-hosted deployment:

tool = TonicTextualRedactText(tonic_textual_base_url="https://textual.your-company.com")

Explicit API key (instead of env var):

tool = TonicTextualRedactText(tonic_textual_api_key="your-api-key")
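A common pattern for such dual configuration is that an explicit argument takes precedence over the environment variable; that precedence is assumed here rather than taken from the library's source. A hypothetical helper mirroring it:

```python
import os

def resolve_api_key(explicit=None):
    """Hypothetical: explicit key wins; otherwise fall back to the env var."""
    return explicit or os.environ.get("TONIC_TEXTUAL_API_KEY")

os.environ["TONIC_TEXTUAL_API_KEY"] = "from-env"
print(resolve_api_key("explicit-key"))  # explicit-key
print(resolve_api_key())                # from-env
```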

Using with a LangChain agent

Every tool in this package is a standard LangChain tool, so they work anywhere tools do. Give your agent whichever combination it needs:

from langchain_textual import (
    TonicTextualRedactText,
    TonicTextualRedactJson,
    TonicTextualRedactFile,
    TonicTextualExtractEntities,
)
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o-mini")
tools = [
    TonicTextualRedactText(),
    TonicTextualRedactJson(),
    TonicTextualRedactFile(),
    TonicTextualExtractEntities(),
]
agent = create_react_agent(llm, tools)
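Running the compiled agent requires live OpenAI and Tonic Textual credentials, so only the input shape is sketched here: `create_react_agent` produces a graph that takes a dict with a `messages` list. The prompt text is illustrative.

```python
# Input shape for the compiled agent; actually running it needs real API keys.
request = {"messages": [("user", "Redact the PII in this note: call Jane at 555-0100.")]}
# result = agent.invoke(request)        # returns a dict with the full "messages" history
# print(result["messages"][-1].content) # the agent's final answer
print(request["messages"][0][0])  # user
```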

Development

# install dependencies
uv sync --group dev --group test --group lint --group typing

# install pre-commit hooks (auto-runs ruff on each commit)
uv tool install pre-commit
pre-commit install

# run unit tests
make test

# run integration tests (requires TONIC_TEXTUAL_API_KEY)
make integration_tests

# lint & format (run from the project root)
make lint
make format

Note: All make commands must be run from the project root (langchain-textual/), not from subdirectories like examples/.

License

MIT

Release files (1.2.0)

Source distribution: langchain_textual-1.2.0.tar.gz (346.4 kB)
  SHA256:    646bcfd26b9c8a3e2d11bae86508d79e24e35973c39ff06d86350f7035da1a87
  MD5:       46d03cbc5b986cc5aab22c25cd0f8c61
  BLAKE2b:   cdb9c709246a82b27b4335dc6cb3fea2f29b26f75061d3bd765c47b8a92df6f1

Built distribution: langchain_textual-1.2.0-py3-none-any.whl (9.2 kB)
  SHA256:    356e83a0aa2fde5f674f425b590ea869c99fee948381d577060f019cb256a85f
  MD5:       4b246318c8c3472f78318c47d48f54c1
  BLAKE2b:   0cdf6ae30ec4b3395ea0d32297505ee1e853661be1febdaf7149f47eed729ef3

The source distribution was uploaded via Trusted Publishing (twine 6.1.0, CPython 3.13.7). Attestation bundles for both files were published by publish.yml on TonicAI/langchain-textual; attestation values reflect the state when the release was signed and may no longer be current.
