
ISCC - Semantic Text-Code


[!CAUTION] This is a proof of concept. All releases with version numbers below v1.0.0 may break backward compatibility and produce incompatible Semantic Text-Codes. The algorithms of this iscc-sct repository are experimental and not part of the official ISO 24138:2024 standard.

iscc-sct is a Semantic-Code Text implementation for the ISCC (International Standard Content Code). The Semantic-Code Text is a new ISCC-UNIT for semantic text identification. The algorithm creates similar codes (low Hamming distance) for semantically similar text inputs across different languages. The SCT ISCC-UNIT is a compact binary code created from a binarized document-vector text-embedding.

What is the ISCC

The ISCC is a combination of various similarity-preserving fingerprints and an identifier for digital media content.

ISCCs are generated algorithmically from digital content, just like cryptographic hashes. However, instead of using a single cryptographic hash function to identify data only, the ISCC uses various algorithms to create a composite identifier that exhibits similarity-preserving properties (soft hash or Simprint).

The component-based structure of the ISCC identifies content at multiple levels of abstraction. Each component is self-describing, modular, and can be used separately or with others to aid in various content identification tasks. The algorithmic design supports content deduplication, database synchronization, indexing, integrity verification, timestamping, versioning, data provenance, similarity clustering, anomaly detection, usage tracking, allocation of royalties, fact-checking and general digital asset management use-cases.

What is ISCC Semantic Text-Code?

The ISCC framework already includes a Text-Code based on lexical similarity for near-duplicate matching. The ISCC Semantic Text-Code is a planned additional ISCC-UNIT focused on capturing a more abstract and broader semantic similarity. It is engineered to be robust against a wide range of variations and, most remarkably, translations of text that cannot be matched based on lexical similarity alone.

Translation Matching

One of the most interesting aspects of the Semantic Text-Code is its ability to generate (near)-identical codes for translations of the same text. This means that the same content, expressed in different languages, can be identified and linked, opening up new possibilities for cross-lingual content identification and similarity detection.
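
To check this for two concrete texts, the generated codes can be compared bit by bit. The sketch below is a minimal illustration and not part of the iscc-sct API: the base32 decoding, the padding handling, and the dictionary-style access to the "iscc" field are assumptions based on the Usage output shown further below, and the hamming_distance helper is hypothetical.

import base64
import iscc_sct as sct

def hamming_distance(code_a: str, code_b: str) -> int:
    # Count differing bits between two ISCC strings; assumes both codes were
    # created with the same bit-length so their headers are identical.
    def as_int(code: str) -> int:
        body = code.removeprefix("ISCC:")
        body += "=" * (-len(body) % 8)  # restore base32 padding for b32decode
        return int.from_bytes(base64.b32decode(body), "big")
    return bin(as_int(code_a) ^ as_int(code_b)).count("1")

en = sct.create("A red apple lay on the wooden table.", bits=256)
de = sct.create("Ein roter Apfel lag auf dem Holztisch.", bits=256)
# Field access follows the Usage output shown below; adjust if your version
# returns an object with an `iscc` attribute instead of a mapping.
print(hamming_distance(en["iscc"], de["iscc"]))  # a small distance is expected for translations

A low distance indicates that the two codes, and therefore the two texts, are semantically close.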

Key Features

  • Semantic Similarity: Utilizes deep learning models to generate codes that reflect the semantic essence of text.
  • Translation Matching: Creates nearly identical codes for text translations, enabling cross-lingual content identification.
  • Bit-Length Flexibility: Supports generating codes of various bit lengths (up to 256 bits), allowing for adjustable granularity in similarity detection.
  • ISCC Compatible: Generates codes fully compatible with the ISCC specification, facilitating seamless integration with existing ISCC-based systems.

Installation

Ensure you have Python 3.9 or newer installed on your system. Install the library using:

pip install iscc-sct

For systems with GPU CUDA support, enhance performance by installing with:

pip install iscc-sct[gpu]

Usage

Generate a Semantic Text-Code using the create function:

>>> import iscc_sct as sct
>>> text = "This is some sample text. It can be a longer document or even an entire book."
>>> sct.create(text, bits=256)
{
  "iscc": "ISCC:CADV3GG6JH3XEVRNSVYGCLJ7AAV3BOT5J7EHEZKPFXEGRJ2CTWACGZI",
  "characters": 77
}

For granular (per chunk) feature outputs:

>>> import iscc_sct as sct
>>> text = "This is some sample text. It can be a longer document or even an entire book."
>>> sct.create(text, bits=256, granular=True)
{
  "iscc": "ISCC:CADV3GG6JH3XEVRNSVYGCLJ7AAV3BOT5J7EHEZKPFXEGRJ2CTWACGZI",
  "characters": 77,
  "features": [
    {
      "maintype": "semantic",
      "subtype": "text",
      "version": 0,
      "simprints": [
        {
          "simprint": "XZjeSfdyVi0",
          "offset": 0,
          "size": 77,
          "content": "This is some sample text. It can be a longer document or even an entire book."
        }
      ]
    }
  ]
}
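
The granular output can be post-processed, for example to map each simprint back to its character span in the source text. The loop below is a minimal sketch that assumes the mapping structure shown above; adjust the key access if your version returns model objects instead of plain dictionaries.

import iscc_sct as sct

text = "This is some sample text. It can be a longer document or even an entire book."
result = sct.create(text, bits=256, granular=True)

# Walk all feature sets and print each chunk-level simprint with its character span.
for feature_set in result["features"]:
    for feature in feature_set["simprints"]:
        start = feature["offset"]
        end = start + feature["size"]
        print(feature["simprint"], f"chars {start}-{end}:", feature["content"][:40])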

The installation also provides a sct command-line tool:

usage: sct [-h] [-b BITS] [-g] [-d] [path]

Generate Semantic Text-Codes for text files.

positional arguments:
  path                  Path to text files (supports glob patterns) or 'gui' to launch Gradio demo.

options:
  -h, --help            show this help message and exit
  -b BITS, --bits BITS  Bit-Length of Code (default 256)
  -g, --granular        Activate granular processing.
  -d, --debug           Show debugging messages.
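
For example, to generate granular 256-bit codes for all text files in a folder (the path and glob pattern are illustrative):

sct -b 256 -g "docs/*.txt"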

How It Works

iscc-sct employs the following process:

  1. Splits the text into overlapping chunks (using syntactically sensible breakpoints).
  2. Uses a pre-trained deep learning model for text embedding.
  3. Generates feature vectors capturing essential characteristics of the chunks.
  4. Aggregates these vectors and binarizes them to produce a Semantic Text-Code.
  5. Prefixes the binarized vector with the matching ISCC header, encodes it with base32, and adds the "ISCC:" prefix.

This process ensures robustness to variations and translations, enabling cross-lingual matching based on a short Simprint.
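
As a rough illustration of steps 3 and 4, the sketch below aggregates a set of chunk embedding vectors and binarizes the result by sign. It is a simplified stand-in, not the exact scheme used by iscc-sct: the random vectors replace real model embeddings, and the header encoding of step 5 is omitted.

import numpy as np

def aggregate_and_binarize(chunk_embeddings: np.ndarray, bits: int = 256) -> bytes:
    # Sum the per-chunk vectors into a single document vector,
    # then keep only the sign of the first `bits` dimensions.
    document_vector = chunk_embeddings.sum(axis=0)
    binary = (document_vector[:bits] >= 0).astype(np.uint8)
    return np.packbits(binary).tobytes()  # 32 bytes for a 256-bit body

rng = np.random.default_rng(0)
fake_embeddings = rng.normal(size=(5, 384))  # 5 chunks, 384-dim vectors (illustrative)
print(aggregate_and_binarize(fake_embeddings).hex())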

Development and Contributing

We welcome contributions to enhance the capabilities and efficiency of this proof of concept. For development, install the project in development mode using Poetry:

git clone https://github.com/iscc/iscc-sct.git
cd iscc-sct
poetry install

If you have suggestions for improvements or bug fixes, please open an issue or pull request. For major changes, please open an issue first to discuss your ideas.

We particularly welcome recommendations for other multilingual text embedding models trained with Matryoshka Representation Learning (MRL) and optimized for binarization. Such contributions could significantly improve the performance and efficiency of the ISCC Semantic Text-Code generation.

Gradio Demo

This repository also provides an interactive Gradio demo that allows you to explore the capabilities of ISCC Semantic Text-Code. The demo showcases:

  • Generation of ISCC Semantic Text-Codes for input texts
  • Comparison of two texts and their similarity based on the generated codes
  • Visualization of text chunking and granular matches
  • Adjustable parameters like ISCC bit-length and maximum tokens per chunk

You can access the live version of the Gradio demo at: https://huggingface.co/spaces/iscc/iscc-sct

Running the Gradio Demo Locally

To run the Gradio demo locally, you first need to install the iscc-sct package with the optional demo dependency:

pip install iscc-sct[demo]

This will ensure that Gradio and other necessary dependencies for the demo are installed.

After installation, you can use the sct command-line tool that comes with the package:

sct gui

This command will launch the Gradio interface in your default web browser, allowing you to interact with the demo on your local machine.

Supported Languages

Arabic, Armenian, Bengali, Bosnian, Bulgarian, Burmese, Catalan, Chinese (China), Chinese (Taiwan), Croatian, Czech, Danish, Dutch, English, Estonian, Farsi, Finnish, French, French (Canada), Galician, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Korean, Kurdish, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Marathi, Mongolian, Norwegian Bokmål, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Sinhala, Slovak, Slovenian, Spanish, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Vietnamese.

Future Work

Shift Resistant Semantic Chunking

The current chunking strategy tries to maximize chunk sizes (up to 127 tokens) while still splitting at lexically sensible boundaries, with an overlap of up to 48 tokens. See text-splitter.

Cross-document chunk matching via granular Simprints can likely be improved significantly with a semantically aware and shift-resistant chunking strategy. Better shift resistance would improve the chances that the boundaries detected for semantically similar text sequences in different documents are aligned.

MRL based Embeddings

A text embedding model trained with Matryoshka Representation Learning may yield better results with short 64-bit Semantic Text-Codes.
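
For illustration, the appeal of MRL is that the leading dimensions of an embedding carry most of the information, so a short code could be derived by truncating before binarization. The snippet below is a hedged sketch; the dimension counts and sign-based binarization are assumptions, not the project's actual scheme.

import numpy as np

def truncate_and_binarize(embedding: np.ndarray, bits: int = 64) -> bytes:
    # MRL-trained embeddings concentrate information in the leading dimensions,
    # so truncating to `bits` dimensions before sign-binarization remains meaningful.
    return np.packbits((embedding[:bits] >= 0).astype(np.uint8)).tobytes()

rng = np.random.default_rng(1)
print(truncate_and_binarize(rng.normal(size=768)).hex())  # 8-byte / 64-bit body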

Larger Chunk Sizes

A text embedding model that supports a larger maximum token count (currently 128) may yield higher-order granular simprints based on larger chunks of text.

