
Sentence Splitter

A Python package for sentence splitting using a pre-trained transformer model.

Description

Sentence Splitter is a Python package that provides accurate sentence segmentation using a transformer-based token-classification model. The model is bundled with the package, eliminating the need for additional downloads or configuration. It is designed to handle long texts efficiently and supports GPU acceleration when available.

Features

  • Transformer-Based Model: Leverages a pre-trained transformer model for high-accuracy sentence splitting.
  • Bundled Model: The model and tokenizer are included with the package—no extra downloads required.
  • Easy to Use: Simple API for quick integration into your projects.
  • Handles Long Texts: Efficiently processes long texts by splitting them into manageable chunks.
  • GPU Acceleration: Automatically utilizes CUDA if available for faster processing.

Installation

Install the package via pip:

pip install iges-sentence-splitter

or, for a CPU-only installation (quoted so the brackets are not interpreted by the shell):

pip install "iges-sentence-splitter[cpu]"

Requirements

  • Python 3.6 or higher
  • torch
  • transformers

Note: These dependencies will be installed automatically when you install the package via pip.

Usage

Basic Example

from sentence_splitter.splitter import SentenceSplitter

# Initialize the splitter
splitter = SentenceSplitter()

# Input text
text = "This is a test. Here is another sentence. And yet another one!"

# Get sentences
sentences = splitter.split(text)

print(sentences)

Output:

['This is a test.', 'Here is another sentence.', 'And yet another one!']

Processing Long Texts

The split method can handle long texts by splitting them into chunks. You can adjust the parameters as needed:

sentences = splitter.split(
    text,
    max_seq_len=512,  # Maximum sequence length for each chunk
    stride=100,       # Overlap between chunks to preserve context
    batch_size=24     # Number of chunks to process at once
)
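
For very long documents, the same API applies. Here is a minimal usage sketch (the file name and parameter values are illustrative, not defaults):

from sentence_splitter.splitter import SentenceSplitter

splitter = SentenceSplitter()

# Read a long document from disk; "article.txt" is a placeholder path.
with open("article.txt", encoding="utf-8") as f:
    long_text = f.read()

# A larger stride preserves more context across chunk boundaries at the
# cost of redundant computation; a larger batch_size trades memory for speed.
sentences = splitter.split(long_text, max_seq_len=512, stride=150, batch_size=8)
print(f"Found {len(sentences)} sentences")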

Reference

SentenceSplitter

A class for splitting text into sentences using a pre-trained transformer model.

Initialization

splitter = SentenceSplitter(device=None, efficient_mode=False)
  • Parameters:
    • device (str, optional): The device to run the model on ('cuda' or 'cpu'). Defaults to 'cuda' if available, otherwise 'cpu'.
    • efficient_mode (bool, optional): Whether to run the model in 8-bit precision for faster, more memory-efficient inference. Defaults to False (see the example below).
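
As a usage sketch of the parameters above (the argument values are illustrative):

from sentence_splitter.splitter import SentenceSplitter

# Force CPU execution, e.g. on machines without a CUDA-capable GPU.
cpu_splitter = SentenceSplitter(device="cpu")

# Enable 8-bit precision for more memory-efficient inference.
efficient_splitter = SentenceSplitter(efficient_mode=True)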

Methods

  • split(text, max_seq_len=512, stride=100, batch_size=4)

    Splits the input text into sentences.

    • Parameters:
      • text (str): The text to split.
      • max_seq_len (int, optional): Maximum sequence length for the model. Defaults to 512.
      • stride (int, optional): Number of tokens to overlap between chunks. Defaults to 100.
      • batch_size (int, optional): Number of chunks to process simultaneously. Defaults to 4.
    • Returns:
      • List[str]: A list of sentences.

How It Works

The package uses a token classification model that labels each token as:

  • B: Beginning of a sentence.
  • E: End of a sentence.
  • I: Inside a sentence.

By processing the tokens and their predicted labels, the splitter reconstructs the sentences accurately, even in complex texts.
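
To illustrate the idea, here is a simplified sketch of how sentences could be recovered from per-token labels. This is not the package's internal code; real tokenizers also produce subword pieces and require whitespace handling, both omitted here:

def reconstruct_sentences(tokens, labels):
    """Rebuild sentences from per-token B/E/I labels (simplified)."""
    sentences, current = [], []
    for token, label in zip(tokens, labels):
        if label == "B" and current:  # a new sentence begins mid-stream
            sentences.append(" ".join(current))
            current = []
        current.append(token)
        if label == "E":              # the current sentence ends here
            sentences.append(" ".join(current))
            current = []
    if current:                       # flush any trailing tokens
        sentences.append(" ".join(current))
    return sentences

tokens = ["This", "is", "a", "test", ".", "Here", "is", "another", "."]
labels = ["B", "I", "I", "I", "E", "B", "I", "I", "E"]
print(reconstruct_sentences(tokens, labels))
# ['This is a test .', 'Here is another .']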

Example: Splitting Complex Text

text = """
Despite the rain, the match continued. Players were determined; fans were cheering. 
"Unbelievable!" shouted the commentator. It's a night to remember.
"""

sentences = splitter.split(text)

for i, sentence in enumerate(sentences, 1):
    print(f"Sentence {i}: {sentence}")

Output:

Sentence 1: Despite the rain, the match continued.
Sentence 2: Players were determined; fans were cheering.
Sentence 3: "Unbelievable!" shouted the commentator.
Sentence 4: It's a night to remember.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Author

Acknowledgments

Contact

For any questions or suggestions, feel free to reach out via email.

