
Vedika - Sanskrit NLP Toolkit

Vedika is a comprehensive toolkit for Sanskrit text processing, offering deep learning-based tools for sandhi splitting and joining, text normalization, sentence splitting, syllabification, and tokenization.

Features

  • Sandhi Processing
    • Split compound Sanskrit words using attention-based neural networks
    • Join Sanskrit words with proper sandhi rules
    • Support for beam search to get multiple suggestions
  • Text Processing
    • Syllabification
    • Tokenization
    • Sentence splitting
    • Text normalization
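
Syllabification, for instance, operates at the level of the akshara (written syllable). The snippet below is a rough, hypothetical sketch of that idea using only the standard library; it is not vedika's implementation, just an illustration of how Devanagari characters group into syllable units.

```python
import re

# Hypothetical sketch (not vedika's implementation): group Devanagari
# characters into aksharas (written syllables) with a regular expression.
AKSHARA = re.compile(
    r"(?:[\u0915-\u0939]\u094D)*"  # conjunct part: consonant + virama, repeated
    r"[\u0915-\u0939]"             # base consonant
    r"[\u093E-\u094C]?"            # optional dependent vowel sign
    r"\u094D?"                     # optional final virama (bare consonant)
    r"|[\u0904-\u0914]"            # or an independent vowel
)

def syllabify(word):
    units = AKSHARA.findall(word)
    syllables = []
    for u in units:
        # A virama-final unit is a vowel-less consonant; attach it to the
        # preceding syllable, as traditional syllabification does.
        if syllables and u.endswith("\u094D"):
            syllables[-1] += u
        else:
            syllables.append(u)
    return syllables

print(syllabify("रामायणम्"))  # ['रा', 'मा', 'य', 'णम्']
```

A production syllabifier also has to handle anusvara, visarga, and other signs; this sketch covers only consonants, vowel signs, and virama.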

Installation

# Install from PyPI
pip install vedika

# Install from source
git clone https://github.com/tanuj437/vedika.git
cd vedika
pip install -e .

Requirements

  • Python >= 3.8
  • PyTorch >= 1.9.0
  • NumPy >= 1.19.0
  • Pandas >= 1.3.0
  • tqdm >= 4.62.0
  • regex >= 2021.8.3

Quick Start

Sandhi Splitting

from vedika import SanskritSplit

# Initialize splitter
splitter = SanskritSplit()

# Split a single word
result = splitter.split("रामायणम्")
print(result['split'])  # Output: राम + अयन + अम्

# Batch processing
words = ["रामायणम्", "गीतागोविन्दम्"]
results = splitter.split_batch(words)
for result in results:
    print(f"{result['input']} → {result['split']}")

Sandhi Joining

from vedika import SandhiJoiner

# Initialize joiner
joiner = SandhiJoiner()

# Join split words
result = joiner.join("राम+अस्ति")
print(result)  # Output: रामास्ति

# Batch processing
texts = ["राम+अस्ति", "गच्छ+अमि"]
results = joiner.join_batch(texts)
print(results)  # ['रामास्ति', 'गच्छामि']
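
Under the hood the joiner is a neural model, but the simplest sandhi rules are easy to picture by hand. The following hypothetical, rule-based sketch implements just one rule, savarṇa-dīrgha (a + a → ā), which happens to cover both examples above; everything in it is illustrative and not the vedika API.

```python
# Hypothetical rule-based illustration (not vedika's neural joiner):
# the savarna-dirgha rule a + a -> aa at a Devanagari word boundary.

AA_MATRA = "\u093E"       # dependent vowel sign aa: ा
INDEPENDENT_A = "\u0905"  # independent vowel a: अ

def join_a_a(first, second):
    """Join two words when the first ends in an inherent 'a' and the
    second begins with 'a': the two vowels coalesce into long 'aa'."""
    # A bare Devanagari consonant (U+0915..U+0939) carries an inherent 'a'.
    if "\u0915" <= first[-1] <= "\u0939" and second.startswith(INDEPENDENT_A):
        return first + AA_MATRA + second[1:]
    return first + second  # no rule matched: plain concatenation

print(join_a_a("राम", "अस्ति"))  # रामास्ति
print(join_a_a("गच्छ", "अमि"))   # गच्छामि
```

Real sandhi involves many interacting vowel, visarga, and consonant rules with exceptions, which is why vedika learns them from data instead of hard-coding them.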

Advanced Usage

Beam Search for Multiple Suggestions

# Get multiple suggestions with beam search
result = splitter.split("रामायणम्", beam_size=3)
print(f"Best split: {result['split']}")
print(f"Confidence: {result['confidence']}")
print("Alternatives:")
for alt in result['alternatives']:
    print(f"- {alt['split']} (confidence: {alt['confidence']})")
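
`beam_size` controls how many partial hypotheses the decoder keeps at each step. The generic, self-contained sketch below shows the algorithm over a toy character model; the transition probabilities are invented for the demo, and in vedika the scorer would be the trained decoder network.

```python
import math

def beam_search(next_probs, beam_size=3, max_len=4):
    """Keep the beam_size best partial sequences at every decoding step.
    Scores are summed negative log-probabilities, so lower is better."""
    beams = [(0.0, "^")]  # (score, sequence); "^" is the start symbol
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            for ch, p in next_probs[seq[-1]].items():
                cand = (score - math.log(p), seq + ch)
                # "$" marks end-of-sequence: move to the finished pool.
                (finished if ch == "$" else candidates).append(cand)
        beams = sorted(candidates)[:beam_size]
        if not beams:
            break
    return sorted(finished)[:beam_size]

# Toy first-order model: next-character probabilities given the last character.
toy_model = {
    "^": {"a": 0.6, "b": 0.4},
    "a": {"b": 0.5, "$": 0.5},
    "b": {"a": 0.6, "$": 0.4},
}
for score, seq in beam_search(toy_model, beam_size=3):
    print(seq, round(score, 3))
```

With `beam_size=1` this degenerates to greedy decoding; larger beams trade compute for a better chance of finding the globally best split.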

Model Information

# Get model details
info = splitter.get_model_info()
print(f"Vocabulary size: {info['vocabulary_size']}")
print(f"Device: {info['device']}")
print(f"Configuration: {info['model_config']}")

Project Structure

vedika/
├── __init__.py
├── normalizer.py
├── sandhi_join.py
├── sandhi_split.py
├── sentence_splitter.py
├── syllabification.py
├── tokenizer.py
└── data/
    ├── cleaned_metres.json
    ├── sandhi_joiner.pth
    └── sandhi_split.pth

Model Architecture

The sandhi processing models use:

  • Bidirectional LSTM encoder
  • GRU decoder with attention
  • Multi-head attention mechanism
  • Character-level processing
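
Character-level processing means each Devanagari character, rather than a word or subword, is a model token. A hypothetical sketch of that preprocessing step (the special tokens and helper names are assumptions, not the actual vedika internals):

```python
# Hypothetical sketch of character-level preprocessing: build a character
# vocabulary and encode a word as the id sequence an encoder would consume.
SPECIALS = ["<pad>", "<s>", "</s>", "<unk>"]

def build_vocab(words):
    chars = sorted({ch for w in words for ch in w})
    return {tok: i for i, tok in enumerate(SPECIALS + chars)}

def encode(word, vocab):
    unk = vocab["<unk>"]
    return [vocab["<s>"]] + [vocab.get(ch, unk) for ch in word] + [vocab["</s>"]]

vocab = build_vocab(["रामायणम्", "गीतागोविन्दम्"])
ids = encode("राम", vocab)
print(ids)  # five ids: <s>, र, ा, म, </s>
```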

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Authors

  • Tanuj Saxena
  • Soumya Sharma

Citation

If you use Vedika in your research, please cite:

@software{vedika2025,
  title={Vedika: A Sanskrit Text Processing Toolkit},
  author={Saxena, Tanuj and Sharma, Soumya},
  year={2025},
  url={https://github.com/tanuj437/vedika}
}



Download files

Download the file for your platform.

Source Distribution

vedika-0.0.10.tar.gz (32.3 kB)

Built Distribution

vedika-0.0.10-py2.py3-none-any.whl (35.8 kB)

File details

Details for the file vedika-0.0.10.tar.gz.

File metadata

  • Download URL: vedika-0.0.10.tar.gz
  • Upload date:
  • Size: 32.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for vedika-0.0.10.tar.gz

  • SHA256: da47c7a802b5e0eb3763a66dd65fff6119e51350ede396a5d7fbc0b7d8261a25
  • MD5: cf8c271f5cbc57fe1f392cbe4dfdacef
  • BLAKE2b-256: d063f0b1876e19288e20038a6f872227c620b9b8ce73cf576e28cf48c5897d4b

File details

Details for the file vedika-0.0.10-py2.py3-none-any.whl.

File metadata

  • Download URL: vedika-0.0.10-py2.py3-none-any.whl
  • Upload date:
  • Size: 35.8 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for vedika-0.0.10-py2.py3-none-any.whl

  • SHA256: fa16ba6f65c59a2b83db45eace0c0a5f9ed6906e9ebeb7a3d07419b5f6ef637e
  • MD5: d88b64124d6c5446945dcf8cd79a82d3
  • BLAKE2b-256: 3bd4a08686948a134f1f61d0176047b0a2adf93e76f18452c7a03c6379d08de2
