Python SDK for creating verse-based content sites with AI translations, multimedia (images, audio), semantic search, RAG-grounded Puranic context, and deployment

Project description

Sanatan Verse SDK - Python SDK for Spiritual Verse Collections

Complete toolkit for generating rich multimedia content for spiritual text collections (Hanuman Chalisa, Sundar Kaand, etc.)

Features

  • 🔄 Complete Workflow: Generate media and embeddings from canonical sources - all in one command
  • 📖 Canonical Sources: Local YAML files ensure text accuracy and quality
  • 🎨 AI Images: Generate themed images with DALL-E 3
  • 🎵 Audio Pronunciation: Full and slow-speed audio with ElevenLabs
  • 🔍 Semantic Search: Vector embeddings for intelligent verse discovery
  • 📚 Multi-Collection: Organized support for multiple verse collections
  • 🎨 Theme System: Customizable visual styles (modern, traditional, kids-friendly, etc.)

Quick Start

Start here: End-to-End Workflow

Fastest Bootstrap

# Brand new project directory
mkdir my-verse-project
cd my-verse-project

# 1) Create and activate virtualenv
python3 -m venv .venv
source .venv/bin/activate

# 2) Install SDK
pip install sanatan-verse-sdk

# 3) Scaffold project
verse-init --collection hanuman-chalisa

# 4) Initialize git repo (run after scaffolding to avoid the non-empty-directory prompt)
git init

See full command docs: verse-init

New Project Setup (Recommended)

# 1. Install
pip install sanatan-verse-sdk

# 2. Create project with collection templates
verse-init --project-name my-verse-project --collection hanuman-chalisa
cd my-verse-project

# 3. Configure API keys
cp .env.example .env
# Edit .env and add your API keys from:
# - OpenAI: https://platform.openai.com/api-keys
# - ElevenLabs: https://elevenlabs.io/app/settings/api-keys

# 4. Add canonical Devanagari text
# Edit data/verses/hanuman-chalisa.yaml with actual verse text

# 5. Validate setup
verse-validate

# 6. Generate multimedia content
verse-generate --collection hanuman-chalisa --verse 1

What you get: Verse file, AI-generated image, audio (full + slow speed), and search embeddings!

Existing Project

# Validate and fix structure
verse-validate --fix

# Generate content
verse-generate --collection hanuman-chalisa --verse 15

# Check status
verse-status --collection hanuman-chalisa

Advanced Usage

# Multiple collections at once
verse-init --collection hanuman-chalisa --collection sundar-kaand

# Custom number of sample verses
verse-init --collection my-collection --num-verses 10

# Generate specific components only
verse-generate --collection sundar-kaand --verse 3 --image
verse-generate --collection sundar-kaand --verse 3 --audio

# Skip embeddings update (faster)
verse-generate --collection hanuman-chalisa --verse 15 --no-update-embeddings

What Gets Generated

Each verse generation creates:

  • 🎨 Image: images/{collection}/{theme}/verse-01.png (DALL-E 3)
  • 🎵 Audio (full): audio/{collection}/verse-01-full.mp3 (ElevenLabs)
  • 🎵 Audio (slow): audio/{collection}/verse-01-slow.mp3 (0.75x speed)
  • 🔍 Embeddings: data/embeddings/collections/{collection}.json + data/embeddings/collections/index.json (for semantic search)

Text Source: Canonical Devanagari text from data/verses/{collection}.yaml (Local Verses Guide)
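The canonical text source is a plain YAML file you edit by hand. As an illustration only — the field names below are assumptions, not the SDK's actual schema; defer to the Local Verses Guide for the real layout:

```yaml
# data/verses/hanuman-chalisa.yaml -- illustrative structure, not the official schema
collection: hanuman-chalisa
verses:
  - id: 1
    devanagari: "जय हनुमान ज्ञान गुन सागर"
    transliteration: "jaya hanumāna jñāna guna sāgara"
```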

Migration Note: Legacy combined embeddings (data/embeddings.json) are no longer written by default. Use verse-embeddings --legacy-output if you still need the combined file.

Puranic Context Generation

Enrich verse pages with grounded story references from indexed sacred texts. Two-stage workflow:

Stage 1 — Index a Source Text

verse-index-sources --file data/sources/ananda-ramayana.txt

This command:

  1. Splits the source text into ~4000-char chunks
  2. Parses each chunk into discrete named episodes (keywords, type, summary in English + Hindi)
  3. Generates embeddings for each episode
  4. Writes outputs:
    • data/puranic-index/{key}.yml — human-readable episode index with _meta section
    • data/embeddings/puranic/{key}.json — embedding vectors for RAG retrieval
    • data/puranic-references.yml — registry of indexed sources

Indexing needs to run only once per source, and again only when the source file changes.
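The ~4000-char chunking in step 1 can be sketched as a simple paragraph-aware splitter. This is a naive illustration, not the SDK's implementation (`chunk_text` is a hypothetical helper; the real command may split on sentence boundaries or token counts):

```python
from typing import List

def chunk_text(text: str, chunk_size: int = 4000) -> List[str]:
    """Split text into chunks of at most roughly chunk_size characters,
    preferring to break at paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Start a new chunk if appending this paragraph would overflow.
        if current and len(current) + len(para) + 2 > chunk_size:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

paras = ["x" * 1500] * 5
chunks = chunk_text("\n\n".join(paras), chunk_size=4000)
print([len(c) for c in chunks])  # each chunk stays under 4000 chars
```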

# Use Bedrock Cohere for better Sanskrit/Hindi accuracy
verse-index-sources --file data/sources/shiv-puran.txt --provider bedrock-cohere

# If Bedrock input exceeds limits, use truncation policy
verse-embeddings --provider bedrock-cohere --truncate-policy chunk

# Larger chunk size for dense Puranic prose
verse-index-sources --file data/sources/valmiki-ramayana.pdf --chunk-size 6000

Stage 2 — Generate Puranic Context per Verse

verse-puranic-context --collection hanuman-chalisa --all

For each verse this command:

  1. Embeds the verse text using the same provider as the indexed source
  2. Runs cosine similarity search across all indexed sources to find the most relevant episodes
  3. Filters to episodes involving the collection's subject (configured in _data/collections.yml)
  4. Passes top episodes + verse text to GPT-4o with citation constraints
  5. Post-validates each entry: drops entries where the subject is not an active participant
  6. Writes a puranic_context: block into the verse's .md frontmatter

# Skip verses that already have context (default)
verse-puranic-context --collection hanuman-chalisa --all

# Regenerate all existing entries
verse-puranic-context --collection hanuman-chalisa --all --regenerate

# Single verse
verse-puranic-context --collection hanuman-chalisa --verse chaupai-06
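The cosine-similarity retrieval in step 2 is the standard formula; a dependency-free sketch under assumed names (`top_episodes` and the episode labels are illustrative, not the SDK's API):

```python
import math
from typing import List, Tuple

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """cos(a, b) = (a . b) / (|a| * |b|); 0.0 when either vector is zero."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_episodes(verse_vec: List[float],
                 episodes: List[Tuple[str, List[float]]],
                 k: int = 3) -> List[Tuple[float, str]]:
    """Rank indexed episodes by similarity to the verse embedding."""
    scored = [(cosine_similarity(verse_vec, vec), name) for name, vec in episodes]
    return sorted(scored, reverse=True)[:k]

episodes = [
    ("lanka-crossing", [0.9, 0.1, 0.0]),
    ("tandava", [0.0, 0.2, 0.9]),
]
print(top_episodes([1.0, 0.0, 0.0], episodes, k=1))
```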

Collection Subject Configuration

The subject filter is resolved via a two-level hierarchy — no CLI flag needed:

Option A — Project-level default (single-subject projects): set once in _data/verse-config.yml, applies to all collections:

# _data/verse-config.yml
defaults:
  subject: Hanuman
  subject_type: deity

Option B — Collection-level override: set per collection in _data/collections.yml (takes priority over project default):

# _data/collections.yml
hanuman-chalisa:
  subject: Hanuman      # overrides or supplements project default
  subject_type: deity

krishna-bhajans:
  subject: Krishna      # different subject for this collection
  subject_type: deity

Resolution order: collection-level → project default → error if neither is set and indexed sources exist.
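The resolution order amounts to a dictionary fallback chain. A minimal sketch, assuming parsed YAML dicts as input (`resolve_subject` is an illustrative name, not the SDK's function):

```python
def resolve_subject(collection: str, collections_cfg: dict, project_cfg: dict) -> dict:
    """Collection-level subject wins; fall back to project defaults; else error."""
    entry = collections_cfg.get(collection, {})
    if "subject" in entry:
        return {"subject": entry["subject"], "subject_type": entry.get("subject_type")}
    defaults = project_cfg.get("defaults", {})
    if "subject" in defaults:
        return {"subject": defaults["subject"], "subject_type": defaults.get("subject_type")}
    raise ValueError(f"No subject configured for collection {collection!r}")

collections_cfg = {"hanuman-chalisa": {"subject": "Hanuman", "subject_type": "deity"}}
project_cfg = {"defaults": {"subject": "Krishna", "subject_type": "deity"}}
print(resolve_subject("hanuman-chalisa", collections_cfg, project_cfg)["subject"])  # Hanuman
print(resolve_subject("krishna-bhajans", collections_cfg, project_cfg)["subject"])  # Krishna
```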

Multiple Sources

Multiple indexed sources are automatically combined in RAG retrieval:

_data/verse-config.yml         ← set defaults.subject here

data/sources/
  shiv-puran-part1.txt
  ananda-ramayana.txt        ← add new sources here

data/puranic-index/
  shiv-puran-part1.yml       ← auto-generated episode index
  ananda-ramayana.yml

data/embeddings/
  puranic/
    shiv-puran-part1.json      ← auto-generated embedding vectors
    ananda-ramayana.json

See verse-index-sources and verse-puranic-context for full documentation.

Migration Note: Puranic embeddings now live under data/embeddings/puranic/. If you have legacy files in data/embeddings/{source}.json, move them or re-run verse-index-sources to regenerate.

Installation

pip install sanatan-verse-sdk

Commands

Project Setup

  • verse-init - Initialize new project with recommended structure
  • verse-validate - Validate project structure and configuration

Content Generation

Puranic Context

  • verse-index-sources - Index Puranic source texts (PDFs, TXTs) into episodes and embeddings for RAG retrieval
  • verse-puranic-context - Generate Puranic context boxes for verses (RAG-grounded or GPT-4o free recall)

Project Management

  • verse-add - Add new verse entries to collections (supports multi-chapter formats)
  • verse-status - Check status, completion, and validate text against canonical source
  • verse-sync - Sync verse text with canonical source (fix mismatches)
  • verse-deploy - Deploy Cloudflare Worker for API proxy

Embeddings Config

  • embeddings.yml - Shared defaults and precedence (CLI > config > env > defaults)
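The stated precedence (CLI > config > env > defaults) can be sketched as a first-non-empty fallback chain. Names here are illustrative, not the SDK's internals:

```python
import os

def resolve_option(name, cli_args, config, defaults, env_prefix="VERSE_"):
    """First source that supplies a value wins: CLI flag, config file,
    environment variable, then built-in default."""
    if cli_args.get(name) is not None:
        return cli_args[name]
    if name in config:
        return config[name]
    env_val = os.environ.get(env_prefix + name.upper())
    if env_val is not None:
        return env_val
    return defaults.get(name)

defaults = {"provider": "openai"}
config = {"provider": "bedrock-cohere"}
print(resolve_option("provider", {"provider": None}, config, defaults))      # bedrock-cohere
print(resolve_option("provider", {"provider": "openai"}, config, defaults))  # openai
```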

Configuration

Copy the example environment file and add your API keys:

cp .env.example .env
# Edit .env and add your API keys
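A minimal .env might look like the sketch below; the variable names are assumptions based on the providers listed under Requirements, so defer to whatever .env.example actually defines:

```shell
# .env -- variable names are illustrative; use the keys from .env.example
OPENAI_API_KEY=sk-...
ELEVENLABS_API_KEY=...
```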

See the End-to-End Workflow for the full lifecycle, and the Usage Guide for advanced workflows and best practices.

Example Project

Hanuman GPT - Multi-collection project with Hanuman Chalisa, Sundar Kaand, and Sankat Mochan Hanumanashtak

Requirements

  • Python 3.8+
  • OpenAI API key (for text/images/embeddings)
  • ElevenLabs API key (for audio)

License

MIT License - See LICENSE file for details


Download files

Source Distribution

sanatan_verse_sdk-0.68.0.tar.gz (141.1 kB)

Built Distribution

sanatan_verse_sdk-0.68.0-py3-none-any.whl (160.8 kB)

File details

sanatan_verse_sdk-0.68.0.tar.gz

  • Size: 141.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

Hashes for sanatan_verse_sdk-0.68.0.tar.gz
Algorithm Hash digest
SHA256 eda16c51ec902d5dd403a454bbd9d88afd64dc20872f91cae800878449a54d32
MD5 0a793a7865f899a4161bdf91d42bbf15
BLAKE2b-256 50804a19006d49ef378f77d476b2cda5500088562c217c1442890e8e5f2b5c92

sanatan_verse_sdk-0.68.0-py3-none-any.whl

Hashes for sanatan_verse_sdk-0.68.0-py3-none-any.whl
Algorithm Hash digest
SHA256 3f5e3398d6f770a92f7c1f9d7b2961542824746a095ae484e1b8d602141af0e9
MD5 3cad2f5d61d53a1e1d8aa89469ea0a1c
BLAKE2b-256 94b2645ff4ce9613219f6a3c41402866cee5b7c2770dc3480f5e2324e8dcc054
