# ai-text-outline

Extract Table of Contents from Tibetan texts and return section start indices.

## Overview
ai-text-outline is a simple Python package that extracts Table of Contents (དཀར་ཆག) from Tibetan text and returns character indices where each section begins.
Perfect for:
- 📚 Digital publishing - Index Tibetan manuscripts automatically
- 🔍 Text analysis - Locate sections in large Tibetan documents
- 🤖 Backend integration - Add ToC extraction to your pipeline
- 📱 Web applications - Power frontend outlining tools
## Features

### ✨ Simple & Fast

- Send the first 1/5 of the text to Gemini
- Get ToC titles back as JSON
- Find each title in the full text (skip the first occurrence, which is the ToC entry itself, and use the second)
- Return sorted character indices
### 🌍 Tibetan Native

- Full Unicode Tibetan support
- Handles དཀར་ཆག section markers
- Preserves the original Tibetan text
### 💰 Cost Efficient

- Uses only Google Gemini
- Sends minimal text (1/5 of the document)
- ~$0.0001 per extraction
## Installation

```bash
pip install ai-text-outline
```

Requires: Python 3.9+, Google Generative AI SDK (installed automatically)
## Quick Start

### 1. Get a Gemini API Key

Get a free key at https://ai.google.dev/

### 2. Set the Environment Variable

```bash
export GEMINI_API_KEY="your-api-key"
```

### 3. Extract the ToC
```python
from ai_text_outline import extract_toc_indices

# From a file
indices = extract_toc_indices(file_path='tibetan_text.txt')

# Or from a text string
text = open('tibetan_text.txt', encoding='utf-8').read()
indices = extract_toc_indices(text=text)

print(indices)  # [150, 2450, 5200, ...]
```
## API Reference

### `extract_toc_indices()`

```python
def extract_toc_indices(
    file_path: str | None = None,
    text: str | None = None,
    *,
    gemini_api_key: str | None = None,
) -> list[int]
```
#### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `file_path` | `str \| None` | `None` | Path to a Tibetan text file (UTF-8) |
| `text` | `str \| None` | `None` | Raw text string (mutually exclusive with `file_path`) |
| `gemini_api_key` | `str \| None` | `None` | Gemini API key; falls back to the `GEMINI_API_KEY` env var if not provided |
#### Returns

`list[int]` - Sorted character indices where each ToC section begins. Empty list `[]` if no ToC is found.
#### Raises

| Exception | When |
|---|---|
| `ValueError` | Neither or both of `file_path` and `text` provided; or no API key found |
| `FileNotFoundError` | `file_path` doesn't exist |
| `UnicodeDecodeError` | File is not UTF-8 encoded |
| `ImportError` | `google-generativeai` SDK not installed |
#### Example

```python
from ai_text_outline import extract_toc_indices

text = open('book.txt', encoding='utf-8').read()
indices = extract_toc_indices(text=text)

# Use the indices to extract sections
for i, start_idx in enumerate(indices):
    end_idx = indices[i + 1] if i + 1 < len(indices) else len(text)
    section = text[start_idx:end_idx]
    print(f"Section {i + 1}: {len(section)} chars")
```
## How It Works

```
Input text (file or string)
        │
        ▼
Load text
        │
        ▼
Extract first 1/5 of text (with context-aware fallback)
  If context limit exceeded:
  ├─ Retry with 1/10 of text
  └─ If still exceeded, retry with 1/100 of text
        │
        ▼
Send to Gemini API
  → Extracts ToC titles
  → Returns JSON: {"toc": {"Title": page_num, ...}}
        │
        ▼
For each title:
  Find all matches in full text (limit 10 per title)
  ├── 2+ matches → use matches[1].start() (skip the ToC entry itself)
  └── 0 or 1 match → skip
        │
        ▼
Return sorted list of indices
```
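The title-matching step above can be sketched in code (a simplified illustration of the rule, not the package's actual internals; `find_title_index` is a hypothetical helper):

```python
import re
from typing import Optional

def find_title_index(title: str, text: str, max_matches: int = 10) -> Optional[int]:
    """Start index of a title's second occurrence, or None.

    The first occurrence is assumed to be the ToC entry itself, so the
    second occurrence marks where the section body actually begins.
    """
    matches = []
    for m in re.finditer(re.escape(title), text):
        matches.append(m)
        if len(matches) >= max_matches:  # cap the scan per title
            break
    if len(matches) >= 2:
        return matches[1].start()
    return None  # 0 or 1 match: skip this title

sample = "TOC: chapter-one ... chapter-one begins here"
print(find_title_index("chapter-one", sample))  # 21 (start of second occurrence)
```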
### Context Overflow Handling

For very large texts (> 5 MB), extraction automatically handles Gemini API context limits:
- First attempt: Send first 1/5 of text (default)
- If context exceeded: Automatically retry with first 1/10 of text
- If still exceeded: Retry with first 1/100 of text
- If all fail: Return empty list (no ToC found)
This ensures the package works with texts of any size without manual intervention.
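The retry behavior can be pictured as a small loop (a sketch of the steps above, not the actual implementation; `call_gemini` and `ContextLimitError` stand in for the real API call and its error detection):

```python
class ContextLimitError(Exception):
    """Stand-in for 'prompt exceeds the model's context window'."""

def extract_with_fallback(text, call_gemini, divisors=(5, 10, 100)):
    """Try progressively smaller leading slices until one fits."""
    for d in divisors:
        head = text[: len(text) // d]
        try:
            return call_gemini(head)  # e.g. a list of ToC titles
        except ContextLimitError:
            continue  # slice still too large; shrink and retry
    return []  # all attempts exhausted: treated as "no ToC found"
```

With divisors (5, 10, 100), a 10 MB text is retried at roughly 2 MB, 1 MB, and finally 100 KB before giving up.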
## Examples

### Example 1: Extract from a File
```python
from ai_text_outline import extract_toc_indices
import os

os.environ['GEMINI_API_KEY'] = 'AIzaSy...'

indices = extract_toc_indices(file_path='texts/book.txt')
print(f"Found {len(indices)} sections")
print(indices)  # [0, 450, 2100, 5800, ...]
```
### Example 2: Extract Sections

```python
from ai_text_outline import extract_toc_indices

indices = extract_toc_indices(file_path='book.txt')
text = open('book.txt', encoding='utf-8').read()

# Split into sections
sections = []
for i, start_idx in enumerate(indices):
    end_idx = indices[i + 1] if i + 1 < len(indices) else len(text)
    sections.append(text[start_idx:end_idx])

for i, section in enumerate(sections):
    print(f"Section {i}: {len(section)} chars")
```
### Example 3: With a Custom API Key

```python
from ai_text_outline import extract_toc_indices

# Pass the API key directly instead of via the env var
indices = extract_toc_indices(
    file_path='text.txt',
    gemini_api_key='AIzaSy...',
)
```
### Example 4: Flask Backend

```python
from flask import Flask, request
from ai_text_outline import extract_toc_indices

app = Flask(__name__)

@app.post('/api/extract-toc')
def extract_toc():
    """Extract a ToC from an uploaded text file."""
    data = request.json
    file_path = data.get('file_path')
    text_content = data.get('text')
    try:
        indices = extract_toc_indices(
            file_path=file_path,
            text=text_content,
        )
        return {
            'success': True,
            'indices': indices,
            'count': len(indices),
        }
    except ValueError as e:
        return {'error': str(e)}, 400
    except Exception as e:
        return {'error': f'Extraction failed: {e}'}, 500
```
## Error Handling

### No API Key Found

```
ValueError: No Gemini API key. Set GEMINI_API_KEY env var or pass gemini_api_key=
```

Solution:

```bash
export GEMINI_API_KEY="your-key"
```

Or pass it directly:

```python
extract_toc_indices(text=text, gemini_api_key='your-key')
```

### File Not Found

```
FileNotFoundError: [Errno 2] No such file or directory: 'text.txt'
```

Solution: check that the file path exists:

```python
from pathlib import Path
assert Path('text.txt').exists()
```

### Empty Result

If extraction returns `[]`, the text may not have a clear ToC structure that Gemini can extract.
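Callers can guard against the empty result by treating the whole document as a single section, for example (an illustrative pattern, not part of the package; `split_sections` is a hypothetical helper):

```python
def split_sections(text, indices):
    """Split text at the given start indices; whole text if none found."""
    if not indices:
        return [text]  # no ToC detected: one section covering everything
    starts = indices if indices[0] == 0 else [0] + indices  # keep any preamble
    ends = starts[1:] + [len(text)]
    return [text[s:e] for s, e in zip(starts, ends)]

print(split_sections("abcdef", []))      # ['abcdef']
print(split_sections("abcdef", [2, 4]))  # ['ab', 'cd', 'ef']
```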
## Performance

| Text Size | Time | Notes |
|---|---|---|
| < 100 KB | 0.5-1 s | API latency dominant |
| 100 KB - 1 MB | 1-2 s | First 1/5 sent to Gemini |
| 1-5 MB | 2-3 s | Larger prompt sent to Gemini |
| > 5 MB | 3-5 s | Auto-fallback to a 1/10 or 1/100 slice if needed |

**Cost:** ~$0.0001 per extraction (using a Gemini Flash model)

**Context limits:** The package automatically handles Gemini's context window by progressively reducing the text slice (1/5 → 1/10 → 1/100) as needed. It works reliably with texts of 50 MB and beyond.
## Testing

Run the tests:

```bash
pip install -e ".[dev]"
pytest
pytest --cov=ai_text_outline
```

Tests: 32 passing (including 8 new context overflow tests)
### Test Coverage

- **Parsing tests:** JSON response handling with edge cases
- **Integration tests:** full extraction pipeline with a mocked Gemini client
- **Context overflow tests:**
  - Retry mechanism with progressive text-slice reduction (1/5 → 1/10 → 1/100)
  - Success on the first attempt stops retrying
  - Non-context errors are properly raised
  - Exhausting all attempts returns an empty list
## Requirements

- Python 3.9 or higher
- A Google Gemini API key (free tier available)
- An internet connection (for Gemini API calls)
## License

MIT License - see the LICENSE file for details.
## Support

- 📖 Documentation: this README
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
## Citation

If you use this package in research:

```bibtex
@software{ai_text_outline,
  title={ai-text-outline: Extract Table of Contents from Tibetan texts},
  author={OpenPecha},
  url={https://github.com/OpenPecha/ai-text-outline},
  year={2026},
  license={MIT}
}
```
## Changelog

### v0.2.1 (current)

- 🔄 Context overflow handling: automatic retry with progressive text-slice reduction (1/5 → 1/10 → 1/100)
- 🧪 Enhanced tests: 32 passing tests, including 8 new context overflow tests
- 📚 Improved documentation: added a context-handling explanation to the README
- 🛡️ Robust error handling: detect and handle context/quota/token limit errors

### v0.2.0

- 🎉 Complete simplification: Gemini-only, no multi-provider support
- ⚡ Regex-based index finding (no fuzzy matching)
- 💪 Minimal dependencies: only `google-generativeai`
- 🧪 14 passing tests
- 📖 Simplified API with clear documentation
### v0.1.1

- ✨ Multi-provider LLM support
- 🔍 Fuzzy matching with position ranking
- 📚 Comprehensive documentation

### v0.1.0

- 🎉 Initial release
- དཀར་ཆག detection and parsing