# dygest: Document Insights Generator
> [!NOTE]
> dygest is a text analysis tool that extracts insights from `.txt` files, generating summaries, keywords, TOCs, and performing Named Entity Recognition (NER).
## Info

dygest was created to gain fast insights into longer transcripts of audio and video content by retrieving relevant topics and providing an easy-to-use HTML interface with shortcuts from summaries to the corresponding text chunks. NER processing further enhances these insights by identifying names of individuals, organisations, locations, etc.
## Features

- **Text insights**: Generate concise insights for your text files using various LLM services by creating summaries, keywords, a table of contents (TOC) and named entities (NER).
- **Unified LLM interface**: dygest uses `litellm` and provides integration for various LLM service providers: `OpenAI`, `Anthropic`, `HuggingFace`, `Groq`, `Ollama`, etc. Check the complete provider list for all available services.
- **Token friendly**: dygest performs token-heavy text analysis and summarization tasks. The underlying LLM pipeline can therefore be tailored to your needs and specific rate limits using a mixed-experts approach.
- **Mixed-experts approach**: dygest utilizes two fully customizable LLMs to handle different processing tasks. The first, referred to as the `light_model`, is designed for lighter tasks such as summarization and keyword extraction. The second, called the `expert_model`, is optimized for more complex tasks like constructing the TOC. This flexibility allows for various pipeline configurations: for example, the `light_model` can run locally using `Ollama`, while the `expert_model` can leverage an external API service like `OpenAI` or `Groq`. This approach ensures efficiency and adaptability based on specific requirements.

  > [!TIP]
  > As the `expert_model` deals with a lot of input content, it is recommended to use a larger LLM (>= 32B) for this task. Smaller LLMs (3B to 7B) perform well as the `light_model`.

- **Named Entity Recognition (NER)**: Named Entity Recognition via the fast and reliable `flair` framework (identifies persons, organisations, locations, etc.).
- **User-friendly HTML editor**: By default dygest creates a `.html` file that can be viewed in standard browsers and combines summaries, keywords, TOC and NER for your text. It features a text editor for making further changes.
- **Export formats**: `.json`, `.csv`, `.html`
## Requirements

- Python `>= 3.10`
- API keys for LLM services like `OpenAI`, `Anthropic` and `Groq`, and/or a running `Ollama` instance

> [!NOTE]
> API keys have to be stored in your environment (e.g. `export OPENAI_API_KEY=skj....`)
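For example, keys can be exported in your shell before running dygest. A sketch for two providers (the key values are placeholders; the `GROQ_API_KEY` variable name is the one `litellm` conventionally reads, not taken from this README):

```shell
# Store provider API keys in the shell environment (values are placeholders)
export OPENAI_API_KEY="sk-..."
export GROQ_API_KEY="gsk_..."
```

Add these lines to your `~/.bashrc` or `~/.zshrc` to persist them across sessions.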
## Installation

### Install with pip

```shell
# Create a Python virtual environment
python3 -m venv venv

# Activate the environment
source venv/bin/activate

# Install dygest
pip install dygest
```
### Install from source

```shell
# Clone this repository
git clone https://github.com/tsmdt/dygest.git
cd dygest

# Create a Python virtual environment
python3 -m venv venv

# Activate the environment
source venv/bin/activate

# Install dygest
pip install .
```
## Usage

### Configuration

Customize the dygest LLM pipeline by running the `dygest config` command:

```
Usage: dygest config [OPTIONS]

  Configure LLMs, Embeddings and Named Entity Recognition.

Options:
  --light_model      -l     TEXT     LLM model name for lighter tasks (summarization, keywords). [default: None]
  --expert_model     -x     TEXT     LLM model name for heavier tasks (TOCs). [default: None]
  --embedding_model  -e     TEXT     Embedding model name. [default: None]
  --temperature      -t     FLOAT    Temperature of LLM. [default: None]
  --sleep            -s     FLOAT    Pause between LLM requests to prevent rate limit errors (in seconds). [default: None]
  --chunk_size       -c     INTEGER  Maximum number of tokens per chunk. [default: None]
  --ner / --no-ner                   Enable Named Entity Recognition (NER). Defaults to False. [default: no-ner]
  --precise / --fast                 Enable precise mode for NER. Defaults to fast mode. [default: fast]
  --lang             -lang  TEXT     Language of file(s) for NER. Defaults to auto-detection. [default: None]
  --api_base         -api   TEXT     Set a custom API base URL for providers like Ollama and HuggingFace. [default: None]
  --view_config      -v              View loaded config parameters.
  --help                             Show this message and exit.
```
The configuration is saved as `dygest_config.yaml` in the project directory. The `.yaml` config looks like this:

```yaml
light_model: ollama/mistral:latest
expert_model: groq/llama-3.3-70b-versatile
embedding_model: ollama/nomic-embed-text:latest
temperature: 0.4
chunk_size: 1000
ner: true
language: auto
precise: false
api_base: null
sleep: 0
```
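The same configuration can also be produced from the command line; a sketch using the flags documented above (the model names mirror the sample config and should be adjusted to your providers):

```shell
# Sketch: write the sample config above via the CLI
dygest config \
  --light_model ollama/mistral:latest \
  --expert_model groq/llama-3.3-70b-versatile \
  --embedding_model ollama/nomic-embed-text:latest \
  --temperature 0.4 \
  --chunk_size 1000 \
  --ner
```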
### Processing

Run the dygest LLM pipeline with the `dygest run` command:

```
Usage: dygest run [OPTIONS]

  Create insights for your documents (summaries, keywords, TOCs).

Options:
  --files            -f     TEXT                 Path to the input folder or .txt file. [default: None]
  --output_dir       -o     TEXT                 If not provided, outputs will be saved in the input folder. [default: None]
  --export_format    -ex    [all|json|csv|html]  Set the data format for exporting. [default: html]
  --toc              -t                          Create a Table of Contents (TOC) for the text. Defaults to False.
  --summarize        -s                          Include a short summary for the text. Defaults to False.
  --keywords         -k                          Create descriptive keywords for the text. Defaults to False.
  --sim_threshold    -sim   FLOAT                Similarity threshold for removing duplicate topics. [default: 0.85]
  --verbose          -v                          Enable verbose output. Defaults to False.
  --export_metadata  -meta                       Enable exporting metadata to output file(s). Defaults to False.
  --help                                         Show this message and exit.
```
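A typical invocation combining the flags above might look like this (the input and output paths are illustrative):

```shell
# Sketch: summaries, keywords and a TOC for all .txt files in a folder,
# exported in every available format
dygest run --files ./transcripts --output_dir ./output \
  --toc --summarize --keywords --export_format all
```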
## Export formats

Example of JSON output (the full input text is abbreviated in this example):

```json
{
  "filename": "Embeddings_What_they_are_and_why_they_matter_en",
  "output_filepath": "output/Embeddings_What_they_are_and_why_they_matter_en.json",
  "input_text": "So Simon Wilson is going to be our last speaker before our break. So let's delve into ...",
  "language": "en",
  "chunk_size": 1000,
  "token_count": 8705,
  "light_model": "ollama/qwen2.5:3b",
  "expert_model": "groq/llama-3.3-70b-versatile",
  "summary": "Embeddings, a technology used in large language models, represent text as numerical vectors, allowing for the understanding of concepts and relationships between them. This technology is utilized in various applications, including semantic searches, content recommendation, and text analysis. Researchers and developers, such as Simon Wilson, have demonstrated the potential of embeddings in creating systems that serve up related content and identify similar articles based on their similarity scores. The use of embeddings has also been explored in other areas, including geospatial SQL queries, image embedding, and audio incorporation. Overall, embeddings have become a fundamental component in the development of language models and AI frameworks, enabling advanced capabilities such as retrieval augmented generation and dimension reduction.",
  "keywords": [
    "large language models",
    "GeoPoly",
    "SQLite",
    "text representation",
    "Embedding vectors",
    "Similarity scores",
    "clustering",
    "geospatial SQL",
    "text clustering",
    "Serverless hosting",
    "text search",
    "OpenAI API",
    "Word2Vec",
    "semantic search",
    "browser compatibility",
    "related content",
    "retrieval augmented generation",
    "vibebased search",
    "code functions",
    "ImageBind",
    "function lookup",
    "Faucet Finder",
    "Clip",
    "vector databases",
    "LangChain",
    "PCA",
    "blog question answering",
    "concrete vs abstract",
    "TIL blog",
    "LLM",
    "GeoPackage",
    "GitHub Actions",
    "multimodal"
  ],
  "toc": [
    {
      "headline": "Embeddings and Similarity",
      "topics": [
        {
          "summary": "Understanding Embeddings in Data Exploration",
          "location": "S4"
        },
        {
          "summary": "Using Embeddings for Similarity Analysis",
          "location": "S85"
        },
        {
          "summary": "OpenAI API for Embeddings",
          "location": "S86"
        },
        {
          "summary": "Analyzing Text for Clusters and Embeddings",
          "location": "S316"
        },
        {
          "summary": "Flexible Embedding Tools",
          "location": "S407"
        }
      ]
    },
    {
      "headline": "Data Journalism and Tools",
      "topics": [
        {
          "summary": "Simon Wilson's Background and Achievements",
          "location": "S20"
        }
      ]
    },
    {
      "headline": "Geospatial and SQL",
      "topics": [
        {
          "summary": "Geospatial SQL Queries Overview",
          "location": "S43"
        }
      ]
    },
    {
      "headline": "Serverless and Hosting",
      "topics": [
        {
          "summary": "Bake-to-Data Architecture Pattern",
          "location": "S122"
        }
      ]
    },
    {
      "headline": "Language Models and Search",
      "topics": [
        {
          "summary": "LLM Command-Line Utility",
          "location": "S123"
        },
        {
          "summary": "Vibe-Based Semantic Search for Readmes",
          "location": "S222"
        },
        {
          "summary": "Image-Text Similarity in Browser",
          "location": "S223"
        },
        {
          "summary": "Vibes-based search for faucets and other items",
          "location": "S265"
        }
      ]
    },
    {
      "headline": "Databases and Indexing",
      "topics": [
        {
          "summary": "Specialized Indexing Solutions",
          "location": "S355"
        }
      ]
    },
    {
      "headline": "Multimodal and Browser",
      "topics": [
        {
          "summary": "Fascinating Multimodal Space",
          "location": "S408"
        },
        {
          "summary": "Smaller and More Accessible",
          "location": "S426"
        }
      ]
    }
  ]
}
```
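The JSON export can be consumed directly by downstream tooling. A minimal sketch that writes a small stand-in file following the structure above and reads it back (the file name and field values are illustrative, not real dygest output):

```shell
# Create a tiny stand-in for a dygest JSON export, then inspect it.
# Real exports are written to the folder given via --output_dir.
cat > sample_export.json <<'EOF'
{
  "filename": "sample",
  "summary": "A short summary.",
  "keywords": ["embeddings", "semantic search"],
  "toc": [{"headline": "Intro", "topics": [{"summary": "Overview", "location": "S1"}]}]
}
EOF

python3 -c "
import json
data = json.load(open('sample_export.json'))
print(data['summary'])
print('keywords:', ', '.join(data['keywords']))
print('sections:', len(data['toc']))
"
```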
## Acknowledgments

dygest uses these great Python packages:

- [litellm](https://github.com/BerriAI/litellm)
- [flair](https://github.com/flairNLP/flair)
- [typer](https://github.com/fastapi/typer)
- [json_repair](https://github.com/mangiucugna/json_repair)
## File details

### Source distribution: `dygest-0.4.tar.gz`

- Size: 34.6 kB
- Tags: Source
- Uploaded using Trusted Publishing: No
- Uploaded via: twine/6.0.1, CPython/3.12.8

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a0d4ccba365d57224774418e20a99e23018c0661a4e3cf026abe924a68a55ed6` |
| MD5 | `55956d5880ef5f3eaeff24d6ff0e6c4e` |
| BLAKE2b-256 | `a13b717fea94a53a6ee3ec45d14d627c1f49dbaa5dfff6eb639f25bdb9602dc0` |
### Built distribution: `dygest-0.4-py3-none-any.whl`

- Size: 33.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing: No
- Uploaded via: twine/6.0.1, CPython/3.12.8

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b15bc0605a3c9de972705c936e982cf12b09ee53961262940daf544eda9b97ed` |
| MD5 | `28d1aee9df30f1a9fecc0b70e0b434b1` |
| BLAKE2b-256 | `0b64af29d9e0490cd8506bdd07cee5a109336426f1eb489d7a9831087f26d169` |