A collection of tools for LLM interactions such as prompt refinement and idea generation. It also provides a console chat client with a speech-to-text feature.
sokrates
A comprehensive framework for Large Language Model (LLM) interactions, featuring advanced prompt refinement, system monitoring, extensive CLI tools, and a robust task queue system. Designed to facilitate working with LLMs through modular components, well-documented APIs, and production-ready utilities.
Description
sokrates is a comprehensive framework for working with Large Language Models (LLMs). It provides a complete toolkit for developers and researchers to interact with LLMs efficiently and effectively.
Core Capabilities:
- Advanced Prompt Engineering: Sophisticated prompt refinement tools that optimize LLM input/output for better performance
- Voice-Enabled Chat: Interactive command-line chat interface with optional voice input/output using OpenAI Whisper
- Task Queue System: Robust background task processing with persistence, error handling, and retry mechanisms
- Multi-stage Workflows: Complex task breakdown, idea generation, and sequential task execution
- Python Coding Tools: A set of useful tools for working with Python code
  - generate code reviews
  - generate test cases
  - summarize functionality
- Extensive CLI Interface: 15+ specialized commands for rapid experimentation and automation
Key Features:
- Modular Architecture: Easily extensible components with clean separation of concerns
- OpenAI-Compatible API: Works with any OpenAI-compatible endpoint (LocalAI, Ollama, LM Studio, etc.)
- Configuration Management: Flexible configuration system with environment variable support
- Output Processing: Advanced text cleaning and formatting utilities for LLM-generated content
- Performance Analytics: Detailed timing metrics and token generation statistics
- File Management: Comprehensive file handling for context loading and result storage
- CLI Tools: For interacting with LLMs (for details: see below)
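To illustrate what "OpenAI-compatible" means in practice, here is a minimal sketch of the wire format such endpoints accept, using only the Python standard library. This is not sokrates' internal code; the function names are illustrative, and the endpoint URL assumes a local server such as LM Studio or Ollama.

```python
import json
from urllib import request

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build a chat-completions payload in the OpenAI-compatible wire format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def send_chat_request(base_url: str, payload: dict) -> dict:
    """POST the payload to an OpenAI-compatible endpoint (e.g. http://localhost:1234/v1)."""
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Build (but don't send) a request payload
payload = build_chat_request("qwen3-4b-instruct-2507-mlx", "Say hello.")
print(payload["messages"][0]["role"])  # user
```

Any endpoint speaking this format (LocalAI, Ollama, LM Studio) can be targeted by pointing the configured API endpoint at its base URL.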
Installation
Prerequisites
- Python 3.9 or higher
- Optional: FFmpeg (for voice features)
- Optional: Whisper-cpp (for enhanced voice processing)
Install Prerequisites for Voice Features (Optional)
# On macOS
brew install ffmpeg
brew install whisper-cpp
brew install espeak-ng
# On Ubuntu/Debian
sudo apt-get install ffmpeg
sudo apt-get install whisper-cpp
sudo apt-get install espeak-ng
Install from PyPI
pip install sokrates
# or using uv (recommended)
## basic version:
uv pip install sokrates
## voice enabled version
uv pip install sokrates[voice]
# Test the installation (this expects an OpenAI-compatible endpoint running at http://localhost:1234/v1, e.g. via LM Studio or Ollama)
sokrates-list-models --api-endpoint http://localhost:1234/v1
Install for Development
git clone https://github.com/Kubementat/sokrates.git
cd sokrates
uv sync # for basic version
uv sync --all-extras # for voice support enabled version
Dependencies
For a list of all dependencies, see the pyproject.toml file.
Configuration
You can configure the library via a configuration file in $HOME/.sokrates/.env
# Copy
cp .env.example $HOME/.sokrates/.env
# adjust to your needs
vim $HOME/.sokrates/.env
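As a rough sketch of what such a file might contain (the variable names below are illustrative assumptions, not the documented keys — check .env.example for the actual names):

```shell
# $HOME/.sokrates/.env — illustrative keys only; see .env.example for the real ones
API_ENDPOINT=http://localhost:1234/v1
API_KEY=not-needed-for-most-local-endpoints
DEFAULT_MODEL=qwen3-4b-instruct-2507-mlx
```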
Usage
Basic Command Structure
Most commands follow this structure:
command --option1 value1 --option2 value2
You can always display the help via:
command --help
e.g.
uv run sokrates-list-models --help
# for listing all available commands run:
uv run | grep sokrates
Available Commands
Core LLM Operations
- sokrates-list-models: List available LLM models from your endpoint
- sokrates-send-prompt: Send a prompt to an LLM API
- sokrates-chat: Interactive chat interface with LLMs (supports voice mode)
- sokrates-refine-prompt: Refine prompts for better LLM performance
- sokrates-refine-and-send-prompt: Combine refinement and execution in one command
Task Management
- sokrates-breakdown-task: Break down complex tasks into manageable sub-tasks
- sokrates-execute-tasks: Execute tasks sequentially from JSON task lists
- sokrates-task-add: Add tasks to the background task queue
- sokrates-task-list: List queued tasks with status and priority
- sokrates-task-status: Check detailed status of specific tasks
- sokrates-task-remove: Remove tasks from the queue
- sokrates-daemon: Start/stop/restart the task queue daemon
Idea Generation & Content Creation
- sokrates-idea-generator: Generate ideas using multi-stage workflows with topic categorization
- sokrates-generate-mantra: Generate mantras or affirmations
- sokrates-fetch-to-md: Fetch web content and convert it to markdown
- sokrates-merge-ideas: Merge multiple documents or ideas into a coherent output
Benchmarking & Analysis
- sokrates-benchmark-model: Benchmark LLM models with performance metrics
- sokrates-benchmark-results-merger: Merge multiple benchmark results
- sokrates-benchmark-results-to-markdown: Convert benchmark results to formatted markdown
Python coding tools
- sokrates-code-summarize: Parse a directory of Python sources and generate a summary of all classes and functions, including signatures and documentation
- sokrates-code-review: Parse a directory or a list of Python source files and generate a code review for each provided file
- sokrates-code-generate-tests: Parse a directory or a list of Python source files and generate tests for the code
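To give a feel for the kind of information sokrates-code-summarize collects, here is a minimal sketch of signature and docstring extraction using the standard-library ast module. This is an illustration of the concept, not the tool's actual implementation.

```python
import ast

def summarize_source(source: str) -> list[str]:
    """List top-level classes and functions with their signatures and docstrings."""
    tree = ast.parse(source)
    summary = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            doc = ast.get_docstring(node) or "no docstring"
            summary.append(f"def {node.name}({args}): {doc}")
        elif isinstance(node, ast.ClassDef):
            doc = ast.get_docstring(node) or "no docstring"
            summary.append(f"class {node.name}: {doc}")
    return summary

code = '''
class Greeter:
    """Greets people."""

def greet(name):
    """Return a greeting."""
    return f"Hello, {name}"
'''
print(summarize_source(code))
```

The real command additionally feeds the extracted structure to an LLM to produce a prose summary.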
Task Queuing System
The task queue system allows you to queue LLM processing tasks in JSON format for reliable execution:
# Add a new task to the queue
sokrates-task-add tasks/new_task.json --priority high
# List all queued tasks
sokrates-task-list --status pending --priority high
# Get detailed status of specific task
sokrates-task-status task-123 --verbose
# Remove a task from the queue
sokrates-task-remove task-789 --force
# Start the daemon to process queued tasks
sokrates-daemon start
# Restart the daemon
sokrates-daemon restart
# Stop the daemon
sokrates-daemon stop
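The exact task file schema is not reproduced here; as a rough sketch, a task file might carry a prompt plus model settings. The field names below are assumptions for illustration, not the documented schema — consult the project docs for the real format.

```python
import json

# Hypothetical task structure — field names are illustrative, not the documented schema
task = {
    "name": "summarize-article",
    "prompt": "Summarize the attached article in five bullet points.",
    "model": "qwen3-4b-instruct-2507-mlx",
    "priority": "high",
}

# Serialize it to JSON, ready to be saved as e.g. tasks/new_task.json
serialized = json.dumps(task, indent=2)
print(serialized)
```

A file like this would then be handed to sokrates-task-add as shown above.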
Example Usage
Basic LLM Operations
# List available models
sokrates-list-models --api-endpoint http://localhost:1234/v1
# Send a simple prompt
sokrates-send-prompt --model qwen3-4b-instruct-2507-mlx --prompt "Explain quantum computing in simple terms"
# Interactive chat with voice support
sokrates-chat --model qwen3-4b-instruct-2507-mlx --voice # Enable voice mode
sokrates-chat --model qwen3-4b-instruct-2507-mlx --context-files ./docs/context.md
# Refine a prompt for better performance
sokrates-refine-prompt --prompt "Write a story about a robot" --model qwen3-4b-instruct-2507-mlx
sokrates-chat Commands
The sokrates-chat interface provides several special commands that enhance the chat experience:
/voice
Toggles between voice mode and text mode during the chat session.
- Usage: Type /voice in the chat interface
- Description: When enabled, voice mode allows you to speak your inputs instead of typing them. The system will use speech-to-text capabilities to transcribe your voice input. This requires the voice dependencies to be installed (FFmpeg and Whisper-cpp).
- Example:
  You: /voice
  [System message: Switched to voice mode.]
/talk
Enables text-to-speech functionality for the AI's responses.
- Usage: Type /talk in the chat interface
- Description: When activated, the AI will speak its responses aloud using text-to-speech capabilities. This is useful for hands-free interaction or when you prefer to listen rather than read the responses.
- Example:
  You: /talk
  [System message: Text-to-speech enabled for AI responses.]
/add <Filepath>
Adds additional context to the conversation from a file.
- Usage: Type /add <path/to/file> in the chat interface
- Description: Loads content from the specified file and adds it to the conversation history as a system message. This allows you to provide additional context or reference material during the conversation without restarting the chat.
- Example:
  You: /add ./docs/project-context.md
  [System message: Added context from ./docs/project-context.md]
- Note: The file path can be absolute or relative to the current working directory.
Task Management
# Break down a complex project into tasks
sokrates-breakdown-task --task "Build a web application for task management" --output project-tasks.json
# Execute the generated tasks sequentially
sokrates-execute-tasks --task-file project-tasks.json --output-dir ./results
# Add a task to the background queue
sokrates-task-add tasks/feature_request.json --priority high
# Start the task queue daemon
sokrates-daemon start
# Check task status
sokrates-task-status --task-id abc123 --verbose
# List all pending tasks
sokrates-task-list --status pending --priority high
Idea Generation & Content Creation
# Generate creative ideas with topic categorization
sokrates-idea-generator --topic "AI in healthcare" --output-dir ./healthcare-ideas --idea-count 5
# Generate mantras for motivation
sokrates-generate-mantra -o my_mantra.md
# Convert web content to markdown
sokrates-fetch-to-md --url "https://example.com/article" --output article.md
# Merge multiple documents or ideas
sokrates-merge-ideas --source-documents 'docs/idea1.md,docs/idea2.md' --output-file merged-ideas.md
Benchmarking & Analysis
# Benchmark model performance
sokrates-benchmark-model --model qwen3-4b-instruct-2507-mlx --iterations 10 --temperature 0.7
# Convert benchmark results to markdown
sokrates-benchmark-results-to-markdown --input benchmark_results.json --output benchmark_report.md
Python coding tools
# Summarize Python source code classes and functions in the `src` directory and write the result to `docs/code_summary.md`
sokrates-code-summarize --source-directory src/ --output docs/code_summary.md
# Perform a code review for a list of code files or a directory
sokrates-code-review --files src/sokrates/config.py --verbose -o docs/code_reviews
Features
🚀 Core LLM Capabilities
- Advanced Prompt Refinement: Multi-stage prompt optimization with context awareness
- Streaming Responses: Real-time token streaming with performance metrics
- Multi-model Support: Compatible with any OpenAI-compatible LLM endpoint
- Context Management: Flexible context loading from files, directories, or text
- Response Processing: Intelligent cleaning and formatting of LLM outputs
🎯 Task Management & Workflows
- Task Queue System: Background task processing with SQLite persistence
- Sequential Task Execution: Complex multi-step task automation
- Task Breakdown: AI-powered task decomposition into manageable sub-tasks
- Priority Queue: Task prioritization and status tracking
- Error Handling: Comprehensive error recovery and logging
💬 Interactive Features
- Voice-Enabled Chat: Speech-to-text and text-to-speech capabilities using Whisper
- Interactive CLI: Rich command-line interface with colorized output
- Conversation Logging: Automatic chat history logging with timestamps
- Context Switching: Dynamic context addition during conversations
📊 System Monitoring & Analytics
- Real-time Monitoring: CPU, memory, and resource usage tracking
- Performance Metrics: Token generation speed, response times, and throughput
- Benchmarking Tools: Comprehensive model performance analysis
- Logging Infrastructure: Structured logging with configurable levels
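As a rough illustration of the throughput metric such analytics report (a minimal sketch, not sokrates' internal code):

```python
import time

def tokens_per_second(token_count: int, elapsed_seconds: float) -> float:
    """Compute token generation throughput; guards against a zero-length interval."""
    if elapsed_seconds <= 0:
        return 0.0
    return token_count / elapsed_seconds

start = time.perf_counter()
# ... a streaming response would be consumed here ...
elapsed = time.perf_counter() - start

print(tokens_per_second(120, 2.0))  # 60.0
```

Reported metrics like these make it easy to compare models or endpoints under identical prompts.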
🔧 Developer Tools
- Modular Architecture: Clean, extensible component design
- Configuration Management: Flexible environment-based configuration
- File Management: Comprehensive file handling utilities
- Testing Framework: Integrated pytest with comprehensive test coverage
- Documentation: Extensive inline documentation and examples
🎨 User Experience
- Rich CLI Output: Colorized, formatted output with progress indicators
- Help System: Comprehensive help and usage instructions for all commands
- Error Handling: User-friendly error messages and recovery suggestions
- Cross-platform: Works on macOS, Linux, and Windows
Contributing
We welcome contributions! Please follow these steps:
- Fork the repository and create a new branch for your feature
- Make your changes with appropriate tests and documentation
- Run the test suite to ensure everything works correctly
- Submit a pull request with a clear description of your changes
Development Setup
git clone https://github.com/Kubementat/sokrates.git
cd sokrates
uv sync --all-extras
uv pip install -e .
source .venv/bin/activate
Run the test suite
The test suite expects a locally running LM Studio instance with the default model qwen3-4b-instruct-2507-mlx loaded and ready for execution. For details on setting up LM Studio, see their documentation.
# run all unit tests
uv run python -m pytest tests
# run only unit tests (without LLM interactions)
uv run python -m pytest tests --ignore=tests/integration_tests
# run the integration pytest testsuite
uv run python -m pytest tests/integration_tests
# run integration tests using the commands
uv run test_all_commands.py
# for options check
uv run test_all_commands.py --help
Guidelines
- Follow the existing code style and conventions
- Add tests for new functionality
- Update documentation for significant changes
- Ensure all existing tests pass
Please read CONTRIBUTING.md for detailed contribution guidelines.
License
This project is licensed under the MIT License. See LICENSE for details.
Contact
Julian Weber - Creator and Maintainer
- 📧 Email: julianweberdev@gmail.com
- 🐙 GitHub: @julweber
- 💼 LinkedIn: Julian Weber
Project Links:
- 🏠 Homepage: https://github.com/Kubementat/sokrates
- 📚 Documentation: See docs/ directory for detailed documentation
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
Changelog
View our CHANGELOG.md for a detailed changelog.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file sokrates-0.9.2.tar.gz.
File metadata
- Download URL: sokrates-0.9.2.tar.gz
- Upload date:
- Size: 343.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e9f4074bca0b64efec5d51ac90d2522740912d1f1a58e37c4226701a9e5ec190 |
| MD5 | eebfe5ca2565303cc657cae5cf1880c0 |
| BLAKE2b-256 | 23214e45ea7bcb86dcf6b6879ba926e5d6430075908f3e27671f6d08590481c1 |
File details
Details for the file sokrates-0.9.2-py3-none-any.whl.
File metadata
- Download URL: sokrates-0.9.2-py3-none-any.whl
- Upload date:
- Size: 164.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5aa91d3f0d84ee4c5bada8855dd4daf1c515c15e5db4dd2e7b8b97e269a70a49 |
| MD5 | e47fd5e2f46aa4126ebf4eeb7eabc927 |
| BLAKE2b-256 | c1ce30cf1322a7e214050914d647093ae453564c55287002e089ddf02eb99335 |