Foundation library for geepers multi-agent system - LLM providers, config management, and utilities
dr-eamer-ai-shared
Unified AI Development Infrastructure
Overview
dr-eamer-ai-shared is the foundational library powering the Dreamwalker MCP ecosystem. It provides:
- 10+ LLM Providers — Unified interface for Anthropic, OpenAI, xAI, Mistral, Cohere, Gemini, Perplexity, Groq, and more
- Multi-Agent Orchestration — Dream Cascade (hierarchical research) and Dream Swarm (parallel search) patterns
- 15+ Data Sources — Structured API clients for arXiv, Semantic Scholar, Census, GitHub, NASA, and more
- MCP Server Infrastructure — Model Context Protocol servers exposing tools via stdio/HTTP
- Document Generation — Professional PDF, DOCX, and Markdown output with citations
Status: Production-ready, actively developed
Package Name: dr-eamer-ai-shared (published on PyPI)
Documentation: dr.eamer.dev/dreamwalker
Quick Start
Installation
# Clone and install in editable mode
git clone https://github.com/lukeslp/kernel
cd kernel/shared
pip install -e .
# Install with all provider dependencies
pip install -e .[all]
# Install specific providers only
pip install -e .[anthropic,xai,openai]
Basic Usage
1. LLM Provider Abstraction
from llm_providers import ProviderFactory
# Unified interface across 10+ providers
provider = ProviderFactory.create_provider('xai', model='grok-3')
response = provider.complete(messages=[
{'role': 'user', 'content': 'Explain quantum computing'}
])
print(response)
2. Multi-Agent Research (Dream Cascade)
from orchestration import DreamCascadeOrchestrator
# Hierarchical research with 8 agents
orchestrator = DreamCascadeOrchestrator(
provider_name='anthropic',
model='claude-sonnet-4'
)
result = await orchestrator.execute(
task="Comprehensive analysis of AI safety research 2023-2025",
enable_drummer=True, # Mid-level synthesis
enable_camina=True # Executive summary
)
print(result.final_report)
3. Data Fetching (dream_of_* tools)
from data_fetching import ClientFactory
# Academic papers
arxiv = ClientFactory.create_client('arxiv')
papers = arxiv.search(query='quantum computing', max_results=10)
# US Census demographics
census = ClientFactory.create_client('census_acs')
data = census.get_demographics(geography='state:06') # California
Architecture
Dreamwalker Naming Convention
The library uses semantic, descriptive naming (moved away from codename-based naming in November 2025):
| Pattern | Prefix | Examples |
|---|---|---|
| Orchestration Workflows | dream-* | dream-cascade, dream-swarm |
| Data Tools | dream_of_* | dream_of_arxiv, dream_of_census_acs |
| Management Tools | dreamwalker_* | dreamwalker_status, dreamwalker_cancel |
| Provider Tools | dreamer_* | dreamer_anthropic, dreamer_openai (deferred) |
Classes:
- DreamCascadeOrchestrator — Implements the dream-cascade pattern (hierarchical research)
- DreamSwarmOrchestrator — Implements the dream-swarm pattern (parallel search)
Package Structure
shared/
├── llm_providers/ # 10+ provider implementations
│ ├── base_provider.py # BaseLLMProvider abstract class
│ ├── factory.py # ProviderFactory
│ ├── anthropic_provider.py
│ ├── openai_provider.py
│ ├── xai_provider.py
│ └── ...
├── orchestration/ # Multi-agent workflow patterns
│ ├── dream_cascade.py # Hierarchical research
│ ├── dream_swarm.py # Parallel search
│ ├── sequential.py # Staged execution
│ ├── conditional.py # Branching logic
│ └── iterative.py # Refinement loops
├── mcp/ # Model Context Protocol servers
│ ├── unified_server.py # Main orchestration (port 5060)
│ ├── providers_server.py
│ ├── data_server.py
│ └── ...
├── data_fetching/ # 15+ structured API clients
│ ├── dream_of_arxiv.py
│ ├── dream_of_semantic_scholar.py
│ ├── dream_of_census_acs.py
│ └── ...
├── document_generation/ # PDF, DOCX, Markdown output
├── config.py # Multi-source configuration
└── naming.py # Naming registry
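To show how the pieces above fit together, here is a minimal sketch of what a custom provider might look like. The method name `complete` and the class name `BaseLLMProvider` come from the examples in this README; the exact abstract interface in `llm_providers/base_provider.py` may define additional hooks (cost tracking, streaming, model selection), and `EchoProvider` is a hypothetical toy class, not part of the library.

```python
from abc import ABC, abstractmethod

# Assumed minimal shape of BaseLLMProvider, based on the usage shown above.
class BaseLLMProvider(ABC):
    def __init__(self, model: str):
        self.model = model

    @abstractmethod
    def complete(self, messages: list[dict]) -> str:
        """Return the assistant reply for a list of chat messages."""

class EchoProvider(BaseLLMProvider):
    """Toy provider that echoes the last user message (useful in tests)."""

    def complete(self, messages: list[dict]) -> str:
        user_turns = [m["content"] for m in messages if m["role"] == "user"]
        return f"[{self.model}] {user_turns[-1]}"

provider = EchoProvider(model="echo-1")
print(provider.complete([{"role": "user", "content": "ping"}]))  # [echo-1] ping
```

A provider implemented this way can be swapped in anywhere a real one is used, which keeps orchestrator tests free of network calls.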
Features
LLM Providers (10+)
Unified interface across providers with automatic model selection, cost tracking, and failover:
- Anthropic — Claude Opus, Sonnet, Haiku
- OpenAI — GPT-4, GPT-4-Turbo, DALL-E 3
- xAI — Grok-3, Grok-3-mini, Aurora (vision + image gen)
- Mistral — Large, Medium, Small
- Cohere — Command R+
- Google — Gemini Pro, Ultra
- Perplexity — pplx-70b-online (web search)
- Groq — Llama 3.1 (ultra-fast inference)
- HuggingFace — Various open models
- DeepSeek — R1 reasoning model
Complexity Router: Automatically selects cheap models for simple tasks and expensive models for complex ones.
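The routing idea can be sketched in a few lines. The thresholds, keyword heuristic, and model names below are illustrative assumptions, not the library's actual routing policy:

```python
# Illustrative complexity router: send short, simple prompts to a cheap model
# and long or reasoning-heavy prompts to a stronger one.
CHEAP_MODEL = "grok-3-mini"
STRONG_MODEL = "claude-sonnet-4"

def route_model(prompt: str, token_threshold: int = 200) -> str:
    est_tokens = len(prompt) / 4  # crude estimate: ~4 characters per token
    needs_reasoning = any(k in prompt.lower() for k in ("prove", "analyze", "compare"))
    return STRONG_MODEL if est_tokens > token_threshold or needs_reasoning else CHEAP_MODEL

print(route_model("What is 2+2?"))  # grok-3-mini
print(route_model("Analyze the trade-offs of consensus protocols"))  # claude-sonnet-4
```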
Orchestration Patterns
dream-cascade (Hierarchical Research)
- 8 parallel workers (specialized agents)
- Mid-level synthesis (Drummer)
- Executive synthesis (Camina)
- Use case: Academic literature reviews, market research, due diligence
dream-swarm (Parallel Search)
- 5+ specialized agents execute in parallel
- Domain-specific: Academic, News, Technical, Financial
- Use case: Broad exploratory research, competitive analysis
Sequential/Conditional/Iterative
- Staged execution with per-step handlers
- Runtime branch selection
- Looped refinement with success predicates
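The iterative pattern above amounts to re-running a step until a success predicate passes or a retry budget runs out. A minimal sketch, assuming nothing about the real `orchestration/iterative.py` API:

```python
from typing import Callable

# Illustrative iterative-refinement loop: apply `step` repeatedly until
# `success` accepts the draft or `max_rounds` is exhausted.
def iterate_until(step: Callable[[str], str],
                  success: Callable[[str], bool],
                  seed: str,
                  max_rounds: int = 5) -> str:
    draft = seed
    for _ in range(max_rounds):
        draft = step(draft)
        if success(draft):
            break
    return draft

result = iterate_until(step=lambda d: d + "!",
                       success=lambda d: d.count("!") >= 3,
                       seed="draft")
print(result)  # draft!!!
```

In practice `step` would be an LLM call that revises the draft and `success` a quality check such as a rubric score or schema validation.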
Data Sources (15+)
Academic & Research:
- dream_of_arxiv — Academic papers
- dream_of_semantic_scholar — Citation analysis
- dream_of_openlibrary — Book metadata
- dream_of_wikipedia — Encyclopedia summaries
News & Media:
- dream_of_news — News articles (NewsAPI)
- dream_of_youtube — Video metadata
Technical & Code:
- dream_of_github — Repository data, commits, users
Government & Demographics:
- dream_of_census_acs — US Census American Community Survey
- dream_of_census_saipe — Poverty estimates
Science & Space:
- dream_of_nasa — APOD, Mars photos, Earth imagery
Location & Weather:
- Weather — current conditions, forecasts, air quality
Finance:
- Stock quotes, company fundamentals
Document Generation
Professional output in multiple formats:
- PDF — With citations, table of contents, formatting
- DOCX — Editable Microsoft Word format
- Markdown — Portable, version-control friendly
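Since the `document_generation` API is not shown in this README, here is a hand-rolled sketch of the Markdown-with-citations idea; the function name and arguments are hypothetical:

```python
# Illustrative citation-aware Markdown renderer: sections become ## headings
# and citations become a numbered References list.
def render_markdown(title: str, sections: dict[str, str], citations: list[str]) -> str:
    lines = [f"# {title}", ""]
    for heading, body in sections.items():
        lines += [f"## {heading}", "", body, ""]
    lines.append("## References")
    lines += [f"{i}. {c}" for i, c in enumerate(citations, 1)]
    return "\n".join(lines)

doc = render_markdown(
    "AI Safety Survey",
    {"Overview": "Findings are summarized here [1]."},
    ["Example citation entry (placeholder)"],
)
print(doc)
```

PDF and DOCX output would follow the same structure, handing the assembled sections to a rendering backend instead of joining strings.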
Configuration
API Keys
Create .env file or export environment variables:
# Core providers (at least one required)
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
XAI_API_KEY=xai-...
# Optional providers
MISTRAL_API_KEY=...
COHERE_API_KEY=...
GEMINI_API_KEY=...
PERPLEXITY_API_KEY=...
GROQ_API_KEY=...
# Data sources (optional)
YOUTUBE_API_KEY=...
GITHUB_TOKEN=ghp_...
NASA_API_KEY=...
NEWS_API_KEY=...
# Infrastructure (optional)
REDIS_HOST=localhost
REDIS_PORT=6379
Configuration Precedence
defaults → .app file → .env → environment variables → CLI args
(lowest priority → highest priority)
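This precedence reduces to merging the sources in order, with later (higher-priority) sources overriding earlier ones. A sketch with placeholder keys; the real `config.py` may merge differently:

```python
# Merge config sources in order of increasing priority: later dicts win.
def resolve_config(*sources: dict) -> dict:
    merged: dict = {}
    for src in sources:  # ordered lowest -> highest priority
        merged.update(src)
    return merged

defaults = {"model": "claude-haiku", "port": 5060}
env_vars = {"model": "grok-3"}       # overrides defaults
cli_args = {"port": 6000}            # overrides everything before it
cfg = resolve_config(defaults, env_vars, cli_args)
print(cfg)  # {'model': 'grok-3', 'port': 6000}
```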
MCP Integration
Running MCP Servers
# Main orchestration server (port 5060)
cd /home/coolhand/shared/mcp
python unified_server.py
# Or via service manager
/home/coolhand/service_manager.py start mcp-orchestrator
Available MCP Tools
Orchestration:
- dream_research — Dream Cascade hierarchical research
- dream_search — Dream Swarm parallel search
Management:
- dreamwalker_status — Check workflow progress
- dreamwalker_cancel — Stop running workflows
- dreamwalker_patterns — List available patterns
Data Fetching:
dream_of_arxiv, dream_of_census_acs, dream_of_github, etc.
See MCP Guide for comprehensive documentation.
Testing
# Run all tests
pytest
# Run with coverage
pytest --cov=. --cov-report=html
# Run specific test file
pytest tests/test_providers.py
# Run single test
pytest -v -k "test_anthropic_provider"
Current Coverage: 91%
Development
Code Style
- Black — Code formatting (100 char lines)
- isort — Import sorting
- Type hints — Required for public APIs
- Docstrings — Google style
Pre-commit Hooks
# Install hooks
pip install pre-commit
pre-commit install
# Run manually
pre-commit run --all-files
Documentation
Comprehensive Guides:
- Provider Matrix — LLM provider comparison
- Data Fetching Guide — Data source catalog
- Vision Guide — Image analysis and generation
- MCP Guide — MCP server reference
- Documentation Hub — Central navigation
In-Repo Docs:
- CLAUDE.md — Repository guide for Claude Code
- orchestration/ORCHESTRATOR_GUIDE.md — Building custom orchestrators
- orchestration/ORCHESTRATOR_SELECTION_GUIDE.md — Choosing patterns
- orchestration/ORCHESTRATOR_BENCHMARKS.md — Performance data
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make changes with clear commit messages
- Add tests for new features
- Update documentation as needed
- Run tests: pytest tests/
- Submit a Pull Request
Areas that could use help:
- New orchestrator patterns (graph-based, recursive, hybrid)
- Additional data sources (more API clients)
- Provider integrations (new LLM providers)
- Performance optimizations (caching strategies)
- Documentation improvements (tutorials, examples)
- Testing (integration tests, edge cases)
License
MIT License
Copyright (c) 2025 Luke Steuber
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Acknowledgments
Built with:
- Model Context Protocol by Anthropic
- Claude by Anthropic
- OpenAI GPT
- xAI Grok
- And many other open-source libraries (see requirements.txt)
Author: Luke Steuber
Repository: github.com/lukeslp/kernel
Website: dr.eamer.dev
File details
Details for the file geepers_core-1.0.2.tar.gz.
- Size: 486.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0127245d727b3105f2806f9ae5dc76739efcd2f304b221c3dcc9ebffdc25b49b |
| MD5 | 33f771d4bedb417d572573072fdbc79c |
| BLAKE2b-256 | 802c4a20ee1fc0ce75b92642793944fd0d452f14af198400ba92c523799fff0d |
File details
Details for the file geepers_core-1.0.2-py3-none-any.whl.
- Size: 545.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ba88d55ea99b1688174649d27b026f5b84bedf5419e74d20b14de8f1320e77c2 |
| MD5 | 22118150ce68673f81c98bb3f2dbcfb6 |
| BLAKE2b-256 | 4374c3cc66ec296491b46674c70ed0c4d0a3270e4bc6213e957ab75f642703f9 |