Enterprise-grade AI orchestration platform with declarative YAML flows, cryptographic signing, and multi-model support.
KORE Platform
AI-powered software generation from a single idea.
KORE orchestrates 10 specialized AI agents to transform a natural-language description into a structured Python project: source code, tests, documentation, deployment configs, and security scans.
pip install kore-platform
kore "Build a REST API for managing TODO items with FastAPI"
Honesty note: KORE is a v1.0 project. The generated code is a starting point that requires human review — not a production-ready product. Quality depends heavily on the LLM model used. See Limitations.
What It Actually Does
- Analyzes your idea — extracts requirements, entities, and technical constraints
- Designs architecture — components, data models, API specification
- Generates code — complete Python files with imports, types, and docstrings
- Writes tests — pytest unit tests for generated modules
- Validates — runs generated tests against generated code, auto-fixes errors (up to 3 attempts)
- Documents — README, architecture docs, API reference
- Configures deployment — Dockerfile, docker-compose
- Reviews — code quality and security scans
All of this runs locally on your machine. No server, no account, no data collection.
Quick Start
1. Install
pip install kore-platform
For the web interface:
pip install kore-platform[web]
Or from source:
git clone https://github.com/iafiscal1212/kore-platform.git
cd kore-platform
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[all]"
2. Set Up an LLM
KORE is provider-agnostic. Choose one:
KORE Coder (recommended, free, local, fine-tuned for KORE)
pip install kore-platform
kore --setup
Downloads the fine-tuned model (~4.5 GB) and configures Ollama automatically. Requires Ollama to be installed.
Ollama (generic, free, local)
ollama pull qwen2.5-coder:7b-instruct
export KORE_LLM_PROVIDER=ollama
export KORE_LLM_MODEL=qwen2.5-coder:7b-instruct
OpenAI
export KORE_LLM_PROVIDER=openai
export KORE_LLM_MODEL=gpt-4o
export KORE_LLM_API_KEY=sk-...
Anthropic
export KORE_LLM_PROVIDER=anthropic
export KORE_LLM_MODEL=claude-sonnet-4-5-20250929
export KORE_LLM_API_KEY=sk-ant-...
3. Generate
CLI:
kore "Build a REST API for managing books with FastAPI"
Web UI:
kore --ui
Opens http://localhost:8080 with a local web interface.
Output goes to /tmp/kore/workspace/.
Features
10 Specialized Agents (29 skills)
| Agent | Role | Skills |
|---|---|---|
| Specs | Requirements analyst | analyze_requirements, generate_user_stories |
| Architect | Software architect | design_architecture, generate_schemas, create_api_spec |
| Coder | Developer | implement_feature, refactor, write_code |
| Tester | QA engineer | write_unit_tests, write_integration_tests, run_tests |
| Reviewer | Code reviewer | code_review, performance_review, security_review |
| Security | Security engineer | sast_scan, dependency_scan, secret_scan |
| Deployer | DevOps | deploy_staging, deploy_production, rollback |
| Documenter | Technical writer | generate_docs, generate_manual, generate_video_script |
| Monitor | SRE | setup_monitoring, create_alerts, analyze_metrics |
| Complexity | Structural analyzer | analyze_structure, detect_bottlenecks |
The Complexity agent is the only one that does not use an LLM — it analyzes dependency graphs using Clifford algebra (qubit-algebra, optional). Runs in parallel with Reviewer and Security for DEV_SENIOR+ levels.
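The fan-in analysis behind bottleneck detection can be illustrated without the Clifford-algebra machinery. The sketch below (function names are illustrative, not the Complexity agent's actual API) builds a project-internal import graph with `ast` and flags modules that many others depend on:

```python
import ast
from collections import defaultdict

def import_graph(sources: dict[str, str]) -> dict[str, set[str]]:
    """Map each module name to the project modules it imports."""
    graph: dict[str, set[str]] = {name: set() for name in sources}
    for name, code in sources.items():
        for node in ast.walk(ast.parse(code)):
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module]
            else:
                continue
            # Keep only edges to modules inside the project.
            graph[name].update(t for t in targets if t in sources)
    return graph

def bottlenecks(graph: dict[str, set[str]], threshold: int = 2) -> list[str]:
    """Modules imported by at least `threshold` others are structural hubs."""
    fan_in: dict[str, int] = defaultdict(int)
    for deps in graph.values():
        for dep in deps:
            fan_in[dep] += 1
    return [module for module, count in fan_in.items() if count >= threshold]
```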
Self-Validation Loop
KORE doesn't just generate code — it tests it:
Generate code -> Generate tests -> Run pytest -> If fail -> Fix errors -> Retry (up to 3x)
The validation loop uses real subprocess execution with resource.setrlimit for safety. When tests fail, error output is fed back to the Coder agent for targeted fixes.
Limitation: Only works for Python/pytest. Fixes syntax and import errors well; logic errors less so.
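The loop's shape can be sketched in a few lines. KORE runs pytest inside a resource-limited subprocess; this simplified version executes a plain test script so it stays dependency-free, and `fix_fn(code, errors) -> code` stands in for the Coder agent (a hypothetical signature for this sketch):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

MAX_RETRIES = 3  # mirrors KORE_VALIDATION_MAX_RETRIES

def validation_loop(code: str, test_script: str, fix_fn):
    """Generate -> test -> fix -> retry, with up to MAX_RETRIES fix attempts."""
    for _ in range(MAX_RETRIES + 1):
        with tempfile.TemporaryDirectory() as tmp:
            Path(tmp, "module.py").write_text(code)
            Path(tmp, "test_module.py").write_text(test_script)
            result = subprocess.run(
                [sys.executable, "test_module.py"],
                cwd=tmp, capture_output=True, text=True, timeout=120,
            )
        if result.returncode == 0:
            return True, code
        # Feed the error output back for a targeted fix.
        code = fix_fn(code, result.stderr)
    return False, code
```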
Few-Shot Learning
5 curated examples improve LLM output quality:
- fastapi_crud — REST API with SQLAlchemy + Pydantic
- pytest_example — pytest suite with fixtures and async tests
- cli_tool — Click CLI application
- etl_pipeline — Extract/Transform/Load pipeline
- python_library — Reusable Python package
Examples are selected automatically based on keyword matching against your idea.
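Keyword-based selection can be sketched roughly as follows (the keyword map and function names are illustrative; the real logic lives in the agents' few-shot module):

```python
# Hypothetical keyword map for the five curated examples.
EXAMPLE_KEYWORDS = {
    "fastapi_crud": {"fastapi", "rest", "api", "crud", "endpoint"},
    "pytest_example": {"test", "pytest", "fixture"},
    "cli_tool": {"cli", "command", "click", "terminal"},
    "etl_pipeline": {"etl", "pipeline", "extract", "transform", "load"},
    "python_library": {"library", "package", "sdk"},
}

def select_examples(idea: str, max_examples: int = 2) -> list[str]:
    """Rank few-shot examples by keyword overlap with the user's idea."""
    words = set(idea.lower().split())
    scored = [
        (len(words & keywords), name)
        for name, keywords in EXAMPLE_KEYWORDS.items()
        if words & keywords  # drop examples with no overlap at all
    ]
    scored.sort(reverse=True)
    return [name for _, name in scored[:max_examples]]
```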
Technical Level Adaptation
KORE detects your technical level from vocabulary and adapts:
| Level | Detection | Pipeline steps |
|---|---|---|
| NO_DEV | No technical terms | 3 (specs + arch + code) |
| DEV_JUNIOR | Basic concepts (python, api, crud) | 5 (+ tests + docs) |
| DEV_SENIOR | Advanced patterns (fastapi, redis, jwt) | 8 (+ review + security + deploy) |
| ENGINEER | Enterprise (kubernetes, cqrs, mtls) | 10 (+ sast + monitoring) |
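A minimal sketch of vocabulary-based detection, assuming a tiered term list (the terms and function below are illustrative, not the actual LevelDetector in `interview/`):

```python
# Illustrative vocabulary tiers, highest first.
LEVEL_TERMS = {
    "ENGINEER": {"kubernetes", "cqrs", "mtls", "saga"},
    "DEV_SENIOR": {"fastapi", "redis", "jwt", "websocket", "docker"},
    "DEV_JUNIOR": {"python", "api", "crud", "database"},
}

def detect_level(idea: str) -> str:
    """Return the highest tier whose vocabulary appears in the idea."""
    words = set(idea.lower().replace(",", " ").split())
    for level in ("ENGINEER", "DEV_SENIOR", "DEV_JUNIOR"):
        if words & LEVEL_TERMS[level]:
            return level
    return "NO_DEV"  # no technical terms detected
```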
Intelligent Interview
KORE supports 3 modes for collecting project requirements:
| Mode | How | When |
|---|---|---|
| Interactive CLI (default) | User answers questions via keyboard | kore "idea" |
| Interactive Web | User fills interview form in browser | kore --ui |
| Auto (LLM) | LLM simulates user answers | kore --auto "idea" |
The system detects your technical level, selects appropriate questions, and adapts the interview depth. The LLM enriches context by inferring implicit requirements.
Note: In --auto mode, the LLM invents context (target users, business reqs) that may not reflect your actual needs. The answer_source field in UserContext tracks the origin.
Security (AEGIS Guardian)
- Command whitelist (only python, pip, pytest, git, etc.)
- 60+ dangerous pattern detectors (eval, exec, subprocess, os.system, etc.)
- Obfuscation detection (base64 decode chains, getattr tricks, chr() building)
- Symlink escape detection in workspaces
- pip install --only-binary to prevent setup.py attacks (with sdist fallback + warning)
- resource.setrlimit for CPU/memory/process limits in subprocesses
- Immutable append-only audit log
- Explicit threat model (5 adversaries documented in code)
- CSRF token + Origin header validation in web UI
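Pattern-based detection can be sketched with a handful of regexes. These few patterns are illustrative only; the real Guardian ships 60+ detectors and this is not its actual rule set:

```python
import re

# A reduced, illustrative subset of AEGIS-style detectors.
DANGEROUS_PATTERNS = [
    (re.compile(r"\beval\s*\("), "eval() call"),
    (re.compile(r"\bexec\s*\("), "exec() call"),
    (re.compile(r"\bos\.system\s*\("), "os.system() call"),
    (re.compile(r"base64\.b64decode"), "base64 decode chain"),
    (re.compile(r"getattr\s*\(\s*__builtins__"), "getattr on builtins"),
]

def scan(code: str) -> list[str]:
    """Return a human-readable finding for each matched pattern."""
    return [label for pattern, label in DANGEROUS_PATTERNS if pattern.search(code)]
```

As the Limitations section notes, this is lexical matching, not semantic analysis: a determined adversary can evade it, which is why the whitelist and resource limits exist as additional layers.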
i18n (Spanish + English)
export KORE_LANG=en # or "es" (default)
CLI messages, interview questions, and web UI adapt to the selected language.
Local Web UI
kore --ui
- Two-step flow: idea -> interview questions -> generate
- Embedded HTML/CSS/JS (no CDNs, no external deps)
- CSRF token + Origin header protection
- Idempotency keys for duplicate prevention
- Graceful shutdown for running jobs
- Binds to 127.0.0.1 only
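Idempotency-key deduplication can be sketched as a small thread-safe store (illustrative only, not the web UI's actual implementation): a double-submitted form reuses the existing job instead of launching a second one.

```python
import threading

class IdempotencyStore:
    """Remember seen keys so duplicate submissions map to one job."""

    def __init__(self) -> None:
        self._seen: dict[str, str] = {}  # idempotency key -> job id
        self._lock = threading.Lock()

    def register(self, key: str, job_id: str) -> str:
        """Return the existing job id if the key was already used."""
        with self._lock:
            return self._seen.setdefault(key, job_id)
```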
Configuration
| Variable | Default | Description |
|---|---|---|
| KORE_LLM_PROVIDER | ollama | ollama, openai, anthropic |
| KORE_LLM_MODEL | qwen2.5-coder:7b-instruct | Model name |
| KORE_LLM_BASE_URL | http://localhost:11434 | LLM server URL |
| KORE_LLM_API_KEY | — | API key (OpenAI/Anthropic) |
| KORE_LLM_TIMEOUT | 600 | Timeout per LLM call (seconds) |
| KORE_WORKSPACE_PATH | /tmp/kore/workspace | Output directory |
| KORE_VALIDATION_MAX_RETRIES | 3 | Auto-correction attempts |
| KORE_VALIDATION_TIMEOUT | 120 | Validation timeout (seconds) |
| KORE_LANG | es | Language: es or en |
| KORE_LOG_LEVEL | INFO | Logging level |
Environment variables override ~/.kore/config.yaml, which overrides defaults. Running kore --setup writes the config file automatically.
Fine-Tune Pipeline
KORE includes a complete pipeline for fine-tuning a Qwen 7B model on high-quality Python projects:
tools/dataset/pipeline.py # Run all stages
tools/dataset/pipeline.py --stage scrape # Only scrape GitHub
tools/dataset/pipeline.py --stats # Print dataset statistics
Phase 1 — Dataset curation: Scrapes GitHub repos (stars >50, permissive license), filters by quality (ast.parse for type hints, docstrings, test ratio), analyzes with ComplexityAgent (only A/B rated), extracts (prompt, completion) pairs.
Phase 2 — Fine-tuning: LoRA/QLoRA on Qwen/Qwen2.5-Coder-7B-Instruct with all hyperparameters in tools/training/config.yaml. Evaluates with structural metrics (syntax, imports, completeness), not string matching.
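The spirit of structural (rather than string-matching) evaluation can be sketched with `ast`: does the output parse, does it define anything, and are the definitions real bodies rather than stubs? This is a simplified stand-in, not the actual metrics in tools/training/eval.py:

```python
import ast

def structural_metrics(code: str) -> dict[str, bool]:
    """Judge generated code by structure, not by string comparison."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return {"syntax": False, "has_defs": False, "no_stubs": False}

    defs = [n for n in ast.walk(tree)
            if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]

    def is_stub(fn) -> bool:
        # A body made only of `pass`, `...`, or a bare docstring is a stub.
        return all(
            isinstance(s, ast.Pass)
            or (isinstance(s, ast.Expr) and isinstance(s.value, ast.Constant))
            for s in fn.body
        )

    return {
        "syntax": True,
        "has_defs": bool(defs),
        "no_stubs": bool(defs) and not any(is_stub(f) for f in defs),
    }
```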
Phase 3 — Distribution: kore --setup downloads the GGUF from HuggingFace, creates an Ollama model, and configures KORE automatically.
Training dependencies are separate: pip install kore-platform[training]
Architecture
Hexagonal architecture (ports & adapters) with dependency injection:
kore-platform/
src/kore_platform/
domain/ # Entities, value objects, events
application/ # Use cases, ports (inbound/outbound), DTOs
agents/ # 10 agents, 29 skills, few-shot examples
base/ # BaseAgent, BaseSkill, few_shot, llm_utils
examples/ # 5 curated few-shot examples
specs/architect/coder/ # Specialized agents
tester/reviewer/security/
deployer/documenter/monitor/
orchestration/ # Orchestrator, ValidationLoop, SubprocessRunner
interview/ # Interviewer, LevelDetector, ContextBuilder
delivery/ # DeliveryFactory (Simple/Technical/Enterprise)
infrastructure/ # Config, DI container, Guardian, i18n
web.py # Local web UI (FastAPI)
tools/
dataset/ # Fine-tune dataset curation pipeline
scraper.py # GitHub Search API scraper
filter.py # Quality filters (ast.parse, not regex)
analyzer.py # ComplexityAgent structural analysis
extractor.py # (prompt, completion) pair generation
pipeline.py # Orchestrator: scrape -> filter -> analyze -> extract
training/ # Fine-tune training tools
train.py # LoRA/QLoRA on Qwen 7B
eval.py # Structural quality metrics
merge.py # LoRA weight merge
quantize.py # GGUF quantization for Ollama
data/ # Dataset files (gitignored)
tests/ # 174 tests
flows/ # YAML pipeline definitions
Key design decisions:
- All domain objects are immutable (frozen=True, slots=True)
- LLM calls use exponential backoff retry with token budget management
- Partial JSON repair for malformed LLM outputs
- Cost tracking (estimated, not precise)
- Settings loaded with priority: env vars > ~/.kore/config.yaml > defaults
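Partial JSON repair handles the common failure where an LLM's output is cut off mid-object. A naive sketch, assuming truncation is the only defect (KORE's actual repair is more involved): close any unterminated string, then append the missing closing brackets in reverse order.

```python
import json

def repair_json(text: str) -> dict:
    """Repair truncated JSON by closing open strings and brackets."""
    opens: list[str] = []
    in_string = False
    escaped = False
    for ch in text:
        if escaped:
            escaped = False
            continue
        if ch == "\\":
            escaped = True
        elif ch == '"':
            in_string = not in_string
        elif not in_string and ch in "{[":
            opens.append(ch)
        elif not in_string and ch in "}]":
            opens.pop()
    if in_string:
        text += '"'  # close the unterminated string
    closers = {"{": "}", "[": "]"}
    text += "".join(closers[ch] for ch in reversed(opens))
    return json.loads(text)
```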
Limitations
Be aware of these before using KORE:
- Generated code is NOT production-ready. It's a structured starting point. Always review before deploying.
- Quality depends on the LLM. With Ollama 7B models, roughly 30-40% of generated code is directly usable; with GPT-4o, roughly 60-70%. These are rough estimates, not formal measurements.
- Self-validation is basic. It catches import errors and syntax issues. It does NOT detect logic errors, race conditions, or security vulnerabilities.
- No real sandboxing. Resource limits via setrlimit, but no Docker/seccomp/namespace isolation. The Guardian uses pattern matching, not semantic analysis.
- Python only. The validation loop only runs Python/pytest. Generated Dockerfiles and K8s manifests are not validated.
- Auto-generated context may be wrong. In --auto mode, the LLM invents target users, business requirements, and technical constraints that may not match your actual needs. Use interactive mode (the default) for more accurate results.
- Jobs are in-memory. If the process dies, all running jobs are lost. No persistence, no queue, no distributed workers.
- No telemetry. We don't know how many projects succeed or fail in the wild. Performance claims are estimates.
- Limited few-shot coverage. Five example types; projects outside these categories (ML, games, embedded) get no relevant example.
- With OpenAI/Anthropic, your data is sent to their APIs. Only Ollama keeps everything local.
For the full list, see the White Paper section 18.
Tests
pytest tests/ -q # 131 tests, <0.3s
The 131 tests verify KORE's infrastructure (pipeline, agents, guardian, interview, orchestration). They do NOT test the quality of generated code — that depends on the LLM and has not been formally benchmarked.
Development
git clone https://github.com/iafiscal1212/kore-platform.git
cd kore-platform
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/ -q
Documentation
- Libro Blanco (ES) — Technical white paper in Spanish
- White Paper (EN) — Technical white paper in English
License
MIT -- Copyright (c) 2026 IAFiscal
File details
Details for the file kore_platform-1.0.1.tar.gz.
File metadata
- Download URL: kore_platform-1.0.1.tar.gz
- Upload date:
- Size: 265.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ca34a8c7cf4c1110eb5eeba09a88da3a0b2f95a04ad69814a4eb62fb98f4674e |
| MD5 | f670f7605a67bb00383fae1643fa9d40 |
| BLAKE2b-256 | 4322c1802d0768e2206870cf7f81bcdb71bffcdab2adc1ef828718bc8ef3fb77 |
File details
Details for the file kore_platform-1.0.1-py3-none-any.whl.
File metadata
- Download URL: kore_platform-1.0.1-py3-none-any.whl
- Upload date:
- Size: 198.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 95dfb2d557c7eaa6c75ca180a0ea4ecc5be5b8c4d080a6cd853ca2bd991e8e01 |
| MD5 | cbc41fda292b9c034ab25ff83638b9aa |
| BLAKE2b-256 | 072e4f1a7b1bf25ebc4bec7a22c84f15fe03b94d483104459176da32810ce8c7 |