Kondoo 🦙

A flexible, self-hosted RAG chatbot framework for containerized deployments.
Kondoo is not just a chatbot; it is a framework for building autonomous digital minds. Its name is inspired by the word “condominium,” a system of independent dwellings that share the same structure. Similarly, Kondoo allows multiple bots to operate independently, each with its own personality and knowledge base, but sharing the same robust, containerized framework.
This project was born with a “self-hosted first” philosophy, giving you complete control over your data and the models you use, from a local tinyllama to cloud APIs such as Gemini.
Kondoo: Your knowledge, your rules, your assistants.
🚀 Key Features
- Framework Agnostic: Not tied to a specific provider. Use `ANSWER_LLM_PROVIDER` to choose your answer engine (Gemini, OpenAI, Ollama) and `KNOWLEDGE_PROVIDER` for your embeddings (Ollama, local, OpenAI).
- Containerized by Design: Built on Podman and `compose`, ensuring maximum portability and clean, repeatable deployments.
- Self-Hosted First: Designed to run 100% locally, using Ollama for both embeddings and response generation, giving you full control and privacy.
- Flexible: Easily configure each bot's personality through a simple `personality.txt` file.
- Extensible: The `src/` structure makes it an installable Python package, ready to be imported into larger projects.
🏛️ Project Structure
Kondoo is structured as a Python framework, separating reusable code from implementation examples:
- `src/kondoo/`: The source code for the `kondoo` framework (installable via `pip`).
- `example/example_bot/`: A complete, functional example bot that shows how to use the framework. This is your starting point.
- `pyproject.toml`: Defines the project and all its dependencies.
- `.env.example`: A universal template with all available environment variables.
⚡ Quickstart Guide
Try Kondoo in 5 minutes using the sample bot.
1. Prerequisites
- Podman and `podman-compose`
- Python 3.9+
- Your own Ollama service (local or remote) or an API Key (e.g., Google Gemini).
- SynapsIA to create the knowledge base.
2. Clone the Repository
git clone https://github.com/sysadminctl-services/kondoo.git
cd kondoo
3. Set Up the Example Bot
Navigate to the example directory:
cd example/example_bot
Create your personal configuration file from the root template:
cp ../../.env.example .env
Edit the .env file and fill in the variables. For a 100% local test with Ollama:
# example/example_bot/.env
ANSWER_LLM_PROVIDER=ollama_compatible
KNOWLEDGE_PROVIDER=ollama
LLM_MODEL_NAME="tinyllama"
LLM_BASE_URL="http://host.containers.internal:11434/v1"
LLM_API_KEY="ollama"
EMBEDDING_MODEL_NAME="mxbai-embed-large"
OLLAMA_BASE_URL="http://host.containers.internal:11434"
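Before launching the container, you can sanity-check a `.env` file with a few lines of Python. The minimal parser below is purely illustrative (the framework loads `.env` itself); it reads `KEY=VALUE` lines, skipping comments and stripping surrounding quotes:

```python
# Minimal illustrative .env parser: reads KEY=VALUE lines, skips comments
# and blank lines, and strips surrounding double quotes from values.
# The kondoo framework loads .env itself; this is only a sanity check.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = '''
# example/example_bot/.env
ANSWER_LLM_PROVIDER=ollama_compatible
KNOWLEDGE_PROVIDER=ollama
LLM_MODEL_NAME="tinyllama"
LLM_BASE_URL="http://host.containers.internal:11434/v1"
'''

config = parse_env(sample)
print(config["LLM_MODEL_NAME"])  # tinyllama
```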
4. Create the Knowledge Base
Create the directories for the documents and the knowledge base:
mkdir docs
mkdir knowledge
echo "Kondoo is a RAG chatbot framework." > docs/info.txt
Use SynapsIA to process your documents:
python synapsia.py --docs ../Kondoo/example/example_bot/docs/ --knowledge ../Kondoo/example/example_bot/knowledge/
5. Launch the Container
Return to the bot directory and run podman-compose:
# While in example/example_bot/
podman-compose up --build
6. Test the Bot
Open a new terminal and send a query using curl:
curl -X POST \
-H "Content-Type: application/json" \
-d '{"query": "What is Kondoo?"}' \
http://localhost:5000/query
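The same query can also be sent from Python using only the standard library. The endpoint and payload below are taken from the curl example above; the actual request is left commented out so the snippet is safe to run before the container is up:

```python
import json
import urllib.request

# Build the same POST request as the curl example above.
url = "http://localhost:5000/query"
payload = json.dumps({"query": "What is Kondoo?"}).encode("utf-8")
req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the container is running to send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```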
You should receive a JSON response generated by your local tinyllama.
⚙️ Configuration (.env)
All configuration variables are documented in the .env.example file. Variables are loaded from .env in your bot's directory (e.g., example/example_bot/.env).
1. Provider Selection
These variables act as "switches" to choose which services to use.
- `ANSWER_LLM_PROVIDER`: Choose your response (LLM) engine.
  - `gemini`: (Cloud) Google Gemini (requires `LLM_API_KEY`).
  - `openai`: (Cloud) OpenAI (requires `LLM_API_KEY`).
  - `ollama_compatible`: (Self-Hosted) Any OpenAI-compatible API, like Ollama (requires `LLM_BASE_URL` and `LLM_MODEL_NAME`).
- `KNOWLEDGE_PROVIDER`: Choose your embeddings (knowledge) engine.
  - `ollama`: (Self-Hosted) Use an Ollama service (requires `OLLAMA_BASE_URL` and `EMBEDDING_MODEL_NAME`).
  - `local`: (Local) Use a HuggingFace model on the CPU/GPU (requires `EMBEDDING_MODEL_NAME`).
  - `openai`: (Cloud) Use OpenAI's embeddings API (requires `LLM_API_KEY`).
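The provider-to-required-variables rules documented here can be summarized as a small lookup table. The sketch below is an illustration of that contract, not code from the `kondoo` package itself:

```python
# Illustrative mapping of provider choices to the variables each one
# requires, as documented above. Not part of the kondoo package itself.
ANSWER_PROVIDERS = {
    "gemini": {"LLM_API_KEY", "LLM_MODEL_NAME"},
    "openai": {"LLM_API_KEY", "LLM_MODEL_NAME"},
    "ollama_compatible": {"LLM_BASE_URL", "LLM_MODEL_NAME"},
}
KNOWLEDGE_PROVIDERS = {
    "ollama": {"OLLAMA_BASE_URL", "EMBEDDING_MODEL_NAME"},
    "local": {"EMBEDDING_MODEL_NAME"},
    "openai": {"LLM_API_KEY", "EMBEDDING_MODEL_NAME"},
}

def missing_vars(env: dict) -> set:
    """Return the variables still unset for the selected providers."""
    required = set()
    required |= ANSWER_PROVIDERS.get(env.get("ANSWER_LLM_PROVIDER"), set())
    required |= KNOWLEDGE_PROVIDERS.get(env.get("KNOWLEDGE_PROVIDER"), set())
    return {v for v in required if not env.get(v)}

env = {
    "ANSWER_LLM_PROVIDER": "ollama_compatible",
    "KNOWLEDGE_PROVIDER": "ollama",
    "LLM_MODEL_NAME": "tinyllama",
    "LLM_BASE_URL": "http://host.containers.internal:11434/v1",
}
print(sorted(missing_vars(env)))  # ['EMBEDDING_MODEL_NAME', 'OLLAMA_BASE_URL']
```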
2. Provider-Specific Settings
These are the "control knobs" required by the providers you selected above.
Answer Engine (LLM) Settings
- `LLM_API_KEY`:
  - Required by: `gemini`, `openai`.
  - Description: Your secret API key for the chosen cloud service.
- `LLM_MODEL_NAME`:
  - Required by: `gemini`, `openai`, `ollama_compatible`.
  - Description: The specific model name to use for generating answers.
  - Examples: `models/gemini-1.5-flash`, `gpt-4o`, `tinyllama`.
- `LLM_BASE_URL`:
  - Required by: `ollama_compatible`.
  - Description: The full base URL of your self-hosted LLM's OpenAI-compatible API.
  - Example (Ollama): `http://host.containers.internal:11434/v1`
Knowledge (Embedding) Settings
- `EMBEDDING_MODEL_NAME`:
  - Required by: `ollama`, `local`, `openai`.
  - Description: The specific model name to use for embeddings.
  - Examples: `mxbai-embed-large`, `nomic-embed-text`.
- `OLLAMA_BASE_URL`:
  - Required by: `ollama` (provider).
  - Description: The base URL of your Ollama service (the non-`/v1` endpoint).
  - Example: `http://host.containers.internal:11434`
3. Bot Configuration
These variables control the bot's identity and data paths.
- `BOT_PERSONALITY_FILE`:
  - Description: The path inside the container to the text file that defines the bot's personality.
  - Default: `/app/personality.txt` (as set by the `Containerfile`).
- `KNOWLEDGE_DIR`:
  - Description: The path inside the container where the bot will load its knowledge base from.
  - Default: `/app/knowledge` (as set by the `compose.yaml` volume).
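A bot's personality loading might look like the sketch below, using the documented defaults. This is an illustrative assumption about the framework's behavior, not its actual code; the `load_personality` helper is hypothetical:

```python
import os

# Illustrative sketch of loading the personality file using the documented
# BOT_PERSONALITY_FILE variable and its /app/personality.txt default.
# The load_personality name is hypothetical; the real loading logic lives
# inside the kondoo framework.
def load_personality(path=None):
    path = path or os.environ.get("BOT_PERSONALITY_FILE", "/app/personality.txt")
    with open(path, encoding="utf-8") as f:
        return f.read().strip()
```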
⚖️ License
This project is licensed under the MIT License. See the LICENSE file for more details.