A flexible, self-hosted RAG chatbot framework for containerized deployments.

Project description

Kondoo 🦙

Kondoo is not just a chatbot; it is a framework for building autonomous digital minds. Its name is inspired by the word “condominium,” a system of independent dwellings that share the same structure. Similarly, Kondoo allows multiple bots to operate independently, each with its own personality and knowledge base, but sharing the same robust, containerized framework.

This project was born with a “self-hosted first” philosophy, giving you complete control over your data and the models you use, from a local tinyllama to cloud APIs such as Gemini.

Kondoo: Your knowledge, your rules, your assistants.


🚀 Key Features

  • Provider Agnostic: Not tied to a specific provider. Use ANSWER_LLM_PROVIDER to choose your answer engine (Gemini, OpenAI, Ollama) and KNOWLEDGE_PROVIDER for your embeddings (Ollama, local, OpenAI).
  • Containerized by Design: Built on Podman and podman-compose, ensuring maximum portability and clean, repeatable deployments.
  • Self-Hosted First: Designed to run 100% locally, using Ollama for both embeddings and response generation, giving you full control and privacy.
  • Flexible: Easily configure each bot's personality through a simple personality.txt file.
  • Extensible: The src/ structure makes it an installable Python package, ready to be imported into larger projects.

🏛️ Project Structure

Kondoo is structured as a Python framework, separating reusable code from implementation examples:

  • src/kondoo/: The source code for the kondoo framework (installable via pip).
  • example/example_bot/: A complete and functional example bot that shows how to use the framework. This is your starting point.
  • pyproject.toml: Defines the project and all its dependencies.
  • .env.example: A universal template with all available environment variables.
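
Since the package is published on PyPI, the framework itself can also be installed directly:

pip install kondoo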

⚡ Quickstart Guide

Try Kondoo in 5 minutes using the example bot.

1. Prerequisites

  • Podman and podman-compose.
  • Python 3.9+
  • Your own Ollama service (local or remote) or an API Key (e.g., Google Gemini).
  • SynapsIA to create the knowledge base.
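
If you take the fully local route, pull the models used in this guide into your Ollama instance first (model names match the sample .env below):

ollama pull tinyllama
ollama pull mxbai-embed-large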

2. Clone the Repository

git clone https://github.com/sysadminctl-services/kondoo.git
cd kondoo

3. Set Up the Example Bot

Navigate to the example directory:

cd example/example_bot

Create your personal configuration file from the root template:

cp ../../.env.example .env

Edit the .env file and fill in the variables. For a 100% local test with Ollama:

# example/example_bot/.env
ANSWER_LLM_PROVIDER=ollama_compatible
KNOWLEDGE_PROVIDER=ollama

LLM_MODEL_NAME="tinyllama"
LLM_BASE_URL="http://host.containers.internal:11434/v1"
LLM_API_KEY="ollama"

EMBEDDING_MODEL_NAME="mxbai-embed-large"
OLLAMA_BASE_URL="http://host.containers.internal:11434"
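
If you would rather use a cloud backend, a variant along these lines should work (the Gemini model name follows the examples later in this README; the embedding model is an illustrative HuggingFace identifier for the local provider):

# example/example_bot/.env (cloud variant)
ANSWER_LLM_PROVIDER=gemini
KNOWLEDGE_PROVIDER=local

LLM_MODEL_NAME="models/gemini-1.5-flash"
LLM_API_KEY="your-google-api-key"

EMBEDDING_MODEL_NAME="sentence-transformers/all-MiniLM-L6-v2"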

4. Create the Knowledge Base

Create the directories for the documents and the knowledge base:

mkdir docs knowledge
echo "Kondoo is a RAG chatbot framework." > docs/info.txt

Use SynapsIA to process your documents (the command below assumes SynapsIA lives in a sibling directory of your kondoo clone; adjust the paths to your layout):

python synapsia.py --docs ../kondoo/example/example_bot/docs/ --knowledge ../kondoo/example/example_bot/knowledge/

5. Launch the Container

Return to the bot directory and run podman-compose:

# While in example/example_bot/
podman-compose up --build

6. Test the Bot

Open a new terminal and send a query using curl:

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "What is Kondoo?"}' \
  http://localhost:5000/query

You should receive a JSON response generated by your local tinyllama.
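
The same request can be sent from Python; this is a minimal sketch using the requests library, with the endpoint and payload exactly as in the curl call:

import requests

# Send the same query the curl example sends and print the JSON reply.
resp = requests.post(
    "http://localhost:5000/query",
    json={"query": "What is Kondoo?"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())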

⚙️ Configuration (.env)

All configuration variables are documented in the .env.example file. Variables are loaded from .env in your bot's directory (e.g., example/example_bot/.env).

1. Provider Selection

These variables act as "switches" to choose which services to use.

  • ANSWER_LLM_PROVIDER: Choose your response (LLM) engine.
    • gemini: (Cloud) Google Gemini (requires LLM_API_KEY).
    • openai: (Cloud) OpenAI (requires LLM_API_KEY).
    • ollama_compatible: (Self-Hosted) Any OpenAI-compatible API, like Ollama (requires LLM_BASE_URL and LLM_MODEL_NAME).
  • KNOWLEDGE_PROVIDER: Choose your embeddings (knowledge) engine.
    • ollama: (Self-Hosted) Use an Ollama service (requires OLLAMA_BASE_URL and EMBEDDING_MODEL_NAME).
    • local: (Local) Use a HuggingFace model on the CPU/GPU (requires EMBEDDING_MODEL_NAME).
    • openai: (Cloud) Use OpenAI's embeddings API (requires LLM_API_KEY).

2. Provider-Specific Settings

These are the "control knobs" required by the providers you selected above.

Answer Engine (LLM) Settings

  • LLM_API_KEY:
    • Required by: gemini, openai.
    • Description: Your secret API key for the chosen cloud service.
  • LLM_MODEL_NAME:
    • Required by: gemini, openai, ollama_compatible.
    • Description: The specific model name to use for generating answers.
    • Examples: models/gemini-1.5-flash, gpt-4o, tinyllama.
  • LLM_BASE_URL:
    • Required by: ollama_compatible.
    • Description: The full base URL of your self-hosted LLM's OpenAI-compatible API.
    • Example (Ollama): http://host.containers.internal:11434/v1

Knowledge (Embedding) Settings

  • EMBEDDING_MODEL_NAME:
    • Required by: ollama, local, openai.
    • Description: The specific model name to use for embeddings.
    • Examples: mxbai-embed-large, nomic-embed-text.
  • OLLAMA_BASE_URL:
    • Required by: ollama (provider).
    • Description: The base URL of your Ollama service (the non-/v1 endpoint).
    • Example: http://host.containers.internal:11434
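
As a sanity check before launching, you can verify that the variables required by your selected providers are actually set. This is a hypothetical helper, not part of the framework; the requirement matrix mirrors the tables above:

import os

# Which variables each provider needs, per the documentation above.
ANSWER_REQUIRES = {
    "gemini": ["LLM_API_KEY", "LLM_MODEL_NAME"],
    "openai": ["LLM_API_KEY", "LLM_MODEL_NAME"],
    "ollama_compatible": ["LLM_BASE_URL", "LLM_MODEL_NAME"],
}
KNOWLEDGE_REQUIRES = {
    "ollama": ["OLLAMA_BASE_URL", "EMBEDDING_MODEL_NAME"],
    "local": ["EMBEDDING_MODEL_NAME"],
    "openai": ["LLM_API_KEY", "EMBEDDING_MODEL_NAME"],
}

for switch, table in (("ANSWER_LLM_PROVIDER", ANSWER_REQUIRES),
                      ("KNOWLEDGE_PROVIDER", KNOWLEDGE_REQUIRES)):
    provider = os.environ.get(switch, "")
    missing = [v for v in table.get(provider, []) if not os.environ.get(v)]
    if missing:
        print(f"{switch}={provider!r} is missing: {', '.join(missing)}")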

3. Bot Configuration

These variables control the bot's identity and data paths.

  • BOT_PERSONALITY_FILE:
    • Description: The path inside the container to the text file that defines the bot's personality.
    • Default: /app/personality.txt (as set by the Containerfile).
  • KNOWLEDGE_DIR:
    • Description: The path inside the container where the bot will load its knowledge base from.
    • Default: /app/knowledge (as set by the compose.yaml volume).
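
For reference, a personality.txt is just free-form text that defines the bot's persona. An illustrative example (the content is entirely up to you):

You are Kondoo-Bot, a concise assistant for the Kondoo project.
Answer only from the provided knowledge base, and say so plainly
when the answer is not in it.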

⚖️ License

This project is licensed under the MIT License. See the LICENSE file for more details.

Download files

Download the file for your platform.

Source Distribution

kondoo-0.1.2.tar.gz (7.5 kB)

Built Distribution

kondoo-0.1.2-py3-none-any.whl (7.4 kB)

File details

Details for the file kondoo-0.1.2.tar.gz.

File metadata

  • Download URL: kondoo-0.1.2.tar.gz
  • Size: 7.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for kondoo-0.1.2.tar.gz

  • SHA256: dbf43274c786677e28c47baa93370947dfa431384f805c1abec2a15848322784
  • MD5: 6e3ff603fd9110449e1d76e0010421fb
  • BLAKE2b-256: 9b4aada6422ed0aee2da66a92b9031c3ede9931268b2826b3755ef1eebfee607


Provenance

The following attestation bundles were made for kondoo-0.1.2.tar.gz:

Publisher: publish-to-pypi.yml on sysadminctl-services/kondoo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file kondoo-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: kondoo-0.1.2-py3-none-any.whl
  • Size: 7.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for kondoo-0.1.2-py3-none-any.whl

  • SHA256: 900d06400965f2e735bf0be47e28fcea07fc643a8994bf704a1a1f4b26c67710
  • MD5: 63523b2f56ecfaf69e2e4f8ef0dc3e44
  • BLAKE2b-256: 82c6b1be329f8c2f7a1a3c5005ffd1b289c184807af809ca46dd826fc36cb0a2


Provenance

The following attestation bundles were made for kondoo-0.1.2-py3-none-any.whl:

Publisher: publish-to-pypi.yml on sysadminctl-services/kondoo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
