KISS AI Stack - Core

Effortless AI Stack Building

Welcome to the core of the KISS AI Stack! This module helps you build a stack effortlessly using a simple YAML configuration file. Say goodbye to boilerplate code and embrace minimalism with the KISS principle (Keep It Simple, Stupid).


Features

  • Centralized Stack Management: Manage multiple session-based AI stacks with lifecycle support.
  • Minimal Dependencies: Built using simple, vanilla vendor libraries.
  • Tool Classification: Configure tools for your stack to handle specific tasks easily.
  • Supports RAG and Prompt-Based Models: Choose the model type that suits your needs.
  • Thread-Safe: Reliable operation in multi-threaded environments.

Installation

Install the core module using pip:

pip install kiss-ai-stack-core

Example Configuration

Here’s an example YAML configuration to set up an AI stack with different tools:

stack:
  decision_maker: # Required for tool classification
    name: decision_maker
    role: classify tools for given queries
    kind: prompt  # Choose from 'rag' or 'prompt'
    ai_client:
      provider: openai
      model: gpt-4
      api_key: <your-api-key>

  tools:
    - name: general_queries
      role: process other queries if no suitable tool is found.
      kind: prompt
      ai_client:
        provider: openai
        model: gpt-4
        api_key: <your-api-key>

    - name: document_tool
      role: process documents and provide answers based on them.
      kind: rag  # Retrieval-Augmented Generation
      embeddings: text-embedding-ada-002
      ai_client:
        provider: openai
        model: gpt-4
        api_key: <your-api-key>

  vector_db:
    provider: chroma
    kind: remote  # Choose from 'in-memory', 'storage', or 'remote'
    host: 0.0.0.0
    port: 8000
    secure: false

Example Python Usage

Use the core module to build and interact with your AI stack:

import asyncio

from kiss_ai_stack import Stacks

async def main():
    try:
        # Bootstrap a temporary stack session
        await Stacks.bootstrap_stack(stack_id="my_stack", temporary=True)

        # Process a query
        response = await Stacks.generate_answer(
            stack_id="my_stack",
            query="What is Retrieval-Augmented Generation?"
        )
        print(response.answer)

    except Exception as ex:
        print(f"An error occurred: {ex}")

# Run the example
asyncio.run(main())

How It Works

  1. Stack Initialization: Use Stacks.bootstrap_stack to initialize a stack with its configuration and resources.
  2. Query Processing: Process queries with Stacks.generate_answer, leveraging the tools and AI clients defined in the YAML configuration.
  3. Tool Management: Define tools to handle specific tasks like document processing or query classification.
  4. Vector Database: Use the vector_db section to define how document embeddings are stored and retrieved for RAG-based tasks. Currently, only Chroma is supported.
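To make the routing idea in steps 1–3 concrete, here is an illustrative sketch of how a decision maker might map a query to one of the tools from the YAML example above. This is not the library's internal implementation (the real stack delegates classification to the configured decision_maker prompt model); the keyword heuristic below is purely a stand-in for that LLM call:

```python
# Illustrative sketch only: a keyword-based stand-in for the LLM-backed
# decision maker. Tool names mirror the YAML example in this README.
TOOLS = {"general_queries", "document_tool"}

def classify(query: str) -> str:
    """Pick a tool name for the query; fall back to general_queries."""
    document_hints = ("document", "file", "pdf", "page")
    if any(hint in query.lower() for hint in document_hints):
        return "document_tool"
    return "general_queries"

print(classify("Summarize this document for me"))
print(classify("What is the capital of France?"))
```

In the real stack, the decision maker's `role` string ("classify tools for given queries") is what steers the prompt model toward this kind of routing decision.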

Documentation

Key Methods

  • bootstrap_stack(stack_id: str, temporary: bool): Initialize a new stack session.
  • generate_answer(stack_id: str, query: Union[str, Dict, List]): Process a query and return a response.

Configuration Highlights

  • AI Client: Configure the provider, model, and API key for supported services like OpenAI.
  • Tools: Define tools such as general-purpose query handlers or document processors.
  • Vector Database: Set up in-memory or persistent storage for RAG-based tasks.
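For local experiments without a running Chroma server, the vector_db section can presumably be switched from the remote setup shown earlier to the in-memory option. The exact key values below are an assumption based on the comment in the example configuration (only kind: remote is demonstrated in this README):

```yaml
vector_db:
  provider: chroma   # currently the only supported provider
  kind: in-memory    # assumed value; 'storage' and 'remote' are the alternatives
```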

Contributing

We welcome contributions! Submit pull requests or open issues to improve this stack.


License

This project is licensed under the MIT License. See the LICENSE file for details.

