
Chatformers

⚡ Chatformers is a Python package designed to simplify the development of chatbot applications that use Large Language Models (LLMs). It offers automatic chat history management using a local vector database (ChromaDB, Qdrant or Pgvector), ensuring efficient context retrieval for ongoing conversations.


Install

pip install chatformers

Documentation

https://coda.io/@chatformers/chatformers

Why Choose chatformers?

  1. Effortless History Management: No need to manage extensive chat history manually; the package automatically handles it.
  2. Simple Integration: Build a chatbot with just a few lines of code.
  3. Full Customization: Maintain complete control over your data and conversations.
  4. Framework Compatibility: Easily integrate with any existing framework or codebase.

Key Features

  1. Easy Chatbot Creation: Set up a chatbot with minimal code.
  2. Automated History Management: Automatically stores and fetches chat history for context-aware conversations.

How It Works

  1. Project Setup: Create a basic project structure.
  2. Automatic Storage: Chatformers stores your conversations (user inputs and AI outputs) in VectorDB.
  3. Contextual Conversations: The chatbot fetches relevant chat history whenever you engage with the LLM.
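The steps above can be illustrated with a toy sketch in plain Python. This is not chatformers internals: a simple list stands in for the vector database, and a crude word-overlap score stands in for embedding similarity — the point is only to show the store-then-retrieve loop the package automates.

```python
# Toy sketch of the store-then-retrieve loop that chatformers automates.
# A plain list stands in for the vector DB, and a word-overlap score
# stands in for embedding similarity.
def score(query: str, text: str) -> int:
    """Relevance score: number of lowercase words shared by query and text."""
    return len(set(query.lower().split()) & set(text.lower().split()))

store: list[str] = []  # stands in for the vector database

def remember(user_msg: str, ai_msg: str) -> None:
    """Step 2: store each exchange (user input and AI output)."""
    store.append(f"user: {user_msg} | assistant: {ai_msg}")

def relevant_context(query: str, limit: int = 2) -> list[str]:
    """Step 3: fetch the most relevant past exchanges for a new query."""
    return sorted(store, key=lambda t: score(query, t), reverse=True)[:limit]

remember("My name is Sam, what about you?", "Hello Sam! I'm Julia.")
remember("where r u from?", "I am from CA, USA")

# For this query, the name exchange outscores the location exchange.
print(relevant_context("Do you remember my name?", limit=1))
```

In chatformers the scoring is done by the embedder over the vector store, but the shape of the loop — save every exchange, then rank and inject the top `limit` matches — is the same.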

Prerequisites

  1. Python: Ensure Python is installed on your system.
  2. GenAI Knowledge: Familiarity with Generative AI models.

Example Usage

Read the documentation for advanced usage: https://coda.io/@chatformers/chatformers

    from chatformers.chatbot import Chatbot
    import os
    from openai import OpenAI

    system_prompt = None  # None uses the default system prompt
    metadata = None  # None uses the default metadata
    user_id = "Sam-Julia"
    chat_model_name = "llama-3.1-8b-instant"
    memory_model_name = "llama-3.1-8b-instant"
    max_tokens = 150  # maximum number of tokens to generate from the LLM
    limit = 4  # maximum number of memories to add as context during chat
    debug = True  # enable to print debug messages

    os.environ["GROQ_API_KEY"] = ""
    llm_client = OpenAI(base_url="https://api.groq.com/openai/v1",
                        api_key="",
                        )  # any OpenAI-compatible LLM client; Groq is used here
    config = {
        "vector_store": {
            "provider": "chroma",
            "config": {
                "collection_name": user_id,
                "path": "db",
            }
        },
        "embedder": {
            "provider": "ollama",
            "config": {
                "model": "nomic-embed-text:latest"
            }
        },
        "llm": {
            "provider": "groq",
            "config": {
                "model": memory_model_name,
                "temperature": 0.1,
                "max_tokens": 1000,
            }
        },
    }

    chatbot = Chatbot(config=config, llm_client=llm_client, metadata=None, system_prompt=system_prompt,
                      chat_model_name=chat_model_name, memory_model_name=memory_model_name,
                      max_tokens=max_tokens, limit=limit, debug=debug)

    # Example of adding memories to the vector store
    memory_messages = [
        {"role": "user", "content": "My name is Sam, what about you?"},
        {"role": "assistant", "content": "Hello Sam! I'm Julia."}
    ]
    chatbot.add_memories(memory_messages, user_id=user_id)

    # Buffer window memory: this acts as a sliding-window memory for the LLM
    message_history = [{"role": "user", "content": "where r u from?"},
                       {"role": "assistant", "content": "I am from CA, USA"}]

    # Example of chatting with the bot; send the latest/current query here
    query = "Do you remember my name?"
    response = chatbot.chat(query=query, message_history=message_history, user_id=user_id, print_stream=True)
    print("Assistant: ", response)

    # Example to check memories in bot based on user_id
    # memories = chatbot.get_memories(user_id=user_id)
    # for m in memories:
    #     print(m)
    # print("================================================================")
    # related_memories = chatbot.related_memory(user_id=user_id,
    #                                           query="yes i am sam? what us your name")
    # print(related_memories)

FAQs

  1. Can I use custom LLM endpoints, Groq, or other models?

    • Yes, any OpenAI-compatible endpoint and model can be used.
  2. Can I use a custom-hosted ChromaDB?

    • Yes, you can specify custom endpoints for Chroma DB. If not provided, a Chroma directory will be created in your project's root folder.
  3. I don't want to manage history; I just want to chat.

    • Yes, set memory=False to disable history management and chat directly.
  4. Need help or have suggestions?
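For the custom-hosted ChromaDB case (FAQ 2), the vector-store block of the config might look like the sketch below. The `host` and `port` keys are an assumption based on common vector-store configurations (Chroma's server defaults to port 8000); check the chatformers documentation for the exact key names.

```python
# Hypothetical config for a remotely hosted Chroma instance.
# "host" and "port" are assumed keys -- verify against the chatformers docs.
config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "Sam-Julia",
            "host": "localhost",  # address of your Chroma server (assumption)
            "port": 8000,         # Chroma's default server port (assumption)
        },
    },
}
```

If these keys are omitted, as in the main example above, a local Chroma directory is created in your project's root folder instead.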


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

chatformers-1.0.6.tar.gz (8.9 kB)


Built Distribution

chatformers-1.0.6-py3-none-any.whl (10.0 kB)


File details

Details for the file chatformers-1.0.6.tar.gz.

File metadata

  • Download URL: chatformers-1.0.6.tar.gz
  • Upload date:
  • Size: 8.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for chatformers-1.0.6.tar.gz
  • SHA256: 7c55609f34a9bdb139fc40ff4a6c7e64a99db25829d6f17727b11b23e93130c2
  • MD5: e95653336e1cf38ce8298413bf69218e
  • BLAKE2b-256: 9223fc9e766d3dc4d620f413a721b13abd4ea1da440666efb244f95ba6f35fae

See more details on using hashes here.

File details

Details for the file chatformers-1.0.6-py3-none-any.whl.

File metadata

  • Download URL: chatformers-1.0.6-py3-none-any.whl
  • Upload date:
  • Size: 10.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for chatformers-1.0.6-py3-none-any.whl
  • SHA256: df5664489b196f08296d06b8569e35a62e5f60343613905e03a53988f189208d
  • MD5: 42c6663e96edf6d767c4fabf809f383a
  • BLAKE2b-256: aa803fbf6b11b19f81c84b2e7eed9e450f8fd5d33b2dbf46c8f2ab8b6b6cbeed

