Project description

Chatformers

⚡ Chatformers is a Python package designed to simplify the development of chatbot applications that use Large Language Models (LLMs). It offers automatic chat history management using a local vector database (ChromaDB, Qdrant, or Pgvector), ensuring efficient context retrieval for ongoing conversations.


Install

pip install chatformers

Documentation

https://chatformers.mintlify.app/introduction

Why Choose Chatformers?

  1. Effortless History Management: No need to manage extensive chat history manually; the package automatically handles it.
  2. Simple Integration: Build a chatbot with just a few lines of code.
  3. Full Customization: Maintain complete control over your data and conversations.
  4. Framework Compatibility: Easily integrate with any existing framework or codebase.

Key Features

  1. Easy Chatbot Creation: Set up a chatbot with minimal code.
  2. Automated History Management: Automatically stores and fetches chat history for context-aware conversations.

How It Works

  1. Project Setup: Create a basic project structure.
  2. Automatic Storage: Chatformers stores your conversations (user inputs and AI outputs) in the vector database.
  3. Contextual Conversations: The chatbot fetches relevant chat history whenever you engage with the LLM.
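The store-then-retrieve loop described above can be sketched conceptually. This is not the chatformers API: it uses a plain list and naive keyword overlap in place of a real vector database, purely to illustrate the flow the package automates.

```python
import re

memory_store = []  # stands in for the vector DB collection


def store_turn(user_msg, assistant_msg):
    """Step 2: persist both sides of each exchange."""
    memory_store.append({"role": "user", "content": user_msg})
    memory_store.append({"role": "assistant", "content": assistant_msg})


def fetch_relevant(query, limit=4):
    """Step 3: pull the stored messages most related to the new query."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = [
        (len(q_words & set(re.findall(r"\w+", m["content"].lower()))), m)
        for m in memory_store
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored[:limit] if score > 0]


store_turn("My name is Sam, what about you?", "Hello Sam! I'm Julia.")
context = fetch_relevant("Do you remember my name?")
# `context` would be prepended to the LLM prompt alongside the new query.
```

In the real package, keyword overlap is replaced by embedding similarity search against ChromaDB, Qdrant, or Pgvector.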

Prerequisites

  1. Python: Ensure Python is installed on your system.
  2. GenAI Knowledge: Familiarity with Generative AI models.

Example Usage

Read the documentation for advanced usage: https://coda.io/@chatformers/chatformers

    from chatformers.chatbot import Chatbot
    import os
    from openai import OpenAI

    system_prompt = None  # use the default
    metadata = None  # use the default metadata
    user_id = "Sam-Julia"
    chat_model_name = "llama-3.1-8b-instant"
    memory_model_name = "llama-3.1-8b-instant"
    max_tokens = 150  # maximum number of tokens to generate from the LLM
    limit = 4  # maximum number of memories to add as context during an LLM chat
    debug = True  # enable to print debug messages

    os.environ["GROQ_API_KEY"] = ""
    llm_client = OpenAI(base_url="https://api.groq.com/openai/v1",
                        api_key="",
                        )  # Any OpenAI Compatible LLM Client, using groq here
    config = {
        "vector_store": {
            "provider": "chroma",
            "config": {
                "collection_name": user_id,
                "path": "db",
            }
        },
        "embedder": {
            "provider": "ollama",
            "config": {
                "model": "nomic-embed-text:latest"
            }
        },
        "llm": {
            "provider": "groq",
            "config": {
                "model": memory_model_name,
                "temperature": 0.1,
                "max_tokens": 1000,
            }
        },
    }

    chatbot = Chatbot(config=config, llm_client=llm_client, metadata=None, system_prompt=system_prompt,
                      chat_model_name=chat_model_name, memory_model_name=memory_model_name,
                      max_tokens=max_tokens, limit=limit, debug=debug)

    # Example: add past messages as memories
    memory_messages = [
        {"role": "user", "content": "My name is Sam, what about you?"},
        {"role": "assistant", "content": "Hello Sam! I'm Julia."}
    ]
    chatbot.add_memories(memory_messages, user_id=user_id)

    # Buffer-window memory: this acts as a sliding window of recent messages for the LLM
    message_history = [{"role": "user", "content": "where r u from?"},
                       {"role": "assistant", "content": "I am from CA, USA"}]

    # Example: chat with the bot; send the latest/current query here
    query = "Do you remember my name?"
    response = chatbot.chat(query=query, message_history=message_history, user_id=user_id, print_stream=True)
    print("Assistant: ", response)

    # Example: inspect stored memories for a user_id
    # memories = chatbot.get_memories(user_id=user_id)
    # for m in memories:
    #     print(m)
    # print("================================================================")
    # related_memories = chatbot.related_memory(user_id=user_id,
    #                                           query="yes I am Sam, what is your name?")
    # print(related_memories)
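The `message_history` list above is a plain buffer-window memory. A minimal sketch of how such a window can be maintained follows; the helper name is illustrative, not part of the chatformers API.

```python
def trim_window(history, window=6):
    """Keep only the most recent `window` messages as short-term context."""
    return history[-window:]


# Build up a long conversation...
history = []
for i in range(5):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

# ...but only send the last few turns to the LLM.
recent = trim_window(history, window=4)
```

Older turns fall out of the window, but remain retrievable via the vector store, which is the division of labor chatformers relies on.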

FAQs

  1. Can I use custom LLM endpoints, Groq, or other models?

    • Yes, any OpenAI-compatible endpoints and models can be used.
  2. Can I use a custom-hosted ChromaDB?

    • Yes, you can specify custom endpoints for Chroma DB. If not provided, a Chroma directory will be created in your project's root folder.
  3. I don't want to manage history; I just want to chat.

    • Yes, set memory=False to disable history management and chat directly.
  4. Need help or have suggestions?
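For the custom-hosted ChromaDB case in FAQ 2, a hypothetical config sketch is shown below. The `host`/`port` keys are an assumption based on common Chroma client settings; check the chatformers docs for the exact schema.

```python
# Hypothetical config pointing the vector store at a remote Chroma server.
# The "host" and "port" keys are an assumption, not a confirmed schema.
config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "Sam-Julia",
            "host": "localhost",  # address of your Chroma server
            "port": 8000,         # default Chroma server port
        },
    },
}
```

If no endpoint is provided, a Chroma directory is created in your project's root folder, as noted above.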

Star History

Star History Chart

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

chatformers-1.0.7.tar.gz (8.9 kB)

Uploaded Source

Built Distribution

chatformers-1.0.7-py3-none-any.whl (10.1 kB)

Uploaded Python 3

File details

Details for the file chatformers-1.0.7.tar.gz.

File metadata

  • Download URL: chatformers-1.0.7.tar.gz
  • Upload date:
  • Size: 8.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for chatformers-1.0.7.tar.gz

  • SHA256: 67225c0e6c49de87128dd85c995806a0bbbb34427d9d6507237eb38292594584
  • MD5: c9b0c45184273af6840631a92951974f
  • BLAKE2b-256: f8996965cb7c3340fd9ce8bfbaa4b2c448d491e255f4a5fdd7e816a729a2ef22


File details

Details for the file chatformers-1.0.7-py3-none-any.whl.

File metadata

  • Download URL: chatformers-1.0.7-py3-none-any.whl
  • Upload date:
  • Size: 10.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for chatformers-1.0.7-py3-none-any.whl

  • SHA256: 262311af1d179ef0123fb5738a5f74dcbfc8687bac257e00c6a24c8ea2836037
  • MD5: 8f40e7ee14ab385091cd124a6e319129
  • BLAKE2b-256: 818c02c4cd89f244374f1f7c285f93e4bfb1bfccbb0de17cb9bd6066de4831b0

