
Chatformers is a Python package that simplifies chatbot development by automatically managing chat history using local vector databases like Chroma DB.

Project description

Chatformers

⚡ Chatformers is a Python package designed to simplify the development of chatbot applications that use Large Language Models (LLMs). It offers automatic chat history management using a local vector database (ChromaDB, Qdrant or Pgvector), ensuring efficient context retrieval for ongoing conversations.


Install

pip install chatformers

Documentation

https://chatformers.mintlify.app/introduction

Why Choose chatformers?

  1. Effortless History Management: No need to manage extensive chat history manually; the package automatically handles it.
  2. Simple Integration: Build a chatbot with just a few lines of code.
  3. Full Customization: Maintain complete control over your data and conversations.
  4. Framework Compatibility: Easily integrate with any existing framework or codebase.

Key Features

  1. Easy Chatbot Creation: Set up a chatbot with minimal code.
  2. Automated History Management: Automatically stores and fetches chat history for context-aware conversations.

How It Works

  1. Project Setup: Create a basic project structure.
  2. Automatic Storage: Chatformers stores your conversations (user inputs and AI outputs) in the vector database.
  3. Contextual Conversations: The chatbot fetches relevant chat history whenever you engage with the LLM.

Prerequisites

  1. Python: Ensure Python is installed on your system.
  2. GenAI Knowledge: Familiarity with Generative AI models.
  3. For the example below: a running Ollama server (serving the nomic-embed-text embedding model) and an API key for Groq or another OpenAI-compatible provider.

Example Usage

Read the documentation for advanced usage: https://chatformers.mintlify.app/development

   from chatformers.chatbot import Chatbot
   import os
   from openai import OpenAI
   
   
   system_prompt = None  # use the default
   metadata = None  # use the default metadata
   user_id = "Sam-Julia"
   chat_model_name = "llama-3.1-70b-versatile"
   memory_model_name = "llama-3.1-70b-versatile"
   max_tokens = 150  # maximum number of tokens to generate from the LLM
   limit = 4  # maximum number of memories to add to the LLM chat context
   debug = True  # enable to print debug messages
   
   os.environ["GROQ_API_KEY"] = ""  # set your Groq API key here
   llm_client = OpenAI(base_url="https://api.groq.com/openai/v1",
                       api_key="",
                       )  # Any OpenAI Compatible LLM Client
   config = {
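       # Vector store: where conversation memories are kept (here, a local Chroma collection persisted under ./db)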
       "vector_store": {
           "provider": "chroma",
           "config": {
               "collection_name": "test",
               "path": "db",
           }
       },
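       # Embedder: model used to embed memories for similarity search (assumes a local Ollama server with nomic-embed-text pulled)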
       "embedder": {
           "provider": "ollama",
           "config": {
               "model": "nomic-embed-text:latest"
           }
       },
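       # LLM: model used for memory management (memory_model_name above, served via Groq)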
       "llm": {
           "provider": "groq",
           "config": {
               "model": memory_model_name,
               "temperature": 0.1,
               "max_tokens": 1000,
           }
       },
   }
   
   chatbot = Chatbot(config=config, llm_client=llm_client, metadata=None, system_prompt=system_prompt,
                     chat_model_name=chat_model_name, memory_model_name=memory_model_name,
                     max_tokens=max_tokens, limit=limit, debug=debug)
   
   # Example to add buffer memory
   memory_messages = [
       {"role": "user", "content": "My name is Sam, what about you?"},
       {"role": "assistant", "content": "Hello Sam! I'm Julia."},
       {"role": "user", "content": "What do you like to eat?"},
       {"role": "assistant", "content": "I like pizza"}
   ]
   chatbot.add_memories(memory_messages, user_id=user_id)
   
   # Buffer window memory; this acts as a sliding-window memory for the LLM
   message_history = [{"role": "user", "content": "where r u from?"},
                      {"role": "assistant", "content": "I am from CA, USA"},
                      {"role": "user", "content": "ok"},
                      {"role": "assistant", "content": "hmm"},
                      {"role": "user", "content": "What are u doing on next Sunday?"},
                      {"role": "assistant", "content": "I am all available"}
                      ]
   # Example of chatting with the bot; pass the latest / current query here
   query = "Could you remind me what do you like to eat?"
   response = chatbot.chat(query=query, message_history=message_history, user_id=user_id, print_stream=True)
   print("Assistant: ", response)
   
   # # Example of checking stored memories for a given user_id
   # memories = chatbot.get_memories(user_id=user_id)
   # for m in memories:
   #     print(m)
   # print("================================================================")
   # related_memories = chatbot.related_memory(user_id=user_id,
   #                                           query="yes I am Sam, what is your name?")
   # print(related_memories)

FAQs

  1. Can I customize the LLM endpoint (Groq or other models)?

    • Yes, any OpenAI-compatible endpoint and model can be used (see the sketch after this list).
  2. Can I use a custom-hosted ChromaDB or another vector database?

    • Yes, see the documentation (and the sketch after this list).
  3. Need help or have suggestions?
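
To illustrate the first two answers, here is a minimal, hypothetical sketch of swapping in a different OpenAI-compatible endpoint and a self-hosted vector database. The Ollama base URL, the "qdrant" provider name, and the host/port keys below are assumptions rather than details taken from this page; check the documentation for the exact configuration schema.

   from openai import OpenAI

   # Any OpenAI-compatible endpoint can serve as the chat client; here a locally
   # hosted Ollama server exposing its OpenAI-compatible API is assumed.
   llm_client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
   chat_model_name = "llama3.1"  # any model available on that endpoint

   # Hypothetical config pointing at a self-hosted vector database instead of the
   # local Chroma path used above; provider name and connection keys are assumptions.
   config = {
       "vector_store": {
           "provider": "qdrant",
           "config": {
               "collection_name": "test",
               "host": "localhost",
               "port": 6333,
           }
       },
       # "embedder" and "llm" sections as in the example above
   }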

Star History

(Star history chart)



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

chatformers-1.0.8.tar.gz (9.6 kB, Source)

Built Distribution

chatformers-1.0.8-py3-none-any.whl (10.4 kB, Python 3)

File details

Details for the file chatformers-1.0.8.tar.gz.

File metadata

  • Download URL: chatformers-1.0.8.tar.gz
  • Upload date:
  • Size: 9.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for chatformers-1.0.8.tar.gz
  • SHA256: ffadc123d7b7ef49ba41a6c00b084855794dbc8c77a40e52d0b2ec31e0e3e635
  • MD5: 5dc814d1c42bb3674608375a179b4ae1
  • BLAKE2b-256: 222d6859a48d850e2f0c9bb4e46c5f8a5de563a976de322c2bfc9bb31cfe5a4c


File details

Details for the file chatformers-1.0.8-py3-none-any.whl.

File metadata

  • Download URL: chatformers-1.0.8-py3-none-any.whl
  • Upload date:
  • Size: 10.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for chatformers-1.0.8-py3-none-any.whl
  • SHA256: 9e22b1d1e51e3832738c8e358334573d3f04479599d6006c1cdf30dca1661d3a
  • MD5: f1678fcbb5137fad3b13943cd50697ec
  • BLAKE2b-256: af3c9c79d76b8aefe578fb6f1c37397fc49a4893e0de206681975733c37a5b6d

