Chatformers
⚡ Chatformers is a Python package designed to simplify the development of chatbot applications that use Large Language Models (LLMs). It offers automatic chat history management using a local vector database (ChromaDB, Qdrant, or Pgvector), ensuring efficient context retrieval for ongoing conversations.
Install
pip install chatformers
Documentation
https://chatformers.mintlify.app/introduction
Why Choose Chatformers?
- Effortless History Management: No need to manage extensive chat history manually; the package automatically handles it.
- Simple Integration: Build a chatbot with just a few lines of code.
- Full Customization: Maintain complete control over your data and conversations.
- Framework Compatibility: Easily integrate with any existing framework or codebase.
Key Features
- Easy Chatbot Creation: Set up a chatbot with minimal code.
- Automated History Management: Automatically stores and fetches chat history for context-aware conversations.
How It Works
- Project Setup: Create a basic project structure.
- Automatic Storage: Chatformers stores your conversations (user inputs and AI outputs) in VectorDB.
- Contextual Conversations: The chatbot fetches relevant chat history whenever you engage with the LLM.
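To make the store-then-retrieve flow concrete, here is a minimal, self-contained sketch of the idea. This is not chatformers' actual implementation: the class, the word-overlap "similarity", and the method names are all illustrative stand-ins for what a real vector database and embedding model would do.

```python
# Toy illustration of the store-then-retrieve loop described above.
# A real setup would embed each turn and query a vector DB; here we
# rank stored turns by shared words purely to show the flow.

class ToyMemoryStore:
    def __init__(self):
        self.memories = []  # stand-in for a vector DB collection

    def add(self, user_id, text):
        # "Automatic Storage": every conversation turn is persisted.
        self.memories.append((user_id, text))

    def search(self, user_id, query, limit=2):
        # "Contextual Conversations": fetch the turns most relevant
        # to the new query. Word overlap stands in for embedding similarity.
        q = set(query.lower().split())
        scored = [(len(q & set(t.lower().split())), t)
                  for uid, t in self.memories if uid == user_id]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [t for score, t in scored[:limit] if score > 0]

store = ToyMemoryStore()
store.add("Sam-Julia", "user: What do you like to eat?")
store.add("Sam-Julia", "assistant: I like pizza")
store.add("Sam-Julia", "user: Where are you from?")

# Before calling the LLM, relevant history is fetched as context:
context = store.search("Sam-Julia", "remind me what you like to eat")
print(context)
```

The chatbot then prepends the retrieved context to the LLM prompt, so the model can answer with knowledge of earlier turns without receiving the entire history.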
Prerequisites
- Python: Ensure Python is installed on your system.
- GenAI Knowledge: Familiarity with Generative AI models.
Example Usage
Read the documentation for advanced usage and a deeper understanding: https://chatformers.mintlify.app/development
from chatformers.chatbot import Chatbot
import os
from openai import OpenAI

system_prompt = None  # use the default system prompt
metadata = None  # use the default metadata
user_id = "Sam-Julia"
chat_model_name = "llama-3.1-70b-versatile"
memory_model_name = "llama-3.1-70b-versatile"
max_tokens = 150  # maximum number of tokens to generate from the LLM
limit = 4  # maximum number of memories to add during an LLM chat
debug = True  # enable to print debug messages

os.environ["GROQ_API_KEY"] = ""  # set your Groq API key here
llm_client = OpenAI(base_url="https://api.groq.com/openai/v1",
                    api_key="",
                    )  # any OpenAI-compatible LLM client

config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "test",
            "path": "db",
        }
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest"
        }
    },
    "llm": {
        "provider": "groq",
        "config": {
            "model": memory_model_name,
            "temperature": 0.1,
            "max_tokens": 1000,
        }
    },
}

chatbot = Chatbot(config=config, llm_client=llm_client, metadata=metadata, system_prompt=system_prompt,
                  chat_model_name=chat_model_name, memory_model_name=memory_model_name,
                  max_tokens=max_tokens, limit=limit, debug=debug)

# Example: add buffer memories
memory_messages = [
    {"role": "user", "content": "My name is Sam, what about you?"},
    {"role": "assistant", "content": "Hello Sam! I'm Julia."},
    {"role": "user", "content": "What do you like to eat?"},
    {"role": "assistant", "content": "I like pizza"}
]
chatbot.add_memories(memory_messages, user_id=user_id)

# Buffer window memory: this acts as a sliding-window memory for the LLM
message_history = [
    {"role": "user", "content": "where r u from?"},
    {"role": "assistant", "content": "I am from CA, USA"},
    {"role": "user", "content": "ok"},
    {"role": "assistant", "content": "hmm"},
    {"role": "user", "content": "What are u doing on next Sunday?"},
    {"role": "assistant", "content": "I am all available"}
]

# Example: chat with the bot; send the latest / current query here
query = "Could you remind me what you like to eat?"
response = chatbot.chat(query=query, message_history=message_history, user_id=user_id, print_stream=True)
print("Assistant: ", response)

# Example: inspect the memories stored for a user_id
# memories = chatbot.get_memories(user_id=user_id)
# for m in memories:
#     print(m)
# print("================================================================")
# related_memories = chatbot.related_memory(user_id=user_id,
#                                           query="yes i am sam? what is your name")
# print(related_memories)
FAQs
Can I customize LLM endpoints, Groq, or other models?
- Yes, any OpenAI-compatible endpoint and model can be used.
Can I use a custom-hosted ChromaDB or any other vector DB?
- Yes, see the documentation.
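As a rough sketch of what a custom-hosted setup might look like, the snippet below swaps the local `path` in the vector-store config for a server address. The `host` and `port` keys are assumptions patterned on the local config shown earlier, and the server address is a placeholder; check the chatformers documentation for the exact schema.

```python
# Hypothetical sketch: pointing the vector store at a hosted ChromaDB server.
# "host", "port", and the address are illustrative assumptions, not
# confirmed chatformers options -- consult the documentation.
config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "test",
            "host": "my-chroma.example.com",  # placeholder server address
            "port": 8000,
        }
    },
    # "embedder" and "llm" sections stay the same as in the earlier example
}
```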
Need help or have suggestions?
- Raise an issue or contact me at dipeshpal17@gmail.com
File details
Details for the file chatformers-1.0.8.tar.gz

File metadata
- Download URL: chatformers-1.0.8.tar.gz
- Upload date:
- Size: 9.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes
Algorithm | Hash digest
---|---
SHA256 | ffadc123d7b7ef49ba41a6c00b084855794dbc8c77a40e52d0b2ec31e0e3e635
MD5 | 5dc814d1c42bb3674608375a179b4ae1
BLAKE2b-256 | 222d6859a48d850e2f0c9bb4e46c5f8a5de563a976de322c2bfc9bb31cfe5a4c
File details
Details for the file chatformers-1.0.8-py3-none-any.whl

File metadata
- Download URL: chatformers-1.0.8-py3-none-any.whl
- Upload date:
- Size: 10.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes
Algorithm | Hash digest
---|---
SHA256 | 9e22b1d1e51e3832738c8e358334573d3f04479599d6006c1cdf30dca1661d3a
MD5 | f1678fcbb5137fad3b13943cd50697ec
BLAKE2b-256 | af3c9c79d76b8aefe578fb6f1c37397fc49a4893e0de206681975733c37a5b6d