Project description
LongTrainer - Production-Ready LangChain
Official Documentation
Explore the comprehensive LongTrainer Documentation for detailed instructions on installation, features, and API usage.
Introducing LongTrainer, a sophisticated extension of the LangChain framework designed specifically for managing multiple bots and providing isolated, context-aware chat sessions. Ideal for developers and businesses looking to integrate complex conversational AI into their systems, LongTrainer simplifies the deployment and customization of LLMs.
Installation
pip install longtrainer
Installation Instructions for Required Libraries and Tools
1. Linux (Ubuntu/Debian)
To install the required packages on a Linux system (specifically Ubuntu or Debian), you can use the apt package manager. The following command installs several essential libraries and tools:
sudo apt install libmagic-dev poppler-utils tesseract-ocr qpdf libreoffice pandoc
2. macOS
On macOS, you can install these packages using brew, the Homebrew package manager. If you don't have Homebrew installed, you can install it from brew.sh.
brew install libmagic poppler tesseract qpdf libreoffice pandoc
Features 🌟
- ✅ Long Memory: Retains context effectively for extended interactions.
- ✅ Multi-Bot Management: Easily configure and manage multiple bots within a single framework, perfect for scaling across various use cases (see the sketch after this list).
- ✅ Isolated Chat Sessions: Each bot operates within its own session, ensuring interactions remain distinct and contextually relevant without overlap.
- ✅ Context-Aware Interactions: Utilize enhanced memory capabilities to maintain context over extended dialogues, significantly improving user experience.
- ✅ Scalable Architecture: Designed to scale effortlessly with your needs, whether you're handling hundreds of users or just a few.
- ✅ Enhanced Customization: Tailor the behavior to fit specific needs.
- ✅ Memory Management: Efficient handling of chat histories and contexts.
- ✅ GPT Vision Support: Context-aware integration of GPT-powered vision models.
- ✅ Different Data Formats: Supports various data input formats.
- ✅ VectorStore Management: Advanced management of vector storage for efficient retrieval.
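To make the multi-bot and isolated-session features above concrete, here is a minimal sketch that runs two bots side by side. It only uses calls shown in the Usage Example below (initialize_bot_id, add_document_from_path, create_bot, new_chat, get_response); the document paths and queries are placeholders, and the default OpenAI backend is assumed.
from longtrainer.trainer import LongTrainer
import os

# Placeholder key; the default backend is OpenAI.
os.environ["OPENAI_API_KEY"] = "sk-"

trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/')

# Each bot gets its own id, data, and chat sessions.
support_bot_id = trainer.initialize_bot_id()
sales_bot_id = trainer.initialize_bot_id()

trainer.add_document_from_path('path/to/support_docs', support_bot_id)
trainer.add_document_from_path('path/to/sales_docs', sales_bot_id)
trainer.create_bot(support_bot_id)
trainer.create_bot(sales_bot_id)

# Chats are isolated per bot, so their contexts never overlap.
support_chat_id = trainer.new_chat(support_bot_id)
sales_chat_id = trainer.new_chat(sales_bot_id)

print(trainer.get_response('How do I reset my password?', support_bot_id, support_chat_id))
print(trainer.get_response('Which plans do you offer?', sales_bot_id, sales_chat_id))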
Diverse Use Cases:
- ✅ Enterprise Solutions: Streamline customer interactions, automate responses, and manage multiple departmental bots from a single platform.
- ✅ Educational Platforms: Enhance learning experiences with AI tutors capable of maintaining context throughout sessions.
- ✅ Healthcare Applications: Support patient management with bots that provide consistent, context-aware interactions.
Works with All LangChain-Supported LLMs and Embeddings
- ✅ OpenAI (default)
- ✅ VertexAI
- ✅ HuggingFace
- ✅ AWS Bedrock
- ✅ Groq
- ✅ TogetherAI
Example
VertexAI LLMs
from longtrainer.trainer import LongTrainer
from langchain_community.llms import VertexAI
llm = VertexAI()
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/', llm=llm)
TogetherAI LLMs
from longtrainer.trainer import LongTrainer
from langchain_community.llms import Together
llm = Together(
    model="togethercomputer/RedPajama-INCITE-7B-Base",
    temperature=0.7,
    max_tokens=128,
    top_k=1,
    # together_api_key="..."
)
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/', llm=llm)
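Groq LLMs
Groq models from the list above can be wired in the same way. This is a hedged sketch, assuming the langchain-groq integration package (pip install langchain-groq); the model name is a placeholder.
from longtrainer.trainer import LongTrainer
from langchain_groq import ChatGroq

# Model name and API key are placeholders; substitute your own.
llm = ChatGroq(
    model_name="llama3-8b-8192",
    temperature=0.7,
    # groq_api_key="..."
)
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/', llm=llm)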
Usage Example 🚀
Here's a quick start guide on how to use LongTrainer:
from longtrainer.trainer import LongTrainer
import os
# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "sk-"
# Initialize LongTrainer
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/', encrypt_chats=True)
bot_id = trainer.initialize_bot_id()
print('Bot ID: ', bot_id)
# Add Data
path = 'path/to/your/data'
trainer.add_document_from_path(path, bot_id)
# Initialize Bot
trainer.create_bot(bot_id)
# Start a New Chat
chat_id = trainer.new_chat(bot_id)
# Send a Query and Get a Response
query = 'Your query here'
response = trainer.get_response(query, bot_id, chat_id)
print('Response: ', response)
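Because each chat session retains context, a follow-up query can reuse the same bot_id and chat_id from the snippet above; the wording of the follow-up here is only illustrative.
# Ask a follow-up in the same chat; earlier turns remain in context.
follow_up = 'Can you elaborate on your previous answer?'
follow_up_response = trainer.get_response(follow_up, bot_id, chat_id)
print('Follow-up Response: ', follow_up_response)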
Here's a guide on how to use Vision Chat:
vision_id = trainer.new_vision_chat(bot_id)
query = 'Your query here'
image_paths = ['nvidia.jpg']
response = trainer.get_vision_response(query, image_paths, str(bot_id), str(vision_id))
print('Response: ', response)
List Chats and Display Chat History:
# List all chat sessions for this bot
trainer.list_chats(bot_id)
# Retrieve the full history of a specific chat
trainer.get_chat_by_id(chat_id=chat_id)
This project is still under active development. Community feedback and contributions are highly appreciated.
Citation
If you utilize this repository, please consider citing it with:
@misc{longtrainer,
  author = {Endevsols},
  title = {LongTrainer: Production-Ready LangChain},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ENDEVSOLS/Long-Trainer}},
}
Project details
Release history
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file longtrainer-0.3.1.tar.gz
File metadata
- Download URL: longtrainer-0.3.1.tar.gz
- Upload date:
- Size: 22.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | e1876603f3272ab65b143e7f1e730b004f5ea2072088b661fa6363b0bab6ca9f
MD5 | b8e7c20d671a242b6d1ca3f1ac85f874
BLAKE2b-256 | cb3af0595375d0f589fa25a8fafd5825be97181d238c33c134d51b2975ac34c2
File details
Details for the file longtrainer-0.3.1-py3-none-any.whl
File metadata
- Download URL: longtrainer-0.3.1-py3-none-any.whl
- Upload date:
- Size: 22.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 52c7a0d902e51e32ac613343a4c510e3725dfe92e01ab0de1077d7ede03e6622
MD5 | df7b84ad12df9dd69d5ed81c08808046
BLAKE2b-256 | 934bbc1b77195a518f00ae848fc1b592c4317c96295d7b58aacfce02a7d45a26