Production-Ready LangChain
Project description
LongTrainer - Production-Ready LangChain
Introducing LongTrainer, a sophisticated extension of the LangChain framework designed specifically for managing multiple bots and providing isolated, context-aware chat sessions. Ideal for developers and businesses looking to integrate complex conversational AI into their systems, LongTrainer simplifies the deployment and customization of LLMs.
Official Documentation
Explore the comprehensive LongTrainer Documentation for detailed instructions on installation, features, and API usage.
Installation
pip install longtrainer
Installation Instructions for Required Libraries and Tools
1. Linux (Ubuntu/Debian)
To install the required packages on a Linux system (specifically Ubuntu or Debian), you can use the apt package manager. The following command installs several essential libraries and tools:
sudo apt install libmagic-dev poppler-utils tesseract-ocr qpdf libreoffice pandoc
2. macOS
On macOS, you can install these packages using brew, the Homebrew package manager. If you don't have Homebrew installed, you can install it from brew.sh.
brew install libmagic poppler tesseract qpdf libreoffice pandoc
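With the Python package and the system libraries in place, a quick sanity check is to import the trainer class; if the import succeeds, the core dependencies are installed (a minimal check only, nothing else is verified):
# Sanity check: should run without errors once installation is complete
from longtrainer.trainer import LongTrainer
print('LongTrainer imported successfully')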
Features 🌟
- ✅ Long Memory: Retains context effectively for extended interactions.
- ✅ Multi-Bot Management: Easily configure and manage multiple bots within a single framework, perfect for scaling across various use cases (see the sketch after this list).
- ✅ Isolated Chat Sessions: Each bot operates within its own session, ensuring interactions remain distinct and contextually relevant without overlap.
- ✅ Context-Aware Interactions: Utilize enhanced memory capabilities to maintain context over extended dialogues, significantly improving user experience.
- ✅ Scalable Architecture: Designed to scale effortlessly with your needs, whether you're handling hundreds of users or just a few.
- ✅ Enhanced Customization: Tailor the behavior to fit specific needs.
- ✅ Memory Management: Efficient handling of chat histories and contexts.
- ✅ GPT Vision Support: Integrates context-aware, GPT-powered vision models.
- ✅ Different Data Formats: Supports various data input formats.
- ✅ VectorStore Management: Advanced management of vector storage for efficient retrieval.
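As referenced above, here is a minimal sketch of multi-bot management with isolated chat sessions. It reuses only the calls shown in the usage example below and assumes the default OpenAI backend, a local MongoDB instance, and placeholder document paths:
from longtrainer.trainer import LongTrainer
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/')
# Two independent bots managed by one trainer instance
support_bot = trainer.initialize_bot_id()
sales_bot = trainer.initialize_bot_id()
# Each bot gets its own knowledge base
trainer.add_document_from_path('support_docs.pdf', support_bot)
trainer.add_document_from_path('pricing_guide.pdf', sales_bot)
trainer.create_bot(support_bot)
trainer.create_bot(sales_bot)
# Separate chat sessions keep context isolated per bot and per conversation
support_chat = trainer.new_chat(support_bot)
sales_chat = trainer.new_chat(sales_bot)
print(trainer.get_response('How do I reset my password?', support_bot, support_chat))
print(trainer.get_response('Which plans include priority support?', sales_bot, sales_chat))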
Diverse Use Cases:
- ✅ Enterprise Solutions: Streamline customer interactions, automate responses, and manage multiple departmental bots from a single platform.
- ✅ Educational Platforms: Enhance learning experiences with AI tutors capable of maintaining context throughout sessions.
- ✅ Healthcare Applications: Support patient management with bots that provide consistent, context-aware interactions.
Works with All LangChain-Supported LLMs and Embeddings
- ✅ OpenAI (default)
- ✅ VertexAI
- ✅ HuggingFace
- ✅ AWS Bedrock
- ✅ Groq
- ✅ TogetherAI
Example
VertexAI LLMs
from longtrainer.trainer import LongTrainer
from langchain_community.llms import VertexAI
llm = VertexAI()
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/', llm=llm)
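Note that the VertexAI wrapper typically authenticates through your Google Cloud application default credentials, so make sure those are configured in your environment before instantiating the LLM.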
TogetherAI LLMs
from longtrainer.trainer import LongTrainer
from langchain_community.llms import Together
llm = Together(
    model="togethercomputer/RedPajama-INCITE-7B-Base",
    temperature=0.7,
    max_tokens=128,
    top_k=1,
    # together_api_key="..."
)
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/', llm=llm)
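HuggingFace LLMs
Any other LangChain-supported backend can be plugged in the same way via the llm parameter. Below is a minimal, illustrative sketch using a local Hugging Face pipeline; it assumes the transformers and torch packages are installed, and the model id is only an example:
from longtrainer.trainer import LongTrainer
from langchain_community.llms import HuggingFacePipeline
# Wrap a local text-generation model as a LangChain LLM
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 128},
)
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/', llm=llm)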
Usage Example 🚀
Here's a quick start guide on how to use LongTrainer:
from longtrainer.trainer import LongTrainer
import os
# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "sk-"
# Initialize LongTrainer
trainer = LongTrainer(mongo_endpoint='mongodb://localhost:27017/', encrypt_chats=True)
bot_id = trainer.initialize_bot_id()
print('Bot ID: ', bot_id)
# Add Data
path = 'path/to/your/data'
trainer.add_document_from_path(path, bot_id)
# Initialize Bot
trainer.create_bot(bot_id)
# Start a New Chat
chat_id = trainer.new_chat(bot_id)
# Send a Query and Get a Response
query = 'Your query here'
response = trainer.get_response(query, bot_id, chat_id)
print('Response: ', response)
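Because each chat retains its history, follow-up queries sent with the same chat_id can build on earlier context. For example, continuing the session above:
# A follow-up in the same chat: the bot can use the earlier exchange as context
follow_up = 'Can you summarize what we just discussed?'
response = trainer.get_response(follow_up, bot_id, chat_id)
print('Response: ', response)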
Here's a guide on how to use Vision Chat:
vision_id = trainer.new_vision_chat(bot_id)
query = 'Your query here'
image_paths = ['nvidia.jpg']
response = trainer.get_vision_response(query, image_paths, str(bot_id), str(vision_id))
print('Response: ', response)
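Note that image_paths is a list, so multiple images can be supplied in a single vision query.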
List Chats and Display Chat History:
trainer.list_chats(bot_id)
trainer.get_chat_by_id(chat_id=chat_id)
This project is still under active development. Community feedback and contributions are highly appreciated.
Citation
If you utilize this repository, please consider citing it with:
@misc{longtrainer,
  author = {Endevsols},
  title = {LongTrainer: Production-Ready LangChain},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ENDEVSOLS/Long-Trainer}},
}
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
File details
Details for the file longtrainer-0.2.7.tar.gz.
File metadata
- Download URL: longtrainer-0.2.7.tar.gz
- Upload date:
- Size: 22.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 19183f6e89d4cbe1e6d47a739f10a90f9a22513598b0ddf5f51a0fe32079d503
MD5 | d87c019b0118b03fe848a88fd6a8e749
BLAKE2b-256 | baf171cd5e473f7786199710458751141857513321dc68fa3884408ba2bf9b13
File details
Details for the file longtrainer-0.2.7-py3-none-any.whl.
File metadata
- Download URL: longtrainer-0.2.7-py3-none-any.whl
- Upload date:
- Size: 21.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | abaa54a188e685d492b54537e5f3fe286ac0578ea1b44ddebf848342c6f45e28
MD5 | d6c257cf12ddf6dd2d73575d5bf37032
BLAKE2b-256 | bd578f9e714e07373162fd92784fa2c4161a4c0bb43ddf0885c92831188175c8