Ship production-grade AI chatbots in minutes
LangChat is a high-performance Python library designed to bridge the gap between "prototype" and "production." It unifies LLMs, vector databases, and session management into a single, modular interface.
Why LangChat?
Most AI frameworks are great for experiments but require massive boilerplate for production. LangChat handles the "hard parts" out of the box so you can focus on building features.
| Feature | LangChat | Other Libraries |
|---|---|---|
| Setup Time | Minutes | Days / Weeks |
| API Key Rotation | Built-in | Manual |
| Chat History | Automatic | Manual |
| Vector Search | Integrated | Separate |
| Reranking | Built-in | Manual |
| Production Ready | Yes | Depends |
Installation
pip install langchat
Quick Start
1 — Set your environment variables
# .env
OPENAI_API_KEY=sk-...
PINECONE_API_KEY=pc-...
SUPABASE_URL=https://yourproject.supabase.co
SUPABASE_KEY=your-key
2 — Build and chat
from langchat import LangChat
from langchat.providers import OpenAI, Pinecone, Supabase
lc = LangChat(
    llm=OpenAI("gpt-4o"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)
# Async (must be awaited inside an async function; full sketch below)
response = await lc.chat("What is RAG?", user_id="alice")
print(response.text) # typed response — no dict["response"] needed
# Sync (no asyncio boilerplate)
response = lc.chat_sync("Hello!", user_id="alice")
print(response.text)
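lc.chat is a coroutine, so in a plain script it has to run inside an async function driven by asyncio.run. A minimal sketch:

import asyncio

from langchat import LangChat
from langchat.providers import OpenAI, Pinecone, Supabase

async def main():
    lc = LangChat(
        llm=OpenAI("gpt-4o"),
        vector_db=Pinecone("my-index"),
        db=Supabase(),
    )
    # chat() is awaitable; asyncio.run drives the event loop for us
    response = await lc.chat("What is RAG?", user_id="alice")
    print(response.text)

asyncio.run(main())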
All providers read credentials from the environment automatically. No need to pass keys explicitly unless you want to override them.
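If the credentials live in a .env file rather than your shell environment, load the file before constructing providers. A minimal sketch using python-dotenv (assuming LangChat does not load .env on its own; if it does, this step is redundant but harmless):

from dotenv import load_dotenv
from langchat.providers import OpenAI

load_dotenv()  # copies the .env entries into os.environ

llm = OpenAI("gpt-4o")  # picks up OPENAI_API_KEY from the loaded environment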
Providers
LLM providers
Every LLM provider follows the same pattern: model as the first argument, everything else keyword-only.
from langchat.providers import OpenAI, Anthropic, Gemini, Mistral, Cohere, Ollama
# Reads OPENAI_API_KEY from environment
OpenAI()
OpenAI("gpt-4o")
OpenAI("gpt-4o", temperature=0.2)
OpenAI(api_keys=["sk-1", "sk-2"]) # automatic key rotation
# Reads ANTHROPIC_API_KEY
Anthropic()
Anthropic("claude-opus-4-6")
# Reads GEMINI_API_KEY or GOOGLE_API_KEY
Gemini()
Gemini("gemini-1.5-pro")
# Reads MISTRAL_API_KEY
Mistral()
Mistral("mistral-large-latest")
# Reads COHERE_API_KEY
Cohere()
Cohere("command-r")
# No API key required — connects to a local Ollama server
Ollama()
Ollama("mistral")
Ollama("codellama", base_url="http://gpu-server:11434")
Vector database
from langchat.providers import Pinecone
# Reads PINECONE_API_KEY and OPENAI_API_KEY (for embeddings)
Pinecone("my-index")
Pinecone("my-index", embedding_model="text-embedding-3-small")
Pinecone("my-index", api_key="pc-...", embedding_api_key="sk-...")
History database
from langchat.providers import Supabase
# Reads SUPABASE_URL and SUPABASE_KEY
Supabase()
Supabase(url="https://yourproject.supabase.co", key="your-key")
Typed responses
chat() returns a ChatResponse dataclass — no more result["response"] key lookups.
response = await lc.chat("Summarise the docs", user_id="alice", platform="docs")
response.text # str — the answer
response.status # "success" | "error"
response.response_time # float — wall-clock seconds
response.timestamp # str — ISO-8601
response.user_id # str
response.platform # str
response.error # str | None — set when status == "error"
# Boolean protocol
if response:
    print("OK:", response.text)
else:
    print("Error:", response.error)
# Works directly with print / f-strings
print(response) # same as print(response.text)
print(f"Answer: {response}")
Document indexing
# Single file
lc.index("docs/guide.pdf")
# Multiple files at once
lc.index(["docs/guide.pdf", "docs/api.pdf", "data/faq.csv"])
# With options
lc.index(
    "docs/guide.pdf",
    chunk_size=500,
    chunk_overlap=50,
    namespace="v2",
    prevent_duplicates=True,  # default — safe to call multiple times
)
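To see how chunk_size and chunk_overlap interact, here is an illustrative character-level sliding window (LangChat's actual splitter is an internal detail and may be token-based):

def chunk(text: str, chunk_size: int = 500, chunk_overlap: int = 50):
    """Yield windows of chunk_size characters, each overlapping the previous by chunk_overlap."""
    step = chunk_size - chunk_overlap
    for start in range(0, max(len(text) - chunk_overlap, 1), step):
        yield text[start:start + chunk_size]

pieces = list(chunk("x" * 1200, chunk_size=500, chunk_overlap=50))
print(len(pieces))  # 3 chunks: 0-500, 450-950, 900-1200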
Custom prompt
PROMPT = """You are a helpful assistant for {platform}.
Use only the provided context to answer questions.
Context:
{context}
Chat history:
{chat_history}
Question: {question}
Answer:"""
lc = LangChat(
    llm=OpenAI("gpt-4o"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
    prompt_template=PROMPT,
)
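At query time LangChat fills the placeholders with the retrieved context and the stored history for that user. Conceptually the rendering is plain str.format (the exact mechanism is an internal detail):

rendered = PROMPT.format(
    platform="docs",
    context="...retrieved chunks...",
    chat_history="...previous turns...",
    question="What is RAG?",
)
print(rendered)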
As a FastAPI server
from langchat.api.app import create_app
from langchat.providers import OpenAI, Pinecone, Supabase
import uvicorn
app = create_app(
    llm=OpenAI("gpt-4o"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
Endpoints exposed automatically:
| Method | Path | Description |
|---|---|---|
| POST | /chat | Send a message |
| GET | /health | Health check |
| GET | /frontend | Serves the built-in UI |
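A client call against the running server might look like the following; the JSON field names here are an assumption based on chat()'s signature, so verify them against the interactive docs FastAPI serves at /docs:

import requests

# "message" and "user_id" are assumed field names — check /docs for the real schema
payload = {"message": "What is RAG?", "user_id": "alice"}
r = requests.post("http://localhost:8000/chat", json=payload, timeout=30)
print(r.status_code, r.json())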
Use Cases
| Education | E-commerce | Enterprise |
|---|---|---|
| Intelligent tutoring and course Q&A | Customer support and product discovery | Internal knowledge base search |
Roadmap & Contributing
We are building the future of conversational AI infrastructure.
- Contributing: We welcome PRs! Please check CONTRIBUTING.md.
Built with ❤️ by NeuroBrain
GitHub • PyPI • Documentation
Download files
Source Distribution
File details
Details for the file langchat-1.0.2.tar.gz.
File metadata
- Download URL: langchat-1.0.2.tar.gz
- Upload date:
- Size: 43.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 484e3dd8c43c69c90815548766cb71b35ad001513e9041893fe0b2d5f51b6c34 |
| MD5 | 0d9cf71cb229bea6c00e2199946867eb |
| BLAKE2b-256 | 36d56381c7eef6ceb2041d9f2480cd4540a39db5a37d1ae0614da20628fe564b |
Built Distribution
File details
Details for the file langchat-1.0.2-py3-none-any.whl.
File metadata
- Download URL: langchat-1.0.2-py3-none-any.whl
- Upload date:
- Size: 54.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ffdadd905ea2f2241864df7fb4d681f4045bf36c21e346174197a47c550c9e74 |
| MD5 | 1254587fc1a7b744bf9a456315077333 |
| BLAKE2b-256 | 83999bdeb5e80118a5c081b5844025b48f8b9739b8593653c37af47c1941e204 |