langchain-anyllm

One interface for every LLM: an integration package connecting LangChain with any-llm.
This integration lets you use any-llm's unified interface (supporting OpenAI, Anthropic, Gemini, local models, and more) as a standard LangChain ChatModel. See the any-llm documentation for the full list of supported providers.

No need to rewrite provider-specific adapter code every time you want to test a new model: switch between OpenAI, Anthropic, Gemini, and local models (via Ollama or LocalAI) just by changing a string.
Features
- Unified Interface: Use OpenAI, Anthropic, Google, or local models through a single API
- Streaming Support: Full support for both synchronous and asynchronous streaming
- Tool Calling: Native support for LangChain tool binding
Requirements
- Python 3.11, 3.12, or 3.13
Installation
From PyPI:

```shell
pip install langchain-anyllm
```

or

```shell
uv add langchain-anyllm
```
Quick Start
Note: You need to have the appropriate API key available for your chosen provider. API keys can be passed explicitly via the api_key parameter, or set as environment variables (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.). See the any-llm documentation for provider-specific requirements.
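A minimal sketch of the two options for supplying a key (the key value here is a placeholder):

```python
import os

# Option 1: provide the key via the environment before constructing the model
os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # placeholder, not a real key

# Option 2: pass it explicitly; an explicit api_key takes precedence over the environment
# llm = ChatAnyLLM(model="openai:gpt-4", api_key=os.environ["OPENAI_API_KEY"])
```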
Basic Chat
```python
from langchain_anyllm import ChatAnyLLM

# Initialize with any supported model
llm = ChatAnyLLM(model="openai:gpt-4", temperature=0.7)

# Invoke for a single response
response = llm.invoke("Tell me a joke")
print(response.content)
```
Streaming
```python
from langchain_anyllm import ChatAnyLLM

llm = ChatAnyLLM(model="openai:gpt-4")

# Stream response chunks as they arrive
for chunk in llm.stream("Write a poem about the ocean"):
    print(chunk.content, end="", flush=True)
```
Async Support
```python
import asyncio

from langchain_anyllm import ChatAnyLLM

async def main():
    llm = ChatAnyLLM(model="openai:gpt-4")

    # Async invoke
    response = await llm.ainvoke("What is the meaning of life?")
    print(response.content)

    # Async streaming
    async for chunk in llm.astream("Count to 10"):
        print(chunk.content, end="", flush=True)

asyncio.run(main())
```
Tool Calling
```python
from langchain_anyllm import ChatAnyLLM
from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"The weather in {location} is sunny!"

llm = ChatAnyLLM(model="openai:gpt-4")
llm_with_tools = llm.bind_tools([get_weather])

response = llm_with_tools.invoke("What's the weather in San Francisco?")
print(response.tool_calls)
```
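`response.tool_calls` only reports what the model wants to run; to complete the loop you execute each call and send the results back. A minimal stdlib sketch of the dispatch step, assuming the `name`/`args`/`id` shape of LangChain tool calls (in practice you would wrap each result in a `ToolMessage` and re-invoke the model):

```python
def get_weather(location: str) -> str:
    return f"The weather in {location} is sunny!"

# Registry mapping tool names to callables
TOOLS = {"get_weather": get_weather}

def run_tool_calls(tool_calls):
    """Execute each requested call, pairing results with their call ids."""
    return [(call["id"], TOOLS[call["name"]](**call["args"])) for call in tool_calls]

calls = [{"name": "get_weather", "args": {"location": "San Francisco"}, "id": "call_1"}]
print(run_tool_calls(calls))  # [('call_1', 'The weather in San Francisco is sunny!')]
```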
Configuration
```python
from langchain_anyllm import ChatAnyLLM

# Using a model string with a provider prefix
llm = ChatAnyLLM(
    model="openai:gpt-4",
    api_key="your-api-key",  # Optional, reads from environment if not provided
    api_base="https://custom-endpoint.com/v1",  # Optional custom endpoint
    temperature=0.7,
    max_tokens=1000,
    top_p=0.9,
)

# Or using a separate provider parameter
llm = ChatAnyLLM(
    model="gpt-4",
    provider="openai",
    temperature=0.7,
)

# Enable JSON mode
llm = ChatAnyLLM(
    model="openai:gpt-4",
    response_format={"type": "json_object"},
)
```
Parameters
- `model` (str): The model to use. Can include a provider prefix (e.g., `"openai:gpt-4"`) or be used with a separate `provider` parameter
- `provider` (str, optional): Provider name (e.g., `"openai"`, `"anthropic"`). If not set, extracted from the model string
- `api_key` (str, optional): API key for the provider. Read from the environment if not provided
- `api_base` (str, optional): Custom API endpoint
- `temperature` (float, optional): Sampling temperature (0.0 to 2.0)
- `max_tokens` (int, optional): Maximum number of tokens to generate
- `top_p` (float, optional): Nucleus sampling parameter
- `response_format` (dict, optional): Response format specification. Use `{"type": "json_object"}` for JSON mode
- `model_kwargs` (dict, optional): Additional parameters to pass to the model
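With `response_format={"type": "json_object"}`, `response.content` arrives as a JSON string, so it can be parsed directly. A sketch with an illustrative payload in place of a live response:

```python
import json

# Illustrative content a JSON-mode response might carry
content = '{"setup": "Why do programmers prefer dark mode?", "punchline": "Because light attracts bugs."}'
data = json.loads(content)
print(data["punchline"])  # Because light attracts bugs.
```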
Supported Providers
any-llm supports a wide range of providers; see the any-llm documentation for the full list.
Development
Clone the repo:

```shell
git clone https://github.com/mozilla-ai/langchain-any-llm.git
cd langchain-any-llm
```

Run Tests

```shell
uv run pytest tests/
```

Type Checking

```shell
mypy langchain_anyllm/
```

Linting

```shell
ruff check langchain_anyllm/
```
License
MIT