
langchain-anyllm

One interface for every LLM.

This integration lets you use any-llm's unified interface (supporting OpenAI, Anthropic, Gemini, local models, and more) as a standard LangChain ChatModel. See the any-llm documentation for the full list of supported providers.

No need to rewrite provider-specific adapter code every time you want to test a new model: switch between OpenAI, Anthropic, Gemini, and local models (via Ollama/LocalAI) by changing a single model string.
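
For example, a minimal sketch of that switch (the Anthropic and Ollama model ids below are illustrative; check the any-llm provider list for the exact strings your setup accepts):

from langchain_anyllm import ChatAnyLLM

# The calling code stays the same; only the model string changes.
for model in ["openai:gpt-4", "anthropic:claude-3-5-sonnet-latest", "ollama:llama3"]:
    llm = ChatAnyLLM(model=model)
    print(model, "->", llm.invoke("Say hi in five words.").content)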

Features

  • Unified Interface: Use OpenAI, Anthropic, Google, or local models through a single API
  • Streaming Support: Full support for both synchronous and asynchronous streaming
  • Tool Calling: Native support for LangChain tool binding

Requirements

  • Python 3.11, 3.12, or 3.13

Installation

From PyPI

pip install langchain-anyllm

or

uv add langchain-anyllm

Quick Start

Note: You need to have the appropriate API key available for your chosen provider. API keys can be passed explicitly via the api_key parameter, or set as environment variables (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.). See the any-llm documentation for provider-specific requirements.
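
For example, assuming an OpenAI key (the key values below are placeholders):

import os
from langchain_anyllm import ChatAnyLLM

# Option 1: set the environment variable before constructing the model
os.environ["OPENAI_API_KEY"] = "sk-..."
llm = ChatAnyLLM(model="openai:gpt-4")

# Option 2: pass the key explicitly
llm = ChatAnyLLM(model="openai:gpt-4", api_key="sk-...")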

Basic Chat

from langchain_anyllm import ChatAnyLLM

# Initialize with any supported model
llm = ChatAnyLLM(model="openai:gpt-4", temperature=0.7)

# Invoke for a single response
response = llm.invoke("Tell me a joke")
print(response.content)
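
Because ChatAnyLLM is a standard LangChain chat model, it also composes with prompt templates and output parsers. A minimal LCEL sketch:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_anyllm import ChatAnyLLM

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatAnyLLM(model="openai:gpt-4", temperature=0.7)

# Pipe the pieces together: prompt -> model -> plain-string output
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "compilers"}))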

Streaming

from langchain_anyllm import ChatAnyLLM

llm = ChatAnyLLM(model="openai:gpt-4")

# Stream responses
for chunk in llm.stream("Write a poem about the ocean"):
    print(chunk.content, end="", flush=True)

Async Support

import asyncio
from langchain_anyllm import ChatAnyLLM

async def main():
    llm = ChatAnyLLM(model="openai:gpt-4")

    # Async invoke
    response = await llm.ainvoke("What is the meaning of life?")
    print(response.content)

    # Async streaming
    async for chunk in llm.astream("Count to 10"):
        print(chunk.content, end="", flush=True)

asyncio.run(main())

Tool Calling

from langchain_anyllm import ChatAnyLLM
from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"The weather in {location} is sunny!"

llm = ChatAnyLLM(model="openai:gpt-4")
llm_with_tools = llm.bind_tools([get_weather])

response = llm_with_tools.invoke("What's the weather in San Francisco?")
print(response.tool_calls)
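
To complete the round trip, a common LangChain pattern is to execute each requested tool and send the results back to the model. A sketch, assuming a recent langchain-core where invoking a @tool with a tool-call dict returns a ToolMessage:

from langchain_core.messages import HumanMessage

messages = [HumanMessage("What's the weather in San Francisco?")]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    # Running the tool with the tool-call dict yields a ToolMessage
    messages.append(get_weather.invoke(tool_call))

# Ask the model to answer using the tool results
print(llm_with_tools.invoke(messages).content)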

Configuration

from langchain_anyllm import ChatAnyLLM

# Using model string with provider prefix
llm = ChatAnyLLM(
    model="openai:gpt-4",
    api_key="your-api-key",  # Optional, reads from environment if not provided
    api_base="https://custom-endpoint.com/v1",  # Optional custom endpoint
    temperature=0.7,
    max_tokens=1000,
    top_p=0.9,
)

# Or using separate provider parameter
llm = ChatAnyLLM(
    model="gpt-4",
    provider="openai",
    temperature=0.7,
)

# Enable JSON mode
llm = ChatAnyLLM(
    model="openai:gpt-4",
    response_format={"type": "json_object"},
)
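
A sketch of consuming JSON-mode output (assuming the chosen provider honors response_format; some providers, such as OpenAI, also require the word "JSON" to appear in the prompt):

import json
from langchain_anyllm import ChatAnyLLM

llm = ChatAnyLLM(model="openai:gpt-4", response_format={"type": "json_object"})
response = llm.invoke("Return a JSON object with keys 'city' and 'country' for Paris.")

# The content is a JSON string; parse it into a dict
data = json.loads(response.content)
print(data["country"])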

Parameters

  • model (str): The model to use. Can include a provider prefix (e.g., "openai:gpt-4") or be combined with the separate provider parameter
  • provider (str, optional): Provider name (e.g., "openai", "anthropic"). If not set, it is extracted from the model string
  • api_key (str, optional): API key for the provider. Read from the environment if not provided
  • api_base (str, optional): Custom API endpoint
  • temperature (float, optional): Sampling temperature (0.0 to 2.0)
  • max_tokens (int, optional): Maximum number of tokens to generate
  • top_p (float, optional): Nucleus sampling parameter
  • response_format (dict, optional): Response format specification. Use {"type": "json_object"} for JSON mode
  • model_kwargs (dict, optional): Additional provider-specific parameters to pass to the model, as sketched below
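
A sketch of passing a provider-specific option through model_kwargs (seed is an OpenAI parameter, used here purely as an illustration; which keys are accepted depends on the provider):

from langchain_anyllm import ChatAnyLLM

llm = ChatAnyLLM(
    model="openai:gpt-4",
    model_kwargs={"seed": 42},  # forwarded to the underlying provider call
)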

Supported Providers

any-llm supports a wide range of providers; see the any-llm documentation for the full list.

Development

Clone the repo

git clone https://github.com/mozilla-ai/langchain-any-llm.git
cd langchain-any-llm

Run Tests

uv run pytest tests/

Type Checking

mypy langchain_anyllm/

Linting

ruff check langchain_anyllm/

License

MIT
