
Langchain Openai API Bridge


🚀 Expose Langchain Agent (Langgraph) results as an OpenAI-compatible API 🚀

A FastAPI + Langchain / Langgraph extension that exposes agent results as an OpenAI-compatible API.

Use any OpenAI-compatible UI or UI framework with your custom Langchain Agent.

Support:

OpenAI API features:

  • Chat Completions API
    • ✅ Invoke
    • ✅ Stream
  • Assistant API - Feature in progress
    • ✅ Run Stream
    • ✅ Threads
    • ✅ Messages
    • ✅ Run
    • ✅ Tools step stream
    • 🚧 Human In The Loop

Supported Vendors:

  • ✅ OpenAI
    • ✅ Stream
    • ✅ Multimodal
    • ✅ Auto tool choice
  • ✅ Anthropic
    • ✅ Stream
    • ✅ Multimodal
    • ✅ Auto tool choice
  • ✅ Groq
    • ✅ Stream
    • ❌ Multimodal
    • ✅ Auto tool choice
  • ✅ LlamaCPP local inference
    • ✅ Stream
    • ❌ Multimodal
    • ❌ Auto tool choice

If you find this project useful, please give it a star ⭐!

Quick Install

pip:
pip install langchain-openai-api-bridge

poetry:
poetry add langchain-openai-api-bridge

Usage

OpenAI Assistant API Compatible

# Assistant Bridge as OpenAI Compatible API

from fastapi.middleware.cors import CORSMiddleware
from fastapi import FastAPI
from dotenv import load_dotenv, find_dotenv
import uvicorn

from langchain_openai_api_bridge.assistant import (
    InMemoryMessageRepository,
    InMemoryRunRepository,
    InMemoryThreadRepository,
)
from langchain_openai_api_bridge.fastapi.langchain_openai_api_bridge_fastapi import (
    LangchainOpenaiApiBridgeFastAPI,
)
from tests.test_functional.fastapi_assistant_agent_openai.my_agent_factory import (
    MyAgentFactory,
)

_ = load_dotenv(find_dotenv())


app = FastAPI(
    title="Langchain Agent OpenAI API Bridge",
    version="1.0",
    description="OpenAI API exposing langchain agent",
)

in_memory_thread_repository = InMemoryThreadRepository()
in_memory_message_repository = InMemoryMessageRepository()
in_memory_run_repository = InMemoryRunRepository()

bridge = LangchainOpenaiApiBridgeFastAPI(
    app=app, agent_factory_provider=lambda: MyAgentFactory()
)
bridge.bind_openai_assistant_api(
    thread_repository_provider=in_memory_thread_repository,
    message_repository_provider=in_memory_message_repository,
    run_repository_provider=in_memory_run_repository,
    prefix="/my-assistant",
)


if __name__ == "__main__":
    uvicorn.run(app, host="localhost")

# Agent Creation

from langchain_core.runnables import Runnable
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Note: import paths for the bridge's AgentFactory and CreateLLMDto are assumed;
# check the installed package for their exact locations.
from langchain_openai_api_bridge.core.agent_factory import AgentFactory
from langchain_openai_api_bridge.core.create_llm_dto import CreateLLMDto

@tool
def magic_number_tool(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


class MyAgentFactory(AgentFactory):

    def create_agent(self, dto: CreateLLMDto) -> Runnable:
        llm = self.create_llm(dto=dto)

        return create_react_agent(
            llm,
            [magic_number_tool],
            messages_modifier="""You are a helpful assistant.""",
        )

    def create_llm(self, dto: CreateLLMDto) -> Runnable:
        return ChatOpenAI(
            model=dto.model,
            api_key=dto.api_key,
            streaming=True,
            temperature=dto.temperature,
        )

Full example: see the tests/test_functional directory in the repository.
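
Any OpenAI SDK can act as the client. Below is a minimal client sketch using the openai Python package against the /my-assistant prefix bound above; the /openai/v1 suffix follows the base URL pattern used elsewhere in this README, and the assistant_id value is a placeholder, not a documented requirement of the bridge:

# Assistant API client (sketch)

from openai import OpenAI

# Assumes the server above is running locally on uvicorn's default port.
client = OpenAI(base_url="http://localhost:8000/my-assistant/openai/v1")

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the magic number of 2?",
)

# assistant_id is a placeholder here.
stream = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="any-assistant-id",
    model="gpt-4o-mini",
    stream=True,
)
for event in stream:
    print(event.event)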

OpenAI Chat Completion API Compatible

# Server

from fastapi import FastAPI
import uvicorn

from langchain_openai_api_bridge.fastapi.langchain_openai_api_bridge_fastapi import (
    LangchainOpenaiApiBridgeFastAPI,
)

app = FastAPI(
    title="Langchain Agent OpenAI API Bridge",
    version="1.0",
    description="OpenAI API exposing langchain agent",
)

bridge = LangchainOpenaiApiBridgeFastAPI(
    app=app, agent_factory_provider=lambda: MyAnthropicAgentFactory()
)
bridge.bind_openai_chat_completion(prefix="/my-custom-path/anthropic")

if __name__ == "__main__":
    uvicorn.run(app, host="localhost")
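
The server above assumes a MyAnthropicAgentFactory. A minimal sketch mirroring the OpenAI factory shown earlier (import paths for the bridge's core classes are assumed):

# Anthropic agent factory (sketch)

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables import Runnable
from langgraph.prebuilt import create_react_agent

# Assumed import paths; check the installed package for exact locations.
from langchain_openai_api_bridge.core.agent_factory import AgentFactory
from langchain_openai_api_bridge.core.create_llm_dto import CreateLLMDto


class MyAnthropicAgentFactory(AgentFactory):

    def create_agent(self, dto: CreateLLMDto) -> Runnable:
        return create_react_agent(
            self.create_llm(dto=dto),
            [],
            messages_modifier="""You are a helpful assistant.""",
        )

    def create_llm(self, dto: CreateLLMDto) -> Runnable:
        return ChatAnthropic(
            model=dto.model,
            api_key=dto.api_key,
            streaming=True,
            temperature=dto.temperature,
        )
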
# Client

from openai import OpenAI

openai_client = OpenAI(
    base_url="http://my-server/my-custom-path/anthropic/openai/v1",
)

chat_completion = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": 'Say "This is a test"',
        }
    ],
)
print(chat_completion.choices[0].message.content)
#> "This is a test"

Full example: see tests/test_functional in the repository.

// Vercel AI sdk - example
// ************************
// app/api/my-chat/route.ts
import { NextRequest } from "next/server";
import { z } from "zod";
import { type CoreMessage, streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

export const ChatMessageSchema = z.object({
  id: z.string(),
  role: z.string(),
  createdAt: z.date().optional(),
  content: z.string(),
});

const BodySchema = z.object({
  messages: z.array(ChatMessageSchema),
});

export type AssistantStreamBody = z.infer<typeof BodySchema>;

const langchain = createOpenAI({
  //baseURL: "https://my-project/my-custom-path/openai/v1",
  baseURL: "http://localhost:8000/my-custom-path/openai/v1",
});

export async function POST(request: NextRequest) {
  const { messages }: { messages: CoreMessage[] } = await request.json();

  const result = await streamText({
    model: langchain("gpt-4o"),
    messages,
  });

  return result.toAIStreamResponse();
}

More Examples

More examples can be found in the tests/test_functional directory. This project is not limited to OpenAI's models; some examples demonstrate the use of Anthropic's language models. Anthropic is just one example: any LangChain-supported vendor works with this library.

⚠️ Setup to run examples

Define OPENAI_API_KEY or ANTHROPIC_API_KEY on your system. Examples read the key from the environment variable or from a .env file at the root of the project.
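
For example, a .env file at the project root might look like this (placeholder values, not real keys):

# .env (placeholders)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...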

Contributing

If you want to contribute to this project, follow these steps:

  1. Fork this project
  2. Create a new branch
  3. Implement your feature or bug fix
  4. Send a pull request

Installation

poetry install
poetry env use ./.venv/bin/python

Commands

  • Run tests: poetry run pytest

Limitations

  • Chat Completions Tools

    • Functions cannot be passed through the OpenAI API; every function needs to be defined as a LangChain tool (see the magic_number_tool example above).
  • LLM Usage Info

    • Returned usage info is inaccurate. This is due to a Langchain/Langgraph limitation: usage info isn't available when calling a Langgraph agent.
