A bridge to use Langchain output as an OpenAI-compatible API.

Langchain Openai API Bridge

🚀 Expose Langchain Agent (LangGraph) results as an OpenAI-compatible API 🚀

A FastAPI + Langchain / LangGraph extension that exposes agent results as an OpenAI-compatible API.

Use any OpenAI-compatible UI or UI framework (like the awesome 👌 Vercel AI SDK) with your custom Langchain Agent.

Support:

  • Chat Completions API
    • ✅ Invoke
    • ✅ Stream
  • Assistant API - Feature in progress
    • ✅ Run Stream
    • ✅ Threads
    • ✅ Messages
    • ✅ Run
    • ✅ Tools step stream
    • 🚧 Human In The Loop
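
Because the bridge speaks the OpenAI wire format, any HTTP client can call it directly. Below is a minimal sketch using requests against the Chat Completion server shown later in this README; the host, port, and path are assumptions taken from that example, and it is assumed the bearer token is forwarded to the handler's api_key argument.

# Hypothetical raw-HTTP sketch; assumes the Chat Completion example server
# below is running on localhost:8000 under the "/my-custom-path" prefix.
import requests

response = requests.post(
    "http://localhost:8000/my-custom-path/openai/v1/chat/completions",
    # Assumption: the bearer token is forwarded to the handler as `api_key`.
    headers={"Authorization": "Bearer sk-..."},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])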

Quick Install

pip
pip install langchain-openai-api-bridge
poetry
poetry add langchain-openai-api-bridge

Usage

OpenAI Assistant API Compatible

from fastapi.middleware.cors import CORSMiddleware
from fastapi import APIRouter, FastAPI
from dotenv import load_dotenv, find_dotenv
import uvicorn


from langchain_openai_api_bridge.assistant.assistant_app import AssistantApp

from langchain_openai_api_bridge.assistant.repository.in_memory_message_repository import (
    InMemoryMessageRepository,
)
from langchain_openai_api_bridge.assistant.repository.in_memory_run_repository import (
    InMemoryRunRepository,
)
from langchain_openai_api_bridge.assistant.repository.in_memory_thread_repository import (
    InMemoryThreadRepository,
)
from langchain_openai_api_bridge.fastapi.add_assistant_routes import (
    build_assistant_router,
)
from tests.test_functional.fastapi_assistant_agent_openai_advanced.my_agent_factory import (
    MyAgentFactory,
)

_ = load_dotenv(find_dotenv())


assistant_app = AssistantApp(
    thread_repository_type=InMemoryThreadRepository,
    message_repository_type=InMemoryMessageRepository,
    run_repository_type=InMemoryRunRepository,
    agent_factory=MyAgentFactory,
)

api = FastAPI(
    title="Langchain Agent OpenAI API Bridge",
    version="1.0",
    description="OpenAI API exposing langchain agent",
)

api.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
    expose_headers=["*"],
)

assistant_router = build_assistant_router(assistant_app=assistant_app)
open_ai_router = APIRouter(prefix="/my-assistant/openai/v1")

open_ai_router.include_router(assistant_router)
api.include_router(open_ai_router)

if __name__ == "__main__":
    uvicorn.run(api, host="localhost")

# my_agent_factory.py (the module imported by the server above)
from langchain_openai_api_bridge.core.agent_factory import AgentFactory
from langgraph.graph.graph import CompiledGraph
from langchain_core.language_models import BaseChatModel
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

from langchain_openai_api_bridge.core.create_llm_dto import CreateLLMDto


@tool
def magic_number_tool(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


class MyAgentFactory(AgentFactory):

    def create_agent(self, llm: BaseChatModel) -> CompiledGraph:
        return create_react_agent(
            llm,
            [magic_number_tool],
            messages_modifier="""You are a helpful assistant.""",
        )

    def create_llm(self, dto: CreateLLMDto) -> BaseChatModel:
        return ChatOpenAI(
            model=dto.model,
            api_key=dto.api_key,
            streaming=True,
            temperature=dto.temperature,
        )
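
On the client side, the standard OpenAI SDK can talk to this assistant server. Here is a minimal sketch, assuming the server above is running on http://localhost:8000; the assistant_id value is a placeholder, since the agent itself is built server-side by MyAgentFactory.

# Hypothetical client sketch for the assistant server above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/my-assistant/openai/v1")

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is the magic number of 2?"
)

# Placeholder assistant_id: the agent is created server-side by MyAgentFactory,
# so this sketch assumes the value is not used to select it.
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id="my-assistant",
    model="gpt-4o",
) as stream:
    for text in stream.text_deltas:
        print(text, end="", flush=True)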

OpenAI Chat Completion API Compatible

# Server
# (Imports omitted in this excerpt: FastAPI, tool, ChatOpenAI, and
# create_react_agent as in the Assistant example above, plus the request,
# response, and route helpers from the langchain_openai_api_bridge package.
# See the full example linked below.)
api = FastAPI(
    title="Langchain Agent OpenAI API Bridge",
    version="1.0",
    description="OpenAI API exposing langchain agent",
)

@tool
def magic_number_tool(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


def assistant_openai_v1_chat(request: OpenAIChatCompletionRequest, api_key: str):
    llm = ChatOpenAI(
        model=request.model,
        api_key=api_key,
        streaming=True,
    )
    agent = create_react_agent(
        llm,
        [magic_number_tool],
        messages_modifier="""You are a helpful assistant.""",
    )

    return V1ChatCompletionRoutesArg(model_name=request.model, agent=agent)


# Arbitrary identifier reported in the "system_fingerprint" field of responses.
system_fingerprint = "my-system-fingerprint"

add_v1_chat_completions_agent_routes(
    api,
    path="/my-custom-path",
    handler=assistant_openai_v1_chat,
    system_fingerprint=system_fingerprint,
)

# Client
from openai import OpenAI

openai_client = OpenAI(
    base_url="http://my-server/my-custom-path/openai/v1",
)

chat_completion = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": 'Say "This is a test"',
        }
    ],
)
print(chat_completion.choices[0].message.content)
#> "This is a test"

Full python example: Server, Client

If you find this project useful, please give it a star ⭐!

Bonus: Client using Next.js + Vercel AI SDK
// app/api/my-chat/route.ts
import { NextRequest } from "next/server";
import { z } from "zod";
import { type CoreMessage, streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

export const ChatMessageSchema = z.object({
  id: z.string(),
  role: z.string(),
  createdAt: z.date().optional(),
  content: z.string(),
});

const BodySchema = z.object({
  messages: z.array(ChatMessageSchema),
});

export type AssistantStreamBody = z.infer<typeof BodySchema>;

const langchain = createOpenAI({
  //baseURL: "https://my-project/my-custom-path/openai/v1",
  baseURL: "http://localhost:8000/my-custom-path/openai/v1",
});

export async function POST(request: NextRequest) {
  const { messages }: { messages: CoreMessage[] } = await request.json();

  const result = await streamText({
    model: langchain("gpt-4o"),
    messages,
  });

  return result.toAIStreamResponse();
}

More Examples

All examples can be found in the tests/test_functional directory.

  • OpenAI LLM -> Langgraph Agent -> OpenAI Completion - Server, Client
  • Anthropic LLM -> Langgraph Agent -> OpenAI Completion - Server, Client
  • Advanced - OpenAI LLM -> Langgraph Agent -> OpenAI Completion - Server, Client

⚠️ Setup to run examples

Define OPENAI_API_KEY or ANTHROPIC_API_KEY on your system. The examples read the key from the environment or from a .env file at the root of the project.
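
For example, a .env file at the root of the project might look like this (placeholder values):

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...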

💁 Contributing

If you want to contribute to this project, follow these steps:

  1. Fork this project
  2. Create a new branch
  3. Implement your feature or bug fix
  4. Send a pull request

Installation

poetry install
poetry env use ./.venv/bin/python

Commands

Action       Command
Run tests    poetry run pytest

Limitations

  • Chat Completions Tools

    • Functions do not work when configured on the client. Set up tools and functions using LangChain on the server. Usage Example
    • ⚠️ LangChain functions are not streamed in responses due to a limitation in LangGraph (see the sketch after this list).
      • Details: the on_tool_start, on_tool_end, and on_llm_stream events from LangGraph's astream_events do not contain the information typically available when calling tools.
  • LLM Usage Info

    • Returned usage info is inaccurate. This is due to a LangChain/LangGraph limitation where usage info isn't available when calling a LangGraph agent.
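
To see first-hand which tool-related information LangGraph's event stream carries, you can dump the events directly. A minimal sketch using LangChain's astream_events API, where agent is any compiled graph (for example the one built by MyAgentFactory above):

# Hypothetical sketch: inspect the events LangGraph emits for tool calls.
import asyncio

async def dump_tool_events(agent):
    async for event in agent.astream_events(
        {"messages": [("human", "What is the magic number of 2?")]},
        version="v2",
    ):
        if event["event"] in ("on_tool_start", "on_tool_end"):
            print(event["event"], event.get("name"), event.get("data"))

# asyncio.run(dump_tool_events(agent))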
