
LangChain integration for g4f

Project description

LangChain G4F Integration

A LangChain integration for GPT4Free (g4f) that lets you use any g4f provider through LangChain's chat model interface. This package bundles g4f, eliminating the need for a separate g4f installation.

Features

  • 🤖 Full LangChain Compatibility: Drop-in replacement for OpenAI chat models
  • 🔄 Multiple Providers: Support for all g4f providers, including the custom OpenRouterCustom provider
  • 🌊 Streaming Support: Both sync and async streaming responses
  • 🔐 Authentication: API key support for providers that require it
  • ⚡ Async Support: Full async/await support for modern applications
  • 🎛️ Parameter Control: Temperature, max_tokens, and other model parameters

Installation

# Install the package (g4f is bundled, no separate installation needed)
pip install langchain-g4f-chat

Quick Start

from langchain_g4f_chat import ChatG4F

# Create a ChatG4F instance with correct model name
chat = ChatG4F(
    model="openai/gpt-3.5-turbo",  # ✅ Correct OpenRouter format
    provider=ChatG4F.Provider.OpenRouter,  # Bundled provider
    api_key="your-openrouter-api-key",
    temperature=0.7,
)

# Use with LangChain (when langchain-core is installed)
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?")
]

response = chat.invoke(messages)
print(response.content)

Basic Usage (Without LangChain Core)

from langchain_g4f_chat import ChatG4F
import langchain_g4f_chat as lg4f  # Import the bundled g4f

# Create ChatG4F instance
chat = ChatG4F(
    model="gpt-3.5-turbo",
    provider=lg4f.Provider.OpenRouterCustom,  # Use bundled Provider
    api_key="your-api-key",
    temperature=0.7,
)

# Use g4f directly with ChatG4F parameters
messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello!"}
]

response = lg4f.ChatCompletion.create(  # Use bundled ChatCompletion
    model=chat.model_name,
    messages=messages,
    provider=chat.provider,
    api_key=chat.api_key.get_secret_value() if chat.api_key else None,
    temperature=chat.temperature,
)

print(response)

Advanced Usage

Multiple Providers

from langchain_g4f_chat import ChatG4F
import langchain_g4f_chat as lg4f

# Try different providers with fallback
providers = [
    lg4f.Provider.OpenRouterCustom,  # Use bundled Provider
    lg4f.Provider.OpenAI,
    None,  # Auto-select
]

for provider in providers:
    try:
        chat = ChatG4F(
            model="gpt-3.5-turbo",
            provider=provider,
            api_key="your-key" if provider else None
        )
        # Use the chat model
        break
    except Exception as e:
        print(f"Provider {provider} failed: {e}")
        continue

Streaming Responses

from langchain_g4f_chat import ChatG4F
import langchain_g4f_chat as lg4f

# Enable streaming
chat = ChatG4F(
    model="gpt-3.5-turbo",
    provider=lg4f.Provider.OpenRouterCustom,  # Use bundled Provider
    api_key="your-key",
    stream=True
)

# Stream with g4f directly
response = lg4f.ChatCompletion.create(  # Use bundled ChatCompletion
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a story"}],
    provider=lg4f.Provider.OpenRouterCustom,
    api_key="your-key",
    stream=True
)

for chunk in response:
    print(chunk, end='', flush=True)

Image Input (Vision Models)

from langchain_core.messages import HumanMessage
from langchain_g4f_chat import ChatG4F
import langchain_g4f_chat as lg4f

# Using an image URL with a vision model
chat = ChatG4F(
    model="openai/gpt-4-vision-preview",  # Vision model
    provider=lg4f.Provider.OpenRouterCustom,
    api_key="your-key"
)

# Option 1: Direct multimodal content
message = HumanMessage(content=[
    {"type": "text", "text": "What's in this image?"},
    {
        "type": "image_url",
        "image_url": {"url": "https://example.com/image.jpg"}
    }
])

response = chat.invoke([message])
print(response.content)

# Option 2: Base64 encoded image
import base64
with open("image.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode()

message = HumanMessage(content=[
    {"type": "text", "text": "Describe this image"},
    {
        "type": "image_url", 
        "image_url": {"url": f"data:image/jpeg;base64,{image_data}"}
    }
])

response = chat.invoke([message])
print(response.content)

# Option 3: Anthropic-style format (automatically converted)
message = HumanMessage(content=[
    {"type": "text", "text": "What do you see?"},
    {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": "image/jpeg",
            "data": image_data
        }
    }
])

response = chat.invoke([message])
print(response.content)
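The Anthropic-to-OpenAI conversion mentioned in Option 3 can be sketched as a small pure function. This is an illustrative sketch only; `to_image_url_block` is a hypothetical helper, and the package's real internals may differ:

```python
# Illustrative sketch of the Anthropic-style -> OpenAI-style image block
# conversion described above (hypothetical helper, not the package's code).
def to_image_url_block(block: dict) -> dict:
    """Convert an Anthropic base64 image block to an OpenAI image_url block."""
    if block.get("type") == "image" and block.get("source", {}).get("type") == "base64":
        source = block["source"]
        data_url = f"data:{source['media_type']};base64,{source['data']}"
        return {"type": "image_url", "image_url": {"url": data_url}}
    return block  # text blocks and already-converted blocks pass through unchanged

anthropic_block = {
    "type": "image",
    "source": {"type": "base64", "media_type": "image/jpeg", "data": "abc123"},
}
print(to_image_url_block(anthropic_block))
```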

Multiple Provider Examples

from langchain_g4f_chat import ChatG4F
import langchain_g4f_chat as lg4f

# DeepInfra Provider
chat_deepinfra = ChatG4F(
    model="meta-llama/Llama-2-70b-chat-hf",
    provider=lg4f.Provider.DeepInfra,  # Use bundled Provider
    # No API key needed for many models
)

# HuggingChat Provider  
chat_hugging = ChatG4F(
    model="microsoft/DialoGPT-medium",
    provider=lg4f.Provider.HuggingChat,  # Use bundled Provider
)

# Blackbox Provider
chat_blackbox = ChatG4F(
    model="gpt-3.5-turbo",
    provider=lg4f.Provider.Blackbox,  # Use bundled Provider
)

# Try multiple providers with fallback
providers_to_try = [
    (lg4f.Provider.OpenRouterCustom, "openai/gpt-3.5-turbo", "your-key"),  # Use bundled Provider
    (lg4f.Provider.DeepInfra, "meta-llama/Llama-2-7b-chat-hf", None),
    (lg4f.Provider.HuggingChat, "microsoft/DialoGPT-medium", None),
    (lg4f.Provider.Blackbox, "gpt-3.5-turbo", None),
]

for provider, model, api_key in providers_to_try:
    try:
        chat = ChatG4F(
            model=model,
            provider=provider,
            api_key=api_key if api_key else None
        )
        response = chat.invoke([HumanMessage(content="Hello!")])
        print(f"Success with {provider.__name__}: {response.content}")
        break
    except Exception as e:
        print(f"Provider {provider.__name__} failed: {e}")
        continue

Async Usage

import asyncio
from langchain_g4f_chat import ChatG4F
import langchain_g4f_chat as lg4f

async def chat_async():
    chat = ChatG4F(
        model="gpt-3.5-turbo",
        provider=lg4f.Provider.OpenRouterCustom,  # Use bundled Provider
        api_key="your-key"
    )
    
    # Use with LangChain async methods (when available)
    # response = await chat.ainvoke(messages)
    
    # Or use g4f async directly
    response = await lg4f.ChatCompletion.create_async(  # Use bundled ChatCompletion
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
        provider=lg4f.Provider.OpenRouterCustom,
        api_key="your-key"
    )
    
    return response

# Run async
result = asyncio.run(chat_async())

Configuration Options

Parameter      Type    Default           Description
model          str     "gpt-3.5-turbo"   Model name to use
provider       Any     None              g4f provider (auto-select if None)
api_key        str     None              API key for authenticated providers
temperature    float   0.7               Sampling temperature
max_tokens     int     None              Maximum tokens to generate
stream         bool    False             Enable streaming responses
model_kwargs   dict    {}                Additional parameters passed to g4f
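The options above can be gathered into a single configuration and unpacked into the constructor. The values below are placeholders, and the `model_kwargs` contents depend on what the chosen provider accepts:

```python
# Configuration matching the options table above; all values are placeholders.
config = {
    "model": "gpt-3.5-turbo",        # str, default "gpt-3.5-turbo"
    "provider": None,                 # None lets g4f auto-select a provider
    "api_key": None,                  # only needed for authenticated providers
    "temperature": 0.7,               # float, default 0.7
    "max_tokens": 512,                # int, default None (no explicit limit)
    "stream": False,                  # bool, default False
    "model_kwargs": {"top_p": 0.9},   # extra parameters forwarded to g4f
}

# With the package installed:
# from langchain_g4f_chat import ChatG4F
# chat = ChatG4F(**config)
```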

Supported Providers

  • OpenRouterCustom: Your custom OpenRouter provider ✅
  • OpenAI: Official OpenAI API
  • Bing: Microsoft Bing Chat
  • Claude: Anthropic Claude models
  • Auto: Let g4f choose the best available provider

Integration with LangChain

Once you have langchain-core installed, you can use ChatG4F with:

  • LangChain Chains: Use in sequential chains
  • LangChain Agents: As the LLM for AI agents
  • Memory: With conversation memory
  • Callbacks: Full callback support
  • Async: Async chains and operations
# Example with a LangChain chain (requires langchain and langchain-core)
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_g4f_chat import ChatG4F

chat = ChatG4F(model="gpt-3.5-turbo")

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer this question: {question}"
)

chain = LLMChain(llm=chat, prompt=prompt)
result = chain.run("What is AI?")

Error Handling

from langchain_g4f_chat import ChatG4F
import langchain_g4f_chat as lg4f

try:
    chat = ChatG4F(
        model="gpt-4",
        provider=lg4f.Provider.OpenRouterCustom,  # Use bundled Provider
        api_key="your-key"
    )
    
    response = lg4f.ChatCompletion.create(  # Use bundled ChatCompletion
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}],
        provider=lg4f.Provider.OpenRouterCustom,
        api_key="your-key"
    )
    
except Exception as e:
    print(f"Error: {e}")
    # Fallback to different provider or model

Troubleshooting

Import Error: No module named 'langchain_g4f_chat'

If you get this error even after installing the package:

  1. Check installation:

    pip list | grep langchain-g4f-chat
    
  2. Reinstall the package:

    pip uninstall langchain-g4f-chat -y
    pip install langchain-g4f-chat
    
  3. For development mode:

    cd langchain_g4f_chat
    pip install -e .
    
  4. Verify import:

    from langchain_g4f_chat import ChatG4F
    print("Import successful!")
    

Common Issues

  • Model not found: Use providers that support your model or let g4f auto-select
  • API key errors: Ensure you're using the correct API key for authenticated providers
  • Rate limits: Some providers have rate limits; try different providers if one fails
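The rate-limit advice above (try other providers when one fails) can be wrapped in a small generic helper. This is a sketch independent of g4f; `stub_factory` stands in for whatever callable builds your chat model:

```python
def first_working(factory, candidates):
    """Return factory(candidate) for the first candidate that does not raise."""
    last_error = None
    for candidate in candidates:
        try:
            return factory(candidate)
        except Exception as exc:  # provider failed; remember and try the next
            last_error = exc
    raise RuntimeError(f"all candidates failed: {last_error}")

# Stub demonstration; real code would pass e.g. lambda p: ChatG4F(provider=p)
def stub_factory(provider):
    if provider == "flaky":
        raise ConnectionError("rate limited")
    return f"chat-using-{provider}"

print(first_working(stub_factory, ["flaky", "stable"]))  # chat-using-stable
```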

Module Structure

langchain_g4f_chat/
├── __init__.py           # Main exports
├── requirements.txt      # Dependencies (includes g4f requirements)
├── setup.py             # Package setup
├── g4f/                 # Bundled g4f
│   ├── __init__.py
│   ├── base/
│   ├── providers/
│   └── ...
├── core/                # Core utilities
├── text/                # Text/chat models
└── images/              # Image generation

Development

To install in development mode (g4f is bundled, no separate installation needed):

cd langchain_g4f_chat
pip install -e .

Testing

Run the test scripts to verify functionality:

python test_langchain_g4f_chat_practical.py
python test_complete_integration.py

License

MIT License - Feel free to use and modify as needed.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

Project details


Download files

Download the file for your platform.

Source Distribution

langchain_g4f_chat-0.1.7.tar.gz (376.8 kB)


Built Distribution


langchain_g4f_chat-0.1.7-py3-none-any.whl (503.0 kB)


File details

Details for the file langchain_g4f_chat-0.1.7.tar.gz.

File metadata

  • Download URL: langchain_g4f_chat-0.1.7.tar.gz
  • Size: 376.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for langchain_g4f_chat-0.1.7.tar.gz:

  • SHA256: 67463c00f9e88649454d5636acca1a108ad3b8b6d81bb8ab4e8d693e3c25de62
  • MD5: 2ff092855e8e1f8786e38b3b5acded52
  • BLAKE2b-256: 4816ad4075480ec3fd72ea35aa81a4b286c83a418695a899b1a085b4154e2076


File details

Details for the file langchain_g4f_chat-0.1.7-py3-none-any.whl.

File metadata

File hashes

Hashes for langchain_g4f_chat-0.1.7-py3-none-any.whl:

  • SHA256: a076c4e70f3c5f227f3c91284e43d1f33b35e04f35ea5a6b93e94ec547873f33
  • MD5: a37d96846a854de23384cab1aad567ef
  • BLAKE2b-256: 1e77f5a87f0954703ededa3b1bfc7af82ebffd966c4ee2481067ea19861e1d00

