
The official gpt4free repository | a diverse collection of powerful language models

Project description

GPT4Free (g4f)

PyPI Docker Hub License: GPL v3

GPT4Free logo

Created by @xtekky,
maintained by @hlohaus

Support the project on GitHub Sponsors ❤️

Live demo & docs: https://g4f.dev | Documentation: https://g4f.dev/docs


GPT4Free (g4f) is a community-driven project that aggregates multiple accessible providers and interfaces to make working with modern LLMs and media-generation models easier and more flexible. GPT4Free aims to offer multi-provider support, local GUI, OpenAI-compatible REST APIs, and convenient Python and JavaScript clients — all under a community-first license.

This README is a consolidated, improved, and complete guide to installing, running, and contributing to GPT4Free.

What’s included

  • Python client library and async client.
  • Optional local web GUI.
  • FastAPI-based OpenAI-compatible API (Interference API).
  • Official browser JS client (g4f.dev distribution).
  • Docker images (full and slim).
  • Multi-provider adapters (LLMs, media providers, local inference backends).
  • Tooling for image/audio/video generation and media persistence.

Requirements & compatibility

  • Python 3.10+ recommended.
  • Google Chrome/Chromium for providers using browser automation.
  • Docker for containerized deployment.
  • Works on x86_64 and arm64 (slim image supports both).
  • Some provider adapters may require platform-specific tooling (Chrome/Chromium, etc.). Check provider docs for details.
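
As a quick sanity check before installing, the Python version requirement above can be verified with a few lines of standard-library code (this snippet is a convenience sketch, not part of g4f itself):

```python
import sys

# GPT4Free recommends Python 3.10 or newer.
MIN_VERSION = (3, 10)

if sys.version_info[:2] >= MIN_VERSION:
    print(f"Python {sys.version_info.major}.{sys.version_info.minor} is supported")
else:
    print(
        f"Python {sys.version_info.major}.{sys.version_info.minor} is too old; "
        f"please upgrade to {MIN_VERSION[0]}.{MIN_VERSION[1]}+"
    )
```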

Installation

Docker (recommended)

  1. Install Docker: https://docs.docker.com/get-docker/
  2. Create persistent directories:
    • Example (Linux/macOS):
      mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
      sudo chown -R 1200:1201 ${PWD}/har_and_cookies ${PWD}/generated_media
      
  3. Pull image:
    docker pull hlohaus789/g4f
    
  4. Run container:
    docker run -p 8080:8080 -p 7900:7900 \
      --shm-size="2g" \
      -v ${PWD}/har_and_cookies:/app/har_and_cookies \
      -v ${PWD}/generated_media:/app/generated_media \
      hlohaus789/g4f:latest
    

Notes:

  • Port 8080 serves GUI/API; 7900 can expose a VNC-like desktop for provider logins (optional).
  • Increase --shm-size for heavier browser automation tasks.

Slim Docker image (x64 & arm64)

mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_media

docker run \
  -p 1337:8080 -p 8080:8080 \
  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
  -v ${PWD}/generated_media:/app/generated_media \
  hlohaus789/g4f:latest-slim

Notes:

  • The slim image updates the g4f package on startup and installs additional dependencies as needed.
  • In this example, the Interference API is mapped to 1337.

Windows Guide (.exe)

👉 Check out the Windows launcher for GPT4Free:
🔗 https://github.com/gpt4free/g4f.exe 🚀

  1. Download the release artifact g4f.exe.zip from: https://github.com/xtekky/gpt4free/releases/latest
  2. Unzip and run g4f.exe.
  3. Open GUI at: http://localhost:8080/chat/
  4. If Windows Firewall blocks access, allow the application.

Python Installation (pip / from source / partial installs)

Install from PyPI (recommended):

pip install -U g4f[all]

Partial installs

  • To install only specific functionality, use optional extras groups. See docs/requirements.md in the project docs.

Install from source:

git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
pip install -r requirements.txt
pip install -e .

Notes:

  • Some features require Chrome/Chromium or other tools; follow provider-specific docs.

Running the app

GUI (web client)

  • Run via Python:
from g4f.gui import run_gui
run_gui()
  • Or via CLI:
python -m g4f.cli gui --port 8080 --debug

FastAPI / Interference API

  • Start FastAPI server:
python -m g4f --port 8080 --debug
  • If using slim docker mapping, Interference API may be available at http://localhost:1337/v1
  • Swagger UI: http://localhost:1337/docs

CLI

  • Start GUI server:
python -m g4f.cli gui --port 8080 --debug

MCP Server

GPT4Free now includes a Model Context Protocol (MCP) server that allows AI assistants like Claude to access web search, scraping, and image generation capabilities.

Starting the MCP server (stdio mode):

# Using g4f command
g4f mcp

# Or using Python module
python -m g4f.mcp

Starting the MCP server (HTTP mode):

# Start HTTP server on port 8765
g4f mcp --http --port 8765

# Custom host and port
g4f mcp --http --host 127.0.0.1 --port 3000

HTTP mode provides:

  • POST http://localhost:8765/mcp - JSON-RPC endpoint
  • GET http://localhost:8765/health - Health check
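
The /mcp endpoint speaks JSON-RPC 2.0, as the Model Context Protocol specifies. Below is a sketch of the request bodies you would POST to it; `tools/list` and `tools/call` are standard MCP methods, but the exact argument schema for `web_search` (the `query` key) is an assumption here — check g4f/mcp/README.md for the real tool schemas:

```python
import json

# JSON-RPC 2.0 envelope for listing the tools the MCP server exposes.
# "tools/list" is a standard MCP method; the id is arbitrary.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invoking a tool uses "tools/call" with a tool name and its arguments.
# The "query" argument name is an assumption for illustration only.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "web_search", "arguments": {"query": "gpt4free"}},
}

body = json.dumps(call_request)
print(body)
```

POST this body with `Content-Type: application/json` to http://localhost:8765/mcp.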

Configuring with Claude Desktop:

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "gpt4free": {
      "command": "python",
      "args": ["-m", "g4f.mcp"]
    }
  }
}

Available MCP Tools:

  • web_search - Search the web using DuckDuckGo
  • web_scrape - Extract text content from web pages
  • image_generation - Generate images from text prompts

For detailed MCP documentation, see g4f/mcp/README.md

Optional provider login (desktop within container)

  • Accessible at:
    http://localhost:7900/?autoconnect=1&resize=scale&password=secret
    
  • Useful for logging into web-based providers to obtain cookies/HAR files.

Using the Python client

Install:

pip install -U g4f[all]

Synchronous text example:

from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    web_search=False
)
print(response.choices[0].message.content)

Expected:

Hello! How can I assist you today?

Image generation example:

from g4f.client import Client

client = Client()
response = client.images.generate(
    model="flux",
    prompt="a white siamese cat",
    response_format="url"
)
print(f"Generated image URL: {response.data[0].url}")

Async client example:

from g4f.client import AsyncClient
import asyncio

async def main():
    client = AsyncClient()
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Explain quantum computing briefly"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())

Using GPT4Free.js (browser JS client)

Use the official JS client in the browser—no backend required.

Example:

<script type="module">
  import Client from 'https://g4f.dev/dist/js/client.js';

  const client = new Client();
  const result = await client.chat.completions.create({
      model: 'gpt-4.1',  // Or "gpt-4o", "deepseek-v3", etc.
      messages: [{ role: 'user', content: 'Explain quantum computing' }]
  });
  console.log(result.choices[0].message.content);
</script>

Notes:

  • The JS client is distributed via the g4f.dev CDN for easy usage. Review CORS considerations and usage limits.

Providers & models (overview)

  • GPT4Free integrates many providers including (but not limited to) OpenAI-compatible endpoints, PerplexityLabs, Gemini, MetaAI, Pollinations (media), and local inference backends.
  • Model availability and behavior depend on provider capabilities. See the providers doc for current, supported provider/model lists: https://g4f.dev/docs/providers-and-models

Provider requirements may include:

  • API keys or tokens (for authenticated providers)
  • Browser cookies / HAR files for providers scraped via browser automation
  • Chrome/Chromium or headless browser tooling
  • Local model binaries and runtime (for local inference)

Local inference & media

  • GPT4Free supports local inference backends. See docs/local.md for supported runtimes and hardware guidance.
  • Media generation (image, audio, video) is supported through providers (e.g., Pollinations). See docs/media.md for formats, options, and sample usage.

Configuration & customization

  • Configure via environment variables, CLI flags, or config files. See docs/config.md.
  • To reduce install size, use partial requirement groups. See docs/requirements.md.
  • Provider selection: learn how to set defaults and override per-request at docs/selecting_a_provider.md.
  • Persistence: HAR files, cookies, and generated media persist in mapped directories (e.g., har_and_cookies, generated_media).

Running on smartphone

  • The web GUI is responsive and can be accessed from a phone by visiting http://<host-ip>:8080 or via a tunnel. See docs/guides/phone.md.
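
To find the address to type into the phone's browser, a small helper can discover this machine's LAN IP (this assumes host and phone share a network; the UDP "connect" trick sends no packets and falls back to localhost when offline):

```python
import socket

def lan_ip() -> str:
    """Best-effort LAN IP discovery via a UDP 'connect' (no packets are sent)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works; nothing is transmitted
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no route available; GUI is still reachable locally
    finally:
        s.close()

print(f"On your phone, open: http://{lan_ip()}:8080/chat/")
```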

Interference API (OpenAI‑compatible)

  • The Interference API enables OpenAI-like workflows routed through GPT4Free provider selection.
  • Docs: docs/interference-api.md
  • Default endpoint (example slim docker): http://localhost:1337/v1
  • Swagger UI: http://localhost:1337/docs
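
Because the Interference API is OpenAI-compatible, any plain HTTP client works against it. A sketch using only the standard library; the base URL follows the slim-docker mapping shown above and the model name is an example — adjust both to your deployment:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1337/v1"  # slim-docker mapping from this README

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.loads(resp.read())
        # Response follows the OpenAI chat-completions shape.
        print(data["choices"][0]["message"]["content"])
except OSError as exc:  # server not running, connection refused, etc.
    print(f"Interference API not reachable: {exc}")
```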

Contributing

Contributions are welcome — new providers, features, docs, and fixes are appreciated.

How to contribute:

  1. Fork the repository.
  2. Create a branch for your change.
  3. Run tests and linters.
  4. Open a Pull Request with a clear description and tests/examples if applicable.

Repository: https://github.com/xtekky/gpt4free

How to create a new provider

  • Read the guide: docs/guides/create_provider.md
  • Typical steps:
    • Implement a provider adapter in g4f/Provider/
    • Add configuration and dependency notes
    • Include tests and usage examples
    • Respect third‑party code licenses and attribute appropriately
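
The steps above boil down to implementing an async-generator that streams response chunks. The toy class below illustrates only that shape; it deliberately does not import g4f, and the class and method names are illustrative — the real base classes and required signature live in g4f/Provider/ and docs/guides/create_provider.md:

```python
import asyncio
from typing import AsyncIterator

class EchoProvider:
    """Toy adapter that streams a response chunk by chunk (illustration only)."""

    @staticmethod
    async def create_async_generator(model: str, messages: list[dict]) -> AsyncIterator[str]:
        # A real adapter would call the upstream service here and yield
        # tokens as they arrive; this one just echoes the last user message.
        for word in messages[-1]["content"].split():
            yield word + " "

async def main() -> str:
    chunks = []
    async for chunk in EchoProvider.create_async_generator(
        model="demo", messages=[{"role": "user", "content": "hello provider world"}]
    ):
        chunks.append(chunk)
    return "".join(chunks)

print(asyncio.run(main()))
```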

Security, privacy & takedown policy

  • Do not store or share sensitive credentials. Use per-provider recommended security practices.
  • If your site appears in the project’s links and you want it removed, send proof of ownership to takedown@g4f.ai and it will be removed promptly.
  • For production, secure the server with HTTPS, authentication, and firewall rules. Limit access to provider credentials and cookie/HAR storage.

Credits, contributors & attribution

Many more contributors are acknowledged in the repository.


Manifesto / Project principles

GPT4Free is guided by community principles:

  1. Open access to AI tooling and models.
  2. Collaboration across providers and projects.
  3. Opposition to monopolistic, closed systems that restrict creativity.
  4. Community-centered development and broad access to AI technologies.
  5. Promote innovation, creativity, and accessibility.

https://g4f.dev/manifest


License

This program is licensed under the GNU General Public License v3.0 (GPLv3). See the full license: https://www.gnu.org/licenses/gpl-3.0.txt

Summary:

  • You may redistribute and/or modify under the terms of GPLv3.
  • The program is provided WITHOUT ANY WARRANTY.

Copyright notice

xtekky/gpt4free: Copyright (C) 2025 xtekky

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

Appendix: Quick commands & examples

Install (pip):

pip install -U g4f[all]

Run GUI (Python):

python -m g4f.cli gui --port 8080 --debug
# or
python -c "from g4f.gui import run_gui; run_gui()"

Docker (full):

docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 7900:7900 \
  --shm-size="2g" \
  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
  -v ${PWD}/generated_media:/app/generated_media \
  hlohaus789/g4f:latest

Docker (slim):

docker run -p 1337:8080 -p 8080:8080 \
  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
  -v ${PWD}/generated_media:/app/generated_media \
  hlohaus789/g4f:latest-slim

Python usage patterns:

  • client.chat.completions.create(...)
  • client.images.generate(...)
  • Async variants via AsyncClient

Thank you for using and contributing to GPT4Free — together we make powerful AI tooling accessible, flexible, and community-driven.

Download files

Download the file for your platform.

Source Distribution

g4f-6.6.2.tar.gz (444.2 kB)


Built Distribution


g4f-6.6.2-py3-none-any.whl (561.6 kB)


File details

Details for the file g4f-6.6.2.tar.gz.

File metadata

  • Download URL: g4f-6.6.2.tar.gz
  • Size: 444.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for g4f-6.6.2.tar.gz:

  • SHA256: 03c1a6e3b0e3c65a417ba59a2884a00fc5d40c31350ad76fcaedd845677ecdd3
  • MD5: d0a4e3aac72aa2a041cd7b63278c0e81
  • BLAKE2b-256: f54d63be6f586db2fac1916a2ffcbdd18ffb86e15b083ffb0bbcf3fd941a8ae0

Provenance

The following attestation bundles were made for g4f-6.6.2.tar.gz:

Publisher: publish-to-pypi.yml on xtekky/gpt4free

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file g4f-6.6.2-py3-none-any.whl.

File metadata

  • Download URL: g4f-6.6.2-py3-none-any.whl
  • Size: 561.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for g4f-6.6.2-py3-none-any.whl:

  • SHA256: d60f6d3e61ef51b94d612903a82ca2e4e50cc474fef30b2c55b703b8ce01db2c
  • MD5: c877773b411a3b2eda4866ad38f21fa6
  • BLAKE2b-256: 66d2e970fae0a4cf1c747b14cdda150ebf82a3218b5c2ac1d6d02b24b5396652

Provenance

The following attestation bundles were made for g4f-6.6.2-py3-none-any.whl:

Publisher: publish-to-pypi.yml on xtekky/gpt4free

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
