The official gpt4free repository | a collection of powerful language models
By using this repository or any code related to it, you agree to the legal notice. The author is not responsible for any copies, forks, or re-uploads made by other users, or for anything else related to gpt4free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.
- Latest PyPI version: 0.1.6.5
pip install -U g4f
or, if you just want to use the GUI or interference API, install with pipx:
pipx install g4f
New features
- Telegram Channel: https://t.me/g4f_channel
- The g4f GUI is back! Install g4f with pip and then run:
g4f gui
or
python -m g4f.gui.run
- Run the interference API from the PyPI package:
g4f api
or
python -m g4f.interference.run
Table of Contents
- Getting Started
- Usage
- Providers
- Related gpt4free projects
- Contribute
- Contributors
- Copyright
- Star History
- License
Getting Started
Prerequisites:
- Download and install Python (Version 3.10+ is recommended).
Setting up the project:
Install using PyPI:
pip install -U g4f
or
- Clone the GitHub repository:
git clone https://github.com/xtekky/gpt4free.git
- Navigate to the project directory:
cd gpt4free
- (Recommended) Create a Python virtual environment; you can follow the official Python documentation for virtual environments.
python3 -m venv venv
- Activate the virtual environment:
- On Windows:
.\venv\Scripts\activate
- On macOS and Linux:
source venv/bin/activate
- Install the required Python packages from requirements.txt:
pip install -r requirements.txt
- Create a test.py file in the root folder and start using the repo; further instructions are below.
import g4f
...
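For example, a minimal test.py mirroring the ChatCompletion usage documented in the Usage section below:

import g4f

# Minimal smoke test, mirroring the Usage section below
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)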
Setting up with Docker:
If you have Docker installed, you can easily set up and run the project without manually installing dependencies.
- First, ensure you have both Docker and Docker Compose installed.
- Clone the GitHub repo:
git clone https://github.com/xtekky/gpt4free.git
- Navigate to the project directory:
cd gpt4free
- Build the Docker image:
docker compose build
- Start the service using Docker Compose:
docker compose up
Your server will now be running at http://localhost:1337. You can interact with the API or run your tests as you would normally.
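To verify the server is responding, you can send it a quick request, for example with the requests package. This is a minimal sketch; the /chat/completions route is an assumption based on the OpenAI-compatible client usage shown in the interference section below.

import requests

# Smoke test against the local server; the route below is assumed from
# the OpenAI-compatible client usage later in this README.
response = requests.post(
    "http://localhost:1337/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
)
print(response.json())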
To stop the Docker containers, simply run:
docker compose down
Note: When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the docker-compose.yml file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using docker compose build.
Usage
The g4f Package
ChatCompletion
import g4f

g4f.logging = True  # Enable logging
g4f.check_version = False  # Disable automatic version checking
print(g4f.version)  # Check version
print(g4f.Provider.Ails.params)  # Supported args

# Automatic selection of provider

# Streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for message in response:
    print(message, flush=True, end='')

# Normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,
    messages=[{"role": "user", "content": "Hello"}],
)  # Alternative model setting

print(response)
Completion
import g4f

# Models available for text completion
allowed_models = [
    'code-davinci-002',
    'text-ada-001',
    'text-babbage-001',
    'text-curie-001',
    'text-davinci-002',
    'text-davinci-003'
]

response = g4f.Completion.create(
    model='text-davinci-003',
    prompt='say this is a test'
)

print(response)
Providers:
import g4f
from g4f.Provider import (
    AItianhu,
    Acytoo,
    Aichat,
    Ails,
    Bard,
    Bing,
    ChatBase,
    ChatgptAi,
    H2o,
    HuggingChat,
    OpenAssistant,
    OpenaiChat,
    Raycast,
    Theb,
    Vercel,
    Vitalentum,
    Ylokh,
    You,
    Yqcloud,
)

# Set with provider
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.Aichat,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for message in response:
    print(message)
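Provider classes also expose attributes you can inspect before choosing one. A minimal sketch, assuming every provider class carries the working and supports_stream flags used elsewhere in this README:

import g4f

# Print the status flags of a few providers before picking one.
# `working` and `supports_stream` are the class attributes shown in
# the provider template in the Contribute section.
for provider in (g4f.Provider.Aichat, g4f.Provider.Bing, g4f.Provider.You):
    print(provider.__name__, "working:", provider.working,
          "streaming:", provider.supports_stream)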
Cookies Required:
Cookies are essential for the proper functioning of some service providers. It is imperative to maintain an active session, typically achieved by logging into your account.
When running the g4f package locally, it automatically retrieves cookies from your web browser using the get_cookies function. However, if you're not running it locally, you'll need to provide the cookies manually via the cookies parameter.
import g4f
from g4f.Provider import (
Bard,
Bing,
HuggingChat,
OpenAssistant,
OpenaiChat,
)
# Usage:
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    provider=Bard,
    #cookies=g4f.get_cookies(".google.com"),
    cookies={"cookie_name": "value", "cookie_name2": "value2"},
    auth=True
)
Async Support:
To enhance speed and overall performance, execute providers asynchronously. The total execution time will be determined by the duration of the slowest provider's execution.
import g4f, asyncio

_providers = [
    g4f.Provider.Aichat,
    g4f.Provider.ChatBase,
    g4f.Provider.Bing,
    g4f.Provider.GptGo,
    g4f.Provider.You,
    g4f.Provider.Yqcloud,
]

async def run_provider(provider: g4f.Provider.BaseProvider):
    try:
        response = await g4f.ChatCompletion.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Hello"}],
            provider=provider,
        )
        print(f"{provider.__name__}:", response)
    except Exception as e:
        print(f"{provider.__name__}:", e)

async def run_all():
    calls = [
        run_provider(provider) for provider in _providers
    ]
    await asyncio.gather(*calls)

asyncio.run(run_all())
Proxy Support:
All providers support specifying a proxy in the create functions.
import g4f

response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",
    # or socks5://user:pass@host:port
)

print("Result:", response)
interference openai-proxy api (use with openai python package)
Run the interference API from the PyPI package:
from g4f.api import run_api
run_api()
Run the interference API from the repo:
If you want to use the embedding function, you need to get a Hugging Face token. You can get one at https://huggingface.co/settings/tokens; make sure your role is set to write. If you have your token, use it instead of the OpenAI api-key.
Run the server:
g4f api
or
python -m g4f.api
import openai

# Leave api_key empty unless you use embeddings; for embeddings, set it
# to your Hugging Face token
openai.api_key = ""
openai.api_base = "http://localhost:1337"

def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )

    if isinstance(chat_completion, dict):
        # Not streamed
        print(chat_completion.choices[0].message.content)
    else:
        # Streamed
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()
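If you set your Hugging Face token as the API key, the embedding function mentioned above can be exercised through the same client. A minimal sketch; the model name below is a placeholder for illustration, not a value confirmed by this README:

import openai

openai.api_key = "your huggingface token"  # required for embeddings
openai.api_base = "http://localhost:1337"

# Mirrors the standard OpenAI embeddings call; the model name is an
# assumption, not confirmed by this README.
embedding = openai.Embedding.create(
    model="text-embedding-ada-002",
    input="Hello world",
)
print(embedding["data"][0]["embedding"][:8])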
Models
gpt-3.5 / gpt-4
Website | Provider | gpt-3.5 | gpt-4 | Streaming | Asynchronous | Auth |
---|---|---|---|---|---|---|
www.aitianhu.com | g4f.Provider.AItianhu | ✔️ | ❌ | ❌ | ✔️ | ❌ |
chat.acytoo.com | g4f.Provider.Acytoo | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
chat-gpt.org | g4f.Provider.Aichat | ✔️ | ❌ | ❌ | ✔️ | ❌ |
ai.ls | g4f.Provider.Ails | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
bard.google.com | g4f.Provider.Bard | ❌ | ❌ | ❌ | ✔️ | ✔️ |
bing.com | g4f.Provider.Bing | ❌ | ✔️ | ✔️ | ✔️ | ❌ |
www.chatbase.co | g4f.Provider.ChatBase | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
chatgpt.ai | g4f.Provider.ChatgptAi | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
opchatgpts.net | g4f.Provider.ChatgptLogin | ✔️ | ❌ | ❌ | ✔️ | ❌ |
ava-ai-ef611.web.app | g4f.Provider.CodeLinkAva | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
gptgo.ai | g4f.Provider.GptGo | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
gpt-gm.h2o.ai | g4f.Provider.H2o | ❌ | ❌ | ✔️ | ✔️ | ❌ |
huggingface.co | g4f.Provider.HuggingChat | ❌ | ❌ | ✔️ | ✔️ | ✔️ |
opchatgpts.net | g4f.Provider.Opchatgpts | ✔️ | ❌ | ❌ | ✔️ | ❌ |
open-assistant.io | g4f.Provider.OpenAssistant | ❌ | ❌ | ✔️ | ✔️ | ✔️ |
chat.openai.com | g4f.Provider.OpenaiChat | ✔️ | ❌ | ❌ | ✔️ | ✔️ |
www.perplexity.ai | g4f.Provider.PerplexityAi | ✔️ | ❌ | ❌ | ✔️ | ❌ |
raycast.com | g4f.Provider.Raycast | ✔️ | ✔️ | ✔️ | ❌ | ✔️ |
theb.ai | g4f.Provider.Theb | ✔️ | ❌ | ✔️ | ❌ | ✔️ |
sdk.vercel.ai | g4f.Provider.Vercel | ✔️ | ❌ | ✔️ | ❌ | ❌ |
app.vitalentum.io | g4f.Provider.Vitalentum | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
wewordle.org | g4f.Provider.Wewordle | ✔️ | ❌ | ❌ | ✔️ | ❌ |
chat.ylokh.xyz | g4f.Provider.Ylokh | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
you.com | g4f.Provider.You | ✔️ | ❌ | ❌ | ✔️ | ❌ |
chat9.yqcloud.top | g4f.Provider.Yqcloud | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
cromicle.top | g4f.Provider.Cromicle | ✔️ | ❌ | ✔️ | ✔️ | ❌ |
aiservice.vercel.app | g4f.Provider.AiService | ✔️ | ❌ | ❌ | ❌ | ❌ |
chat.dfehub.com | g4f.Provider.DfeHub | ✔️ | ❌ | ✔️ | ❌ | ❌ |
free.easychat.work | g4f.Provider.EasyChat | ✔️ | ❌ | ✔️ | ❌ | ❌ |
next.eqing.tech | g4f.Provider.Equing | ✔️ | ❌ | ✔️ | ❌ | ❌ |
chat9.fastgpt.me | g4f.Provider.FastGpt | ✔️ | ❌ | ✔️ | ❌ | ❌ |
forefront.com | g4f.Provider.Forefront | ✔️ | ❌ | ✔️ | ❌ | ❌ |
chat.getgpt.world | g4f.Provider.GetGpt | ✔️ | ❌ | ✔️ | ❌ | ❌ |
liaobots.com | g4f.Provider.Liaobots | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
p5.v50.ltd | g4f.Provider.V50 | ✔️ | ❌ | ❌ | ❌ | ❌ |
chat.wuguokai.xyz | g4f.Provider.Wuguokai | ✔️ | ❌ | ❌ | ❌ | ❌ |
Other Models
Model | Base Provider | Provider | Website |
---|---|---|---|
palm | Google | g4f.Provider.Bard | bard.google.com |
h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Huggingface | g4f.Provider.H2o | www.h2o.ai |
h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Huggingface | g4f.Provider.H2o | www.h2o.ai |
h2ogpt-gm-oasst1-en-2048-open-llama-13b | Huggingface | g4f.Provider.H2o | www.h2o.ai |
claude-instant-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
claude-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
claude-v2 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
command-light-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
command-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-neox-20b | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
oasst-sft-1-pythia-12b | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
oasst-sft-4-pythia-12b-epoch-3.5 | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
santacoder | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
bloom | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
flan-t5-xxl | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
code-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-3.5-turbo-16k | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-3.5-turbo-16k-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-4-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-ada-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-babbage-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-curie-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-davinci-003 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
llama13b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
llama7b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
Related gpt4free projects
- gpt4free
- gpt4free-ts
- Free AI API's & Potential Providers List
- ChatGPT-Clone
- ChatGpt Discord Bot
- LangChain gpt4free
- ChatGpt Telegram Bot
- Action Translate Readme
- Langchain Document GPT
Contribute
Create Provider with AI Tool
Run the create_provider script in your terminal:
python etc/tool/create_provider.py
- Enter your name for the new provider.
- Copy and paste a cURL command from your browser's developer tools.
- Let the AI create the provider for you.
- Customize the provider according to your needs.
Create Provider
- Check out the current list of potential providers, or find your own provider source!
- Create a new file in g4f/provider with the name of the Provider
- Implement a class that extends BaseProvider.
from __future__ import annotations

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider

class HogeService(AsyncGeneratorProvider):
    url = "https://chat-gpt.com"
    supports_gpt_35_turbo = True
    working = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        yield ""
- Here, you can adjust the settings; for example, if the website supports streaming, set supports_stream to True...
- Write code to request the provider in create_async_generator and yield the response, even if it is a one-time response; do not hesitate to look at other providers for inspiration. A hedged sketch follows below.
- Add the provider name in g4f/provider/__init__.py
from .HogeService import HogeService

__all__ = [
    HogeService,
]
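For orientation, here is a sketch of what a filled-in provider might look like, assuming a hypothetical endpoint that streams plain-text chunks over HTTP; the URL path, payload shape, and response handling are illustrative, not taken from a real provider:

from __future__ import annotations

from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider

class HogeService(AsyncGeneratorProvider):
    url = "https://chat-gpt.com"
    supports_gpt_35_turbo = True
    working = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        # Hypothetical request and response shape, for illustration only
        async with ClientSession() as session:
            async with session.post(
                f"{cls.url}/api/chat",  # made-up endpoint
                json={"messages": messages},
                proxy=proxy,
            ) as response:
                response.raise_for_status()
                async for chunk in response.content.iter_any():
                    yield chunk.decode(errors="ignore")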
- You are done! Test the provider by calling it:

import g4f

response = g4f.ChatCompletion.create(
    model='gpt-3.5-turbo',
    provider=g4f.Provider.PROVIDERNAME,
    messages=[{"role": "user", "content": "test"}],
    stream=g4f.Provider.PROVIDERNAME.supports_stream
)

for message in response:
    print(message, flush=True, end='')
Contributors
A list of the contributors is available here.
The Vercel.py file contains code from vercel-llm-api by @ading2210, which is licensed under the GNU GPL v3.
Top 1 contributor: @hlohaus
Copyright
This program is licensed under the GNU GPL v3
xtekky/gpt4free: Copyright (C) 2023 xtekky
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Star History
License
This project is licensed under GNU GPL v3.0.