
๐Ÿ•Š๏ธ LLM-API-Open (LMAO)

LLM-API-Open logo

Unofficial open APIs for popular LLMs with self-hosted redirect capability


โ“ WAT

๐Ÿ•Š๏ธ LLM-API-Open (LMAO) allows for the free and universal utilization of popular Large Language Models (LLM). This is achieved using browser automation. LLM-API-Open (LMAO) launches a browser in headless mode and controls a website as if a real user were using it. This enables the use of popular LLMs that usually don't offer easy and free access to their official APIs

🔥 Additionally, LLM-API-Open (LMAO) is capable of creating its own API server to which any other apps can send requests. In other words, you can utilize LLM-API-Open (LMAO) both as a Python package and as an API proxy for any of your apps!


🚧 LLM-API-Open is under development

Due to my studies, I don't have much time to work on the project 😔

Currently, LLM-API-Open has only 2 modules: ChatGPT and Microsoft Copilot

📈 But it is possible to add other popular online LLMs (you can wait, or submit a pull request yourself)

📄 Documentation is also under development! Consider reading the docstrings for now


😋 Support the project

  • BTC: bc1qd2j53p9nplxcx4uyrv322t3mg0t93pz6m5lnft

  • ETH: 0x284E6121362ea1C69528eDEdc309fC8b90fA5578

  • ZEC: t1Jb5tH61zcSTy2QyfsxftUEWHikdSYpPoz

  • Or buy my music on 🟦 Bandcamp


๐Ÿ—๏ธ Getting started

โš ๏ธ Will not work with Python 3.13 or later due to imghdr

โš™๏ธ 1. Download / build / install LLM-API-Open

There are four general ways to get LLM-API-Open

โš™๏ธ Install via pip

  • Install from PyPI

    pip install llm-api-open
    
  • Or install from GitHub directly

    pip install git+https://github.com/F33RNI/LLM-API-Open.git
    
  • Or clone repo and install

    git clone https://github.com/F33RNI/LLM-API-Open.git
    cd LLM-API-Open
    
    python -m venv venv
    source venv/bin/activate
    
    pip install .
    

โฌ‡๏ธ Download cli version from releases

https://github.com/F33RNI/LLM-API-Open/releases/latest

🔨 Build the CLI version from source using PyInstaller

git clone https://github.com/F33RNI/LLM-API-Open.git
cd LLM-API-Open

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

pyinstaller lmao.spec

dist/lmao --help

💻 Use the source as is

git clone https://github.com/F33RNI/LLM-API-Open.git
cd LLM-API-Open

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

export PYTHONPATH=./src:$PYTHONPATH
export PYTHONPATH=./src/lmao:$PYTHONPATH
python -m main --help

🔧 2. Configure LLM-API-Open

  1. Download the configs directory from this repo
  2. Open the .json files of the modules you need in any editor and edit them as needed
  3. Specify the path to the configs directory with the -c path/to/configs argument
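As a sketch of what the steps above imply (the helper name here is mine, not part of the LMAO API), the configs directory simply holds one .json file per module, keyed by file name:

```python
import json
from pathlib import Path


def load_module_configs(configs_dir: str) -> dict:
    """Load every module config (one .json file per module) from a directory.

    Returns a mapping of module name (the file stem) to its parsed config dict.
    """
    configs = {}
    for config_file in sorted(Path(configs_dir).glob("*.json")):
        with open(config_file, "r", encoding="utf-8") as file:
            configs[config_file.stem] = json.load(file)
    return configs
```

With the stock configs directory, `load_module_configs("configs")` would pick up the chatgpt and ms_copilot configs by file name.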

📦 Python package example

import logging
import json

from lmao.module_wrapper import ModuleWrapper

# Initialize logging in the simplest way
logging.basicConfig(level=logging.INFO)

# Load config
with open("path/to/configs/chatgpt.json", "r", encoding="utf-8") as file:
    module_config = json.loads(file.read())

# Initialize module
module = ModuleWrapper("chatgpt", module_config)
module.initialize(blocking=True)

# Ask something
conversation_id = None
for response in module.ask({"prompt": "Hi! Who are you?", "convert_to_markdown": True}):
    conversation_id = response.get("conversation_id")
    response_text = response.get("response")
    print(response_text, end="\n\n")

# Delete conversation
module.delete_conversation({"conversation_id": conversation_id})

# Close (unload) module
module.close(blocking=True)

💻 CLI example

$ lmao --help        
usage: lmao [-h] [-v] [-c CONFIGS] [-t TEST] [-i IP] [-p PORT] [-s SSL [SSL ...]] [--tokens-use TOKENS_USE [TOKENS_USE ...]]
            [--tokens-manage TOKENS_MANAGE [TOKENS_MANAGE ...]] [--rate-limits-default RATE_LIMITS_DEFAULT [RATE_LIMITS_DEFAULT ...]] [--rate-limit-fast RATE_LIMIT_FAST]
            [--no-logging-init]

Unofficial open APIs for popular LLMs with self-hosted redirect capability

options:
  -h, --help            show this help message and exit
  -v, --version         show program's version number and exit
  -c CONFIGS, --configs CONFIGS
                        path to configs directory with each module config file (Default: configs)
  -t TEST, --test TEST  module name to test in cli instead of starting API server (eg. --test=chatgpt)
  -i IP, --ip IP        API server Host (IP) (Default: localhost)
  -p PORT, --port PORT  API server port (Default: 1312)
  -s SSL [SSL ...], --ssl SSL [SSL ...]
                        Paths to SSL certificate and private key (ex. --ssl "path/to/certificate.crt" "path/to/private.key")
  --tokens-use TOKENS_USE [TOKENS_USE ...]
                        API tokens to enable authorization for /status, /ask, /stop and /delete (ex. --tokens-use "tokenForMyApp" "tokenForMyAnotherApp"
                        "ultraPrivateTokeeeeeen")
  --tokens-manage TOKENS_MANAGE [TOKENS_MANAGE ...]
                        API tokens to enable authorization for /init and /close (ex. --tokens-manage "ultraPrivateTokeeeeeen")
  --rate-limits-default RATE_LIMITS_DEFAULT [RATE_LIMITS_DEFAULT ...]
                        Rate limits for all API requests except /stop (Default: --rate-limits-default "10/minute", "1/second")
  --rate-limit-fast RATE_LIMIT_FAST
                        Rate limit /stop API request (Default: "1/second")
  --no-logging-init     specify to bypass logging initialization (will be set automatically when using --test)

examples:
  lmao --test=chatgpt
  lmao --ip="0.0.0.0" --port=1312
  lmao --ip="0.0.0.0" --port=1312 --no-logging-init
  lmao --ip "0.0.0.0" --port=1312 --ssl certificate.crt private.key --tokens-use "tokenForMyApp" "tokenForMyAnotherApp" "ultraPrivateTokeeeeeen" --tokens-manage "ultraPrivateTokeeeeeen"
$ lmao --test=chatgpt
WARNING:root:Error adding cookie oai-did
WARNING:root:Error adding cookie ajs_anonymous_id
WARNING:root:Error adding cookie oai-allow-ne
User > Hi!    
chatgpt > Hello! How can I assist you today?

๐ŸŒ API example

Start server

Please see the 🔒 HTTPS server and token-based authorization section for more info about the HTTPS server and tokens

$ lmao --configs "configs" --ip "0.0.0.0" --port "1312" 
2024-03-30 23:14:50 INFO     Logging setup is complete
2024-03-30 23:14:50 INFO     Loading config files from configs directory
2024-03-30 23:14:50 INFO     Adding config of ms_copilot module
2024-03-30 23:14:50 INFO     Adding config of chatgpt module
 * Serving Flask app 'lmao.external_api'
 * Debug mode: off
2024-03-30 23:14:50 INFO     WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:1312
 * Running on http://192.168.0.3:1312
2024-03-30 23:14:50 INFO     Press CTRL+C to quit
...

๐Ÿ Python (requests)

import logging
import time
from typing import Dict

import requests

# API URL
BASE_URL = "http://localhost:1312/api"

# Timeout for each request
TIMEOUT = 60

# Initialize logging in the simplest way
logging.basicConfig(level=logging.INFO)


def post(endpoint: str, data: Dict):
    """POST request wrapper"""
    response_ = requests.post(f"{BASE_URL}/{endpoint}", json=data, timeout=TIMEOUT, stream=endpoint == "ask")
    if endpoint != "ask":
        try:
            logging.info(f"{endpoint.capitalize()} Response: {response_.status_code}. Data: {response_.json()}")
        except Exception:
            logging.info(f"{endpoint.capitalize()} Response: {response_.status_code}")
    else:
        logging.info(f"{endpoint.capitalize()} Response: {response_.status_code}")
    return response_


def get(endpoint: str):
    """GET request wrapper"""
    response_ = requests.get(f"{BASE_URL}/{endpoint}", timeout=TIMEOUT)
    logging.info(f"{endpoint.capitalize()} Response: {response_.status_code}. Data: {response_.json()}")
    return response_


# Initialize module
response = post("init", {"module": "chatgpt"})

# Read the module's status and wait until it's initialized (Idle)
logging.info("Waiting for module initialization")
while True:
    response = get("status")
    chatgpt_status_code = response.json()[0].get("status_code")
    if chatgpt_status_code == 2:
        break
    time.sleep(1)

# Ask and read stream response
response = post("ask", {"chatgpt": {"prompt": "Hi! Please write a long text about AI", "convert_to_markdown": True}})
logging.info("Stream Response:")
for line in response.iter_lines():
    if line:
        logging.info(line.decode("utf-8"))

# Delete last conversation
response = post("delete", {"chatgpt": {"conversation_id": ""}})

# Close module (uninitialize it)
response = post("close", {"module": "chatgpt"})
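The busy-wait loop above never gives up. A slightly more defensive variant polls with a timeout (a sketch; wait_for_status_code and its parameters are my naming, not part of the LMAO API — status code 2 meaning Idle comes from the example above):

```python
import time
from typing import Callable


def wait_for_status_code(
    read_status: Callable[[], int],
    target: int = 2,  # 2 = Idle
    timeout: float = 120.0,
    interval: float = 1.0,
) -> bool:
    """Poll read_status() until it returns target or the timeout expires.

    Returns True if the target status was reached, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_status() == target:
            return True
        time.sleep(interval)
    return False
```

With the wrappers above, `read_status` could be `lambda: get("status").json()[0].get("status_code")`.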

๐ŸŒ CURL

For CURL examples, please read the 📄 API docs section


📄 API docs

โš ๏ธ Documentation is still under development!

๐ŸŒ Module initialization /api/init

Begins module initialization (in a separate, non-blocking thread)

Please call /api/status to check if the module is initialized BEFORE calling /api/init.

After calling /api/init, please call /api/status to check if the module's initialization finished.

Request (POST):

Maximum content length: 100 bytes. Default rate limits: 10/minute, 1/second

  • Without authorization

    {
        "module": "name of module from MODULES"
    }
    
  • With authorization

    Please see the 🔒 HTTPS server and token-based authorization section for more info

    {
        "module": "name of module from MODULES",
        "token": "YourStrongRandomToken from --tokens-manage argument"
    }
    

Returns:

  • โœ”๏ธ If everything is ok: status code 200 and {} body
  • โŒ Error codes 429, 401 or 413 in case of rate limit, wrong token or too large request
  • โŒ In case of other error: status code 400 or 500 and {"error": "Error message"} body

Example:

$ curl --request POST --header "Content-Type: application/json" --data '{"module": "chatgpt"}' http://localhost:1312/api/init
{}

๐ŸŒ Status /api/status

Retrieves the current status of all modules

Request (POST):

Maximum content length: 100 bytes. No rate limit

  • Without authorization

    {}
    
  • With authorization

    Please see the 🔒 HTTPS server and token-based authorization section for more info

    {
        "token": "YourStrongRandomToken from --tokens-use argument"
    }
    

Returns:

  • โœ”๏ธ If no errors during modules iteration: status code 200 and
[
    {
        "module": "Name of the module from MODULES",
        "status_code": "Module's status code as an integer",
        "status_name": "Module's status as a string",
        "error": "Empty or module's error message"
    },
]
  • โŒ Error codes 429, 401 or 413 in case of rate limit, wrong token or too large request
  • โŒ In case of an modules iteration error: status code 500 and {"error": "Error message"} body

Example:

$ curl --request POST --header "Content-Type: application/json" --data '{}' http://localhost:1312/api/status
[{"error":"","module":"chatgpt","status_code":2,"status_name":"Idle"}]
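Because /api/status returns a list (one entry per loaded module), a client typically indexes it by module name. A minimal sketch, parsing the example response above (module_status is my helper name):

```python
import json


def module_status(status_body: str, module: str) -> dict:
    """Pick one module's entry out of the /api/status JSON response body."""
    for entry in json.loads(status_body):
        if entry.get("module") == module:
            return entry
    raise KeyError(f"Module {module!r} not found in status response")


# Parsing the example response above:
body = '[{"error":"","module":"chatgpt","status_code":2,"status_name":"Idle"}]'
status = module_status(body, "chatgpt")
# status["status_code"] == 2 and status["status_name"] == "Idle"
```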

๐ŸŒ Send request and get stream response /api/ask

Initiates a request to the specified module and streams responses back (if no_stream is false or not specified)

Please call /api/status to check if the module is initialized and not busy BEFORE calling /api/ask

To stop the stream / response, please call /api/stop

Request (POST):

Maximum content length: 100 bytes. Default rate limits: 10/minute, 1/second

  • Without authorization

    For ChatGPT:

    {
        "chatgpt": {
            "prompt": "Text request to send to the module",
            "conversation_id": "Optional conversation ID (to continue existing chat) or empty for a new conversation",
            "convert_to_markdown": true or false //(Optional flag for converting response to Markdown)
        },
        "no_stream": true if you need to receive only the last response
    }
    

    For Microsoft Copilot:

    {
        "ms_copilot": {
            "prompt": "Text request",
            "image": image as base64 to include into request,
            "conversation_id": "empty string or existing conversation ID",
            "style": "creative" / "balanced" / "precise",
        "convert_to_markdown": true or false
        },
        "no_stream": true if you need to receive only the last response
    }
    
  • With authorization

    Please see the 🔒 HTTPS server and token-based authorization section for more info

    For ChatGPT:

    {
        "chatgpt": {
            "prompt": "Text request to send to the module",
            "conversation_id": "Optional conversation ID (to continue existing chat) or empty for a new conversation",
            "convert_to_markdown": true or false //(Optional flag for converting response to Markdown)
        },
        "token": "YourStrongRandomToken from --tokens-use argument",
        "no_stream": true if you need to receive only the last response
    }
    

    For Microsoft Copilot:

    {
        "ms_copilot": {
            "prompt": "Text request",
            "image": image as base64 to include into request,
            "conversation_id": "empty string or existing conversation ID",
            "style": "creative" / "balanced" / "precise",
        "convert_to_markdown": true or false
        },
        "token": "YourStrongRandomToken from --tokens-use argument",
        "no_stream": true if you need to receive only the last response
    }
    

Yields (or Returns if "no_stream" is specified):

  • โœ”๏ธ A stream of JSON objects containing module responses

For ChatGPT, each JSON object has the following structure:

{
    "finished": "True if it's the last response, False if not",
    "conversation_id": "ID of the current conversation",
    "message_id": "ID of the current message (from assistant)",
    "response": "Actual response as text"
}

For Microsoft Copilot, each JSON object has the following structure:

{
    "finished": "True if it's the last response, False if not",
    "response": "response as text (or meta response)",
    "images": ["array of image URL's"],
    "caption": "images caption",
    "attributions": [
        {
            "name": "name of attribution",
            "url": "URL of attribution"
        },
        ...
    ],
    "suggestions": ["array of suggested follow-up requests"]
}

Returns:

  • โŒ Error codes 429, 401 or 413 in case of rate limit, wrong token or too large request
  • โŒ In case of error: status code 500 and {"error": "Error message"} body

Example:

$ curl --request POST --header "Content-Type: application/json" --data '{"chatgpt": {"prompt": "Hi! Who are you?", "convert_to_markdown": true}}' http://localhost:1312/api/ask
{"finished": false, "conversation_id": "1033be5b-d37d-46b3-b47c-9548da5b192c", "message_id": "00d9cc0d-c4d9-484d-a8e5-9c78eaf2a0e1", "response": "Hello! I'm ChatGPT, an AI developed by O"}
...
{"finished": true, "conversation_id": "1033be5b-d37d-46b3-b47c-9548da5b192c", "message_id": "00d9cc0d-c4d9-484d-a8e5-9c78eaf2a0e1", "response": "Hello! I'm ChatGPT, an AI developed by OpenAI. I'm here to help answer your questions, engage in conversation, provide information, or assist you with anything else you might need. How can I assist you today?"}
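Each line of the /api/ask stream is an independent JSON object, so a client can decode line by line and stop once `finished` is true. A minimal sketch (the function name and sample lines are illustrative, not part of the LMAO API):

```python
import json
from typing import Iterable, Optional


def last_response(stream_lines: Iterable[bytes]) -> Optional[str]:
    """Decode a stream of JSON lines and return the final 'response' text."""
    final = None
    for raw in stream_lines:
        if not raw:
            continue  # skip blank keep-alive lines
        chunk = json.loads(raw.decode("utf-8"))
        final = chunk.get("response")
        if chunk.get("finished"):
            break
    return final
```

With the Python requests example above, this would be called as `last_response(response.iter_lines())`.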

๐ŸŒ Stop stream response /api/stop

Stops the specified module's streaming response (stops yielding from /api/ask)

Request (POST):

Maximum content length: 100 bytes. Default rate limits: 1/second

  • Without authorization

    {
        "module": "Name of the module from MODULES"
    }
    
  • With authorization

    Please see the 🔒 HTTPS server and token-based authorization section for more info

    {
        "module": "Name of the module from MODULES",
        "token": "YourStrongRandomToken from --tokens-use argument"
    }
    

Returns:

  • โœ”๏ธ If the stream stopped successfully: status code 200 and {} body
  • โŒ Error codes 429, 401 or 413 in case of rate limit, wrong token or too large request
  • โŒ In case of an error: status code 400 or 500 and {"error": "Error message"} body

Example:

$ curl --request POST --header "Content-Type: application/json" --data '{"module": "chatgpt"}' http://localhost:1312/api/stop
{}

๐ŸŒ Delete conversation /api/delete

Clears the module's conversation history

Please call /api/status to check if the module is initialized and not busy BEFORE calling /api/delete

Request (POST):

Maximum content length: 500 bytes. Default rate limits: 10/minute, 1/second

  • Without authorization

    For ChatGPT:

    {
        "chatgpt": {
            "conversation_id": "ID of conversation to delete or empty to delete the top one"
        }
    }
    

    For Microsoft Copilot:

    {
        "ms_copilot": {
            "conversation_id": "ID of conversation to delete or empty to delete the top one"
        }
    }
    
  • With authorization

    Please see the 🔒 HTTPS server and token-based authorization section for more info

    For ChatGPT:

    {
        "chatgpt": {
            "conversation_id": "ID of conversation to delete or empty to delete the top one"
        },
        "token": "YourStrongRandomToken from --tokens-use argument"
    }
    

    For Microsoft Copilot:

    {
        "ms_copilot": {
            "conversation_id": "ID of conversation to delete or empty to delete the top one"
        },
        "token": "YourStrongRandomToken from --tokens-use argument"
    }
    

Returns:

  • โœ”๏ธ If conversation deleted successfully: status code 200 and {} body
  • โŒ Error codes 429, 401 or 413 in case of rate limit, wrong token or too large request
  • โŒ In case of an error: status code 400 or 500 and {"error": "Error message"} body

Example:

$ curl --request POST --header "Content-Type: application/json" --data '{"chatgpt": {"conversation_id": "1033be5b-d37d-46b3-b47c-9548da5b192c"}}' http://localhost:1312/api/delete
{}

๐ŸŒ Close module /api/close

Requests the module's session to close (in a separate, non-blocking thread)

Please call /api/status to check if the module is initialized and its status is Idle or Failed BEFORE calling /api/close

After calling /api/close, please call /api/status to check if the module's closing finished

Request (POST):

Maximum content length: 500 bytes. Default rate limits: 10/minute, 1/second

  • Without authorization

    {
        "module": "Name of the module from MODULES"
    }
    
  • With authorization

    Please see the 🔒 HTTPS server and token-based authorization section for more info

    {
        "module": "Name of the module from MODULES",
        "token": "YourStrongRandomToken from --tokens-manage argument"
    }
    

Returns:

  • โœ”๏ธ If requested successfully: status code 200 and {} body
  • โŒ Error codes 429, 401 or 413 in case of rate limit, wrong token or too large request
  • โŒ In case of an error: status code 400 or 500 and {"error": "Error message"} body

🔒 HTTPS server and token-based authorization

โš ๏ธ Better use proper SSL service and redirect to local port

โš ๏ธ Don't use token-based authorization with bare HTTP (without any SSL). It's not safe!

It's possible to start an SSL (HTTPS) server instead of HTTP. For that, provide the --ssl argument with the path to the certificate file and the path to the private key file.

Example:

$ lmao --configs "configs" --ip "0.0.0.0" --port "1312" --ssl certificate.crt private.key
2024-04-19 02:09:10 INFO     Logging setup is complete
2024-04-19 02:09:10 INFO     Loading config files from configs directory
2024-04-19 02:09:10 INFO     Adding config of ms_copilot module
2024-04-19 02:09:10 INFO     Adding config of chatgpt module
2024-04-19 02:09:10 WARNING  No tokens provided. Everyone can use API
2024-04-19 02:09:10 INFO     Rate limits for all API requests except /status and /stop: 10/minute, 1/second
2024-04-19 02:09:10 INFO     Rate limits /status and /stop API requests: 1/second
 * Serving Flask app 'lmao.external_api'
 * Debug mode: off
2024-04-19 02:09:10 INFO     WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on https://127.0.0.1:1312
 * Running on https://192.168.0.3:1312
2024-04-19 02:09:10 INFO     Press CTRL+C to quit
...

You can also enable token-based authorization. For that, provide the --tokens-use argument with a list of strong random tokens. Requests that provide one of these tokens can access /status, /ask, /stop and /delete. For /init and /close, provide the --tokens-manage argument with a list of REALLY strong random tokens

โš ๏ธ Make sure you provided at least one token to --tokens-manage if you're using --tokens-use. Otherwise EVERYONE CAN ACCESS /init and /stop

Example (tokens for /status, /ask, /stop and /delete: naixae3eeNao6suu, kahMeixoo9un9OhR. Token for /init and /close: ofi2ohRi8maish4x):

$ lmao --configs "configs" --ip "0.0.0.0" --port "1312" --ssl certificate.crt private.key --tokens-use naixae3eeNao6suu kahMeixoo9un9OhR --tokens-manage ofi2ohRi8maish4x
2024-04-19 02:07:02 INFO     Logging setup is complete
2024-04-19 02:07:02 INFO     Loading config files from configs directory
2024-04-19 02:07:02 INFO     Adding config of ms_copilot module
2024-04-19 02:07:02 INFO     Adding config of chatgpt module
2024-04-19 02:07:02 INFO     Token-based authorization enabled. Provided 2 tokens-use
2024-04-19 02:07:02 INFO     Token-based authorization enabled. Provided 1 tokens-manage
2024-04-19 02:07:02 INFO     Rate limits for all API requests except /status and /stop: 10/minute, 1/second
2024-04-19 02:07:02 INFO     Rate limits /status and /stop API requests: 1/second
 * Serving Flask app 'lmao.external_api'
 * Debug mode: off
2024-04-19 02:07:02 INFO     WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:1312
 * Running on http://192.168.0.3:1312
2024-04-19 02:07:02 INFO     Press CTRL+C to quit
...

๐Ÿ› How to fix Failed to establish a new connection when using multiple modules

  1. Open venv/lib/python3.XX/site-packages/undetected_chromedriver/patcher.py
  2. Add import secrets to the imports
  3. In def __init__ change prefix = "undetected" to prefix = f"undetected{secrets.token_hex(4)}"
  4. Save file and reload LMAO
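The patched line from step 3 makes each instance's patched chromedriver filename unique, so parallel modules stop clobbering one another. For illustration:

```python
import secrets

# Original: every instance shares one patched driver file
prefix = "undetected"

# Patched: a random 8-hex-char suffix per instance avoids the collision
prefix = f"undetected{secrets.token_hex(4)}"
print(prefix)
```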

Download files

Download the file for your platform.

Source Distribution

llm_api_open-2.4.7.tar.gz (54.9 kB)

Uploaded Source

Built Distribution

llm_api_open-2.4.7-py3-none-any.whl (100.2 kB)

Uploaded Python 3

File details

Details for the file llm_api_open-2.4.7.tar.gz.

File metadata

  • Download URL: llm_api_open-2.4.7.tar.gz
  • Upload date:
  • Size: 54.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for llm_api_open-2.4.7.tar.gz:

  • SHA256: 5c202532e30397351a67a54863e6872fc7a229a7bc423573430e785da41abc31
  • MD5: 3afeff82cd1ac008099d3a1c04413b77
  • BLAKE2b-256: a2b52fde2ffb0275bf467e000e253e4f1e9a0780558a3f5a17d6a9d0fc65dec8
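To verify a downloaded file against the hashes above, compare its digest computed with Python's hashlib (a sketch; substitute the filename and expected digest):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as file:
        for chunk in iter(lambda: file.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# e.g.:
# assert sha256_of("llm_api_open-2.4.7.tar.gz") == "5c202532e30397351a67a54863e6872fc7a229a7bc423573430e785da41abc31"
```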


File details

Details for the file llm_api_open-2.4.7-py3-none-any.whl.

File metadata

  • Download URL: llm_api_open-2.4.7-py3-none-any.whl
  • Upload date:
  • Size: 100.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for llm_api_open-2.4.7-py3-none-any.whl:

  • SHA256: d926671e4d879b4b0f05e0bb3517b9aecbf23dee2d8b96869026b35aef7ed0e2
  • MD5: 9d5e8d4bb5a56f2b63eb604efe5901c1
  • BLAKE2b-256: 16b83ab286af9f1787e9f5999d7adc6aecd28d0f81a9bb6246867500728ba224

