LLM-API-Open (LMAO)
Unofficial open APIs for popular LLMs with self-hosted redirect capability
❓ WAT
LLM-API-Open (LMAO) allows free and universal use of popular Large Language Models (LLMs). This is achieved using browser automation: LLM-API-Open (LMAO) launches a browser in headless mode and controls a website as if a real user were using it. This enables the use of popular LLMs that usually don't offer easy and free access to their official APIs
🔥 Additionally, LLM-API-Open (LMAO) can run its own API server to which any other apps can send requests. In other words, you can use LLM-API-Open (LMAO) both as a Python package and as an API proxy for any of your apps!
🚧 LLM-API-Open is under development
Due to my studies, I don't have much time to work on the project
Currently, LLM-API-Open has only two modules: ChatGPT and Microsoft Copilot
But it is possible to add other popular online LLMs (you can wait, or make a pull request yourself)
Documentation is also under development! Consider reading the docstrings for now
Support project
- BTC: bc1qd2j53p9nplxcx4uyrv322t3mg0t93pz6m5lnft
- ETH: 0x284E6121362ea1C69528eDEdc309fC8b90fA5578
- ZEC: t1Jb5tH61zcSTy2QyfsxftUEWHikdSYpPoz
- Or buy my music on bandcamp
Getting started
⚠️ Will not work with Python 3.13 or later due to the removal of the imghdr module
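Until that is fixed, a workaround sketch is to create the virtual environment with an older interpreter (the python3.12 binary name is an assumption about your system):

python3.12 -m venv venv
source venv/bin/activate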
⚙️ 1. Download / build / install LLM-API-Open
There are four general ways to get LLM-API-Open
⚙️ Install via pip
- Install from GitHub directly
pip install git+https://github.com/F33RNI/LLM-API-Open.git
- Or clone the repo and install
git clone https://github.com/F33RNI/LLM-API-Open.git
cd LLM-API-Open
python -m venv venv
source venv/bin/activate
pip install .
⬇️ Download CLI version from releases
https://github.com/F33RNI/LLM-API-Open/releases/latest
🔨 Build CLI version from source using PyInstaller
git clone https://github.com/F33RNI/LLM-API-Open.git
cd LLM-API-Open
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pyinstaller lmao.spec
dist/lmao --help
💻 Use source as is
git clone https://github.com/F33RNI/LLM-API-Open.git
cd LLM-API-Open
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
export PYTHONPATH=./src:$PYTHONPATH
export PYTHONPATH=./src/lmao:$PYTHONPATH
python -m main --help
🔧 2. Configure LLM-API-Open
- Download the configs directory from this repo
- Open the .json files of the modules you need in any editor and change them as needed
- Specify the path to the configs directory with the -c path/to/configs argument (see the example below)
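For example, to test the ChatGPT module with a custom configs directory (both flags are documented in the CLI example below):

lmao -c path/to/configs --test=chatgpt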
📦 Python package example
import logging
import json

from lmao.module_wrapper import ModuleWrapper

# Initialize logging in the simplest way
logging.basicConfig(level=logging.INFO)

# Load config
with open("path/to/configs/chatgpt.json", "r", encoding="utf-8") as file:
    module_config = json.loads(file.read())

# Initialize module
module = ModuleWrapper("chatgpt", module_config)
module.initialize(blocking=True)

# Ask something
conversation_id = None
for response in module.ask({"prompt": "Hi! Who are you?", "convert_to_markdown": True}):
    conversation_id = response.get("conversation_id")
    response_text = response.get("response")
    print(response_text, end="\n\n")
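# Continue the same conversation by passing conversation_id back into ask()
# (a sketch; this assumes ModuleWrapper.ask accepts the same request fields
# as the /api/ask endpoint documented below)
for response in module.ask(
    {
        "prompt": "Thanks! Can you repeat that in one sentence?",
        "conversation_id": conversation_id,
        "convert_to_markdown": True,
    }
):
    print(response.get("response"), end="\n\n")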
# Delete conversation
module.delete_conversation({"conversation_id": conversation_id})
# Close (unload) module
module.close(blocking=True)
💻 CLI example
$ lmao --help
usage: lmao [-h] [-v] [-c CONFIGS] [-t TEST] [-i IP] [-p PORT] [--no-logging-init]
Unofficial open APIs for popular LLMs with self-hosted redirect capability
options:
-h, --help show this help message and exit
-v, --version show program's version number and exit
-c CONFIGS, --configs CONFIGS
path to configs directory with each module config file (Default: configs)
-t TEST, --test TEST module name to test in cli instead of starting API server (eg. --test=chatgpt)
-i IP, --ip IP API server Host (IP) (Default: localhost)
-p PORT, --port PORT API server port (Default: 1312)
--no-logging-init specify to bypass logging initialization (will be set automatically when using --test)
examples:
lmao --test=chatgpt
lmao --ip="0.0.0.0" --port=1312
lmao --ip="0.0.0.0" --port=1312 --no-logging-init
$ lmao --test=chatgpt
WARNING:root:Error adding cookie oai-did
WARNING:root:Error adding cookie ajs_anonymous_id
WARNING:root:Error adding cookie oai-allow-ne
User > Hi!
chatgpt > Hello! How can I assist you today?
API example
Start server
$ lmao --configs "configs" --ip "0.0.0.0" --port "1312"
2024-03-30 23:14:50 INFO Logging setup is complete
2024-03-30 23:14:50 INFO Loading config files from configs directory
2024-03-30 23:14:50 INFO Adding config of ms_copilot module
2024-03-30 23:14:50 INFO Adding config of chatgpt module
* Serving Flask app 'lmao.external_api'
* Debug mode: off
2024-03-30 23:14:50 INFO WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:1312
* Running on http://192.168.0.3:1312
2024-03-30 23:14:50 INFO Press CTRL+C to quit
...
Python (requests)
import logging
import time
from typing import Dict

import requests

# API URL
BASE_URL = "http://localhost:1312/api"

# Timeout for each request
TIMEOUT = 60

# Initialize logging in the simplest way
logging.basicConfig(level=logging.INFO)


def post(endpoint: str, data: Dict):
    """POST request wrapper"""
    response_ = requests.post(f"{BASE_URL}/{endpoint}", json=data, timeout=TIMEOUT, stream=endpoint == "ask")
    if endpoint != "ask":
        try:
            logging.info(f"{endpoint.capitalize()} Response: {response_.status_code}. Data: {response_.json()}")
        except Exception:
            logging.info(f"{endpoint.capitalize()} Response: {response_.status_code}")
    else:
        logging.info(f"{endpoint.capitalize()} Response: {response_.status_code}")
    return response_


def get(endpoint: str):
    """GET request wrapper"""
    response_ = requests.get(f"{BASE_URL}/{endpoint}", timeout=TIMEOUT)
    logging.info(f"{endpoint.capitalize()} Response: {response_.status_code}. Data: {response_.json()}")
    return response_


# Initialize module
response = post("init", {"module": "chatgpt"})

# Read module's status and wait until it's initialized (in Idle)
logging.info("Waiting for module initialization")
while True:
    response = get("status")
    chatgpt_status_code = response.json()[0].get("status_code")
    if chatgpt_status_code == 2:
        break
    time.sleep(1)

# Ask and read the stream response
response = post("ask", {"chatgpt": {"prompt": "Hi! Please write a long text about AI", "convert_to_markdown": True}})
logging.info("Stream Response:")
for line in response.iter_lines():
    if line:
        logging.info(line.decode("utf-8"))

# Delete the last conversation
response = post("delete", {"chatgpt": {"conversation_id": ""}})

# Close module (uninitialize it)
response = post("close", {"module": "chatgpt"})
cURL
For cURL examples, please read the API docs section below
API docs
⚠️ Documentation is still under development!
Module initialization /api/init
Begins module initialization (in a separate, non-blocking thread)
Please call /api/status to check whether the module is already initialized BEFORE calling /api/init. After calling /api/init, please call /api/status to check whether the module's initialization has finished.
Request (POST):
{
  "module": "Name of the module from MODULES"
}
Returns:
- ✔️ If everything is OK: status code 200 and {} body
- ❌ In case of an error: status code 400 or 500 and {"error": "Error message"} body
Example:
$ curl --request POST --header "Content-Type: application/json" --data '{"module": "chatgpt"}' http://localhost:1312/api/init
{}
Status /api/status
Retrieves the current status of all modules
Request (GET or POST):
{}
Returns:
- ✔️ If there are no errors during module iteration: status code 200 and body:
[
  {
    "module": "Name of the module from MODULES",
    "status_code": "Module's status code as an integer",
    "status_name": "Module's status as a string",
    "error": "Empty or module's error message"
  },
  ...
]
- ❌ In case of a module iteration error: status code 500 and {"error": "Error message"} body
Example:
$ curl --request GET http://localhost:1312/api/status
[{"error":"","module":"chatgpt","status_code":2,"status_name":"Idle"}]
Send request and get stream response /api/ask
Initiates a request to the specified module and streams responses back
Please call /api/status to check that the module is initialized and not busy BEFORE calling /api/ask. To stop the stream, please call /api/stop
Request (POST):
For ChatGPT:
{
  "chatgpt": {
    "prompt": "Text request to send to the module",
    "conversation_id": "Optional conversation ID (to continue an existing chat) or empty for a new conversation",
    "convert_to_markdown": true or false // Optional flag for converting the response to Markdown
  }
}
For Microsoft Copilot:
{
  "ms_copilot": {
    "prompt": "Text request",
    "image": "Optional image to include in the request, encoded as base64",
    "conversation_id": "Empty string or an existing conversation ID",
    "style": "creative" / "balanced" / "precise",
    "convert_to_markdown": true or false
  }
}
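The image field for Microsoft Copilot expects the picture encoded as base64. A minimal Python sketch of building such a request body following the schema above (the file name is illustrative, and plain base64 without a data-URI prefix is an assumption):

import base64
import json

# Read a local picture and encode it as base64 (path is illustrative)
with open("photo.png", "rb") as file:
    image_b64 = base64.b64encode(file.read()).decode("utf-8")

# Request body for /api/ask, following the schema above
request_body = {
    "ms_copilot": {
        "prompt": "What is in this picture?",
        "image": image_b64,
        "conversation_id": "",
        "style": "precise",
        "convert_to_markdown": True,
    }
}
print(json.dumps(request_body)[:100])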
Yields:
- ✔️ A stream of JSON objects containing module responses
For ChatGPT, each JSON object has the following structure:
{
  "finished": true if it's the last response, false if not,
  "message_id": "ID of the current message (from assistant)",
  "response": "Actual response as text"
}
For Microsoft Copilot, each JSON object has the following structure:
{
  "finished": true if it's the last response, false if not,
  "response": "Response as text (or meta response)",
  "images": ["array of image URLs"],
  "caption": "Images caption",
  "attributions": [
    {
      "name": "Name of the attribution",
      "url": "URL of the attribution"
    },
    ...
  ],
  "suggestions": ["array of suggested follow-up requests"]
}
Returns:
- ❌ In case of an error: status code 500 and {"error": "Error message"} body
Example:
$ curl --request POST --header "Content-Type: application/json" --data '{"chatgpt": {"prompt": "Hi! Who are you?", "convert_to_markdown": true}}' http://localhost:1312/api/ask
{"finished": false, "conversation_id": "1033be5b-d37d-46b3-b47c-9548da5b192c", "message_id": "00d9cc0d-c4d9-484d-a8e5-9c78eaf2a0e1", "response": "Hello! I'm ChatGPT, an AI developed by O"}
...
{"finished": true, "conversation_id": "1033be5b-d37d-46b3-b47c-9548da5b192c", "message_id": "00d9cc0d-c4d9-484d-a8e5-9c78eaf2a0e1", "response": "Hello! I'm ChatGPT, an AI developed by OpenAI. I'm here to help answer your questions, engage in conversation, provide information, or assist you with anything else you might need. How can I assist you today?"}
Stop stream response /api/stop
Stops the specified module's streaming response (stops yielding from /api/ask)
Request (POST):
{
  "module": "Name of the module from MODULES"
}
Returns:
- ✔️ If the stream stopped successfully: status code 200 and {} body
- ❌ In case of an error: status code 400 or 500 and {"error": "Error message"} body
Example:
$ curl --request POST --header "Content-Type: application/json" --data '{"module": "chatgpt"}' http://localhost:1312/api/stop
{}
Delete conversation /api/delete
Clears the module's conversation history
Please call /api/status to check that the module is initialized and not busy BEFORE calling /api/delete
Request:
For ChatGPT:
{
  "chatgpt": {
    "conversation_id": "ID of the conversation to delete, or empty to delete the top one"
  }
}
Returns:
- ✔️ If the conversation was deleted successfully: status code 200 and {} body
- ❌ In case of an error: status code 400 or 500 and {"error": "Error message"} body
Example:
$ curl --request POST --header "Content-Type: application/json" --data '{"chatgpt": {"conversation_id": "1033be5b-d37d-46b3-b47c-9548da5b192c"}}' http://localhost:1312/api/delete
{}
Close module /api/close
Requests the module's session to close (in a separate, non-blocking thread)
Please call /api/status to check that the module is initialized and its status is Idle or Failed BEFORE calling /api/close. After calling /api/close, please call /api/status to check whether the module's closing has finished
Request:
{
  "module": "Name of the module from MODULES"
}
Returns:
- ✔️ If requested successfully: status code 200 and {} body
- ❌ In case of an error: status code 400 or 500 and {"error": "Error message"} body
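Example (assumed by analogy with /api/init and /api/stop, which take the same request body):
$ curl --request POST --header "Content-Type: application/json" --data '{"module": "chatgpt"}' http://localhost:1312/api/close
{}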