CLI to manage the OpenAI Responses Server, which bridges Chat Completions calls to the Responses API
This project has been archived. The maintainers have marked it as archived, and no new releases are expected.
Project description
openai-responses-server
A server that exposes any AI provider speaking the OpenAI Chat Completions API as OpenAI's Responses API, including hosted tools. It manages the stateful component of the Responses API and bridges Ollama, vLLM, LiteLLM, and any other AI serving library. This means you can use OpenAI's new coding assistant "Codex", which requires Responses API endpoints, against those backends.
Some features are still missing; support in the form of stars, issues, suggestions, and pull requests is appreciated.
The server has been verified against the main repository's demo AI assistant (which can hear, think, and speak) using docker-compose-codex.yaml.
Install today via pip: pip install openai-responses-server
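Conceptually, the bridge accepts a Responses API request and rewrites it as a Chat Completions request that backends such as Ollama or vLLM understand. The sketch below is a simplified, hypothetical illustration of that mapping (the field names follow the public OpenAI APIs, but the helper itself is not the server's actual code, which also handles streaming, tools, and state):

```python
def responses_to_chat_completions(request: dict) -> dict:
    """Map a minimal Responses API payload to a Chat Completions payload.

    Illustrative only: 'instructions' becomes a system message, and 'input'
    (a string or a list of input items) becomes the message list.
    """
    messages = []
    if request.get("instructions"):
        messages.append({"role": "system", "content": request["instructions"]})
    user_input = request.get("input", "")
    if isinstance(user_input, str):
        messages.append({"role": "user", "content": user_input})
    else:  # list of input items with role/content fields
        for item in user_input:
            messages.append({"role": item.get("role", "user"),
                             "content": item.get("content", "")})
    return {"model": request["model"], "messages": messages}


payload = responses_to_chat_completions({
    "model": "llama3.2:3b",
    "instructions": "You are a helpful assistant.",
    "input": "Hello!",
})
print(payload["messages"][0]["role"])  # system
```

The resulting payload can be sent to any Chat Completions-compatible endpoint; the server's added value is doing this translation transparently while persisting the conversation state the Responses API expects.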
Roadmap
- Tool run support (Tested with llama 3.2 3b on Ollama)
- Validate work from CLI
- dotenv support
- State management (long term, not just in-memory)
- Web search support (crawl4ai)
- File upload + search
- graphiti (based on neo4j)
- Code interpreter
- Computer use
Installation
UV cli
Install uv if not installed yet, following https://docs.astral.sh/uv/getting-started/installation/#standalone-installer
curl -LsSf https://astral.sh/uv/install.sh | sh
or, on Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
Set up the environment with:
uv venv
Install dependencies with uv:
uv pip install .
uv pip install -e ".[dev]"  # for development
📚 Citation
Cited projects
UncleCode. (2024). Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper [Computer software]. GitHub. https://github.com/unclecode/crawl4ai
Cite this project
If you use openai-responses-server in your research or project, please cite:
Code citation format
@software{openai-responses-server,
  author = {TeaBranch},
  title = {openai-responses-server: Open-source server that serves any AI provider with OpenAI ChatCompletions as OpenAI's Responses API and hosted tools},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/teabranch/openai-responses-server}},
  commit = {Please use the commit hash you're working with}
}
Text citation format:
TeaBranch. (2025). openai-responses-server: Open-source server that serves any AI provider with OpenAI ChatCompletions as OpenAI's Responses API and hosted tools [Computer software]. GitHub. https://github.com/teabranch/openai-responses-server
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file openai_responses_server-0.1.13.tar.gz.
File metadata
- Download URL: openai_responses_server-0.1.13.tar.gz
- Upload date:
- Size: 14.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0a5ae5193ce4bab9c3c62a993c8a6244fa113d24dd553abdf5a8a3219978a239 |
| MD5 | 216aee9da79d476902de0c1cd0dbdea3 |
| BLAKE2b-256 | 70754307881796c9ae8db9cc2ec2193e06b64b9498beb87878fa24ed60222805 |
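To check a downloaded artifact against the digests above, you can recompute the hash locally. A minimal sketch using Python's standard hashlib (the file name is assumed to match the sdist listed above):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare against the published SHA256 digest for the sdist:
expected = "0a5ae5193ce4bab9c3c62a993c8a6244fa113d24dd553abdf5a8a3219978a239"
# sha256_of("openai_responses_server-0.1.13.tar.gz") == expected
```

Reading in 8 KiB chunks keeps memory use constant regardless of archive size; the same function works for the wheel below with its own digest.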
Provenance
The following attestation bundles were made for openai_responses_server-0.1.13.tar.gz:
Publisher: publish.yml on teabranch/openai-responses-server
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: openai_responses_server-0.1.13.tar.gz
- Subject digest: 0a5ae5193ce4bab9c3c62a993c8a6244fa113d24dd553abdf5a8a3219978a239
- Sigstore transparency entry: 206362563
- Sigstore integration time:
- Permalink: teabranch/openai-responses-server@2ea299131a5f84ba15cbc721517bc0a3c0ee22e9
- Branch / Tag: refs/tags/v0.1.13
- Owner: https://github.com/teabranch
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@2ea299131a5f84ba15cbc721517bc0a3c0ee22e9
- Trigger Event: release
File details
Details for the file openai_responses_server-0.1.13-py3-none-any.whl.
File metadata
- Download URL: openai_responses_server-0.1.13-py3-none-any.whl
- Upload date:
- Size: 13.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e6505797df82319ee352f07c073549e19d35ed0f5e0dcf2e220c55acbd16d7e5 |
| MD5 | fd24944d2a0f2eda2779d448ad899e1e |
| BLAKE2b-256 | 7f3233d53f4c3ee8d2a21954bc31b6f7f94c90d16b4109bf94eafcd4b93f8f0a |
Provenance
The following attestation bundles were made for openai_responses_server-0.1.13-py3-none-any.whl:
Publisher: publish.yml on teabranch/openai-responses-server
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: openai_responses_server-0.1.13-py3-none-any.whl
- Subject digest: e6505797df82319ee352f07c073549e19d35ed0f5e0dcf2e220c55acbd16d7e5
- Sigstore transparency entry: 206362567
- Sigstore integration time:
- Permalink: teabranch/openai-responses-server@2ea299131a5f84ba15cbc721517bc0a3c0ee22e9
- Branch / Tag: refs/tags/v0.1.13
- Owner: https://github.com/teabranch
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@2ea299131a5f84ba15cbc721517bc0a3c0ee22e9
- Trigger Event: release