chatjimmy-proxy

OpenAI-compatible HTTP proxy for chatjimmy.ai. Point any OpenAI SDK or tool at it and use model jimmy.

Quick start

1. Clone and install:

   ```bash
   git clone <repo>
   cd chatjimmy-proxy
   uv sync
   uv run playwright install chromium
   ```

2. Configure:

   ```bash
   cp .env.example .env
   # edit PROXY_API_KEY (leave blank to disable auth)
   ```

   Note: if `PROXY_API_KEY` is already set in your shell environment (some systems default it to your username), the proxy will require that exact value in the `Authorization` header. Run `export PROXY_API_KEY=` to clear it, or choose a different secret.

3. Run discovery once:

   ```bash
   uv run chatjimmy-discover
   ```

4. Start the proxy (default port 8000; change with the `PORT` env var):

   ```bash
   uv run chatjimmy-proxy
   # or explicitly:
   uv run uvicorn chatjimmy_proxy.main:app --host 0.0.0.0 --port ${PORT:-8000}
   ```

   If you see "address already in use", set `PORT` to a free port (e.g. 8001) or kill the process currently listening on that port (see Troubleshooting).

Usage

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $PROXY_API_KEY" \
  -d '{"model":"jimmy","messages":[{"role":"user","content":"Hi"}]}'
```

Streaming: add `--no-buffer` to the curl command and `"stream": true` to the request body.

Python example:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="key")
resp = client.chat.completions.create(
    model="jimmy",
    messages=[{"role": "user", "content": "Hi"}],
)
print(resp.choices[0].message.content)
```
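The same client can consume the streaming mode mentioned above. A minimal sketch, assuming the proxy's `"stream": true` path emits OpenAI-style SSE chunks (as the curl note implies):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="key")

# stream=True returns an iterator of chunks instead of one final response
stream = client.chat.completions.create(
    model="jimmy",
    messages=[{"role": "user", "content": "Hi"}],
    stream=True,
)
for chunk in stream:
    # each chunk carries an incremental delta; guard against empty chunks
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```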

Editors and agents

Any tool that lets you supply a custom OpenAI-compatible provider should work. You need three things:

1. Base URL – the root of the OpenAI API, not a specific endpoint. Use `http://<host>:<port>/v1` (omit `/chat/completions`); most clients append the endpoint path themselves.
2. API key – the secret from `.env` (or any string if auth is off).
3. Model – `jimmy`.

Correct Roo Code configuration example:

```json
{
  "provider": "OpenAI Compatible",
  "baseUrl": "http://localhost:8000/v1",
  "apiKey": "<your-proxy-key>",
  "model": "jimmy"
}
```

If you accidentally set the base URL to include `/chat/completions`, the agent will produce 404 errors when it tries to call `/v1/chat/completions/chat/completions`, as the sketch below illustrates.
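A minimal illustration of how a typical OpenAI-compatible client builds the request URL (the `join` helper here is hypothetical, for illustration only, not part of this codebase):

```python
def join(base_url: str, endpoint: str = "/chat/completions") -> str:
    # Most clients simply append the endpoint path to the configured base URL.
    return base_url.rstrip("/") + endpoint

print(join("http://localhost:8000/v1"))
# http://localhost:8000/v1/chat/completions                  -> works

print(join("http://localhost:8000/v1/chat/completions"))
# http://localhost:8000/v1/chat/completions/chat/completions -> 404
```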

Development

```bash
uv run pytest tests/ -v
uv run ruff check src/
uv run ruff format src/
```
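New tests go under `tests/`. As a starting point, a minimal route-level sketch; this assumes `app` in `chatjimmy_proxy.main` (the uvicorn target above) is a FastAPI app and that `httpx` is available for the test client. The test name and assertion are illustrative, not from this repo:

```python
# tests/test_routes.py -- illustrative sketch, not an existing test
from fastapi.testclient import TestClient

from chatjimmy_proxy.main import app


def test_chat_completions_route_is_wired():
    client = TestClient(app)
    resp = client.post(
        "/v1/chat/completions",
        json={"model": "jimmy", "messages": [{"role": "user", "content": "Hi"}]},
    )
    # A 404 here would mean the route isn't registered at all;
    # auth or upstream failures surface as other status codes.
    assert resp.status_code != 404
```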

Packaging & publishing

Build with `uv run hatch build`. Releases are made by tagging `vX.Y.Z`; GitHub Actions runs the tests and publishes to PyPI using the `PYPI_API_TOKEN` secret.

Troubleshooting

• Port already in use – if the proxy fails to start with "address already in use":

  ```bash
  # locate the offending PID
  sudo lsof -i :8000 -t   # substitute whatever port you were using
  # kill it (or choose a different port)
  sudo kill <pid>
  # or directly:
  sudo kill -9 $(sudo lsof -i :8000 -t)
  ```

  Alternatively, set `PORT` to a free port before launching:

  ```bash
  PORT=8001 uv run chatjimmy-proxy
  ```

• Discovery failures – run with `HEADLESS=false`, or switch to `mode: browser_relay` in the blueprint.

• 401/403 – delete `.jimmy_blueprint.json` and `.jimmy_state.json` and re-run discovery; make sure `PROXY_API_KEY` matches the key you send in the `Authorization` header.

• Slow first response – discovery runs on startup; subsequent requests are fast under HTTP-replay mode.

License

MIT – see LICENSE.

