
oshepherd

The Oshepherd guiding the Ollama(s) inference orchestration.


A centralized FastAPI service that uses Celery and Redis to orchestrate multiple Ollama servers as workers.

Install

pip install oshepherd

Usage

  1. Setup Redis:

    Celery uses Redis as its message broker and result backend, so you'll need a Redis instance. You can provision one for free at redislabs.com, or run one locally; a minimal Docker sketch follows.
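
    For local development, a sketch using the official Redis Docker image (the container name is just an example):

    # run a local Redis on the default port
    docker run --name oshepherd-redis -p 6379:6379 -d redis

    # the Celery broker/backend URL for this instance would be:
    # redis://localhost:6379/0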

  2. Setup FastAPI Server:

    # define configuration env file
    # use credentials for redis as broker and backend
    cp .api.env.template .api.env
    
    # start api
    oshepherd start-api --env-file .api.env
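
    Once the API is up, a quick sanity check (assuming the host and port used by the client examples below):

    # should return the Ollama-compatible version payload
    curl http://127.0.0.1:5001/api/version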
    
  3. Setup Celery/Ollama Worker(s):

    # install ollama https://ollama.com/download
    # optionally pull the model
    ollama pull mistral
    
    # define configuration env file
    # use credentials for redis as broker and backend
    cp .worker.env.template .worker.env
    
    # start worker
    oshepherd start-worker --env-file .worker.env
    
  4. Now you're ready to execute Ollama completions remotely. Point your Ollama client at the oshepherd API server by setting the host, and it will return the requested completions from any of the workers:

    import ollama
    
    client = ollama.Client(host="http://127.0.0.1:5001")
    
    # Standard request
    response = client.generate(model="mistral", prompt="Why is the sky blue?")
    
    # Streaming request
    for chunk in client.generate(model="mistral", prompt="Why is the sky blue?", stream=True):
        print(chunk['response'], end='', flush=True)
    

    For a complete Python example with streaming support, see examples/pretty_streaming.py.

    import { Ollama } from "ollama/browser";
    
    const ollama = new Ollama({ host: "http://127.0.0.1:5001" });
    
    // Standard request
    const response = await ollama.generate({
        model: "mistral",
        prompt: "Why is the sky blue?",
    });
    
    // Streaming request
    const streamResponse = await ollama.generate({
        model: "mistral",
        prompt: "Why is the sky blue?",
        stream: true
    });
    
    for await (const chunk of streamResponse) {
        process.stdout.write(chunk.response);
    }
    

    For a complete TypeScript/JavaScript example with streaming support, see examples/ts-scripts/README.md.

    • Raw HTTP request:
    curl -X POST -H "Content-Type: application/json" -L http://127.0.0.1:5001/api/generate/ \
    -d '{"model":"mistral","prompt":"Why is the sky blue?","stream":true}' \
    --no-buffer
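
    The same pattern applies to the other Ollama-compatible endpoints; for example, a chat completion (request body per the Ollama chat API; the trailing slash mirrors the generate example above):

    curl -X POST -H "Content-Type: application/json" -L http://127.0.0.1:5001/api/chat/ \
    -d '{"model":"mistral","messages":[{"role":"user","content":"Why is the sky blue?"}],"stream":false}'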
    

Disclaimers 🚨

This package is in alpha; its architecture and API may change in the near future. It is currently being tested in a controlled environment by real users, but it has not been audited or thoroughly tested. Use it at your own risk.

As this is an alpha version, support and responses might be limited. We'll do our best to address questions and issues as quickly as possible.

API server parity

  • Generate a completion: POST /api/generate
  • Generate a chat completion: POST /api/chat
  • Generate Embeddings: POST /api/embeddings
  • List Local Models: GET /api/tags
  • Version: GET /api/version
  • Show Model Information: POST /api/show (pending)
  • List Running Models: GET /api/ps (pending)

The oshepherd API server is designed to maintain compatibility with the endpoints defined by Ollama, ensuring that any official client (e.g., ollama-python, ollama-js) can use this server as host and receive the expected responses. For more details on the full API specification, refer to the official Ollama API documentation.
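
As a quick illustration of this parity, a sketch exercising a few of the endpoints above with the same ollama-python client and host from the usage example:

import ollama

# same host as in the usage example above
client = ollama.Client(host="http://127.0.0.1:5001")

# chat completion -> POST /api/chat
chat = client.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(chat["message"]["content"])

# embeddings -> POST /api/embeddings
emb = client.embeddings(model="mistral", prompt="Why is the sky blue?")
print(len(emb["embedding"]))

# list local models -> GET /api/tags
print(client.list())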

Contribution guidelines

We welcome contributions! If you find a bug or have suggestions for improvements, please open an issue or submit a pull request targeting the development branch. Before creating a new issue or pull request, take a moment to search the existing ones to avoid duplicates.

Conda Support

To run and build locally you can use conda:

conda create -n oshepherd python=3.12
conda activate oshepherd
pip install -r requirements.txt

# install oshepherd
pip install -e .

Tests

Follow the usage instructions above to start the API server and a Celery worker against a local Ollama, then run the tests:

pytest -s tests/

Author

This is a project developed and maintained by mnemonica.ai.

License

MIT
