
Minimal reusable bridge for local and cloud Ollama execution.


olliq

olliq is a small Python package for running prompts against local Ollama and Ollama Cloud.

It is meant to solve a narrow problem well:

  • load configuration from code, environment variables, or config.json
  • switch cleanly between local and cloud execution
  • build the official Python ollama.Client
  • generate text or stream responses
  • expose a small CLI for quick manual use when needed

It does not handle retrieval, indexing, OCR, vector stores, or application-specific orchestration.

The primary goal of the project is Python reuse:

  • import olliq from other projects
  • keep local/cloud switching explicit
  • avoid repeating Ollama setup code across applications

Install

Install as a Python package:

pip install .

Install directly from GitHub:

pip install git+https://github.com/guelfoweb/olliq.git

Install the CLI in an isolated environment:

pipx install .

Install the CLI directly from GitHub:

pipx install git+https://github.com/guelfoweb/olliq.git

Release to PyPI

olliq is set up for PyPI publishing through Trusted Publishing with GitHub Actions.

The repository includes the workflow publish.yml, configured for:

  • repository: guelfoweb/olliq
  • workflow name: publish.yml
  • GitHub environment: pypi
  • PyPI project name: olliq

Recommended release flow:

  1. Push the repository to GitHub.
  2. Create and push a version tag.
  3. Let GitHub Actions build and publish the release to PyPI.

Example:

git push -u origin main
git tag v0.1.0
git push origin v0.1.0

The workflow can also be triggered manually from GitHub Actions with workflow_dispatch.

Mental Model

There are three main Python usage paths:

  1. Use explicit Python config with create_config(...).
  2. Load configuration from environment variables or config.json with load_config(...).
  3. Combine explicit overrides, environment, and config.json with resolve_config(...).

The main package API is:

generate(prompt, config, stream=False)

Recommended import style:

from olliq import create_config, generate, load_config, resolve_config

Configuration

Example config.json:

{
  "ollama": {
    "model": "qwen3:latest",
    "cloud": false,
    "url": "http://localhost:11434",
    "temperature": 0.2,
    "system_prompt": "You are a concise assistant."
  }
}

Configuration precedence, highest first:

  1. Explicit function arguments
  2. Environment variables
  3. config.json
  4. Built-in defaults
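The precedence above can be sketched with plain dictionaries. This is an illustrative stand-in, not olliq's actual implementation: later `update` calls overwrite earlier layers, so explicit arguments win over environment values, which win over config.json, which wins over defaults.

```python
# Illustrative sketch of the precedence order. Each layer overwrites
# the ones below it; None-valued explicit arguments are treated as unset.
DEFAULTS = {"cloud": False, "url": "http://localhost:11434", "temperature": 0.2}

def resolve(explicit: dict, env: dict, file_cfg: dict) -> dict:
    config = dict(DEFAULTS)   # 4. built-in defaults
    config.update(file_cfg)   # 3. config.json
    config.update(env)        # 2. environment variables
    config.update({k: v for k, v in explicit.items() if v is not None})  # 1. explicit args
    return config

cfg = resolve(
    explicit={"model": "qwen3:latest", "temperature": None},
    env={"temperature": 0.7},
    file_cfg={"model": "llama3", "cloud": False},
)
# explicit model wins; env temperature overrides config.json and defaults
```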

Built-in defaults:

  • local mode
  • local URL: http://localhost:11434
  • temperature: 0.2

This means:

  • create_config(...) uses local mode by default
  • you only need cloud=True when you want Ollama Cloud

Supported environment variables:

  • OLLAMA_MODEL
  • OLLAMA_CLOUD
  • OLLAMA_URL
  • OLLAMA_TEMPERATURE
  • OLLAMA_SYSTEM_PROMPT
  • OLLAMA_API_KEY
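How these variables map onto a config can be sketched as follows. This is a simplified illustration of the idea; olliq's own parsing may differ in details such as which truthy spellings it accepts.

```python
import os

# Hypothetical sketch: read OLLAMA_* variables and coerce their types.
def config_from_env(environ=os.environ) -> dict:
    cfg = {}
    if "OLLAMA_MODEL" in environ:
        cfg["model"] = environ["OLLAMA_MODEL"]
    if "OLLAMA_CLOUD" in environ:
        # accept common truthy spellings for the boolean flag
        cfg["cloud"] = environ["OLLAMA_CLOUD"].strip().lower() in ("1", "true", "yes")
    if "OLLAMA_URL" in environ:
        cfg["url"] = environ["OLLAMA_URL"]
    if "OLLAMA_TEMPERATURE" in environ:
        cfg["temperature"] = float(environ["OLLAMA_TEMPERATURE"])
    if "OLLAMA_SYSTEM_PROMPT" in environ:
        cfg["system_prompt"] = environ["OLLAMA_SYSTEM_PROMPT"]
    if "OLLAMA_API_KEY" in environ:
        cfg["api_key"] = environ["OLLAMA_API_KEY"]
    return cfg

env = {"OLLAMA_MODEL": "qwen3:latest", "OLLAMA_CLOUD": "false", "OLLAMA_TEMPERATURE": "0.2"}
cfg = config_from_env(env)
```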

Cloud behavior:

  • when cloud=True, the host is always https://ollama.com
  • when cloud=True, OLLAMA_API_KEY is required
  • library code never prompts for OLLAMA_API_KEY
  • the CLI can prompt for OLLAMA_API_KEY only in an interactive terminal

load_config() and resolve_config() differ slightly:

  • load_config(path) reads environment variables and config.json
  • resolve_config(path, ...) also applies explicit function arguments on top

Python Usage

This is the main intended use of olliq.

Recommended import

from olliq import create_config, generate

Local

from olliq import create_config, generate

config = create_config(
    model="qwen3:latest",
    url="http://localhost:11434",
    temperature=0.2,
)

print(generate("Say hello.", config))

Cloud

from olliq import create_config, generate

config = create_config(
    model="gpt-oss:20b",
    cloud=True,
    temperature=0.2,
    system_prompt="You are a concise assistant.",
)

print(generate("Summarize this text.", config))

Environment only

If you want to use only environment variables, pass a path that does not exist (or simply do not provide a config.json):

from olliq import generate, load_config

config = load_config("missing.json")
if config is None:
    raise RuntimeError("Missing configuration")

print(generate("Say hello.", config))

Example environment:

export OLLAMA_MODEL=qwen3:latest
export OLLAMA_CLOUD=false
export OLLAMA_URL=http://localhost:11434
export OLLAMA_TEMPERATURE=0.2

Resolve from file, env, and explicit overrides

from olliq import generate, resolve_config

config = resolve_config(
    "config.json",
    model="qwen3:latest",
    cloud=False,
)

if config is None:
    raise RuntimeError("Missing configuration")

print(generate("Say hello.", config))

Stream

from olliq import create_config, generate

config = create_config(model="qwen3:latest")

for chunk in generate("Tell me a story.", config, stream=True):
    print(chunk, end="", flush=True)
print()

List models

from olliq import create_config, list_models

config = create_config()
print(list_models(config))

CLI Usage

The CLI is a convenience layer around the same package behavior.

Basic local prompt

olliq --model qwen3:latest "Say hello"

Cloud prompt

olliq --cloud --model gpt-oss:20b "Summarize this text"

Stream output

olliq --stream --model qwen3:latest "Tell me a story"

List models

olliq --list

Use a config file explicitly

olliq --config /path/to/config.json "Say hello"

Pipe stdin

cat filename.txt | olliq --model qwen3:latest "summarize"

If both a positional prompt and piped stdin are provided, the final prompt is:

<prompt>

<stdin content>

Public API

Preferred API:

  • create_config
  • load_config
  • resolve_config
  • generate
  • list_models

Compatibility aliases:

  • create_ollama_config
  • generate_prompt
  • generate_stream
  • generate_prompt_stream

Exceptions:

  • ConfigError
  • AuthError
  • GenerationError

Created by Gianni Amato.
