Oris: Responsible AI runtime framework for production pipelines.
Oris is an open-source Responsible AI runtime for Python. Describe pipelines in YAML (or build them in code), run them through one executor, and get input/output policy checks and run- and step-level traces by default.
Oris stays framework-agnostic: anything that can be called with a dict (or exposes a run(dict) method) can sit behind the same boundaries, including external LLM stacks wrapped with SafeRunner, so you can experiment locally and ship with clearer safety and observability defaults.
Table of contents
- Installation
- Documentation
- Features
- Quick start
- CLI
- Output format
- SafeRunner
- Project layout
- Examples and notebooks
- Contributing
- License
Installation
The simplest way to get Oris is via pip:
pip install oris-ai
Verify the CLI:
oris --help
From source (library, CLI, and tests):
git clone https://github.com/DevStrikerTech/oris.git
cd oris
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -e .
Developers (lint, types, tests, notebook execution):
pip install -e ".[dev]"
Oris requires Python 3.10+. Runtime dependencies are minimal (PyYAML only). For a fuller walkthrough, see the Installation page in the docs.
Documentation
If you are new to the project, start with the Introduction, then follow Installation and Quickstart on the documentation site. The Concepts section explains pipelines, components, providers, RAI, and traces; Guides cover the CLI, SafeRunner, and run summaries.
Site: devstrikertech.github.io/oris (MkDocs Material, similar information architecture to projects like Haystack).
Preview locally:
pip install -e ".[docs]"
mkdocs serve
The Docs workflow publishes to GitHub Pages on pushes to prod (.github/workflows/docs.yml). In Settings → Pages, choose GitHub Actions as the source if needed.
Features
YAML-first pipelines
Define steps, optional providers, and settings (such as tracing). Configuration is validated before execution; YAML is loaded with yaml.safe_load only.
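As an illustration of pre-execution validation (a sketch, not Oris's actual schema code), a minimal check over a parsed definition might collect errors up front rather than failing mid-run:

```python
def check_definition(defn: dict) -> list[str]:
    """Collect validation errors for a parsed pipeline definition."""
    errors = []
    if not defn.get("name"):
        errors.append("pipeline needs a non-empty 'name'")
    steps = defn.get("steps") or []
    if not steps:
        errors.append("pipeline needs at least one step")
    for i, step in enumerate(steps):
        for key in ("id", "type"):
            if key not in step:
                errors.append(f"step {i} is missing '{key}'")
    return errors

defn = {"name": "quickstart", "steps": [{"id": "reply", "type": "template_response"}]}
print(check_definition(defn))  # []
```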
Guards and policy
A default PolicyEnforcer applies input checks (blocked keys, basic injection heuristics, simple PII-shaped patterns) and output checks (blocked terms and test-oriented stubs). The same policy surface is used by SafeRunner for external callables.
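The real heuristics live in oris.rai; purely as a sketch of the idea (blocked keys plus a simple PII-shaped pattern, with names and rules invented here, not the library's actual lists):

```python
import re

BLOCKED_KEYS = {"api_key", "password"}             # illustrative, not Oris's list
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # crude PII-shaped pattern

def input_flags(payload: dict) -> list[str]:
    """Return policy flags for an input payload (empty list means clean)."""
    flags = [f"blocked_key:{k}" for k in payload if k.lower() in BLOCKED_KEYS]
    for key, value in payload.items():
        if isinstance(value, str) and EMAIL_RE.search(value):
            flags.append(f"pii_email:{key}")
    return flags

print(input_flags({"query": "hi"}))  # []
print(input_flags({"password": "x", "query": "mail a@b.co"}))
```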
Built-in components and provider stubs
Use passthrough, template_response, and generate / llm_echo from the default registry. Declared openai / huggingface provider types are stubs (no network I/O in the core package) so CI and demos stay reproducible.
Observability
Each run produces a RunTrace with per-step latency, status, and flags. PipelineResult.to_run_summary() gives a stable JSON-oriented shape for logs and the CLI (with optional redaction of sensitive-looking keys).
CLI parity
oris validate and oris run use the same definitions as Pipeline.from_yaml in Python, with --format pretty and --debug for human-friendly output and stderr trace lines.
Quick start
Save as pipeline.yaml:
name: quickstart
settings:
  tracing: true
steps:
  - id: reply
    type: template_response
    config:
      template: "Answer placeholder for: {query}"
Python
from oris import Pipeline
pipeline = Pipeline.from_yaml("pipeline.yaml")
result = pipeline.run({"query": "What is responsible AI?"})
print(result.output)
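If template_response substitutes input keys the way Python str.format does (an assumption; check the component's docs), the step above reduces to roughly:

```python
# The {query} placeholder in the template is filled from the input payload
template = "Answer placeholder for: {query}"
payload = {"query": "What is responsible AI?"}
print(template.format(**payload))
# Answer placeholder for: What is responsible AI?
```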
CLI
oris validate pipeline.yaml
oris run pipeline.yaml --input-json '{"query":"What is responsible AI?"}'
oris run pipeline.yaml --input-json '{"query":"hi"}' --format pretty --debug
--debug prints trace-oriented details on stderr; stdout remains the JSON summary. Sample YAML and notebooks live under examples/.
CLI
| Command | Purpose |
|---|---|
| `oris validate <file.yaml>` | Load and validate the pipeline (schema, components, providers). |
| `oris run <file.yaml> --input-json '<json object>'` | Run with a JSON object as input; default stdout is a compact JSON summary. |
| `oris run ... --format pretty` | Pretty-printed JSON summary. |
| `oris run ... --debug` | Stderr: run_id, trace status, per-step latency and flags. |
| `oris validate ... --debug` | Stderr: pipeline name and step list. |
Output format
Pipeline.run returns a PipelineResult: output (dict), trace (RunTrace), and metadata. See models.py.
result.to_run_summary() includes:
- run_id: run identifier.
- status: "success" or "failed", from trace status.
- output: final payload (CLI may redact nested sensitive-looking keys).
- trace: per-step entries with step_id, component_name, status, latency_ms, flags.
CLI formatting and redaction: output.py.
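Because the summary is a stable dict, log lines or quick health checks can be built from it directly. A sketch against a hand-built summary (field names as documented above; the values are made up):

```python
def summarize(summary: dict) -> str:
    """One-line digest: run id, status, and total latency across trace entries."""
    total_ms = sum(e["latency_ms"] for e in summary["trace"])
    flagged = [e["step_id"] for e in summary["trace"] if e["flags"]]
    line = f"{summary['run_id']} {summary['status']} {total_ms:.1f}ms"
    return line + (f" flags={flagged}" if flagged else "")

summary = {
    "run_id": "r-123",
    "status": "success",
    "output": {"reply": "..."},
    "trace": [
        {"step_id": "reply", "component_name": "template_response",
         "status": "success", "latency_ms": 1.5, "flags": []},
    ],
}
print(summarize(summary))  # r-123 success 1.5ms
```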
SafeRunner
SafeRunner wraps external inference or tools with the same PolicyEnforcer as the main executor—validate input, run a callable or run(dict) target, validate output, optionally attach a one-step trace.
from oris.integrations import SafeRunner
from oris.rai.policy import PolicyEnforcer

def my_external_llm(payload: dict) -> dict:
    q = payload.get("query", "")
    return {"output": f"stub response for: {q!r}"}

runner = SafeRunner(my_external_llm, policy=PolicyEnforcer())
plain = runner.run({"query": "Hello"})
traced = runner.run({"query": "Hello"}, include_trace=True)
Returns are normalized to dict from mappings or Pydantic-style model_dump(). More detail: SafeRunner guide.
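That normalization rule (mappings or Pydantic-style model_dump()) can be sketched as follows; this is an illustration of the contract, not the library's code:

```python
from collections.abc import Mapping

def to_dict(result: object) -> dict:
    """Normalize a callable's return value to a plain dict."""
    if isinstance(result, Mapping):
        return dict(result)
    dump = getattr(result, "model_dump", None)
    if callable(dump):
        return dump()
    raise TypeError(f"cannot normalize {type(result).__name__} to dict")

class FakeModel:  # stands in for a Pydantic model
    def model_dump(self) -> dict:
        return {"output": "hi"}

print(to_dict({"output": "hi"}))
print(to_dict(FakeModel()))
```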
Project layout
| Area | Role |
|---|---|
| `oris.core` | Shared enums and exceptions. |
| `oris.components` | Component registry and built-ins. |
| `oris.pipeline` | YAML loading, schema, plan, builder. |
| `oris.runtime` | Executor, orchestrator, hooks, trace manager, PipelineResult. |
| `oris.rai` | PolicyEnforcer, input/output guards. |
| `oris.providers` | LLMProvider and built-in YAML provider stubs. |
| `oris.integrations` | SafeRunner. |
| `oris.tracing` | Run and step trace models. |
| `oris.cli` | oris CLI entrypoint. |
Examples and notebooks
| Asset | Description |
|---|---|
| `examples/simple_generation.yaml` | Template step; a good first Pipeline.run. |
| `examples/provider_pipeline.yaml` | Provider declaration + generate (needs OPENAI_API_KEY for the stub). |
| `examples/basic_pipeline.ipynb` | YAML → run → to_run_summary(). |
| `examples/safe_runner.ipynb` | SafeRunner, traces, policy violations. |
| `examples/llm_integration.ipynb` | Optional Ollama + mock fallback; YAML provider stub. |
Execute notebooks from the repo root (after pip install -e ".[dev]"):
export JUPYTER_CONFIG_DIR="$PWD/.jupyter" && mkdir -p .jupyter
python -m nbconvert --to notebook --execute examples/basic_pipeline.ipynb --inplace
Repeat for the other notebooks, or open them in your editor.
Contributing
We welcome issues and pull requests. Start with CONTRIBUTING.md (branches, quality gates, tests). Report security issues per SECURITY.md. Community expectations: CODE_OF_CONDUCT.md.
License
MIT — see LICENSE.