
Motus Agent Framework


Motus


Higher capability. Lower cost. Faster agents.
Deploy locally or to the cloud in one command. Same code, any scale.

LithosAI · Cloud · Docs · Quickstart · Examples · Contributing · Slack

About

Agentic inference is exploding. Motus is an open-source agent serving project for building higher-capability, lower-cost, faster agents that are easy to deploy locally or to the cloud at any scale.

Quickstart

Build with any coding agent, then serve locally or deploy to the cloud with the Motus plugin.

Motus is built from the ground up to work with any coding agent (e.g., Claude Code, Codex, or Cursor) out of the box. Install the Motus plugin with one command:

curl -fsSL https://www.lithosai.com/install.sh | sh

Then use it in conversation with your coding agent:

/motus                          # activate motus skills

build your agent                # start building your agent

/motus serve                    # serve the agent locally

/motus deploy                   # deploy to the cloud

See plugins/motus/README.md for marketplace installs and more details.

Install the Motus Python library

Using uv:

uv add motus

Alternatively, with pip:

pip install motus

Build an agent

from motus.agent import ReActAgent
from motus.models import OpenAIChatClient
from motus.runtime import resolve
from motus.tools import tool

@tool  # define a simple tool
async def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

# define a ReAct agent
agent = ReActAgent(client=OpenAIChatClient(), model_name="gpt-4o", tools=[search])
print(resolve(agent("Hello World!")))

Simple by default; check out the agents documentation when you are ready to go deeper.

Build a simple workflow

Fetch an article, summarize and extract hashtags in parallel, then publish:

from motus.runtime import resolve
from motus.runtime.agent_task import agent_task

# define tasks in your workflow
@agent_task
async def summarize(article): ... # just normal functions

# extract hashtags
@agent_task
async def extract(article): ...

# augment agent tasks with retries and timeouts
@agent_task(retries=3, timeout=10.0)
async def fetch(url): ...

# publish on LinkedIn
@agent_task
async def publish(summary, hashtags): ...

# Your logic is your code:
article = fetch("https://www.lithosai.com")
summary = summarize(article)            # Motus infers the dependency graph from data flow.
hashtags = extract(article)             # Both depend on `article`, run in parallel.
post = publish(summary, hashtags)    # Waits for both upstream tasks.

# get final result
print(resolve(post))

No DAGs, just plain Python. Use @agent_task decorators to turn functions into scheduled tasks; Motus handles scheduling, parallelism, ordering, resilience, and tracing. Learn more about the Motus runtime.

Serve locally or deploy to the cloud

# Serve locally
motus serve start myapp:agent --port 8000

# Chat with your locally served agent
motus serve chat http://localhost:8000 "Hello!"

# Deploy to Motus Cloud
motus deploy --name myapp myapp:agent

# Chat with your cloud deployed agent
motus serve chat https://myapp.lithosai.com "Hello!"
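
For reference, here is a minimal myapp.py that the commands above could serve, reusing the quickstart agent. This sketch assumes myapp:agent follows the usual module:attribute convention; check the serving docs for the exact path format.

# myapp.py, exposing an agent attribute for motus serve start myapp:agent
from motus.agent import ReActAgent
from motus.models import OpenAIChatClient
from motus.tools import tool

@tool
async def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

# the attribute referenced by the myapp:agent path
agent = ReActAgent(client=OpenAIChatClient(), model_name="gpt-4o", tools=[search])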

Examples

Runnable examples live in the examples/ directory (set MOTUS_LOG_LEVEL=WARNING to quiet runtime logging):

# Task graph — parallelism, dependency tracking, multi-return
MOTUS_LOG_LEVEL=WARNING python examples/runtime/task_graph_demo.py

# Resilience — retries, timeouts, policy overrides
MOTUS_LOG_LEVEL=WARNING python examples/runtime/resilient_tasks.py

# Hooks — global, per-task, per-type lifecycle callbacks
MOTUS_LOG_LEVEL=WARNING python examples/runtime/hooks_demo.py

# MCP — seven integration patterns (lazy, context manager, sandbox, remote)
MOTUS_LOG_LEVEL=WARNING python examples/mcp_tools.py lazy

# Memory — compaction, session save/restore
python examples/memory.py memory_restore

# Multi-agent — orchestrator delegates to researcher + writer
python examples/runtime/agent_as_tool.py

Framework Features

Start simple

Agents: ReActAgent runs the reasoning loop, tool dispatch, and conversation state. Multi-turn memory, structured output via Pydantic, and input/output guardrails — all built in. A working agent in under 10 lines.
Tools: Write a function, get a tool. Expose class methods with @tools, wrap an MCP server with get_mcp(), nest another agent with as_tool(), or run untrusted code in a Docker sandbox — everything composes through the same tools=[...] interface. Built-in utilities: skills, bash, file ops, glob / grep, todo tracking.
Task-graph runtime: @agent_task turns any function into a node in a dependency graph — automatic parallel execution, multi-return futures, non-blocking operators. Retries, timeouts, and backoff are declarative on the task and overridable per call site with .policy() (see the policy sketch after this list).
Multi-provider models: Unified client for OpenAI, Anthropic, Gemini, and OpenRouter. Switch providers by changing one line — agent logic stays the same. Local models (Ollama, vLLM) work through base_url (see the provider sketch after this list).
Tracing & debugging: Every LLM call, tool invocation, and task dependency is traced automatically. Interactive HTML viewer, Jaeger export, or cloud dashboard — enabled with one env var.
Local serving: motus serve exposes any agent as a session-based HTTP API locally. Test the full serving stack before deploying to the cloud.
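
The per-call-site override mentioned under Task-graph runtime could look like the sketch below. The .policy() name comes from the feature list, but its exact signature is an assumption; treat this as illustrative only.

from motus.runtime import resolve
from motus.runtime.agent_task import agent_task

# resilience defaults declared on the task
@agent_task(retries=3, timeout=10.0)
async def fetch(url): ...

# assumed: .policy() returns a callable with the overridden settings for this call site
article = fetch.policy(retries=5, timeout=30.0)("https://www.lithosai.com")
print(resolve(article))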
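
Provider switching, as a minimal sketch: OpenAIChatClient appears in the quickstart, while the commented-out Anthropic client name and the base_url parameter for local models are assumptions based on the feature list, so check motus.models for the actual names.

from motus.agent import ReActAgent
from motus.models import OpenAIChatClient
# from motus.models import AnthropicChatClient   # assumed name; swap the client, keep the agent code

agent = ReActAgent(client=OpenAIChatClient(), model_name="gpt-4o", tools=[])

# local models (e.g., Ollama) through an OpenAI-compatible endpoint; base_url is assumed here
local_agent = ReActAgent(
    client=OpenAIChatClient(base_url="http://localhost:11434/v1"),
    model_name="llama3.1",
    tools=[],
)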

Go deeper

Memory: Two memory solutions are provided: basic (append-only) and compact (auto-summarizes when the token budget runs thin). Session save/restore is built in.
Guardrails: Input and output validation on both agents and individual tools. Declare the parameters you care about — return a dict to modify, raise to block. Structured output guardrails match fields on Pydantic models.
Multi-agent composition: agent.as_tool() wraps any agent as a tool (see the sketch after this list). The supervisor doesn't know whether it's calling a function or another agent — the interface is identical. fork() creates independent conversation branches.
MCP integration: Connect any MCP-compatible server with get_mcp() — local via stdio, remote via HTTP, or inside a Docker container. Filter and rename tools with prefix, blocklist, and guardrails.
Docker sandboxes: Run untrusted code in isolated containers. Mount volumes, expose ports, execute shell and Python — attach to any agent as a tool provider.
Prompt caching: Enable via CachePolicy, either STATIC (system + tools) or AUTO (system + tools + the conversation prefix). Reduces latency and cost on long conversations.
SDK compatibility: Drop-in for OpenAI Agents SDK, Claude Agent SDK, and Google ADK. Change the import, keep your code — get tracing and cloud deployment for free.
Lifecycle hooks: Three-level hook system (global, per-task name, per-task type). Tap into task_start, task_end, task_error for logging, metrics, or custom logic.
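
A minimal multi-agent sketch using as_tool(), which the feature list names. The zero-argument call and the researcher/writer setup are assumptions for illustration; consult the multi-agent docs for the real signature.

from motus.agent import ReActAgent
from motus.models import OpenAIChatClient
from motus.runtime import resolve

client = OpenAIChatClient()

# specialist agents
researcher = ReActAgent(client=client, model_name="gpt-4o", tools=[])
writer = ReActAgent(client=client, model_name="gpt-4o", tools=[])

# the supervisor calls them like ordinary tools (assumed: as_tool() takes no required arguments)
supervisor = ReActAgent(
    client=client,
    model_name="gpt-4o",
    tools=[researcher.as_tool(), writer.as_tool()],
)
print(resolve(supervisor("Research Motus and draft a short post about it.")))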

Contributing

Open source from Day 1. We believe the infrastructure for agentic inference should be open. See the Contributing Guide to get started, or come say hi on Slack. Let's build together!

License

Apache 2.0 — see LICENSE.
