A minimal, hackable agentic framework for Ollama and BitNet - local-first AI agent toolkit

Project description

🦞 LocalClaw R03

A minimal, hackable agentic framework engineered to run entirely locally with Ollama or BitNet.

Inspired by the architecture of OpenClaw, rebuilt from scratch for local-first operation.

Written by VTSTech · GitHub



📚 Documentation

| Document | Description |
| --- | --- |
| Architecture.md | Technical documentation for developers (directory structure, core design, orchestrator modes) |
| CHANGELOG.md | Version history and release notes (R00–R03) |
| TESTS.md | Benchmark results, model recommendations, and testing guide |

Installation

From PyPI (Recommended)

pip install localclaw

# Or install from GitHub for the latest development version:
pip install git+https://github.com/VTSTech/LocalClaw.git

From Source

git clone https://github.com/VTSTech/LocalClaw.git
cd LocalClaw
pip install -e .

No Installation Required

LocalClaw uses only the Python stdlib, so there are no dependencies to install. You can also just copy the localclaw directory into your project:

cp -r localclaw /path/to/your/project/
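
Because there are no third-party dependencies, a vendored copy can be imported directly. A minimal sketch, assuming the copied localclaw directory is an ordinary importable Python package and reusing the placeholder path from the cp command above:

import sys

# Make the directory that now contains the vendored localclaw package importable.
# "/path/to/your/project" is the same placeholder used in the cp command above.
sys.path.insert(0, "/path/to/your/project")

import localclaw  # assumes the copied directory is a regular Python package

print(localclaw.__file__)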

Quick Start

1. Single prompt

# Simple Q&A
localclaw run "What is the capital of Japan?"

# With streaming output
localclaw run "Tell me a joke." --stream

# Specify a model
localclaw run "Explain quantum computing" -m llama3.2:3b

2. Interactive chat

# Start interactive session
localclaw chat -m qwen2.5-coder:0.5b

# With tools enabled
localclaw chat -m llama3.1:8b --tools calculator,shell,read_file,write_file

# With skills loaded
localclaw chat -m llama3.2:3b --skills skill-creator --tools write_file,shell

# Fast mode (reduced context for speed)
localclaw chat -m qwen2.5-coder:0.5b --fast --verbose

3. Using BitNet backend

localclaw chat --backend bitnet --force-react
localclaw run "Calculate 17 * 23" --backend bitnet --tools calculator

Key Features

  • Zero dependencies: uses the Python stdlib only
  • Ollama + BitNet backends: switch with the --backend flag
  • Native tool calling: auto-detected for supported models, with a ReAct fallback for others
  • Agent Skills: follows the Agent Skills specification
  • Small model support: fuzzy matching and argument auto-fixing for models ≤ 1.5B params
  • Built-in security: path validation (see the sketch after this list), command blocklist, SSRF protection
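
To make the path-validation idea concrete, here is an illustrative stdlib-only sketch (not LocalClaw's actual code): it resolves a requested path and rejects anything that escapes a designated workspace directory.

from pathlib import Path

# Illustrative only: reject paths that resolve outside a workspace directory.
WORKSPACE = Path("./workspace").resolve()

def validate_path(requested: str) -> Path:
    # Resolving collapses ".." segments and symlinks before the check.
    target = (WORKSPACE / requested).resolve()
    if target != WORKSPACE and WORKSPACE not in target.parents:
        raise ValueError(f"path escapes workspace: {requested}")
    return target

print(validate_path("notes/todo.txt"))    # stays inside the workspace
# validate_path("../../etc/passwd")       # would raise ValueError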

CLI Commands

| Command | Description |
| --- | --- |
| run "prompt" | Run a single prompt and exit |
| chat | Interactive multi-turn conversation |
| models | List available Ollama models with tool-support info |
| tools | List built-in tools |
| skills | List available Agent Skills |
| test [example] | Run example/test scripts (--list to see all) |
| modelfile [model] | Show a model's Modelfile system prompt |

Key Flags

| Flag | Description |
| --- | --- |
| -m, --model | Model name (default: qwen2.5-coder:0.5b) |
| --tools | Comma-separated tool list |
| --skills | Comma-separated skill list |
| --backend | ollama or bitnet |
| --stream | Stream output token by token |
| --fast | Preset: reduced context for speed |
| -v, --verbose | Show tool calls and timing |
| --acp | Enable ACP (Agent Control Panel) integration |
| --use-mf-sys | Use the Modelfile system prompt instead of the LocalClaw default |
| --debug | Show debug info (parsed tool calls, fuzzy matching) |

Models Command

# List models with family, context size, and tool support
localclaw models

# Test each model for native tool support
localclaw models --tool_support

Output shows:

  • Model - Model name
  • Family - Model family from Ollama API
  • Context - Context window size
  • Tool Support - ✓ native, ReAct, ○ none, or ReAct (?) (untested)

🦞 LocalClaw R03 Models
  Model                                      Family       Context    Tool Support
  ────────────────────────────────────────────────────────────────────────────
  driaforall/tiny-agent-a:1.5b               qwen2        32K        ReAct
  gemma3:270m                                gemma3       32K        ○ none
  granite4:350m                              granite      32K        ✓ native
  qwen2.5-coder:0.5b-instruct-q4_k_m         qwen2        32K        ReAct

  2 model(s) untested. Use --tool_support to detect native support.

Test Command Examples

# List all available tests
localclaw test --list

# Run a quick test suite
localclaw test quick

# Run a specific test
localclaw test 01
localclaw test 07

# Run with ACP integration (activity tracking)
localclaw test 01 --acp
localclaw test 14_acp    # shorthand, auto-enables --acp

# Run with Modelfile system prompt
localclaw test 01 --use-mf-sys --model qwen2.5-coder:0.5b

# Run with debug output
localclaw test 02 --debug --verbose

Built-in Tools

| Tool | Description |
| --- | --- |
| calculator | Evaluate math expressions |
| python_repl | Execute Python code |
| shell | Run shell commands |
| read_file | Read file contents |
| write_file | Write content to a file |
| list_directory | List directory contents |
| http_get | Make an HTTP GET request |
| save_note / get_note | Save and retrieve notes |
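
As a rough illustration of what a tool like calculator wraps (this is not LocalClaw's actual implementation), a safe arithmetic evaluator can be built from the stdlib ast module:

import ast
import operator

# Illustrative only: evaluate basic arithmetic without calling eval().
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval").body)

print(calculate("17 * 23"))  # 391, matching the BitNet example above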

Configuration

| Variable | Description | Default |
| --- | --- | --- |
| OLLAMA_BASE_URL | Ollama server URL | http://localhost:11434 |
| BITNET_BASE_URL | BitNet server URL | http://localhost:8765 |
| ACP_BASE_URL | ACP (Agent Control Panel) server URL | http://localhost:8766 |
| LOCALCLAW_BACKEND | Backend: ollama or bitnet | ollama |
| LOCALCLAW_MODEL | Default model | qwen2.5-coder:0.5b-instruct-q4_k_m |
| LOCALCLAW_SECURITY_MODE | Security mode: strict, permissive, disabled | permissive |
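
These are ordinary environment variables, so they can be exported in the shell or set from Python before the CLI runs. A minimal sketch using the variable names from the table above (the values here are just examples):

import os
import subprocess

# Override LocalClaw's defaults for this process and its children.
os.environ["OLLAMA_BASE_URL"] = "http://localhost:11434"
os.environ["LOCALCLAW_MODEL"] = "qwen2.5-coder:0.5b-instruct-q4_k_m"
os.environ["LOCALCLAW_SECURITY_MODE"] = "strict"

# The CLI inherits the environment of the calling process.
subprocess.run(["localclaw", "run", "Tell me a joke."], check=True)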

Setup Ollama

# Make sure Ollama is running:
ollama serve

# Pull a model:
ollama pull qwen2.5-coder:0.5b-instruct-q4_k_m
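
Before starting a session, it can help to confirm the server is reachable. A small stdlib check against Ollama's /api/tags endpoint (Ollama's standard model-listing API) might look like this:

import json
import urllib.request

# List the models the local Ollama server knows about; a successful
# response means `ollama serve` is up and the pulled model is visible.
with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    data = json.load(resp)

for model in data.get("models", []):
    print(model["name"])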

About

🦞 LocalClaw R03 is written and maintained by VTSTech.


For more details, see:

  • Architecture.md - Technical architecture and design decisions
  • CHANGELOG.md - Version history and release notes
  • TESTS.md - Benchmark results and model recommendations
