
A minimal, hackable agentic framework for Ollama and BitNet - local-first AI agent toolkit


โš›๏ธ AgentNova R02

A minimal, hackable agentic framework engineered to run entirely locally with Ollama or BitNet.

Inspired by the architecture of OpenClaw, rebuilt from scratch for local-first operation.

Written by VTSTech · GitHub



📚 Documentation

Document         Description
Architecture.md  Technical documentation for developers (directory structure, core design, orchestrator modes)
CHANGELOG.md     Version history and release notes (includes LocalClaw history)
TESTS.md         Benchmark results, model recommendations, and testing guide

Installation

From PyPI (Recommended)

pip install agentnova

# Or install from GitHub for the latest development version:
pip install git+https://github.com/VTSTech/AgentNova.git

Backward Compatibility

The package was previously named localclaw. For backward compatibility:

# Old package name still works (shows deprecation warning)
pip install localclaw

# Old CLI command still works
localclaw run "What is the capital of Japan?"  # Redirects to agentnova

# Old imports still work (with deprecation warning)
import localclaw  # Re-exports from agentnova
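The old-name compatibility described above is a common re-export pattern. As a toy illustration (the real shim ships inside the localclaw package itself; the module names here are placeholders), an alias module can be registered under the deprecated name and emit a warning:

```python
import sys
import types
import warnings

# Toy sketch: expose one module's contents under a second, deprecated name.
agentnova_like = types.ModuleType("newpkg")
agentnova_like.answer = 42
sys.modules["newpkg"] = agentnova_like

localclaw_like = types.ModuleType("oldpkg")
localclaw_like.answer = agentnova_like.answer  # re-export under the old name
sys.modules["oldpkg"] = localclaw_like
warnings.warn("oldpkg is deprecated; import newpkg instead", DeprecationWarning)

import oldpkg  # resolves via sys.modules to the registered alias
print(oldpkg.answer)  # 42
```

Because the alias shares the same objects, code importing the old name keeps working while the warning nudges users toward the new one.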

We recommend updating to the new package name:

# Old
import localclaw
from localclaw import Agent

# New
import agentnova
from agentnova import Agent

From Source

git clone https://github.com/VTSTech/AgentNova.git
cd AgentNova
pip install -e .

No Installation Required

AgentNova uses only the Python stdlib — no dependencies! You can also just copy the agentnova directory into your project:

cp -r agentnova /path/to/your/project/

Quick Start

1. Test Model Tool Support (Recommended First Step)

# Test all models for native tool support
agentnova models --tool_support

# Results saved to tested_models.json for future reference

2. Single prompt

# Simple Q&A
agentnova run "What is the capital of Japan?"

# With streaming output
agentnova run "Tell me a joke." --stream

# Specify a model
agentnova run "Explain quantum computing" -m llama3.2:3b

3. Interactive chat

# Start interactive session
agentnova chat -m qwen2.5-coder:0.5b

# With tools enabled
agentnova chat -m llama3.1:8b --tools calculator,shell,read_file,write_file

# With skills loaded
agentnova chat -m llama3.2:3b --skills skill-creator --tools write_file,shell

# Fast mode (reduced context for speed)
agentnova chat -m qwen2.5-coder:0.5b --fast --verbose

4. Using BitNet backend

agentnova chat --backend bitnet --force-react
agentnova run "Calculate 17 * 23" --backend bitnet --tools calculator

Key Features

  • Zero dependencies — uses Python stdlib only
  • Ollama + BitNet backends — switch with --backend flag
  • Three-tier tool support — native, ReAct, or none (auto-detected per model)
  • Agent Skills — follows the Agent Skills specification
  • Small model optimized — pure reasoning mode for sub-500M models
  • Built-in security — path validation, command blocklist, SSRF protection
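To make the last bullet concrete, here is an illustrative sketch of path validation and a command blocklist. The names and rules are assumptions for illustration, not AgentNova's actual implementation:

```python
from pathlib import Path

# Hypothetical blocklist of destructive shell commands
BLOCKED_COMMANDS = {"rm", "mkfs", "dd", "shutdown", "reboot"}

def command_allowed(cmd: str) -> bool:
    # Reject empty commands and any whose first word is blocklisted
    parts = cmd.strip().split()
    return bool(parts) and parts[0] not in BLOCKED_COMMANDS

def path_inside_workspace(path: str, workspace: str) -> bool:
    # Resolve the target and make sure it stays under the workspace root,
    # defeating "../" traversal attempts
    root = Path(workspace).resolve()
    target = (root / path).resolve()
    return target == root or root in target.parents

print(command_allowed("rm -rf /"))                     # False
print(path_inside_workspace("../etc/passwd", "/tmp"))  # False
```

Real frameworks layer more checks on top (e.g. SSRF protection for HTTP tools), but the shape is the same: validate every tool argument before it touches the system.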

Tool Support Levels

AgentNova automatically detects each model's tool support level:

Level    Description                 When to Use
native   Ollama API tool-calling     Models trained for function calling
react    Text-based ReAct prompting  Models that accept tools but need format guidance
none     No tool support             Models that reject tools; use pure reasoning
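The level names come from the table above; the dispatch sketch below is an assumption about how such a three-tier scheme typically shapes a chat request, not AgentNova's actual code:

```python
def build_request(level: str, prompt: str, tools: list[dict]) -> dict:
    if level == "native":
        # Hand tool schemas to the backend's tool-calling API directly
        return {"messages": [{"role": "user", "content": prompt}],
                "tools": tools}
    if level == "react":
        # Describe tools in a system prompt and parse Thought/Action replies
        names = ", ".join(t["name"] for t in tools)
        system = (f"You may use these tools: {names}. "
                  "Answer in Thought / Action / Action Input steps.")
        return {"messages": [{"role": "system", "content": system},
                             {"role": "user", "content": prompt}]}
    # "none": pure reasoning - send the prompt with no tool machinery at all
    return {"messages": [{"role": "user", "content": prompt}]}

print("tools" in build_request("native", "2+2?", [{"name": "calculator"}]))  # True
```

The key point is that only the "native" level relies on the API's structured tool field; "react" pushes everything into text, and "none" avoids tools entirely.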

Testing Tool Support

# Test all models
agentnova models --tool_support

# Example output:
  Model                                      Family       Context    Tool Support
  ──────────────────────────────────────────────────────────────────────────────
  gemma3:270m                                gemma3       32K        ○ none
  granite4:350m                              granite      32K        ✓ native
  qwen2.5-coder:0.5b-instruct-q4_k_m         qwen2        32K        ReAct
  functiongemma:270m                         gemma3       32K        ✓ native

Performance by Tool Support

Recent test results with native tool synthesis:

Model               Params  Tool Support  Calculator  Shell  Python
qwen2.5:0.5b        494M    native        100%        100%   100%
qwen2.5-coder:0.5b  494M    ReAct         100%        100%   100%
granite4:350m       350M    native        ~90%        ✅      ✅
gemma3:270m         270M    none          64%         N/A    N/A

Key improvements in R01:

  • Native tool synthesis extracts expressions from natural language
  • Two-tier retry: hint → synthesize (bypasses confused models)
  • Bare expression wrapping: 2**20 → print(2**20)
  • Hallucinated mention detection for models that talk about tools but don't call them
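The bare-expression-wrapping idea above can be sketched with a small AST check; the heuristic here is an assumption for illustration, not the framework's exact logic:

```python
import ast

def wrap_bare_expression(code: str) -> str:
    """If the model emitted a bare expression (e.g. 2**20), wrap it in print()
    so the Python tool actually shows a result."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return code
    if len(tree.body) != 1 or not isinstance(tree.body[0], ast.Expr):
        return code  # assignments, multi-statement code, etc. pass through
    value = tree.body[0].value
    if isinstance(value, ast.Call) and getattr(value.func, "id", None) == "print":
        return code  # already prints its result
    return f"print({code.strip()})"

print(wrap_bare_expression("2**20"))  # print(2**20)
```

Code that already prints, or that does real work via assignments, is left untouched; only a lone expression gets wrapped.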

CLI Commands

Command            Description
run "prompt"       Run a single prompt and exit
chat               Start an interactive multi-turn conversation
models             List available Ollama models with tool support info
tools              List built-in tools
skills             List available Agent Skills
test [example]     Run example/test scripts (--list to see all)
modelfile [model]  Show a model's Modelfile system prompt

Key Flags

Flag           Description
-m, --model    Model name (default: qwen2.5-coder:0.5b)
--tools        Comma-separated tool list
--skills       Comma-separated skill list
--backend      ollama or bitnet
--stream       Stream output token-by-token
--fast         Preset: reduced context for speed
-v, --verbose  Show tool calls and timing
--acp          Enable ACP (Agent Control Panel) integration
--use-mf-sys   Use Modelfile system prompt instead of AgentNova default
--force-react  Force ReAct mode for all models
--debug        Show debug info (parsed tool calls, fuzzy matching)
--num-ctx      Context window size for test commands
--num-predict  Max tokens to predict for test commands

Models Command

# List models with family, context size, and tool support
agentnova models

# Test each model for native tool support (recommended)
agentnova models --tool_support

Output shows:

  • Model - Model name
  • Family - Model family from Ollama API
  • Context - Context window size
  • Tool Support - ✓ native, ReAct, ○ none, or untested

⚛️ AgentNova R02 Models
  Model                                      Family       Context    Tool Support
  ──────────────────────────────────────────────────────────────────────────────
  gemma3:270m                                gemma3       32K        ○ none
  granite4:350m                              granite      32K        ✓ native
  qwen2.5-coder:0.5b-instruct-q4_k_m         qwen2        32K        ReAct
  functiongemma:270m                         gemma3       32K        untested

  1 model(s) untested. Use --tool_support to detect native support.

Test Command Examples

# List all available tests
agentnova test --list

# Run a quick test suite
agentnova test quick

# Run GSM8K benchmark (50 math questions)
agentnova test 14 --acp --timeout 6400

# Run with debug output
agentnova test 02 --debug --verbose

Built-in Tools

Tool                  Description
calculator            Evaluate math expressions
python_repl           Execute Python code
shell                 Run shell commands
read_file             Read file contents
write_file            Write content to file
list_directory        List directory contents
http_get              HTTP GET request
save_note / get_note  Save and retrieve notes
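As an illustration of what a tool like `calculator` has to guard against, here is a minimal sketch of safe arithmetic evaluation via the AST rather than `eval` (an assumption about the approach, not AgentNova's actual implementation):

```python
import ast
import operator

# Map AST operator nodes to their arithmetic implementations
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calculator(expression: str):
    """Evaluate a pure arithmetic expression; reject anything else."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return ev(ast.parse(expression, mode="eval").body)

print(calculator("17 * 23"))  # 391
```

Function calls, attribute access, and names all fall through to the `ValueError`, so `__import__('os')` and similar payloads are rejected outright.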

Configuration

Variable                 Description                                  Default
OLLAMA_BASE_URL          Ollama server URL                            http://localhost:11434
BITNET_BASE_URL          BitNet server URL                            http://localhost:8765
ACP_BASE_URL             ACP (Agent Control Panel) server URL         http://localhost:8766
AGENTNOVA_BACKEND        Backend: ollama or bitnet                    ollama
AGENTNOVA_MODEL          Default model                                qwen2.5-coder:0.5b-instruct-q4_k_m
AGENTNOVA_SECURITY_MODE  Security mode: strict, permissive, disabled  permissive
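Settings like these are typically resolved from the environment at startup. The variable names and defaults below come from the table above; the resolution helper itself is just a sketch:

```python
import os

def setting(name: str, default: str) -> str:
    # Environment variable wins; otherwise fall back to the documented default
    return os.environ.get(name, default)

backend = setting("AGENTNOVA_BACKEND", "ollama")
model = setting("AGENTNOVA_MODEL", "qwen2.5-coder:0.5b-instruct-q4_k_m")
base_url = setting("OLLAMA_BASE_URL", "http://localhost:11434")
```

So `AGENTNOVA_BACKEND=bitnet agentnova chat` would select the BitNet backend without any config file.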

Setup Ollama

# Make sure Ollama is running:
ollama serve

# Pull a model:
ollama pull qwen2.5-coder:0.5b-instruct-q4_k_m

# Test tool support:
agentnova models --tool_support
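You can also check that the server is up from Python with nothing but the stdlib, in keeping with the framework's zero-dependency approach. This queries Ollama's `/api/tags` model-list endpoint; the helper itself is a sketch, not part of AgentNova:

```python
import json
import urllib.error
import urllib.request

def ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return installed model names, or [] if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []

print(ollama_models() or "Ollama not reachable - run `ollama serve` first")
```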

About

**โš›๏ธ AgentNova ** is written and maintained by VTSTech.


For more details, see:

  • Architecture.md — Technical architecture and design decisions
  • CHANGELOG.md — Version history and release notes
  • TESTS.md — Benchmark results and model recommendations

