
โš›๏ธ AgentNova R00

A minimal, hackable agentic framework engineered to run entirely locally with Ollama or BitNet.

Inspired by the architecture of OpenClaw, rebuilt from scratch for local-first operation.

Written by VTSTech · GitHub



📚 Documentation

  • Architecture.md — Technical documentation for developers (directory structure, core design, orchestrator modes)
  • CHANGELOG.md — Version history and release notes (includes LocalClaw history)
  • TESTS.md — Benchmark results, model recommendations, and testing guide

Installation

From PyPI (Recommended)

pip install agentnova

# Or install from GitHub for the latest development version:
pip install git+https://github.com/VTSTech/AgentNova.git

Backward Compatibility

The package was previously named localclaw. For backward compatibility:

# Old package name still works (shows deprecation warning)
pip install localclaw

# Old CLI command still works
localclaw run "What is the capital of Japan?"  # Redirects to agentnova

# Old imports still work (with deprecation warning)
import localclaw  # Re-exports from agentnova

We recommend updating to the new package name:

# Old
import localclaw
from localclaw import Agent

# New
import agentnova
from agentnova import Agent

From Source

git clone https://github.com/VTSTech/AgentNova.git
cd AgentNova
pip install -e .

No Installation Required

AgentNova uses only Python stdlib — no dependencies! You can also just copy the agentnova directory into your project:

cp -r agentnova /path/to/your/project/

Quick Start

1. Test Model Tool Support (Recommended First Step)

# Test all models for native tool support
agentnova models --tool_support

# Results saved to tested_models.json for future reference

2. Single prompt

# Simple Q&A
agentnova run "What is the capital of Japan?"

# With streaming output
agentnova run "Tell me a joke." --stream

# Specify a model
agentnova run "Explain quantum computing" -m llama3.2:3b

3. Interactive chat

# Start interactive session
agentnova chat -m qwen2.5-coder:0.5b

# With tools enabled
agentnova chat -m llama3.1:8b --tools calculator,shell,read_file,write_file

# With skills loaded
agentnova chat -m llama3.2:3b --skills skill-creator --tools write_file,shell

# Fast mode (reduced context for speed)
agentnova chat -m qwen2.5-coder:0.5b --fast --verbose

4. Using BitNet backend

agentnova chat --backend bitnet --force-react
agentnova run "Calculate 17 * 23" --backend bitnet --tools calculator

Key Features

  • Zero dependencies — uses Python stdlib only
  • Ollama + BitNet backends — switch with the --backend flag
  • Three-tier tool support — native, ReAct, or none (auto-detected per model)
  • Agent Skills — follows the Agent Skills specification
  • Small-model optimized — pure reasoning mode for sub-500M models
  • Built-in security — path validation, command blocklist, SSRF protection
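The path-validation and command-blocklist ideas listed above can be sketched in a few lines. This is my own simplified illustration, not AgentNova's actual security module; the workspace path and blocked-command set are assumptions:

```python
# Simplified sketch of workspace path validation and a command blocklist.
# Illustrative only - not AgentNova's actual security implementation.
from pathlib import Path

WORKSPACE = Path("/tmp/agent_workspace").resolve()   # assumed sandbox root
BLOCKED_COMMANDS = {"rm", "mkfs", "dd", "shutdown", "reboot"}  # assumed list

def is_path_allowed(user_path: str) -> bool:
    """True only if the resolved path stays inside the workspace root."""
    resolved = (WORKSPACE / user_path).resolve()
    return resolved == WORKSPACE or WORKSPACE in resolved.parents

def is_command_allowed(command: str) -> bool:
    """Reject commands whose first token is on the blocklist."""
    tokens = command.strip().split()
    return bool(tokens) and tokens[0] not in BLOCKED_COMMANDS
```

Resolving the path before the containment check is what defeats `../` traversal attempts.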

Tool Support Levels

AgentNova automatically detects each model's tool support level:

  Level    Description                  When to Use
  native   Ollama API tool-calling      Models trained for function calling
  react    Text-based ReAct prompting   Models that accept tools but need format guidance
  none     No tool support              Models that reject tools; use pure reasoning
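Detection works by probing each model with a tool-bearing request and inspecting the result. A sketch of how a probe response could map to the three levels (the response shapes here are my assumptions, not Ollama's exact API):

```python
# Sketch of mapping a tool-support probe result to a level.
# The response dict shapes are assumptions for illustration.
def classify_tool_support(response: dict) -> str:
    """Classify a model's probe response as 'native', 'react', or 'none'."""
    if response.get("error"):            # model rejected the tools parameter
        return "none"
    message = response.get("message", {})
    if message.get("tool_calls"):        # structured tool call came back
        return "native"
    return "react"                       # accepted tools but answered in text
```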

Testing Tool Support

# Test all models
agentnova models --tool_support

# Example output:
  Model                                      Family       Context    Tool Support
  ────────────────────────────────────────────────────────────────────────────────
  gemma3:270m                                gemma3       32K        ○ none
  granite4:350m                              granite      32K        ✓ native
  qwen2.5-coder:0.5b-instruct-q4_k_m         qwen2        32K        ReAct
  functiongemma:270m                         gemma3       32K        ✓ native

Performance by Tool Support

Recent GSM8K benchmark results (50 math questions):

  Model                Params   Tool Support   Score
  gemma3:270m          270M     none           64%
  functiongemma:270m   270M     native         36%
  granite4:350m        350M     native         ~40%

Key insight: Sub-500M models often perform better with none (pure reasoning) than with tools!


CLI Commands

  Command             Description
  run "prompt"        Run single prompt and exit
  chat                Interactive multi-turn conversation
  models              List available Ollama models with tool support info
  tools               List built-in tools
  skills              List available Agent Skills
  test [example]      Run example/test scripts (--list to see all)
  modelfile [model]   Show model's Modelfile system prompt
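A subcommand CLI like the one above is typically built with stdlib argparse (consistent with the zero-dependency design). A minimal sketch of the `run`/`chat` dispatch, illustrative only and not AgentNova's actual parser:

```python
# Minimal sketch of a run/chat subcommand CLI using stdlib argparse.
# Illustrative only - not AgentNova's actual argument parser.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="agentnova")
    sub = parser.add_subparsers(dest="command", required=True)

    run = sub.add_parser("run", help="Run single prompt and exit")
    run.add_argument("prompt")
    run.add_argument("-m", "--model", default="qwen2.5-coder:0.5b")
    run.add_argument("--stream", action="store_true")

    sub.add_parser("chat", help="Interactive multi-turn conversation")
    return parser

args = build_parser().parse_args(["run", "What is the capital of Japan?"])
```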

Key Flags

Flag Description
-m, --model Model name (default: qwen2.5-coder:0.5b)
--tools Comma-separated tool list
--skills Comma-separated skill list
--backend ollama or bitnet
--stream Stream output token-by-token
--fast Preset: reduced context for speed
-v, --verbose Show tool calls and timing
--acp Enable ACP (Agent Control Panel) integration
--use-mf-sys Use Modelfile system prompt instead of AgentNova default
--force-react Force ReAct mode for all models
--debug Show debug info (parsed tool calls, fuzzy matching)
--num-ctx Context window size for test commands
--num-predict Max tokens to predict for test commands

Models Command

# List models with family, context size, and tool support
agentnova models

# Test each model for native tool support (recommended)
agentnova models --tool_support

Output shows:

  • Model - Model name
  • Family - Model family from Ollama API
  • Context - Context window size
  • Tool Support - ✓ native, ReAct, ○ none, or untested
โš›๏ธ AgentNova R00 Models
  Model                                      Family       Context    Tool Support
  โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  gemma3:270m                                gemma3       32K        โ—‹ none
  granite4:350m                              granite      32K        โœ“ native
  qwen2.5-coder:0.5b-instruct-q4_k_m         qwen2        32K        ReAct
  functiongemma:270m                         gemma3       32K        untested

  1 model(s) untested. Use --tool_support to detect native support.

Test Command Examples

# List all available tests
agentnova test --list

# Run a quick test suite
agentnova test quick

# Run GSM8K benchmark (50 math questions)
agentnova test 14 --acp --timeout 6400

# Run with debug output
agentnova test 02 --debug --verbose

Built-in Tools

  Tool                   Description
  calculator             Evaluate math expressions
  python_repl            Execute Python code
  shell                  Run shell commands
  read_file              Read file contents
  write_file             Write content to file
  list_directory         List directory contents
  http_get               HTTP GET request
  save_note / get_note   Save and retrieve notes
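A calculator tool like the one above usually avoids `eval()` on untrusted model output. One safe approach is walking the `ast` of the expression; this is a sketch of that technique, not necessarily how AgentNova implements its calculator:

```python
# Sketch of a safe calculator tool using Python's ast module.
# Illustrative approach - not necessarily AgentNova's implementation.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calculator(expression: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)
```

Anything outside the whitelisted node types (names, calls, attribute access) raises instead of executing.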

Configuration

  Variable                  Description                                   Default
  OLLAMA_BASE_URL           Ollama server URL                             http://localhost:11434
  BITNET_BASE_URL           BitNet server URL                             http://localhost:8765
  ACP_BASE_URL              ACP (Agent Control Panel) server URL          http://localhost:8766
  AGENTNOVA_BACKEND         Backend: ollama or bitnet                     ollama
  AGENTNOVA_MODEL           Default model                                 qwen2.5-coder:0.5b-instruct-q4_k_m
  AGENTNOVA_SECURITY_MODE   Security mode: strict, permissive, disabled   permissive
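Environment-variable configuration of this kind is typically resolved with `os.environ.get` and the documented defaults. A sketch using the variable names and defaults from the table above (the `load_config` helper itself is my illustration, not an AgentNova API):

```python
# Sketch of resolving AgentNova-style configuration from environment
# variables, falling back to the documented defaults. Illustrative only.
import os

def load_config(env=None) -> dict:
    """Return config values from `env` (defaults to os.environ)."""
    env = os.environ if env is None else env
    return {
        "ollama_base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        "bitnet_base_url": env.get("BITNET_BASE_URL", "http://localhost:8765"),
        "backend": env.get("AGENTNOVA_BACKEND", "ollama"),
        "model": env.get("AGENTNOVA_MODEL", "qwen2.5-coder:0.5b-instruct-q4_k_m"),
        "security_mode": env.get("AGENTNOVA_SECURITY_MODE", "permissive"),
    }
```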

Setup Ollama

# Make sure Ollama is running:
ollama serve

# Pull a model:
ollama pull qwen2.5-coder:0.5b-instruct-q4_k_m

# Test tool support:
agentnova models --tool_support

About

**โš›๏ธ AgentNova ** is written and maintained by VTSTech.


For more details, see:

  • Architecture.md — Technical architecture and design decisions
  • CHANGELOG.md — Version history and release notes
  • TESTS.md — Benchmark results and model recommendations
