Nexus-Nancy
Seriously lightweight agent TUI for Nexus environments
Nexus-Nancy is a seriously lightweight, pip-installable terminal agent focused on OpenAI-style API compatibility and local tool execution.
Design goals
- Minimal TUI and command surface
- Single provider protocol: OpenAI-compatible `/chat/completions`
- Primary tool: shell (`bash`) with sandbox defaults
- Notebook-aware local tools for `.ipynb` read/edit workflows
- Context controls: `/new` and `/handoff`
- Plain-text session logs
- Attachment shorthand: `@path/to/file`
Install
python -m venv .venv
source .venv/bin/activate
pip install -e .
Configure
On first run in any directory, Nexus-Nancy creates local files in .agents/:
- `.agents/nnancy.yaml`
- `.agents/secrets/openai.key`
- `.agents/sandbox_allowlist.txt`
- `.agents/instructions.txt`
- `.agents/relay_instructions.txt`
- `.agents/hand-off_instructions.txt`
The prompt templates are bundled inside the installed package and copied into the working directory on first run. Nexus-Nancy does not invent ad hoc fallback prompt text at runtime.
API key resolution order:
- Local key file from `api_key_file` (default `.agents/secrets/openai.key`)
- Env var from `api_key_env` (default `OPENAI_API_KEY`)
For shared environments, using the local key file is recommended.
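Nexus-Nancy's actual resolver isn't shown here, but the order above can be sketched in a few lines. The function name and defaults below are assumptions for illustration only:

```python
import os
from pathlib import Path

def resolve_api_key(key_file=".agents/secrets/openai.key",
                    env_var="OPENAI_API_KEY"):
    """Sketch: prefer a non-empty local key file, then fall back to the env var."""
    path = Path(key_file)
    if path.is_file():
        key = path.read_text().strip()
        if key:
            return key
    return os.environ.get(env_var)
```

In a shared environment the file wins even when the env var is set, which is why the local key file is the recommended route.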
To add your API key before first run, use:
nnancy secrets
Before any provider call, Nexus-Nancy runs strict preflight validation: API key/base URL sanity, required message structure (system + user), tool spec integrity (including bash), and request-size guard via max_preflight_tokens.
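Two of those preflight checks, the message-structure requirement and the `max_preflight_tokens` size guard, can be sketched as below. This is not the shipped implementation; the function and the characters-per-token estimate are assumptions:

```python
def preflight(messages, max_preflight_tokens=8000, est_chars_per_token=4):
    """Sketch: reject a request before any provider call is made."""
    roles = [m.get("role") for m in messages]
    if "system" not in roles or "user" not in roles:
        raise ValueError("request must include a system and a user message")
    # Cheap size guard: approximate tokens from character count.
    approx_tokens = sum(len(m.get("content", "")) for m in messages) // est_chars_per_token
    if approx_tokens > max_preflight_tokens:
        raise ValueError(f"request too large: ~{approx_tokens} tokens")
    return approx_tokens
```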
The live system prompt is read from .agents/instructions.txt and rendered at runtime, including a dynamically generated tools block.
Set user_display_name in .agents/nnancy.yaml to control the user label shown in the TUI transcript (default: USER).
Execution routing is controlled in .agents/nnancy.yaml:
- `execution_strategy: auto` uses native OpenAI-style tool calls only after support is verified.
- `execution_strategy: universal` always uses the compatibility text harness.
- `execution_strategy: native_openai` requires verified native tool support and fails loudly otherwise.
- `native_tools`, `reasoning_channel`, and `parallel_tool_calls` default to `auto`; set a boolean only when you want an explicit override.
- `capability_probe: true` enables a cheap live probe that asks the provider to return a synthetic tool call without executing any local tool.
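Putting the routing keys together, a config might look like the fragment below. Only the keys named above are documented; their grouping and the illustrative values here are assumptions:

```yaml
# .agents/nnancy.yaml (illustrative fragment, not a complete file)
execution_strategy: auto      # auto | universal | native_openai
native_tools: auto            # or true/false to force an override
reasoning_channel: auto
parallel_tool_calls: auto
capability_probe: true        # cheap live probe, no local tool execution
```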
Edit these with:
nnancy config
nnancy instructions
nnancy secrets
For API key management during chat sessions:
- `/config` opens `.agents/nnancy.yaml`
- `/key` replaces the API key value (does not print the current key)
Guides
- Models & Authentication - Using Gemma 4, ChatGPT Plus ($20/mo), and standard API.
- Extending Nancy - How to write and install custom tools.
- Capability Detection - How Nancy detects tool-calling and reasoning support.
Usage
nnancy
nnancy -t "summarize @README.md"
nnancy doctor
nnancy config
nnancy instructions
nnancy secrets
nnancy doctor checks workspace bootstrap files, sandbox root, API key source, key-file permissions, selected execution route, detected capability status, and base URL health via <base_url>/models.
`sandbox_allowlist.txt` supports one substring per line. If any allowlisted substring appears in a command, substring-based sandbox blocks are bypassed for that command.
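The matching rule is plain substring containment, which a short sketch makes concrete (the function name is an assumption, not Nexus-Nancy's API):

```python
def is_allowlisted(command, allowlist_lines):
    """Sketch: True if any non-empty allowlist substring occurs in the command."""
    substrings = (line.strip() for line in allowlist_lines)
    return any(s and s in command for s in substrings)
```

Note the matching is purely textual: an entry like `git ` allows any command containing that text, so keep entries as specific as practical.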
Interactive mode uses a Python Textual TUI when running in a real terminal. If TTY support is missing (for example some notebook terminal environments), it automatically falls back to a plain line-input mode.
The TUI status line shows model, mode (sandbox/yolo), current working directory, and approximate context token count.
Each TUI session gets an id shown in that status line.
Transcripts are always saved for posterity at:
.agents/transcripts/<id>.txt
These transcripts and the logs/session-*.log files are plain local files.
Anyone with access to the workspace can read them.
Ctrl+Y in the TUI shows copy-mode info with the current transcript path, then suspends the app and opens the transcript in your terminal (`less` if available, else `cat`) so native terminal text selection/copy works, and returns to the app afterwards.
Default model is gpt-5.4-mini. Context token estimate uses tiktoken when available and falls back to a simple character heuristic otherwise.
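That estimate strategy (tiktoken when installed, otherwise roughly four characters per token) can be sketched as follows; the model name and the divisor are assumptions:

```python
def estimate_tokens(text, model="gpt-4o"):
    """Sketch: token count via tiktoken when available, else a character heuristic."""
    try:
        import tiktoken
        enc = tiktoken.encoding_for_model(model)
        return len(enc.encode(text))
    except Exception:
        # Rough heuristic: ~4 characters per token for English-like text.
        return max(1, len(text) // 4)
```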
Inside the prompt:
- `/new` starts a fresh in-process context
- `/handoff` writes a JSON continuation snapshot to `logs/handoff.json`
- `/handoff path/to/handoff.json` loads prior context
- `/config` opens the workspace config file `.agents/nnancy.yaml`
- `/key NEW_API_KEY` replaces the API key file contents
- `@relative/path` inlines file content into your prompt
Universal assistant protocol:
- User-visible assistant text must be inside `[RESPONSE]...[/RESPONSE]`
- Any other assistant text is treated as private raw/debug output
- Each completed assistant turn must end with `[EOT]`
- Tool calls must use JSON arguments that exactly match the surfaced tool schema
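A parser for that framing separates visible text from private output. The sketch below assumes a simple non-nested `[RESPONSE]` grammar, which is not guaranteed to match Nexus-Nancy's internal parser:

```python
import re

RESPONSE_RE = re.compile(r"\[RESPONSE\](.*?)\[/RESPONSE\]", re.DOTALL)

def split_assistant_output(raw):
    """Sketch: split raw text into visible [RESPONSE] blocks and private debug text."""
    visible = [m.strip() for m in RESPONSE_RE.findall(raw)]
    debug = RESPONSE_RE.sub("", raw).replace("[EOT]", "").strip()
    return visible, debug
```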
Native OpenAI route:
- Native mode sends tools through the OpenAI-compatible `tools` payload.
- Assistant text is shown directly and is not parsed for `[RESPONSE]` wrappers.
- If a provider returns a valid JSON tool call in plain text instead of `tool_calls`, Nexus-Nancy treats it as a raw function-call safety net.
- Local models such as Gemma or Llama variants are most reliable when their backend supports native chat templates, for example llama.cpp with Jinja templating enabled.
- Providers that claim OpenAI compatibility may still reject tools, ignore tools, or return malformed calls; leave `execution_strategy: auto` unless native support is known or verified.
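The raw function-call safety net amounts to checking whether plain assistant text parses as a JSON object naming a known tool. A minimal sketch, assuming a `{"name": ..., "arguments": {...}}` shape that may differ from the real payload:

```python
import json

def extract_raw_tool_call(text, known_tools):
    """Sketch: treat plain text as a tool call if it is JSON naming a known tool."""
    try:
        obj = json.loads(text.strip())
    except (ValueError, TypeError):
        return None
    if (isinstance(obj, dict)
            and obj.get("name") in known_tools
            and isinstance(obj.get("arguments"), dict)):
        return obj
    return None
```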
In the Textual TUI, you can also run /key with no argument to set the key via hidden prompts (value + confirmation) without echoing the key to screen.
Notes
- Tool execution is local.
- Sandbox mode is default.
- Chat logs are written to `logs/session-*.log`.
- In the TUI, only `[RESPONSE]` blocks are shown as assistant replies; non-response assistant text and tool output are shown as collapsed raw/debug blocks.
- Tool calls outside the allowlist prompt for `yes`, `no`, or `respond` approval in sandbox mode.
- `nnancy yolo` exists but is intentionally not advertised in help output.
Project details
File details
Details for the file nexus_nancy-1.1.1.tar.gz.
File metadata
- Download URL: nexus_nancy-1.1.1.tar.gz
- Upload date:
- Size: 36.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `2f0987b85938e1b8ace888a44dbfe67f3e0f55c48bc3127c1f259d9fd6df1f42` |
| MD5 | `413366220cbbdfbbe426339d1419fda4` |
| BLAKE2b-256 | `fb14f218f95c1243ff77c5db7184358d7b3926bebf603ac1762ea4e1dbf26381` |
File details
Details for the file nexus_nancy-1.1.1-py3-none-any.whl.
File metadata
- Download URL: nexus_nancy-1.1.1-py3-none-any.whl
- Upload date:
- Size: 41.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `14c15b24408c3d1e68472daef7a61d80f979e2a091b92451f89d4d47f83d2544` |
| MD5 | `72e80ce29693456e733ec6eabe436a9c` |
| BLAKE2b-256 | `23ea241f8197666c3ba304ead1e0adea3ecd12032716a7adfa36a4ad3d89254c` |