
Terminal interface for Deep Agents - interactive AI agent with file operations, shell access, and sub-agent capabilities.

Project description

Invincat CLI

Chinese Documentation

A Python-based terminal AI programming assistant — collaborate with AI directly in your project directory: read/write files, execute commands, browse the web, and maintain memory across sessions.


Installation

Requirements: Python 3.11+

# Install from PyPI
pip install invincat-cli

Or install from source:

git clone https://github.com/dog-qiuqiu/invincat.git
cd invincat
pip install -e .

Quick Start

# Start in your project directory
cd ~/my-project
invincat-cli

After the first launch, run /model to configure the model and API Key; then you can start chatting directly.


Model Configuration

Configure via Interface

Run the /model command to open the model management interface:

  1. Press Ctrl+N to register a new model
  2. Fill in the provider, model name, and API Key
  3. Select from the list and press Enter to activate

Supported Providers

| Provider | Example Models |
| --- | --- |
| anthropic | claude-sonnet-4-6, claude-opus-4-7 |
| openai | gpt-4o, o3 |
| google_genai | gemini-2.0-flash, gemini-2.5-pro |
| openrouter | Supports all models on OpenRouter |

For OpenAI-compatible interfaces (DeepSeek, Zhipu, local Ollama, etc.), simply set the base_url to connect (for example, a local Ollama server typically exposes an OpenAI-compatible endpoint at http://localhost:11434/v1).

Environment Variables

| Variable | Description |
| --- | --- |
| ANTHROPIC_API_KEY | Anthropic API Key |
| OPENAI_API_KEY | OpenAI API Key |
| GOOGLE_API_KEY | Google API Key |
| OPENROUTER_API_KEY | OpenRouter API Key |
| TAVILY_API_KEY | Tavily web search Key (optional) |
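For example, you can export the keys in your shell profile before launching (the values below are placeholders):

export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder, use your real key
export TAVILY_API_KEY="tvly-..."        # optional, enables web search
invincat-cli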

Basic Usage

Type your question or task directly in the input box and press Enter to send. The AI will automatically select the appropriate tools to complete the task:

Search for the latest usage of LangGraph interrupt

Command Mode (/ prefix)

/clear
/threads
/model
... ...

Press Tab to autocomplete available commands. See Slash Commands for the complete list.


File References

Use @ in your message to reference files; the AI will read and understand their content:

@src/main.py Are there any potential performance issues in this file?

Tool Approval

When the AI performs operations such as writing files, running shell commands, or making network requests, it will pause by default and ask for confirmation:

Auto-approve Mode: Press Shift+Tab to toggle. When enabled, all tool calls are automatically approved, suitable for trusted task scenarios. The status bar will display an AUTO indicator.

⚠️ It's recommended to enable auto-approve only after you're familiar with the task content.

Input Line Breaks

Press Ctrl+J in the input box to insert a line break, suitable for entering longer code or paragraphs.


Context Management

Micro Compression

A lightweight compression pass that runs automatically before each model call; no LLM is involved, and it takes less than 1 ms.

How it works: conversation messages are grouped into "tool call groups"; the last 3 groups are kept intact, while large tool outputs in older groups are replaced with brief placeholders (keeping a first-line summary and the line count).

Compressible Tool Outputs:

| Tool | Compression Effect |
| --- | --- |
| read_file | File content → [cleared — read_file, 500 lines: def main():…] |
| edit_file | diff output → placeholder |
| execute | shell output → placeholder |
| grep/glob/ls | search/list results → placeholder |
| web_search/fetch_url | web content → placeholder |

Not Compressed: agent/subagent results, ask_user responses, MCP tool outputs, compact_conversation results.

💡 Micro compression only affects the context sent to the model; it does not modify persisted state, and the complete history is still saved in checkpoints.

Auto Compression

When context window usage exceeds 80%, the system automatically compresses older messages into summaries to free up space; no manual action is needed. As a warning, the status bar token count turns orange above 70% and red above 90%.

Manual Compression

/offload

Or, equivalently, /compact. After it runs, it reports how many messages were compressed and how many tokens were freed.

Memory System

AI can remember your preferences, project conventions, and important information across sessions.

Memory Files

| Type | Path | Scope |
| --- | --- | --- |
| Global Memory | ~/.invincat/agent/AGENTS.md | Universal for all projects (coding style, personal preferences) |
| Project Memory | {project root}/.invincat/AGENTS.md | Only for current Git repository (architecture conventions, tech stack) |
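Both files are ordinary Markdown, so you can also review or hand-edit them directly, for example:

cat ~/.invincat/agent/AGENTS.md   # global memory
cat .invincat/AGENTS.md           # project memory (run from the project root)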

Manual Memory Update

/remember

Triggers the AI to gather content worth saving from the conversation and write it to the memory files.

Auto Memory Update

The system automatically checks for new content to save every set number of conversation rounds, or triggers early when it detects keywords such as "standards", "conventions", or "preferences" in the conversation.

Configure Auto Memory: Run /auto-memory to open the configuration interface, or manually set in ~/.invincat/config.toml:

[auto_memory]
enabled = true   # Enable auto memory (default: true)
interval = 10    # Number of rounds between checks (default: 10)
on_exit = true   # Write marker on exit, trigger early next startup (default: true)

Skill System

Skills are predefined workflow templates for reusing complex task steps.

Using Skills

/skill:web-research Search for LangGraph best practices
/skill:code-review Check code quality in src/ directory

Skill Locations

| Location | Path | Description |
| --- | --- | --- |
| Built-in Skills | Installed with package | remember, skill-creator |
| Global Custom | ~/.invincat/agent/skills/ | Available across projects |
| Project-level | .invincat/skills/ | Only available in current project |

Creating Custom Skills

/skill-creator

Starts an interactive wizard that guides you through creating and saving new skills.


Session Management

View and Switch Sessions

/threads

Opens the session browser, displaying all historical conversations (time, message count, branch, etc.).

Start New Conversation

/clear

Clears the current conversation and starts a new session (old sessions are still saved and can be retrieved via /threads).


Slash Commands

Type / in the input box and press Tab to view and autocomplete all commands.

Session

| Command | Description |
| --- | --- |
| /clear | Clear current conversation, start new session |
| /threads | Browse and restore historical sessions |
| /quit / /q | Exit program |

Model & Interface

| Command | Description |
| --- | --- |
| /model | Switch or manage model configurations |
| /theme | Switch color theme |
| /language | Switch interface language (Chinese / English) |
| /tokens | View token usage details |

Context & Memory

| Command | Description |
| --- | --- |
| /offload / /compact | Manually compress context, free tokens |
| /remember | Manually trigger memory update |
| /auto-memory | Configure auto memory behavior |

Tools & Extensions

| Command | Description |
| --- | --- |
| /mcp | View connected MCP servers and tools |
| /editor | Edit current input in external editor |
| /skill-creator | Interactive wizard for creating new skills |

Others

| Command | Description |
| --- | --- |
| /help | Display help information |
| /version | Display version number |
| /reload | Reload configuration files |
| /trace | Open current conversation in LangSmith (requires configuration) |

FAQ

Q: No response on first launch? You need to configure the model first. Run /model → Press Ctrl+N to register a model → Fill in the API Key.

Q: How to interrupt a running task? Press Esc to interrupt the current AI response; if AI is waiting for tool approval, Esc acts as a rejection.

Q: Context too long causing slow response? Run /offload to manually compress history, or wait for automatic compression (triggers when usage exceeds 80%).

Q: How to make AI remember my coding preferences? Just tell AI directly, for example "Remember: my project uses 4-space indentation, no semicolons", and AI will automatically save it to memory files at the appropriate time. You can also run /remember to manually trigger saving.

Q: How to share skills across different projects? Place skill files in the ~/.invincat/agent/skills/ directory for global availability; place in .invincat/skills/ for current project only.
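For example, assuming a skill is stored as its own file or folder under the skills directory (the exact layout is whatever /skill-creator produces), you could promote a project-level skill to global use like this:

cp -r .invincat/skills/code-review ~/.invincat/agent/skills/   # "code-review" is a hypothetical skill name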

Download files

Download the file for your platform.

Source Distribution

invincat_cli-0.1.1.tar.gz (8.0 MB)


Built Distribution


invincat_cli-0.1.1-py3-none-any.whl (460.3 kB)


File details

Details for the file invincat_cli-0.1.1.tar.gz.

File metadata

  • Download URL: invincat_cli-0.1.1.tar.gz
  • Upload date:
  • Size: 8.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for invincat_cli-0.1.1.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 42e08d181e8b0c4e289259f1e1cb466af61097d538b6d681c192a52a820ae56d |
| MD5 | 061dcb5f21c24cb02c991485af3a8e22 |
| BLAKE2b-256 | b9d33028e51e1224b1cc2b15e93b7dc6a89b0d6710d19ab460dc858a8e68170a |
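If you download the sdist manually, you can verify it by comparing its SHA256 digest with the value above:

sha256sum invincat_cli-0.1.1.tar.gz   # output should match the SHA256 digest listed above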


Provenance

The following attestation bundles were made for invincat_cli-0.1.1.tar.gz:

Publisher: publish.yml on dog-qiuqiu/invincat

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file invincat_cli-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: invincat_cli-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 460.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for invincat_cli-0.1.1-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 94c029707cbd51c7d185e1b542fd9ec249a034d4d335dff53a0580abd6a77c0d |
| MD5 | 617349cd17677de3264627ed5355364f |
| BLAKE2b-256 | 93c84de2ce345a60e78377008e77c35e168be00823a7163f1707cfd566bcda5e |


Provenance

The following attestation bundles were made for invincat_cli-0.1.1-py3-none-any.whl:

Publisher: publish.yml on dog-qiuqiu/invincat

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
