iTE - Interactive Terminal Environment
An AI coding agent for your terminal. Connect the model service you want to use and get to work.
Installation
Recommended public install
Install ite globally with pipx:
pipx install ite-agent
Then launch it:
ite
Option 1: Development Install (Local)
Clone the repository and install in editable mode:
git clone https://github.com/ThatSaxyDev/ite.git
cd ite
pip install -e .
Option 2: Install from Source (Global)
Install globally using pipx (recommended) or pip:
Method 1: pipx (recommended)
We recommend pipx so the ite command is available globally without conflicting with other Python packages.
brew install pipx
pipx ensurepath
pipx install ite-agent
ite
Method 2: Standard pip
If you prefer standard pip:
pip install ite-agent
Note: you may need to add your Python binary location to your PATH to run ite directly.
Option 3: Install from Git
pipx install git+https://github.com/ThatSaxyDev/ite.git
Distribution
To distribute ITE, you can build a wheel file:
- Install build: pip install build
- Build the package: python -m build
This generates dist/ite_agent-0.0.23-py3-none-any.whl, which can be shared and installed anywhere:
pipx install ite_agent-0.0.23-py3-none-any.whl
Usage
# Start interactive session
ite
Quickstart
The launch path is intentionally narrow:
- Install ite.
- Start the app with ite.
- Sign in through the hosted browser flow at https://ite.kiishi.space.
- Run /setup.
- Enter your OpenAI-compatible base_url, api_key, and model.
- Send a prompt.
By default, ite now talks to the hosted cloud API at https://ite-cloud-api.onrender.com.
For local API development, override it with ITE_CLOUD_API_URL=http://127.0.0.1:4000.
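The default-plus-override behavior can be sketched as a small helper (resolve_cloud_api_url is a hypothetical name for illustration; ite's actual lookup code may differ):

```python
import os

# Hosted default from the text above; ITE_CLOUD_API_URL overrides it.
DEFAULT_CLOUD_API_URL = "https://ite-cloud-api.onrender.com"

def resolve_cloud_api_url(env=None):
    """Return the cloud API base URL, honoring the env override."""
    env = os.environ if env is None else env
    return env.get("ITE_CLOUD_API_URL", DEFAULT_CLOUD_API_URL)
```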
Supported setup:
- OpenAI-compatible provider endpoint
- provider API key
- exact model name exposed by that provider
Examples include OpenAI-compatible hosted providers and self-hosted gateways that expose the same chat API shape.
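"OpenAI-compatible" here means the provider accepts the standard chat completions request shape. A minimal sketch of that shape (illustrative only; this is not ite's actual client code):

```python
import json

def build_chat_request(base_url, api_key, model, prompt):
    """Assemble an OpenAI-style chat completions request.

    Sketch of the API shape an OpenAI-compatible provider expects;
    the base_url typically already includes /v1 when required.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body
```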
Skills
ite supports interoperable SKILL.md bundles for reusable specialist behavior.
- Put shared skills in .agents/skills
- Put private local overrides in .ite/skills
- Use /skills to list, inspect, and activate them
- Trust repo-provided skills with /skills trust
- Install local packs with /skills add <path>
See docs/SKILLS.md for the supported roots, frontmatter, and commands.
Configuration
iTE works with the model service and credentials you choose.
# Start iTE
ite
# Configure your provider (required on first run)
/setup
# Enter your provider base URL, API key, and model name
# Then start prompting immediately
If sign-in fails:
- verify that the browser opened the real hosted app
- verify that the API origin and web origin match the deployed environment
- sign out and run the hosted login flow again
If setup fails:
- confirm the provider uses an OpenAI-compatible API shape
- confirm the base URL is correct, including /v1 when required
- confirm the model name is exactly what the provider exposes
- confirm the API key is valid for that provider
What's available now:
- Sign in with GitHub for session management
- setup with any OpenAI-compatible provider
- Full terminal AI coding experience
Coming soon:
- Bundled model access (no provider key needed)
- Paid plans with hosted inference
Architecture
ITE is an AI coding agent combining four major capabilities: a terminal UI for interactive conversation, tool execution with policy controls, persistent multi-layered memory, and session-based state management with recovery features.
Core Components
CLI (main.py) — The main orchestrator handles terminal interaction: reading multi-line input with paste detection, processing slash commands, detecting user intent for planning vs execution, and rendering the TUI. It includes intelligent paste detection that waits briefly for follow-up lines, intent detection that prompts users to enable/disable plan mode appropriately, and auto-save after every agent turn plus checkpoints every 5 turns.
Agent (agent/agent.py) — Drives the core agentic loop with plan mode handling, memory integration, response controls, and automatic todo progression.
- Plan Mode: A state machine with phases "idle" → "asking_questions" → "writing_plan" → "awaiting_implementation_confirmation" → "executing". The agent asks 3-5 clarifying questions based on task complexity, then writes a plan for user approval before execution begins.
- Execution Mode: Seeds todo items automatically from multi-step user requests, then auto-completes them based on tool invocations (file writes complete implementation todos, test/lint commands complete verification todos).
- Memory Integration: Intercepts explicit memory instructions ("remember...") and exact recall probes ("what phrase did I ask you to remember") for direct processing without LLM calls.
- Response Controls: Reads user preferences from memory and adjusts outputs (e.g., flattening bullet points if user prefers that).
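The plan-mode phase progression above can be sketched as a minimal state machine. The phase names come from the source; the transition logic here is illustrative only, not ite's actual implementation:

```python
# Plan-mode phases in order, as described in the text above.
PHASES = [
    "idle",
    "asking_questions",
    "writing_plan",
    "awaiting_implementation_confirmation",
    "executing",
]

class PlanMode:
    """Illustrative linear state machine over the plan-mode phases."""

    def __init__(self):
        self.phase = "idle"

    def advance(self):
        """Move to the next phase; "executing" is terminal."""
        i = PHASES.index(self.phase)
        if i < len(PHASES) - 1:
            self.phase = PHASES[i + 1]
        return self.phase
```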
Session (agent/session.py) — Maintains all per-conversation state including the LLM client, tool registry, context manager, memory manager, plan state, todo tool, change history, hook system, and MCP manager.
Session Persistence (agent/session_manager.py)
- Sessions stored in {data_dir}/sessions/{session_id}.json
- Checkpoints in {data_dir}/checkpoints/{session_id}_{timestamp}.json
- Snapshot compaction keeps the 12 most recent tool results intact, truncates older ones to 480 chars, tool call args to 320 chars, and messages over 12K chars
- Atomic writes with fsync for reliability
- Automatic quarantine of corrupt session files
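The atomic-write-with-fsync pattern is conventionally implemented as write-to-temp, fsync, then rename, so a crash mid-write leaves either the old file or the new one, never a truncated mix. A generic sketch (not ite's actual code):

```python
import os
import tempfile

def atomic_write_text(path, text):
    """Write text to path atomically: temp file + fsync + rename."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())  # force bytes to disk before rename
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on failure
        raise
```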
Memory System (memory/manager.py)
Four stores with different scopes:
| Store | Scope | Location |
|---|---|---|
| short_term | Per-session | {data_dir}/memory/sessions/{session_id}/ |
| long_term | User-global | {data_dir}/memory/long_term.json |
| semantic | Per-workspace | {data_dir}/memory/projects/{hash}/semantic.json |
| episodic | Per-workspace | {data_dir}/memory/projects/{hash}/episodic.json |
Retrieval uses weighted scoring: lexical match (query terms in key/summary/value), hotness (recency × access frequency with 7-day half-life decay), and store-specific boosts to favor recent over global memory. Supports conditional preferences that activate only in specific query contexts (e.g., "prefer bullet lists when explaining architecture").
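The 7-day half-life decay can be sketched as follows; the way recency and access frequency are combined here is illustrative, not ite's exact scoring code:

```python
import math

HALF_LIFE_DAYS = 7.0

def hotness(age_days, access_count):
    """Recency x access frequency with a 7-day half-life.

    An entry's recency weight halves every 7 days; frequently
    accessed entries score higher. Coefficients are illustrative.
    """
    recency = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return recency * math.log1p(access_count)
```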
Tools System
- Registry (tools/registry.py): Maintains available tools with OpenAI function-calling schemas
- Builtin Tools (tools/builtin/): todos (planning/execution scopes), memory, plan_question, file operations, shell, glob, grep, web_search, web_fetch
- MCP Integration (tools/mcp/): Connects to external MCP servers, transparently registering their tools
- Policy (tools/policy.py): Sandbox restrictions for file operations
- Approval Manager: Gates dangerous operations (deletions, destructive shell) with TUI callbacks
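An OpenAI function-calling schema has a well-known shape. A hypothetical entry for the grep builtin might look like this (the parameter names here are assumptions, not ite's actual schema):

```python
# Hypothetical schema for a grep-like builtin, in the standard
# OpenAI function-calling format the registry exposes to the model.
GREP_TOOL_SCHEMA = {
    "type": "function",
    "function": {
        "name": "grep",
        "description": "Search files for a regular expression.",
        "parameters": {
            "type": "object",
            "properties": {
                "pattern": {
                    "type": "string",
                    "description": "Regex to search for.",
                },
                "path": {
                    "type": "string",
                    "description": "Directory to search.",
                },
            },
            "required": ["pattern"],
        },
    },
}
```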
Context Management (context/manager.py)
Maintains conversation history with token estimation, triggers compression via ChatCompactor when nearing limits, and prunes tool outputs. Compression generates a summary using the LLM and records it as an episodic memory entry.
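A common heuristic for such token estimates is roughly four characters per token. A sketch of the trigger check (the heuristic and the 0.8 threshold are illustrative assumptions, not ite's actual values):

```python
def estimate_tokens(messages):
    """Rough token estimate: ~4 characters per token (common heuristic)."""
    return sum(len(m.get("content", "")) for m in messages) // 4

def needs_compaction(messages, limit, threshold=0.8):
    """Trigger compression when the estimate nears the context limit."""
    return estimate_tokens(messages) >= limit * threshold
```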
Additional Subsystems
- Loop Detector (context/loop_detector.py): Detects repetitive patterns and injects loop-breaker prompts
- Change History (agent/change_history.py): Records file diffs from successful writes
- Hook System (hooks/): Lifecycle callbacks (before/after agent runs)
- TUI (ui/tui.py): Rich-based terminal UI with streaming, tool visualization, and confirmation prompts