Pythinker Code is your next CLI agent.


Pythinker Code

Your terminal-native AI engineering agent.

Read code. Edit files. Run commands. Search the web. Plug into your IDE. All from the shell you already live in.





๐ŸŒ Website ย ยทย  โšก Quick Start ย ยทย  โœจ Features ย ยทย  ๐Ÿงฉ IDE Integration ย ยทย  ๐Ÿ”Œ MCP ย ยทย  ๐Ÿ” Privacy ย ยทย  ๐Ÿ› ๏ธ Develop



Pythinker Code terminal demo

💡 What is Pythinker?

Pythinker Code is an open-source AI coding agent that lives in your terminal. Unlike chat-based assistants stuck behind a browser tab, Pythinker can read your repo, edit files, run shell commands, browse the web, and call MCP tools, all in a single iterative loop driven by the model of your choice.

It speaks the Agent Client Protocol (ACP), so it slots cleanly into ACP-aware editors like Zed and JetBrains. It loads Model Context Protocol (MCP) servers, so the same tools your other agents use just work. And it's hackable: subagents, skills, hooks, and plugins are all first-class extension points.

🎯 One agent, one shell, one workflow. No tab-switching. No context loss. No magic.


🆕 What's New in 2.1.0

A focused refresh of the TUI and slash-command UX.

  • Selectors package: interactive /theme, /thinking, /model, /login, /settings, /extension, and /show-images panels replace the old numeric/text prompts.
  • /thinking slash command: toggle reasoning effort live, mid-session.
  • /settings panel: a real SettingsList over your Config (theme, default model, TUI style, default thinking, telemetry, loop limits, background tasks).
  • Card-style TUI polish: bordered shell card, footer/toolbar, and a full set of tool renderers (read / write / edit / grep / find / bash / agent), plus a diff component. Subagent cards show a running-dots spinner while they work.
  • Selector framework: SelectorHeader sentinel and per-row on_change callback for richer custom selectors.
  • Prompt templates: discovery now looks in ~/.pythinker/prompts and <project>/.pythinker/prompts. The legacy directory lookup has been retired.
  • TUI style flag: only card (default) and pythinker are accepted; the legacy alias has been dropped.

Upgrade with pythinker update or pip install --upgrade pythinker-code.


✨ Features

🖥️ Terminal-First

Plan, edit, run, and verify without leaving your shell. Every action is visible, scriptable, and auditable.

⚡ Shell Command Mode

Press Ctrl-X to drop into a direct shell prompt inside the agent. Run commands, then snap back into AI mode with full context preserved.

🧩 ACP IDE Integration

Run pythinker acp and any Agent Client Protocol editor (Zed, JetBrains, and more) gets a full Pythinker session inline.

🔌 MCP Tool Loading

Manage stdio and HTTP MCP servers with pythinker mcp. OAuth-backed servers, persistent config, ad-hoc files: all supported.

🤖 Subagents & Skills

Delegate focused work to built-in subagents. Load reusable instructions via /skill:<name> and bundled prompt flows via /flow:<name>.

🪝 Hooks & Plugins

Observe or block tool execution with hook events. Install community extensions with pythinker plugin.

🌐 Web & Visualization UIs

Optional web frontend and visualization frontend ship alongside the CLI for richer inspection workflows.

🤖 Bring Your Own Model

Swap providers and models per-session: --model openai/gpt-5.5, hosted Pythinker models, or your own keys.

Note: Built-in shell commands such as cd are not yet supported in shell command mode.

Shell command mode demo

⚡ Quick Start

✨ Recommended install (clean, with logo)

curl -fsSL https://raw.githubusercontent.com/mohamed-elkholy95/Pythinker-Code/main/scripts/install.sh | sh

Windows PowerShell:

irm https://raw.githubusercontent.com/mohamed-elkholy95/Pythinker-Code/main/scripts/install.ps1 | iex

The installer fetches uv if missing, installs pythinker-code quietly, and prints a single-line confirmation instead of the full dependency wall.

🚀 One-off run with uvx

uvx pythinker-code

📦 Install as a uv tool

uv tool install pythinker-code
pythinker

🔐 Authenticate (optional)

For hosted Pythinker models or ACP terminal auth:

pythinker login

💬 Try it out

# Interactive session
pythinker

# One-shot prompt
pythinker --prompt "summarize this repository and suggest the next test to add"

# Pick a specific model
pythinker --model openai/gpt-5.5

# Inline config override
pythinker --config '{"default_thinking": true}'

๐Ÿ  Using Local Models (LM Studio & Ollama)

Run Pythinker entirely on your own machine โ€” no API key, no cloud. Pythinker speaks each runtime's OpenAI-compatible API, so tools, streaming, JSON mode, vision, and reasoning_effort all work the same as with hosted providers.

LM Studio

1. Set up LM Studio.

  • Install LM Studio and download at least one chat model.
  • In the LM Studio app, open the model and raise its Context Length (gear icon → Context Length). See "Context length matters" below.
  • Start the server: Developer → Status: Running (or lms server start --port 1234).

2. Connect Pythinker.

pythinker login --lm-studio

This auto-discovers every chat-capable model loaded in LM Studio, registers each as lm-studio/<model-id>, and picks the largest-context one as your default. Embedding models are filtered out.

3. Use it.

# Default LM Studio model
pythinker -p "explain quicksort"

# Specific model
pythinker -m lm-studio/qwen/qwen3-coder-next -p "write a python http server"

# Interactive shell, then switch models with /model
pythinker

4. Disconnect.

pythinker logout --lm-studio

Ollama

# 1. start the server in one terminal
ollama serve

# 2. pull a model
ollama pull llama3.1:8b

# 3. connect Pythinker
pythinker login --ollama

# 4. use it
pythinker -p "explain monad transformers"
pythinker -m ollama/llama3.1:8b -p "..."
pythinker logout --ollama

Discovery uses Ollama's /api/tags for the model list and /api/show per model to read the real context window.

Remote LM Studio / Ollama (LAN host or alternate port)

pythinker login --lm-studio --base-url http://192.168.1.10:1234/v1
pythinker login --ollama    --base-url http://lan-box:11434/v1

The override is saved in your config and used by every subsequent run.

From inside the interactive shell

The same wiring is available as slash commands:

/login lm-studio        # or  /login lmstudio  (no dash also accepted)
/login ollama
/logout lm-studio
/logout ollama
/login                  # opens a chooser; entries 9 and 10 are the local providers
/model lm-studio/google/gemma-4-e4b   # switch model mid-session

โš ๏ธ Context length matters (a common gotcha)

Pythinker's agent prompt โ€” system instructions + tool schemas + skills + your message + recent history โ€” is large. Tens of thousands of tokens before you've even sent your first message.

LM Studio loads a model with a small default context window (often 4096). If you start chatting against that, you'll see:

LLM provider error: Error: The number of tokens to keep from the initial
prompt is greater than the context length (n_keep: 16690 >= n_ctx: 4096).

The shell now prints a friendly recovery hint when this happens, but the cure is in LM Studio:

  1. In LM Studio, open the model in the Chat tab and click the gear/settings icon (or My Models → Edit).
  2. Set Context Length to at least 32768, and prefer 131072 if your VRAM allows. Practical experience: 64k still triggers errors during longer sessions; 128k is a safer floor.
  3. Reload the model (LM Studio prompts you).
  4. Restart Pythinker so it picks up the new state (Ctrl+D then pythinker, or pythinker -r <session-id> to resume).

Tip: the bigger you set the context, the more VRAM the model uses. If you OOM, try a smaller quantization (e.g., Q4_K_M instead of Q8_0) or a smaller model variant.

Ollama configures context per-request and Pythinker reads the model's max from /api/show, so this gotcha is mostly LM-Studio-specific.
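For illustration, a client can pin the per-request context through the standard num_ctx option on Ollama's /api/generate and /api/chat endpoints; the payload below is a sketch of that mechanism, not Pythinker's exact request:

```python
import json

# Per-request context override via Ollama's "num_ctx" option; the value here
# would come from the model's maximum as reported by /api/show.
payload = {
    "model": "llama3.1:8b",
    "prompt": "explain monad transformers",
    "options": {"num_ctx": 131072},
    "stream": False,
}
body = json.dumps(payload)  # POST this to http://localhost:11434/api/generate
print(json.loads(body)["options"]["num_ctx"])  # 131072
```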

VRAM-friendly model picks

Local models vary wildly in memory use. Rough guide on a 16 GB GPU (e.g., RTX 5080 mobile):

| Model size | Quant | Approx. VRAM | Fits 16 GB? |
|---|---|---|---|
| 2-4 B | Q4-Q8 | 2-4 GB | Yes, easily |
| 7-8 B | Q4 | 5-6 GB | Yes |
| 7-8 B | Q8 | 8-9 GB | Yes |
| 13-14 B | Q4 | 8-10 GB | Yes |
| 27-31 B | Q4 | 17-20 GB | Tight / no |
| 27-31 B | Q8 | 30-35 GB | No |

If LM Studio errors with Failed to load model, you've exceeded VRAM: pick a smaller model or lower-bit quantization.
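The table roughly follows weight-size arithmetic: weights take (parameters × bits / 8) bytes, plus runtime overhead for KV cache and buffers. A back-of-envelope estimator (the 20% overhead factor is a loose assumption, and real usage grows with context length):

```python
# Rough VRAM estimate: quantized weight size plus ~20% overhead. A sanity
# check only; actual usage depends on context length and the runtime.

def approx_vram_gb(params_billions: float, quant_bits: int, overhead: float = 1.2) -> float:
    weight_gb = params_billions * quant_bits / 8
    return round(weight_gb * overhead, 1)

print(approx_vram_gb(8, 4))   # 4.8 -> comfortably fits a 16 GB GPU
print(approx_vram_gb(30, 8))  # 36.0 -> does not fit
```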

Environment variables

These override the defaults at both login and runtime:

| Variable | Purpose |
|---|---|
| LM_STUDIO_BASE_URL | Override http://localhost:1234/v1 |
| LM_STUDIO_API_KEY | Set if you've enabled token auth in LM Studio |
| OLLAMA_BASE_URL | Override http://localhost:11434/v1 |
| OLLAMA_API_KEY | Rarely needed (Ollama is unauthenticated by default) |

Example:

LM_STUDIO_BASE_URL=http://workstation.lan:1234/v1 pythinker -p "..."

Refreshing the model list

If you load/unload models in LM Studio (or ollama pull/rm), re-run login to refresh:

pythinker login --lm-studio    # or --ollama

(Pythinker intentionally does NOT auto-refresh local providers in the background; login owns that state, so manual edits to your config aren't silently overwritten.)


🧩 IDE Integration via ACP

Pythinker speaks Agent Client Protocol natively. Point your ACP-compatible editor at pythinker acp and you get a multi-session agent server inside your IDE.

๐Ÿ“ Configuration for Zed / JetBrains
{
  "agent_servers": {
    "Pythinker Code": {
      "type": "custom",
      "command": "pythinker",
      "args": ["acp"],
      "env": {}
    }
  }
}

The ACP server provides:

| Capability | Description |
|---|---|
| 🔑 Terminal auth | pythinker login flow exposed to the IDE |
| 📂 Session listing & resume | Pick up where you left off |
| 🔄 Hot model swap | Change models for a running ACP session |
ACP IDE integration demo

🔌 MCP Tooling

Pythinker loads Model Context Protocol tools from persistent config or ad-hoc files. Same tools, every agent: no rewriting.

🛠️ Manage persistent MCP servers

# 🌐 Streamable HTTP server with API key
pythinker mcp add --transport http context7 https://mcp.context7.com/mcp \
  --header "CONTEXT7_API_KEY: ctx7sk-your-key"

# 🔐 Streamable HTTP server with OAuth
pythinker mcp add --transport http --auth oauth linear https://mcp.linear.app/mcp

# 💻 stdio server
pythinker mcp add --transport stdio chrome-devtools -- npx chrome-devtools-mcp@latest

# 📋 List, authorize, test, and remove
pythinker mcp list
pythinker mcp auth linear
pythinker mcp test chrome-devtools
pythinker mcp remove chrome-devtools

📄 Use an ad-hoc MCP config file

{
  "mcpServers": {
    "context7": {
      "url": "https://mcp.context7.com/mcp",
      "headers": {
        "CONTEXT7_API_KEY": "YOUR_API_KEY"
      }
    },
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    }
  }
}
pythinker --mcp-config-file /path/to/mcp.json

🧬 Extensibility

Pythinker is a small, extensible runtime, not a monolith. Build on it.

| Extension point | What it does | Where to look |
|---|---|---|
| 🤖 Agents & subagents | YAML specs define tools, prompts, and built-in subagent types | src/pythinker_code/agents/ |
| 🎓 Skills | /skill:<name> loads reusable instructions on demand | bundled & user-defined |
| 🌊 Flows | /flow:<name> executes bundled prompt flows | bundled & user-defined |
| 🪝 Hooks | Observe or block tool execution; integrate policy or automation | hook events API |
| 🧩 Plugins | Installable extension packages | pythinker plugin |
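To make the hooks entry concrete, here is a generic pre-execution hook pattern. This illustrates the observe-or-block idea only; Pythinker's real hook-events API may look different, and every name below is hypothetical:

```python
# Generic pre-execution hook dispatch: each hook sees the tool name and its
# arguments, and returning False vetoes the call before it runs.

from typing import Callable

Hook = Callable[[str, dict], bool]

def run_tool(name: str, args: dict, hooks: list[Hook]) -> str:
    for hook in hooks:
        if not hook(name, args):
            return f"{name}: blocked by hook"
    return f"{name}: executed"

def deny_rm(tool: str, args: dict) -> bool:
    """Policy hook: refuse destructive shell commands."""
    return not (tool == "bash" and "rm -rf" in args.get("command", ""))

print(run_tool("bash", {"command": "rm -rf /"}, [deny_rm]))  # bash: blocked by hook
print(run_tool("read", {"path": "README.md"}, [deny_rm]))    # read: executed
```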

๐Ÿ—๏ธ Architecture

Pythinker Code architecture diagram

🔐 Privacy & Telemetry

Pythinker is the agent framework, not the LLM. You bring your own API key (OpenAI, Anthropic, your local LM Studio model, etc.); your prompts and the model's responses go directly between your terminal and the model provider you configured. Pythinker never sees, stores, or forwards them.

To improve the framework itself we collect a small amount of diagnostic telemetry about how the agent runs. It is strictly anonymous and never includes your prompts, model output, file contents, file paths, or any user-identifying data. It flows through two channels:

| Channel | What lands there | Endpoint |
|---|---|---|
| Errors (Sentry protocol) | Unhandled exceptions and crash stack traces, with absolute paths above site-packages/ rewritten to <env>/ so home directories don't leak | errors.pythinker.com (self-hosted Bugsink) |
| Traces + structured logs (OpenTelemetry) | Lifecycle events (session_started, started, model_switch), agent-loop spans (pythinker.turn / pythinker.llm / pythinker.tool), and per-event counters | otel.pythinker.com (self-hosted SigNoz) |
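The path scrubbing mentioned for the errors channel can be sketched in a few lines. This is an assumed shape, not the shipped implementation: everything before a site-packages/ segment in a stack-trace path is replaced so home directories never appear.

```python
import re

def scrub_path(path: str) -> str:
    """Replace any prefix before site-packages/ with <env>/; leave other paths alone."""
    return re.sub(r"^.*?(?=site-packages/)", "<env>/", path)

print(scrub_path("/home/alice/.venv/lib/python3.13/site-packages/pythinker_code/cli.py"))
# <env>/site-packages/pythinker_code/cli.py
```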

What we collect

  • Lifecycle events: session start, command-line flags actually used (booleans only), startup timing, model name (just the identifier, e.g. claude-opus-4-7), thinking-mode toggle, plan-mode toggle.
  • Agent-loop spans: turn duration, step count, stop reason (no_tool_calls / max_steps / error), tool name (Read, Bash, Edit, …), tool success/failure, tool duration, LLM call duration, input/output token counts (numbers only, never the content).
  • Crashes: exception class name, scrubbed stack trace, library versions. We do not send local variable values.
  • Static context: pythinker version, OS family, Python version, terminal type (TERM_PROGRAM), CI flag (CI env var presence), locale.
  • A persistent, random device_id so we can count "how many distinct installs" without identifying a person.
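A device_id like the one described can be as simple as a UUID written once to the config directory and reused on every later run; the file name and location below are assumptions for illustration, not the real layout:

```python
import tempfile
import uuid
from pathlib import Path

def device_id(config_dir: Path) -> str:
    """Return a stable random id, generating and persisting it on first call."""
    id_file = config_dir / "device_id"
    if id_file.exists():
        return id_file.read_text().strip()
    fresh = str(uuid.uuid4())
    config_dir.mkdir(parents=True, exist_ok=True)
    id_file.write_text(fresh)
    return fresh

config = Path(tempfile.mkdtemp()) / ".pythinker"
first = device_id(config)
assert device_id(config) == first  # same id on every subsequent call
```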

What we never collect

  • Your prompts, the model's responses, or any conversation content
  • File contents, file paths, working directory names, or workspace structure
  • Your API keys, OAuth tokens, environment variables
  • Your real name, email, IP address, hostname (host name field is dropped at the edge collector)
  • Tool arguments (e.g. what file you read, what command you ran)

Opting out

Pick whichever fits your workflow; all three are equivalent:

# 1. Per-invocation CLI flag
pythinker --no-telemetry

# 2. Environment variable (works in shells, .env files, CI configs)
export PYTHINKER_DISABLE_TELEMETRY=1
pythinker

# 3. Permanently in your config file (~/.pythinker/config.toml)
[default]
telemetry = false

Setting any of these at startup short-circuits Sentry initialization, OTel exporter creation, and the in-process event sink. No network requests are made to the telemetry endpoints.
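The three mechanisms above collapse to a single boolean at startup. A minimal sketch of that precedence, with hypothetical internal names (any disable signal wins over the config default):

```python
import os

def telemetry_enabled(cli_no_telemetry: bool, config_telemetry: bool) -> bool:
    if cli_no_telemetry:                                      # 1. --no-telemetry flag
        return False
    if os.environ.get("PYTHINKER_DISABLE_TELEMETRY") == "1":  # 2. environment variable
        return False
    return config_telemetry                                   # 3. config.toml value

print(telemetry_enabled(cli_no_telemetry=True, config_telemetry=True))  # False
```

Because this check runs before any exporter is constructed, a disable signal also skips creating the network clients entirely.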

Pointing telemetry at your own infrastructure

If you operate pythinker for a team and want telemetry routed to your own SigNoz / Bugsink instead, override the endpoints via environment variables:

export PYTHINKER_SENTRY_DSN="https://<key>@your-bugsink.example.com/<project>"
export PYTHINKER_OTEL_ENDPOINT="https://your-otel-collector.example.com"
export PYTHINKER_OTEL_TOKEN="<your bearer token>"

The defaults point at infrastructure operated by the pythinker maintainers; you don't need to set anything to use them.


๐Ÿ› ๏ธ Development

๐Ÿ Prepare the workspace

git clone https://github.com/mohamed-elkholy95/Pythinker-Code.git
cd Pythinker-Code
make prepare

๐Ÿงฐ Common commands

โ–ถ๏ธ Run & iterate

uv run pythinker          # CLI from source
make format               # format all packages
make check                # lint + type-check

🧪 Test

make test                 # all unit + e2e tests
make ai-test              # AI-driven tests
make test-pythinker-code   # CLI only
make test-pythinker-core  # Core only
make test-pythinker-host  # Host only
make test-pythinker-sdk   # SDK only

๐ŸŒ Frontends

make web-back             # web backend
make web-front            # web frontend
make vis-back             # vis backend
make vis-front            # vis frontend

📦 Build

make build                # Python packages
make build-bin            # standalone binary
make help                 # all targets

💡 make build and make build-bin build and embed the web and visualization frontends before packaging.


🗂️ Project Layout

pythinker-code/
├── 📦 src/pythinker_code/         CLI runtime · tools · UIs · ACP · MCP · hooks · plugins · skills · web · vis backends
├── 🧱 packages/
│   ├── pythinker-core/           Provider-agnostic message, tool, and chat-provider abstractions
│   ├── pythinker-host/           Local/remote host filesystem and command execution
│   └── pythinker-code/           Console-script distribution package
├── 🧰 sdks/pythinker-sdk/        Python SDK
└── 🧪 tests/ · tests_e2e/ · tests_ai/   Unit · wire/CLI e2e · AI-driven test suites

๐Ÿค Contributing

Contributions are warmly welcome โ€” bug reports, PRs, plugins, skills, and docs all help.

If Pythinker helps you, a โญ on GitHub goes a long way.


📜 License

Distributed under the Apache-2.0 License. See LICENSE for the full text and NOTICE for attributions.


Built with ❤️ for engineers who live in the terminal.

🌐 pythinker.com · 📦 PyPI · 🐙 GitHub · 🧩 ACP · 🔌 MCP
