
Gede

🚀 A powerful and feature-rich CLI for interacting with multiple LLM providers

Gede is a powerful command-line interface that seamlessly integrates with multiple LLM providers including OpenAI, Anthropic, and DeepSeek. It features local chat history management, built-in tool calling capabilities, and MCP (Model Context Protocol) integration for enhanced AI interactions.

Features

  • 🤖 Multi-Provider Support: OpenAI, Anthropic, DeepSeek, Qwen, Baidu, OpenRouter, Moonshot, and more
  • 💬 Chat Management: Create public, private (encrypted), and cloned conversations
  • 🛠️ Rich Tools Ecosystem: Built-in web search, URL reading, and custom tools
  • 🔌 MCP Server Integration: Connect to Model Context Protocol servers
  • 📦 Profile Support: Manage multiple configurations with profiles
  • 🌐 Web Search: Enable the AI model's built-in web search capability

Quick Start

Prerequisites

  • Python 3.10 or higher
  • uv package manager

Install

uv pip install gede

Quick Example

# Start a new chat
gede

# Or start with a specific model
gede --model openai:gpt-4o

# Start in private mode
gede --private

# Use with tools enabled
gede --tools web_search,now

Slash Commands

When using Gede, you can use slash commands to perform various operations. Type /help to see all commands, or /help KEYWORD to search for specific commands.

Chat Management

| Command | Description |
| --- | --- |
| /new | Start a new public chat (plain text) |
| /new-private | Start a new private chat (password-encrypted) |
| /chat-info | Display current chat details (ID, title, model, message count, tools, MCP servers) |
| /clone-chat | Create a new chat with the same settings (instruction, model, parameters) |
| /quit | Exit the application (unsaved private chats won't persist) |

Instruction & Prompt Management

| Command | Description |
| --- | --- |
| /set-instruction <TEXT> | Set the system instruction. Use \\ for multi-line mode (Esc+Enter to submit) |
| /get-instruction | Display the current system instruction |
| /select-instruction | Choose from predefined instructions in ~/.gede/instructions/ |
| /select-prompt | Select a predefined prompt as the input message from ~/.gede/prompts/ |

Model Settings

| Command | Description |
| --- | --- |
| /select-llm [PROVIDER] [--no-cache] | Switch AI model. Use --no-cache to refresh the model list |
| /set-message-num NUMBER | Control chat history length (0 = all messages) |
| /set-model-settings KEY VALUE | Adjust parameters: temperature (0-2), top_p (0-1), max_tokens, frequency_penalty (-2 to 2), presence_penalty (-2 to 2), reasoning_effort |
| /get-model-settings | Display current model parameters |
| /set-model-reasoning <LEVEL> | Control reasoning depth: minimal, low, medium, high, auto, or off |
| /set-model-web-search <on\|off\|auto> | Toggle web search capability |

File Operations

| Command | Description |
| --- | --- |
| /save | Save the current chat. Public: auto-saved with a generated title. Private: requires a password |
| /load-chat | Load a public chat from ~/.gede/chats/public/ (interactive selection) |
| /load-private-chat | Load a private chat from ~/.gede/chats/private/ (password required) |
| /export <FILEPATH> | Export the chat to a text file. Relative paths are saved under ~/.gede/chats/exports/; absolute paths are used as given |

Tools & MCP

| Command | Description |
| --- | --- |
| /select-tools | Enable/disable built-in tools (Space to toggle, Enter to confirm) |
| /select-mcp | Connect to MCP servers (Space to toggle, Enter to confirm) |

Utility

| Command | Description |
| --- | --- |
| /cleanup | Clear the terminal screen |
| /help [KEYWORD] | Show all commands or search by keyword |

CLI Usage

Command Line Arguments

Gede supports the following command line arguments:

  • --profile <profile_name>: Use specified configuration profile (default: default)
  • --log-level <level>: Set log level, options: DEBUG, INFO, WARNING, ERROR, CRITICAL
  • --model <provider_id:model_id>: Specify default model, e.g.: openai:gpt-4o
  • --instruction <text>: Set system prompt
  • --private: Start private session
  • --reasoning-effort <effort>: Set reasoning mode, options: minimal, low, medium, high, off, auto
  • --web-search <mode>: Enable or disable model's built-in web search, options: on, off, auto
  • --tools <tool_list>: Set enabled tools list, multiple tools separated by commas, e.g.: web_search,now,read_page
  • --trace: Enable trace mode for analyzing detailed execution information of agent calls. Uses Arize Phoenix if the arize-trace extra is installed, otherwise uses OpenAI's default tracing (requires OPENAI_API_KEY)
  • --mcp <server_list>: Enable MCP servers, multiple servers separated by commas

Usage Examples

# Start with default configuration
gede

# Start with specified model
gede --model openai:gpt-4o

# Enable tools and private mode
gede --tools web_search,now --private

# Set reasoning mode and log level
gede --reasoning-effort high --log-level DEBUG

# Use specific profile
gede --profile my_profile

Configuration

Storage

On first launch, Gede will automatically create a configuration directory at ~/.gede/ with:

  • config
    • .env - Configuration file for API keys
    • mcp.json - MCP server configuration
    • profiles.json - Profile configuration
  • chats/public/ - Public chat storage
  • chats/private/ - Encrypted private chat storage
  • instructions/ - Custom system instructions
  • prompts/ - Predefined prompts

Gede uses environment variables to store API keys for various LLM providers. The configuration file is located at ~/.gede/config/.env. Edit this file to add your actual API keys.
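For illustration, a minimal ~/.gede/config/.env might look like the following. The values are placeholders; set only the providers you actually use, with the variable names listed under Supported Providers:

```
# ~/.gede/config/.env
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
DEEPSEEK_API_KEY=your-deepseek-key
```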

Supported Providers

When you first run Gede, a default config file will be automatically created. Supported providers include:

  • 302.ai: AI302_API_KEY
  • OpenRouter: OPENROUTER_API_KEY
  • OpenAI: OPENAI_API_KEY
  • Anthropic: ANTHROPIC_API_KEY
  • Baidu (ERNIE): WENXIN_API_KEY
  • SiliconFlow: SILICONFLOW_API_KEY
  • Aliyun (Qwen): QWEN_API_KEY
  • Volcengine (Doubao): DOUBAO_API_KEY
  • DeepSeek: DEEPSEEK_API_KEY
  • Moonshot (Kimi): MOONSHOT_API_KEY

The config file also supports:

  • Generate Title Model: Use specific model for chat title generation
  • Phoenix Tracing: Configure observability with Arize Phoenix

Profile

Gede supports profile management to save and reuse your preferred configurations. The profile configuration file is located at ~/.gede/config/profiles.json.

Profile Structure

Each profile can contain the following settings:

  • model: Default model to use (format: provider:model_id)
  • instruction: System instruction/prompt
  • private: Whether to start in private mode (boolean)
  • reasoning_effort: Reasoning depth level (minimal, low, medium, high, auto, off)
  • web_search: Web search mode (on, off, auto)
  • tools: List of enabled tools (e.g., ["web_search", "now", "read_page"])
  • trace: Enable trace mode (boolean)
  • log_level: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
  • mcp: List of MCP servers to auto-connect

Example Configuration

{
  "default": {
    "model": "openai:gpt-4o",
    "instruction": "You are a helpful assistant.",
    "private": false,
    "reasoning_effort": "medium",
    "web_search": "auto",
    "tools": ["web_search", "now", "read_page"],
    "trace": false,
    "log_level": "INFO"
  },
  "coding": {
    "model": "anthropic:claude-sonnet-4-20250514",
    "instruction": "You are an expert programming assistant.",
    "reasoning_effort": "high",
    "tools": ["web_search", "read_page"],
    "log_level": "DEBUG"
  },
  "research": {
    "model": "openai:gpt-4o",
    "instruction": "You are a research assistant specialized in finding and analyzing information.",
    "web_search": "on",
    "tools": ["web_search", "read_page"],
    "mcp": ["filesystem"]
  }
}

Usage

# Use default profile
gede

# Use specific profile
gede --profile coding

# Use profile and override settings
gede --profile research --model deepseek:deepseek-reasoner

Note: Command-line arguments will override profile settings for the current session.
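The precedence rule can be pictured as a simple dictionary merge. A minimal sketch, where effective_settings is a hypothetical helper (not part of Gede's API) and CLI flags that were not passed are assumed to arrive as None:

```python
def effective_settings(profile: dict, cli_args: dict) -> dict:
    """Profile values form the base; explicitly passed CLI flags win."""
    merged = dict(profile)
    # Only flags the user actually set (non-None) override the profile.
    merged.update({k: v for k, v in cli_args.items() if v is not None})
    return merged

# The "research" profile with the model overridden on the command line:
profile = {"model": "openai:gpt-4o", "web_search": "on", "tools": ["web_search", "read_page"]}
cli = {"model": "deepseek:deepseek-reasoner", "web_search": None, "tools": None}
print(effective_settings(profile, cli))
```

Unset profile keys simply fall back to Gede's built-in defaults for the session.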

MCP

The MCP configuration file is located at ~/.gede/config/mcp.json. It allows you to define multiple MCP servers that Gede can connect to.

STDIO Server

Connects to a local process via standard input/output.

  • command (required): The executable command to run.
  • args (optional): List of arguments for the command.
  • env (optional): Dictionary of environment variables.
  • cwd (optional): Working directory for the process.
  • auto_select (optional, default: false): Whether to automatically select this server on startup.
  • enable (optional, default: true): Whether this server is enabled.

Remote Server (SSE / Streamable HTTP)

Connects to a remote MCP server.

  • type (required): Must be either sse or streamable-http.
  • url (required): The URL of the server endpoint.
  • headers (optional): Dictionary of HTTP headers.
  • note (optional): Description or note for the server.
  • auto_select (optional, default: false): Whether to automatically select this server on startup.
  • enable (optional, default: true): Whether this server is enabled.

Example Configuration

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop"
      ],
      "auto_select": true
    },
    "remote-echo": {
      "type": "sse",
      "url": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_TOKEN"
      },
      "note": "My remote MCP server",
      "auto_select": false,
      "enable": true
    }
  }
}

Built-in Tools

  • web_search: web search powered by Exa
  • read_page: fetch and read the content of a URL
  • now: current date and time

Optional Dependencies

Gede supports optional extensions for enhanced functionality:

Arize Phoenix Tracing (arize-trace)

Enable advanced tracing and observability with Arize Phoenix. This extension is used when you enable trace mode with the --trace flag.

Installation:

uv pip install "gede[arize-trace]"

Usage:

When the arize-trace extension is installed and --trace is enabled, Gede will automatically use Arize Phoenix for tracing:

gede --trace

If the extension is not installed, Gede will fall back to OpenAI's built-in tracing (if OPENAI_API_KEY is set).

Configuration:

To use Arize Phoenix, edit ~/.gede/config/.env and configure:

# Phoenix trace endpoint (customize with your project token if needed)
PHOENIX_COLLECTOR_ENDPOINT=https://app.phoenix.arize.com/s/your-project-token/v1/traces

If not configured, it defaults to https://app.phoenix.arize.com.

Develop

# Clone the repository
git clone https://github.com/adow/gede.git
cd gede

# Install dependencies using uv
uv sync

# Run Gede
python3 -m gede.gede

Project Structure

gede/
├── gede/
│   ├── commands/                 # Slash command implementations
│   │   ├── base.py               # Command base class
│   │   ├── chat_commands.py      # Chat management commands
│   │   ├── model_commands.py     # Model selection and settings
│   │   ├── file_commands.py      # File operations (save, load, export)
│   │   └── ...                   # Other command modules
│   ├── llm/
│   │   ├── providers.py          # LLM provider registry
│   │   ├── openai_provider.py    # Individual provider implementations
│   │   ├── anthropic_provider.py
│   │   ├── deepseek_provider.py
│   │   ├── ...                   # Other providers
│   │   ├── tools/                # Built-in tools
│   │   │   ├── web_search.py
│   │   │   ├── read_url_tool.py
│   │   │   └── time_tool.py
│   │   └── mcp/                  # Model Context Protocol integration
│   ├── chatcore.py               # Core chat logic
│   ├── gede.py                   # Main CLI entry point
│   ├── config.py                 # Configuration management
│   ├── encrypt.py                # Encryption utilities
│   ├── profiles.py               # Profile management
│   └── top.py                    # Top-level utilities
├── CONTRIBUTING.md               # Contribution guidelines
├── CODE_OF_CONDUCT.md            # Community code of conduct
├── CHANGELOG.md                  # Version history
├── LICENSE                       # MIT License
├── pyproject.toml                # Python project configuration
├── Dockerfile                    # Docker configuration
└── README.md                     # This file

Technology Stack

  • Language: Python 3.10+
  • CLI Framework: rich, inquirer, prompt-toolkit
  • Encryption: cryptography
  • HTTP Client: httpx
  • Agent Framework: OpenAI Agents SDK
  • Build: uv

Security

  • Password-protected private chats with AES encryption
  • User data stays local by default: chat history is ephemeral and is only persisted when explicitly saved with the /save command
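Conceptually, the private-chat flow pairs a password-derived key with symmetric encryption. A minimal sketch, assuming a Fernet (AES-based) scheme built on the cryptography package that Gede depends on; the actual scheme in encrypt.py may differ, and encrypt_chat/decrypt_chat are hypothetical names:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _derive_key(password: str, salt: bytes) -> bytes:
    """Stretch the password into a 32-byte urlsafe-base64 Fernet key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

def encrypt_chat(plaintext: str, password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # fresh random salt per saved chat
    token = Fernet(_derive_key(password, salt)).encrypt(plaintext.encode())
    return salt, token

def decrypt_chat(salt: bytes, token: bytes, password: str) -> str:
    return Fernet(_derive_key(password, salt)).decrypt(token).decode()
```

The salt must be stored alongside the ciphertext; a wrong password makes decryption fail rather than yield garbage, since Fernet authenticates the token.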

Community

License

Gede is released under the MIT License. See the LICENSE file for details.

Acknowledgments

Thanks to all contributors and the open-source community for support and feedback!

Disclaimer

Gede is provided "as-is" for research and personal use. Users are responsible for complying with LLM provider terms of service and applicable laws when using this tool.
