# Gemini Code
A powerful AI coding assistant for your terminal, powered by Gemini 2.5 Pro with support for other LLM models.
## Features
- Interactive chat sessions in your terminal
- Multiple model support (currently Gemini 2.5 Pro, more coming soon)
- Intelligent context management with auto-compaction warnings and the `/compact` command
- Markdown rendering in the terminal
- Automatic tool usage by the assistant:
  - File operations (view, edit, list, grep, glob)
  - System commands (bash)
  - Web content fetching
## Installation

```bash
# Clone the repository
git clone https://github.com/raizamartin/gemini-code.git
cd gemini-code

# Install the package
pip install -e .
```
## Setup

Before using Gemini CLI, you need to set up your API key:

```bash
# Set up Google API key for Gemini models
gemini setup YOUR_GOOGLE_API_KEY
```
## Usage

```bash
# Start an interactive session with the default model
gemini

# Start a session with a specific model
gemini --model gemini-2.5-pro

# Set the default model
gemini set-default-model gemini-2.5-pro
```
## Interactive Commands

During an interactive session, you can use these commands:

- `/exit` - Exit the chat session
- `/help` - Display help information
- `/compact` - Summarize the conversation to reduce token usage
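A command loop of this shape can be sketched as a simple dispatch table. The function and message names below are illustrative assumptions, not the tool's actual internals:

```python
# Minimal sketch of a slash-command dispatcher for an interactive
# session. Names are hypothetical; the real CLI may differ.

def handle_command(line: str) -> str:
    """Map a /command line to an action and return a status string."""
    commands = {
        "/exit": lambda: "exiting session",
        "/help": lambda: "showing help",
        "/compact": lambda: "summarizing conversation",
    }
    action = commands.get(line.strip())
    if action is None:
        return f"unknown command: {line.strip()}"
    return action()

print(handle_command("/help"))
print(handle_command("/compact"))
```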
## How It Works

### Tool Usage
Unlike direct command-line tools, the Gemini CLI's tools are used automatically by the assistant to help answer your questions. For example:
- You ask: "What files are in the current directory?"
- The assistant uses the `ls` tool behind the scenes
- The assistant provides you with a formatted response
This approach makes the interaction more natural and similar to how Claude Code works.
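The flow above can be sketched as a small tool registry: the model names a tool, the CLI runs it and returns the output for the assistant to format. The registry and `dispatch` helper here are illustrative assumptions, not the project's actual API:

```python
# Sketch of automatic tool dispatch (hypothetical names): the model
# requests a tool by name, the CLI executes it and captures output.
import subprocess

TOOLS = {
    # A minimal "ls" tool backed by the real ls command.
    "ls": lambda args: subprocess.run(
        ["ls", *args], capture_output=True, text=True
    ).stdout,
}

def dispatch(tool_call: dict) -> str:
    """Execute the tool the model requested and return its output."""
    tool = TOOLS[tool_call["name"]]
    return tool(tool_call.get("args", []))
```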
### Context Management
Gemini CLI intelligently manages the conversation context:
- **Warning Threshold (80%)**: When you reach 80% of the token limit, you'll see a warning panel suggesting the `/compact` command
- **Auto-Compact Prompt (95%)**: At 95% of the limit, the CLI asks if you want to automatically compact the conversation
- **Manual Compaction**: You can use `/compact` at any time to summarize the conversation and reduce token usage
The summarization process preserves important context while significantly reducing token count, allowing for virtually unlimited conversation length.
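The two thresholds described above boil down to a ratio check. This is a sketch of that logic under the stated 80%/95% figures, not the CLI's actual code:

```python
# Illustrative context-threshold logic matching the documented
# 80% warning and 95% auto-compact prompts.

WARN_RATIO = 0.80   # show a warning panel suggesting /compact
AUTO_RATIO = 0.95   # prompt to auto-compact the conversation

def context_action(used_tokens: int, token_limit: int) -> str:
    """Return which context-management action the current usage triggers."""
    ratio = used_tokens / token_limit
    if ratio >= AUTO_RATIO:
        return "prompt-auto-compact"
    if ratio >= WARN_RATIO:
        return "warn"
    return "ok"
```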
## Development
This project is under active development. More models and features will be added soon!
## License
MIT
## Download files
Download the file for your platform.
### gemini_code-0.1.7.tar.gz

File metadata:
- Download URL: gemini_code-0.1.7.tar.gz
- Upload date:
- Size: 14.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.1
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `d3e7d7a9c50c89e36c416b87cd09973ffb0e52de61546350d0d6e3bbcd0f6d4b` |
| MD5 | `481d7603c6efd276454a3059c27d4d39` |
| BLAKE2b-256 | `dde733860c2381d73451da13d208cdd56694d82cd9cf8ff4bc167b9ce19c917c` |
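A downloaded archive can be checked against the published SHA256 digest with Python's standard `hashlib` (a quick sketch; any checksum tool works equally well):

```python
# Verify a downloaded file against a published SHA256 digest.
import hashlib

def sha256_hex(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: compare against the digest from the table above.
expected = "d3e7d7a9c50c89e36c416b87cd09973ffb0e52de61546350d0d6e3bbcd0f6d4b"
# assert sha256_hex("gemini_code-0.1.7.tar.gz") == expected
```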
### gemini_code-0.1.7-py3-none-any.whl

File metadata:
- Download URL: gemini_code-0.1.7-py3-none-any.whl
- Upload date:
- Size: 16.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.1
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f8f1bed98c84e9b223d0172f4bcd2c061b8d394e17428361e45ccc9a2f8ea723` |
| MD5 | `e7d7a67895eae4e5fd4802b15bf40e4b` |
| BLAKE2b-256 | `9e107d2dba871d9d74cbe920faba5fb479410f5f6ec97510ed4ff4dc431df6f5` |