# claif_gem - Gemini Provider for Claif
A Claif provider for Google Gemini with full OpenAI client API compatibility. This package wraps the gemini-cli command-line tool to provide a consistent interface following the client.chat.completions.create() pattern.
## Features
- OpenAI Client API Compatible: Use the familiar `client.chat.completions.create()` pattern
- Full Type Safety: Returns standard `ChatCompletion` and `ChatCompletionChunk` objects
- Streaming Support: Real-time streaming with proper chunk handling
- Subprocess Management: Reliable communication with Gemini CLI
- Auto-approval Mode: Streamlined workflows without interruptions
- Cross-platform Support: Works on Windows, macOS, and Linux
- Fire-based CLI: Rich terminal interface with multiple output formats
## Quickstart
```bash
# Install
pip install claif_gem

# Basic usage - OpenAI compatible
python -c "
from claif_gem import GeminiClient

client = GeminiClient()
response = client.chat.completions.create(
    messages=[{'role': 'user', 'content': 'Hello Gemini!'}],
    model='gemini-1.5-flash'
)
print(response.choices[0].message.content)
"

# CLI usage
claif-gem query "Explain quantum computing"
claif-gem chat --model gemini-1.5-pro
```
## What is claif_gem?
claif_gem is the Google Gemini provider for the Claif framework with full OpenAI client API compatibility. It wraps the Gemini CLI tool to integrate Google's powerful Gemini language models into the unified Claif ecosystem through subprocess management and clean message translation.
Key Features:
- Subprocess-based integration - Reliable communication with Gemini CLI
- Auto-approve & yes-mode - Streamlined workflows without interruptions
- Cross-platform CLI discovery - Works on Windows, macOS, and Linux
- Async/await throughout - Built on anyio for efficiency
- Rich CLI interface - Beautiful terminal output with Fire
- Type-safe API - Comprehensive type hints for IDE support
- Robust error handling - Timeout protection and graceful failures
## Installation

### Prerequisites

Install the Gemini CLI via npm:

```bash
npm install -g @google/gemini-cli
```

Or set the path to an existing installation:

```bash
export GEMINI_CLI_PATH=/path/to/gemini
```
### Basic Installation

```bash
# Core package only
pip install claif_gem

# With Claif framework
pip install claif claif_gem

# All Claif providers
pip install claif[all]
```
### Installing Gemini CLI with Claif

```bash
# Using Claif's installer (recommended)
pip install claif && claif install gemini

# Or using claif_gem's installer
python -m claif_gem.install

# Manual installation with bun (faster)
bun add -g @google/gemini-cli
```
### Development Installation

```bash
git clone https://github.com/twardoch/claif_gem.git
cd claif_gem
pip install -e ".[dev,test]"
```
## Usage

### Basic Usage (OpenAI-Compatible)

```python
from claif_gem import GeminiClient

# Initialize the client
client = GeminiClient(
    api_key="your-api-key",     # Optional, uses GEMINI_API_KEY env var
    cli_path="/path/to/gemini"  # Optional, auto-discovers
)

# Create a chat completion - exactly like OpenAI
response = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Explain machine learning"}
    ],
    temperature=0.7,
    max_tokens=1000
)

# Access the response
print(response.choices[0].message.content)
print(f"Model: {response.model}")
print(f"Usage: {response.usage}")
```
### Streaming Responses

```python
from claif_gem import GeminiClient

client = GeminiClient()

# Stream responses in real-time
stream = client.chat.completions.create(
    model="gemini-1.5-pro",
    messages=[
        {"role": "user", "content": "Write a story about space exploration"}
    ],
    stream=True
)

# Process streaming chunks
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
### CLI Usage

```bash
# Basic query
claif-gem query "Explain machine learning"

# With specific model
claif-gem query "Write Python code for binary search" --model gemini-1.5-pro

# Interactive chat mode
claif-gem chat --model gemini-2.0-flash-exp

# With system prompt
claif-gem query "Translate to French" --system "You are a professional translator"

# Stream responses
claif-gem stream "Create a detailed tutorial on REST APIs"

# Health check
claif-gem health

# List models
claif-gem models

# Show configuration
claif-gem config show
```
### Advanced Options

```bash
# Control tool approval
claif-gem query "Process these files" --auto-approve   # Auto-approve tool use
claif-gem query "Analyze code" --no-auto-approve       # Manual approval

# Yes mode for all prompts
claif-gem query "Refactor this module" --yes-mode

# Verbose output for debugging
claif-gem query "Debug this error" --verbose

# Custom timeout
claif-gem query "Complex analysis" --timeout 300

# Show response metrics
claif-gem query "Quick question" --show-metrics
```
### Configuration Management

```bash
# Show current config
claif-gem config show

# Set values
claif-gem config set --default-model gemini-2.5-pro
claif-gem config set --auto-approve true
claif-gem config set --timeout 180

# Save configuration
claif-gem config save
```
## Python API Usage

### Basic Usage

```python
import asyncio
from claif_gem import query, GeminiOptions

async def main():
    # Simple query
    async for message in query("Hello, Gemini!"):
        print(message.content)

    # Query with options
    options = GeminiOptions(
        model="gemini-2.5-pro",
        temperature=0.7,
        system_prompt="You are a helpful coding assistant",
        auto_approve=True,
        yes_mode=True
    )
    async for message in query("Explain Python decorators", options):
        print(message.content)

asyncio.run(main())
```
### Direct Client Usage

```python
import asyncio
from claif_gem.client import GeminiClient
from claif_gem.types import GeminiOptions

async def use_client():
    client = GeminiClient()
    options = GeminiOptions(
        model="gemini-2.5-pro",
        verbose=True,
        max_context_length=16000
    )
    async for message in client.query("What is machine learning?", options):
        print(f"[{message.role}]: {message.content}")

asyncio.run(use_client())
```
### Transport Layer Access

```python
import asyncio
from claif_gem.transport import GeminiTransport
from claif_gem.types import GeminiOptions

async def direct_transport():
    transport = GeminiTransport()
    options = GeminiOptions(
        timeout=120,
        auto_approve=True
    )
    async for response in transport.send_query("Explain async programming", options):
        if hasattr(response, "content"):
            print(response.content)

asyncio.run(direct_transport())
```
### Error Handling

```python
import asyncio
from claif.common import ProviderError, TransportError
from claif_gem import query, GeminiOptions

async def safe_query():
    try:
        options = GeminiOptions(timeout=60)
        async for message in query("Complex task", options):
            print(message.content)
    except TransportError as e:
        print(f"Transport error: {e}")
        # Retry with different settings
    except ProviderError as e:
        print(f"Gemini error: {e}")
    except Exception as e:
        print(f"Unexpected error: {e}")

asyncio.run(safe_query())
```
### Using with Claif Framework

```python
import asyncio
from claif import query as claif_query, Provider, ClaifOptions

async def use_with_claif():
    options = ClaifOptions(
        provider=Provider.GEMINI,
        model="gemini-2.5-pro",
        temperature=0.5,
        system_prompt="You are a data science expert"
    )
    async for message in claif_query("Explain neural networks", options):
        print(message.content)

asyncio.run(use_with_claif())
```
## How It Works

### Architecture Overview

```
┌─────────────────────┐
│   User Application  │
├─────────────────────┤
│     Claif Core      │
├─────────────────────┤
│      claif_gem      │
│  ┌───────────────┐  │
│  │  __init__.py  │  │ ← Main entry point, Claif interface
│  ├───────────────┤  │
│  │    cli.py     │  │ ← Fire-based CLI commands
│  ├───────────────┤  │
│  │   client.py   │  │ ← Client orchestration
│  ├───────────────┤  │
│  │ transport.py  │  │ ← Subprocess management
│  ├───────────────┤  │
│  │   types.py    │  │ ← Type definitions
│  └───────────────┘  │
├─────────────────────┤
│  Subprocess Layer   │
├─────────────────────┤
│  Gemini CLI Binary  │ ← External Node.js CLI
└─────────────────────┘
```
### Core Components

#### Main Module (`__init__.py`)

Entry point providing the `query()` function:

```python
async def query(
    prompt: str,
    options: ClaifOptions | None = None
) -> AsyncIterator[Message]:
    """Query Gemini with Claif-compatible interface."""
    # Convert options
    gemini_options = _convert_options(options) if options else GeminiOptions()

    # Delegate to client
    async for message in _client.query(prompt, gemini_options):
        yield message
```
Features:
- Thin wrapper design
- Option conversion between Claif and Gemini formats
- Module-level client instance
- Clean async generator interface
#### CLI Module (`cli.py`)

Fire-based command-line interface:

```python
class GeminiCLI:
    def query(self, prompt: str, **kwargs):
        """Execute a query to Gemini."""

    def stream(self, prompt: str, **kwargs):
        """Stream responses in real-time."""

    def health(self):
        """Check Gemini CLI availability."""

    def config(self, action: str = "show", **kwargs):
        """Manage configuration."""
```
Features:
- Rich console output with progress indicators
- Response formatting and metrics
- Async execution with error handling
- Configuration persistence
#### Client Module (`client.py`)

Manages the query lifecycle:

```python
class GeminiClient:
    def __init__(self):
        self.transport = GeminiTransport()

    async def query(self, prompt: str, options: GeminiOptions):
        # Send query via transport
        async for gemini_msg in self.transport.send_query(prompt, options):
            # Convert to Claif format
            yield self._convert_message(gemini_msg)
```
Features:
- Transport lifecycle management
- Message format conversion
- Error propagation
- Clean separation of concerns
#### Transport Module (`transport.py`)

Handles subprocess communication:

```python
class GeminiTransport:
    async def send_query(self, prompt: str, options: GeminiOptions):
        # Find CLI
        cli_path = self._find_cli_path()

        # Build command
        cmd = self._build_command(cli_path, prompt, options)

        # Execute and stream
        async with await anyio.open_process(cmd) as proc:
            async for line in proc.stdout:
                yield self._parse_line(line)
```

Key methods:

- `_find_cli_path()` - Multi-location CLI discovery
- `_build_command()` - Safe argument construction
- `_parse_output_line()` - JSON and plain text parsing
- Timeout management with process cleanup
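The JSON-first parsing strategy can be sketched as a small self-contained function. This is an illustration, not the real `_parse_output_line()`; in particular the `"content"` key is an assumed JSON shape, since the CLI's actual output schema is not documented here:

```python
import json

def parse_output_line(line: str) -> str:
    """Try JSON parsing first, fall back to plain text."""
    line = line.strip()
    try:
        data = json.loads(line)
    except json.JSONDecodeError:
        # Not JSON: treat the raw line as plain text
        return line
    # Hypothetical JSON shape; the real Gemini CLI schema may differ
    if isinstance(data, dict) and "content" in data:
        return data["content"]
    return line
```

The fallback keeps the transport robust when the CLI emits diagnostics or non-JSON output.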
#### Types Module (`types.py`)

Type definitions and conversions:

```python
@dataclass
class GeminiOptions:
    model: str | None = None
    temperature: float | None = None
    system_prompt: str | None = None
    auto_approve: bool = True
    yes_mode: bool = True
    max_context_length: int | None = None
    timeout: int | None = None
    verbose: bool = False

@dataclass
class GeminiMessage:
    role: str
    content: str
    metadata: dict[str, Any] | None = None

    def to_claif_message(self) -> Message:
        """Convert to Claif format."""
```
### Message Flow

1. User Input → CLI command or API call
2. Option Translation → ClaifOptions → GeminiOptions
3. Client Processing → GeminiClient prepares query
4. Transport Execution:
   - Find Gemini CLI binary
   - Build command with arguments
   - Spawn subprocess with anyio
   - Read stdout/stderr streams
5. Response Parsing:
   - Try JSON parsing first
   - Fall back to plain text
   - Convert to GeminiMessage
6. Message Conversion → GeminiMessage → Claif Message
7. Async Yield → Messages yielded to caller
### CLI Discovery

The transport searches for the Gemini CLI in this order:

1. `GEMINI_CLI_PATH` environment variable
2. System PATH (`which gemini`)
3. Common installation paths: `~/.local/bin/gemini`, `/usr/local/bin/gemini`, `/opt/gemini/bin/gemini`
4. NPM global paths:
   - Windows: `%APPDATA%/npm/gemini.cmd`
   - Unix: `~/.npm-global/bin/gemini`
   - System: `/usr/local/lib/node_modules/.bin/gemini`
### Command Construction

The Gemini CLI is invoked with arguments derived from the options: `-a` enables auto-approve, `-y` yes-mode, `-t` sets the temperature, and `-s` the system prompt.

```bash
gemini \
  -m <model> \
  -a \
  -y \
  -t <temp> \
  -s <prompt> \
  --max-context <length> \
  -p "user prompt"
```
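The argv assembly can be sketched as follows. This is an illustration of what `_build_command()` does, with flag spellings taken from the invocation shown in this section; the real implementation may differ:

```python
def build_command(cli_path: str, prompt: str, opts) -> list[str]:
    """Assemble the argv list for the Gemini CLI from an options object."""
    cmd = [cli_path]
    if getattr(opts, "model", None):
        cmd += ["-m", opts.model]
    if getattr(opts, "auto_approve", False):
        cmd.append("-a")           # auto-approve tool use
    if getattr(opts, "yes_mode", False):
        cmd.append("-y")           # yes-mode for all prompts
    if getattr(opts, "temperature", None) is not None:
        cmd += ["-t", str(opts.temperature)]
    if getattr(opts, "system_prompt", None):
        cmd += ["-s", opts.system_prompt]
    if getattr(opts, "max_context_length", None):
        cmd += ["--max-context", str(opts.max_context_length)]
    cmd += ["-p", prompt]          # user prompt comes last
    return cmd
```

Passing arguments as a list (rather than a shell string) avoids quoting and injection issues in the subprocess call.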
### Code Structure

```
claif_gem/
├── src/claif_gem/
│   ├── __init__.py      # Main entry point
│   ├── cli.py           # Fire CLI implementation
│   ├── client.py        # Client orchestration
│   ├── transport.py     # Subprocess management
│   ├── types.py         # Type definitions
│   └── install.py       # CLI installation helper
├── tests/
│   └── test_package.py  # Basic tests
├── pyproject.toml       # Package configuration
├── README.md            # This file
└── CLAUDE.md            # Development guide
```
## Configuration

Environment variables:

- `GEMINI_CLI_PATH` - Path to Gemini CLI binary
- `GEMINI_SDK=1` - Set by transport to indicate SDK usage
- `CLAIF_PROVIDER=gemini` - Provider identification

Config file (`~/.claif/config.json`):

```json
{
  "providers": {
    "gemini": {
      "model": "gemini-2.5-pro",
      "auto_approve": true,
      "yes_mode": true,
      "max_context_length": 32000,
      "timeout": 180
    }
  }
}
```
### Installation with Bun

For faster installation, use Bun:

```bash
# Install bun
curl -fsSL https://bun.sh/install | bash

# Install Gemini CLI
bun add -g @google/gemini-cli

# Or use Claif's bundled installer
pip install claif
claif install gemini  # Uses bun internally
```
Benefits of Bun:
- Installs up to 10x faster than npm
- Creates standalone executables
- No Node.js version conflicts
- Cross-platform compatibility
## Why Use claif_gem?

### 1. Unified Interface

- Access Gemini through standard Claif API
- Switch between providers with one parameter
- Consistent error handling across providers

### 2. Cross-Platform

- Automatic CLI discovery on all platforms
- Platform-specific path handling
- Works in diverse environments

### 3. Developer Experience

- Full type hints for IDE support
- Rich CLI with progress indicators
- Clean async/await patterns
- Comprehensive error messages

### 4. Production Ready

- Robust subprocess management
- Timeout protection
- Graceful error recovery
- Extensive logging

### 5. Flexible Configuration

- Environment variables
- Config files
- CLI arguments
- Sensible defaults
## Testing

The claif_gem package includes comprehensive tests to ensure robust functionality.

### Running Tests

```bash
# Install with test dependencies
pip install -e ".[test]"

# Run all tests
uvx hatch test

# Run specific test modules
uvx hatch test -- tests/test_functional.py -v
uvx hatch test -- tests/test_client.py -v

# Run with coverage
uvx hatch test -- --cov=src/claif_gem --cov-report=html
```
### Test Structure

```
tests/
├── test_functional.py   # End-to-end functionality tests
├── test_client.py       # Client API tests
├── test_transport.py    # Subprocess communication tests
├── test_types.py        # Type conversion tests
└── conftest.py          # Test fixtures and configuration
```
### Example Test Usage

The functional tests demonstrate how to use claif_gem effectively:

```python
from claif_gem import GeminiClient
from openai.types.chat import ChatCompletion

# Test basic query functionality
def test_basic_query():
    client = GeminiClient()
    response = client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=[{"role": "user", "content": "Hello Gemini"}]
    )
    assert isinstance(response, ChatCompletion)
    assert response.choices[0].message.role == "assistant"
    assert len(response.choices[0].message.content) > 0

# Test streaming responses
def test_streaming():
    client = GeminiClient()
    stream = client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=[{"role": "user", "content": "Count to 3"}],
        stream=True
    )
    chunks = list(stream)
    assert len(chunks) > 0

    # Reconstruct full message
    content = "".join(
        chunk.choices[0].delta.content or ""
        for chunk in chunks
        if chunk.choices and chunk.choices[0].delta.content
    )
    assert len(content) > 0

# Test parameter passing
def test_with_parameters():
    client = GeminiClient()
    response = client.chat.completions.create(
        model="gemini-1.5-pro",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a hello world function"}
        ],
        temperature=0.7,
        max_tokens=100
    )
    assert response.model == "gemini-1.5-pro"
    assert response.usage.total_tokens > 0

# Test model name mapping
def test_model_mapping():
    client = GeminiClient()
    # OpenAI model names should be mapped to Gemini equivalents
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # Maps to gemini-1.5-flash
        messages=[{"role": "user", "content": "Hello"}]
    )
    # Verify the mapping worked correctly
    assert "gemini" in response.model.lower()
```
### Mock Testing for CI/CD

The tests use comprehensive mocking to work in CI environments without requiring the actual Gemini CLI:

```python
import json
from unittest.mock import patch, MagicMock

from claif_gem import GeminiClient

@patch("claif_gem.client.subprocess.run")
@patch("shutil.which")
def test_client_mocking(mock_which, mock_run):
    # Mock CLI discovery
    mock_which.return_value = "/usr/local/bin/gemini"

    # Mock subprocess response
    mock_response = {
        "candidates": [{
            "content": {"parts": [{"text": "Hello from Gemini!"}]}
        }]
    }
    mock_run.return_value = MagicMock(
        returncode=0,
        stdout=json.dumps(mock_response),
        stderr=""
    )

    client = GeminiClient()
    response = client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=[{"role": "user", "content": "Test"}]
    )

    # Verify subprocess was called correctly
    mock_run.assert_called_once()
    call_args = mock_run.call_args
    cmd = call_args[0][0]
    assert "gemini" in cmd[0]
    assert "--model" in cmd
    assert "gemini-1.5-flash" in cmd
```
### Testing Subprocess Communication

```python
import pytest
from unittest.mock import patch, MagicMock

# Test CLI command generation
def test_command_building():
    from claif_gem.client import GeminiClient

    # Mock the transport to intercept command building
    with patch("claif_gem.client.subprocess.run") as mock_run:
        mock_run.return_value = MagicMock(returncode=0, stdout="test", stderr="")

        client = GeminiClient()
        client.chat.completions.create(
            model="gemini-1.5-pro",
            messages=[{"role": "user", "content": "Test"}],
            temperature=0.8,
            max_tokens=500
        )

        # Verify command structure
        call_args = mock_run.call_args[0][0]
        assert "--temperature" in call_args
        assert "0.8" in call_args
        assert "--max-output-tokens" in call_args
        assert "500" in call_args

# Test error handling
def test_error_handling():
    from subprocess import CalledProcessError
    from claif_gem.client import GeminiClient

    with patch("claif_gem.client.subprocess.run") as mock_run:
        mock_run.side_effect = CalledProcessError(
            returncode=1,
            cmd=["gemini"],
            stderr="API quota exceeded"
        )

        client = GeminiClient()
        with pytest.raises(RuntimeError) as exc_info:
            client.chat.completions.create(
                model="gemini-1.5-flash",
                messages=[{"role": "user", "content": "Test"}]
            )

        assert "Gemini CLI error" in str(exc_info.value)
        assert "API quota exceeded" in str(exc_info.value)
```
## API Compatibility

This package is fully compatible with the OpenAI Python client API:

```python
# You can use it as a drop-in replacement
from claif_gem import GeminiClient as OpenAI

client = OpenAI()

# Now use exactly like the OpenAI client
response = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
## Migration from Old Async API

If you were using the old async-based Claif API:

```python
# Old API (deprecated)
import asyncio
from claif_gem import query

async def old_way():
    async for message in query("Hello Gemini"):
        print(message.content)

# New API (OpenAI-compatible)
from claif_gem import GeminiClient

def new_way():
    client = GeminiClient()
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "Hello Gemini"}],
        model="gemini-1.5-flash"
    )
    print(response.choices[0].message.content)
```
### Key Changes

- Synchronous by default: No more `async`/`await` for basic usage
- OpenAI-compatible structure: `client.chat.completions.create()` pattern
- Standard message format: `[{"role": "user", "content": "..."}]`
- Streaming support: Use `stream=True` for real-time responses
- Type-safe responses: Returns `ChatCompletion` objects from OpenAI types
## Best Practices
- Use auto-approve for trusted operations - Speeds up workflows
- Set appropriate timeouts - Prevent hanging on complex queries
- Enable verbose mode for debugging - See full subprocess communication
- Use system prompts - Set context for better responses
- Configure max context length - Based on your use case
- Handle errors gracefully - Implement retry logic
- Use streaming for long responses - Better user experience
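The retry advice can be sketched with a small helper. `TransportError` from `claif.common` is the natural exception to retry on; the helper below keeps both the query function and the exception type as parameters, and the backoff policy is just an illustration:

```python
import asyncio

async def query_with_retry(prompt, query_fn, *, max_attempts=3,
                           base_delay=1.0, retry_on=Exception):
    """Collect messages from an async query, retrying with exponential backoff.

    In practice, pass claif_gem's query() as query_fn and
    retry_on=TransportError.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return [msg async for msg in query_fn(prompt)]
        except retry_on:
            if attempt == max_attempts:
                raise
            # Wait 1s, 2s, 4s, ... between attempts
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))
```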
## Contributing

See CLAUDE.md for development guidelines.
### Development Setup

```bash
# Clone repository
git clone https://github.com/twardoch/claif_gem.git
cd claif_gem

# Install with dev dependencies
pip install -e ".[dev,test]"

# Run tests
pytest

# Format code
ruff format src/claif_gem tests

# Lint
ruff check src/claif_gem tests

# Type check
mypy src/claif_gem
```
### Testing

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=claif_gem --cov-report=html

# Run specific test
pytest tests/test_transport.py -v

# Test CLI commands
python -m claif_gem.cli health
python -m claif_gem.cli models
```
## License
MIT License - see LICENSE file for details.
Copyright (c) 2025 Adam Twardoch
## Links

### claif_gem Resources

- GitHub Repository - Source code
- PyPI Package - Latest release
- Issue Tracker - Bug reports
- Discussions - Q&A

### Related Projects

- Gemini CLI - Google's CLI
- Google AI Studio - Gemini documentation
- Google AI Python SDK - Python SDK