# GeminiCLI SDK
A multi-language SDK for the Google Gemini Code Assist API, inspired by the GitHub Copilot SDK.

GeminiCLI SDK provides high-level interfaces to the Gemini Code Assist API in Python, TypeScript, Rust, Go, and C++, supporting:
- **OAuth Authentication** - Seamless authentication using Gemini CLI credentials
- **Streaming Responses** - Real-time streaming with Server-Sent Events (SSE)
- **Tool Calling** - Define and use custom tools with the model
- **Session Management** - Manage conversation state and history
- **Thinking/Reasoning** - Support for model thinking/reasoning content
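Streaming responses are delivered as Server-Sent Events, which frame each event as `data: <payload>` lines terminated by a blank line. Independent of the SDK's internals, a minimal sketch of that framing (illustrative only, not the SDK's actual parser) looks like:

```python
import json
from typing import Iterator


def parse_sse(lines: Iterator[str]) -> Iterator[dict]:
    """Yield one JSON payload per SSE event (events end at a blank line)."""
    buffer = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data: "):
            buffer.append(line[len("data: "):])
        elif line == "" and buffer:
            # A blank line terminates the event; join multi-line data fields.
            yield json.loads("\n".join(buffer))
            buffer = []


stream = ['data: {"delta": "Hel"}', "", 'data: {"delta": "lo"}', ""]
for event in parse_sse(iter(stream)):
    print(event["delta"])
```

The SDKs hide this framing behind their event APIs; the sketch only shows what travels on the wire.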
## Available SDKs
| Language | Location | Package Name | Status |
|---|---|---|---|
| Python | `src/python/` | `geminisdk` | ✅ Production Ready |
| TypeScript | `src/typescript/` | `geminisdk` | ✅ Production Ready |
| Rust | `src/rust/` | `geminisdk` | ✅ Production Ready |
| Go | `src/go/` | `geminisdk` | ✅ Production Ready |
| C++ | `src/cpp/` | `geminisdk` | ✅ Production Ready |
## Prerequisites

Before using any SDK, you need to authenticate with Google. The easiest way is to use the Gemini CLI:
```bash
# Install Gemini CLI
npm install -g @google/gemini-cli

# Authenticate
gemini auth login
```

This will store your OAuth credentials in `~/.gemini/oauth_creds.json`.
## Quick Start

### Python

```bash
pip install geminisdk
```
```python
import asyncio

from geminisdk import GeminiClient


async def main():
    async with GeminiClient() as client:
        session = await client.create_session({
            "model": "gemini-2.5-pro",
            "streaming": True,
        })
        response = await session.send_and_wait({
            "prompt": "Explain Python decorators in simple terms.",
        })
        print(response.data["content"])


asyncio.run(main())
```
### TypeScript

```bash
npm install geminisdk
```
```typescript
import { GeminiClient, EventType } from 'geminisdk';

async function main() {
  const client = new GeminiClient();
  const session = await client.createSession({
    model: 'gemini-2.5-pro',
    streaming: true,
  });

  session.on((event) => {
    if (event.type === EventType.ASSISTANT_MESSAGE_DELTA) {
      process.stdout.write((event.data as any).deltaContent);
    }
  });

  await session.send({ prompt: 'What is TypeScript?' });
  await client.close();
}

main();
```
### Rust

```toml
# Cargo.toml
[dependencies]
geminisdk = { path = "src/rust" }
tokio = { version = "1", features = ["full"] }
```
```rust
use geminisdk::{GeminiClient, SessionConfig, MessageOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = GeminiClient::with_defaults();
    client.start().await?;

    let session = client.create_session(SessionConfig {
        model: Some("gemini-2.5-pro".to_string()),
        ..Default::default()
    }).await?;

    let response = session.send_and_wait(MessageOptions {
        prompt: "Hello, Gemini!".to_string(),
        ..Default::default()
    }).await?;
    println!("{:?}", response);

    client.close().await?;
    Ok(())
}
```
### Go

```go
package main

import (
	"context"
	"fmt"

	geminisdk "github.com/OEvortex/geminicli-sdk/src/go"
)

func main() {
	client := geminisdk.NewClient(nil)
	client.Start(context.Background())
	defer client.Close()

	session, _ := client.CreateSession(&geminisdk.SessionConfig{
		Model: "gemini-2.5-pro",
	})

	response, _ := session.SendAndWait(context.Background(), &geminisdk.MessageOptions{
		Prompt: "Hello, Gemini!",
	})
	fmt.Println(response.Data)
}
```
### C++

```cpp
#include <geminisdk/geminisdk.hpp>
#include <iostream>

int main() {
    geminisdk::Client client;
    client.start();

    geminisdk::SessionConfig config;
    config.model = "gemini-2.5-pro";
    auto session = client.create_session(config);

    geminisdk::MessageOptions options;
    options.prompt = "Hello, Gemini!";
    auto response = session->send_and_wait(options);
    std::cout << response.data["content"].get<std::string>() << std::endl;

    client.close();
    return 0;
}
```
## Python SDK (Full Documentation)

```python
import asyncio

from geminisdk import GeminiClient


async def main():
    # Create a client (uses Gemini CLI credentials by default)
    async with GeminiClient() as client:
        # Create a session
        session = await client.create_session({
            "model": "gemini-2.5-pro",
            "streaming": True,
        })

        # Send a message and wait for the response
        response = await session.send_and_wait({
            "prompt": "Explain Python decorators in simple terms.",
        })
        print(response.data["content"])


if __name__ == "__main__":
    asyncio.run(main())
```
### Streaming Example

```python
import asyncio

from geminisdk import GeminiClient, EventType


async def main():
    async with GeminiClient() as client:
        session = await client.create_session({
            "model": "gemini-2.5-pro",
        })

        # Subscribe to events
        def on_event(event):
            if event.type == EventType.ASSISTANT_MESSAGE_DELTA:
                # Print streaming content
                print(event.data["delta_content"], end="", flush=True)
            elif event.type == EventType.ASSISTANT_MESSAGE:
                # Final message
                print("\n--- Done ---")

        session.on(on_event)

        # Send the message (events will be emitted)
        await session.send({
            "prompt": "Write a haiku about programming.",
        })


asyncio.run(main())
```
### Tool Calling Example

```python
import asyncio

from geminisdk import GeminiClient, define_tool


# Define a tool using the decorator
@define_tool(
    name="get_weather",
    description="Get the current weather for a location",
)
def get_weather(city: str, country: str = "US") -> str:
    """Get weather information.

    Args:
        city: The city name.
        country: The country code.
    """
    # In a real app, call a weather API
    return f"Weather in {city}, {country}: Sunny, 72°F"


@define_tool(
    name="calculate",
    description="Perform a mathematical calculation",
)
def calculate(expression: str) -> str:
    """Evaluate a math expression.

    Args:
        expression: The math expression to evaluate.
    """
    try:
        # WARNING: eval() runs arbitrary Python. Never use it on untrusted
        # input in production; restrict or parse the expression instead.
        result = eval(expression)
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {e}"


async def main():
    async with GeminiClient() as client:
        session = await client.create_session({
            "model": "gemini-2.5-pro",
            "tools": [get_weather, calculate],
        })
        response = await session.send_and_wait({
            "prompt": "What's the weather in Tokyo? Also, what is 15 * 23?",
        })
        print(response.data["content"])


asyncio.run(main())
```
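The `calculate` tool above uses `eval`, which executes arbitrary Python if the model ever passes a hostile expression. A safer sketch (not part of the SDK) walks the parsed `ast` and permits only arithmetic nodes:

```python
import ast
import operator

# Only plain arithmetic operators are allowed.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}


def safe_eval(expression: str) -> float:
    """Evaluate a pure arithmetic expression without calling eval()."""

    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        # Anything else (names, calls, attributes, ...) is rejected.
        raise ValueError("disallowed expression")

    return walk(ast.parse(expression, mode="eval"))


print(safe_eval("15 * 23"))  # 345
```

Swapping `eval(expression)` for `safe_eval(expression)` inside the `calculate` tool keeps the example behavior while closing the code-execution hole.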
## Backend API (Low-Level)

For more control, you can use the backend directly:
```python
import asyncio

from geminisdk import GeminiBackend, Message, Role, GenerationConfig


async def main():
    async with GeminiBackend() as backend:
        messages = [
            Message(role=Role.SYSTEM, content="You are a helpful assistant."),
            Message(role=Role.USER, content="Hello!"),
        ]

        # Non-streaming
        response = await backend.complete(
            model="gemini-2.5-pro",
            messages=messages,
            generation_config=GenerationConfig(
                temperature=0.7,
                max_output_tokens=1000,
            ),
        )
        print(response.content)

        # Streaming
        async for chunk in backend.complete_streaming(
            model="gemini-2.5-pro",
            messages=messages,
        ):
            if chunk.content:
                print(chunk.content, end="", flush=True)


asyncio.run(main())
```
## Configuration

### Client Options

```python
from geminisdk import GeminiClient

client = GeminiClient({
    # Custom OAuth credentials path
    "oauth_path": "/path/to/oauth_creds.json",
    # Request timeout (default: 720 seconds)
    "timeout": 300.0,
    # Auto-refresh tokens in background (default: True)
    "auto_refresh": True,
    # Custom OAuth client credentials (optional)
    "client_id": "your-client-id",
    "client_secret": "your-client-secret",
})
```
### Session Options

```python
from geminisdk import GenerationConfig, ThinkingConfig

session = await client.create_session({
    # Model selection
    "model": "gemini-2.5-pro",
    # Enable streaming (default: True)
    "streaming": True,
    # System message
    "system_message": "You are a helpful coding assistant.",
    # Tools
    "tools": [my_tool1, my_tool2],
    # Generation config
    "generation_config": GenerationConfig(
        temperature=0.7,
        max_output_tokens=2048,
        top_p=0.9,
    ),
    # Enable thinking/reasoning
    "thinking_config": ThinkingConfig(
        include_thoughts=True,
        thinking_budget=1024,
    ),
})
```
## Available Models

| Model ID | Description | Features |
|---|---|---|
| `gemini-3-pro-preview` | Gemini 3 Pro Preview | Tools, Thinking |
| `gemini-3-flash-preview` | Gemini 3 Flash Preview | Tools, Thinking |
| `gemini-2.5-pro` | Gemini 2.5 Pro | Tools, Thinking |
| `gemini-2.5-flash` | Gemini 2.5 Flash | Tools, Thinking |
| `gemini-2.5-flash-lite` | Gemini 2.5 Flash Lite | Tools |
| `auto-gemini-3` | Auto (Gemini 3) | Tools, Thinking |
| `auto-gemini-2.5` | Auto (Gemini 2.5) | Tools, Thinking |
| `auto` | Auto (Default) | Tools, Thinking |
## Events

The SDK emits various events during a session:

| Event Type | Description |
|---|---|
| `ASSISTANT_MESSAGE_DELTA` | Partial content received (streaming) |
| `ASSISTANT_MESSAGE` | Complete message received |
| `ASSISTANT_REASONING_DELTA` | Partial reasoning content (streaming) |
| `ASSISTANT_REASONING` | Complete reasoning content |
| `TOOL_CALL` | Model is calling a tool |
| `TOOL_RESULT` | Tool execution result |
| `SESSION_IDLE` | Session is idle |
| `SESSION_ERROR` | An error occurred |
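A single `session.on` callback handling many event types tends to grow into a long if/elif chain. One way to keep handlers tidy is a dispatch table keyed by event type; this is a generic pattern, not an SDK feature, and the `EventType`/`Event` definitions below are self-contained stand-ins for the SDK's real types:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Any, Callable, Dict


class EventType(Enum):  # stand-in for geminisdk.EventType
    ASSISTANT_MESSAGE_DELTA = auto()
    ASSISTANT_MESSAGE = auto()
    SESSION_ERROR = auto()


@dataclass
class Event:  # stand-in for the SDK's event object
    type: EventType
    data: Dict[str, Any] = field(default_factory=dict)


def make_dispatcher(
    handlers: Dict[EventType, Callable[[Event], None]],
) -> Callable[[Event], None]:
    """Return a session.on-compatible callback that routes events by type."""
    def on_event(event: Event) -> None:
        handler = handlers.get(event.type)
        if handler is not None:
            handler(event)
    return on_event


seen = []
on_event = make_dispatcher({
    EventType.ASSISTANT_MESSAGE_DELTA: lambda e: seen.append(e.data["delta_content"]),
    EventType.SESSION_ERROR: lambda e: seen.append("error"),
})
on_event(Event(EventType.ASSISTANT_MESSAGE_DELTA, {"delta_content": "hi"}))
```

With the real SDK you would pass the returned callback straight to `session.on(...)`, using `geminisdk.EventType` in place of the stand-in enum.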
## Error Handling

```python
from geminisdk import (
    GeminiClient,
    AuthenticationError,
    APIError,
    RateLimitError,
)


async def main():
    try:
        async with GeminiClient() as client:
            session = await client.create_session()
            response = await session.send_and_wait({"prompt": "Hello"})
    except AuthenticationError as e:
        print(f"Authentication failed: {e}")
    except RateLimitError as e:
        print(f"Rate limited: {e}")
    except APIError as e:
        print(f"API error: {e}")
```
## Architecture

The SDK follows a layered architecture:

```
┌──────────────────────────────┐
│  GeminiClient                │  High-level client
├──────────────────────────────┤
│  GeminiSession               │  Session management
├──────────────────────────────┤
│  GeminiBackend               │  API communication
├──────────────────────────────┤
│  GeminiOAuthManager          │  Authentication
└──────────────────────────────┘
```
## References
This SDK is inspired by:
- GitHub Copilot SDK - SDK architecture and patterns
- Gemini CLI - OAuth credentials and API
- Revibe - GeminiCLI backend implementation
- Better-Copilot-Chat - Provider patterns
## License

MIT License
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.