Copilot Python SDK
Python SDK for programmatic control of GitHub Copilot CLI via JSON-RPC.
Note: This SDK is in technical preview and may change in breaking ways.
Installation
pip install -e ".[telemetry,dev]"
# or
uv pip install -e ".[telemetry,dev]"
Run the Sample
Try the interactive chat sample (from the repo root):
cd python/samples
python chat.py
Quick Start
import asyncio
from copilot import CopilotClient, PermissionHandler

async def main():
    # Create and start client
    client = CopilotClient()
    await client.start()

    # Create a session (on_permission_request is required)
    session = await client.create_session(on_permission_request=PermissionHandler.approve_all, model="gpt-5")

    # Wait for response using the session.idle event
    done = asyncio.Event()

    def on_event(event):
        if event.type.value == "assistant.message":
            print(event.data.content)
        elif event.type.value == "session.idle":
            done.set()

    session.on(on_event)

    # Send a message and wait for completion
    await session.send("What is 2+2?")
    await done.wait()

    # Clean up
    await session.disconnect()
    await client.stop()

asyncio.run(main())
Sessions also support the async with context manager pattern for automatic cleanup:
async with await client.create_session(on_permission_request=PermissionHandler.approve_all, model="gpt-5") as session:
    await session.send("What is 2+2?")
    # session is automatically disconnected when leaving the block
Features
- ✅ Full JSON-RPC protocol support
- ✅ stdio and TCP transports
- ✅ Real-time streaming events
- ✅ Session history with get_messages()
- ✅ Type hints throughout
- ✅ Async/await native
API Reference
CopilotClient
from copilot import CopilotClient, PermissionHandler, SubprocessConfig

# Spawn a local CLI process (default)
client = CopilotClient()  # uses bundled CLI, stdio transport
await client.start()
session = await client.create_session(on_permission_request=PermissionHandler.approve_all, model="gpt-5")

def on_event(event):
    print(f"Event: {event['type']}")

session.on(on_event)
await session.send("Hello!")
# ... wait for events ...
await session.disconnect()
await client.stop()
from copilot import CopilotClient, ExternalServerConfig
# Connect to an existing CLI server
client = CopilotClient(ExternalServerConfig(url="localhost:3000"))
CopilotClient Constructor:
CopilotClient(
    config=None,          # SubprocessConfig | ExternalServerConfig | None
    *,
    auto_start=True,      # auto-start server on first use
    on_list_models=None,  # custom handler for list_models()
)
SubprocessConfig — spawn a local CLI process:
- cli_path (str | None): Path to CLI executable (default: bundled binary)
- cli_args (list[str]): Extra arguments for the CLI executable
- cwd (str | None): Working directory for the CLI process (default: current directory)
- use_stdio (bool): Use stdio transport instead of TCP (default: True)
- port (int): Server port for TCP mode (default: 0 for a random port)
- log_level (str): Log level (default: "info")
- env (dict | None): Environment variables for the CLI process
- github_token (str | None): GitHub token for authentication. When provided, takes priority over other auth methods.
- use_logged_in_user (bool | None): Whether to use the logged-in user for authentication (default: True, but False when github_token is provided)
- telemetry (dict | None): OpenTelemetry configuration for the CLI process. Providing this enables telemetry; no separate flag is needed. See Telemetry below.
ExternalServerConfig — connect to an existing CLI server:
- url (str): Server URL (e.g., "localhost:8080", "http://127.0.0.1:9000", or just "8080").
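The three accepted url forms can be thought of as normalizing to a host and port. A rough, illustrative sketch of that idea (the SDK's actual parsing logic may differ, and `normalize` is not SDK API):

```python
def normalize(url: str) -> tuple[str, int]:
    # Strip an optional scheme, then split host from port.
    # A bare port like "8080" implies localhost.
    url = url.replace("http://", "")
    host, _, port = url.rpartition(":")
    return (host or "localhost", int(port))

print(normalize("localhost:8080"))        # ('localhost', 8080)
print(normalize("8080"))                  # ('localhost', 8080)
print(normalize("http://127.0.0.1:9000")) # ('127.0.0.1', 9000)
```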
create_session Parameters:
All parameters are keyword-only:
- on_permission_request (callable): Required. Handler called before each tool execution to approve or deny it. Use PermissionHandler.approve_all to allow everything, or provide a custom function for fine-grained control. See the Permission Handling section.
- model (str): Model to use ("gpt-5", "claude-sonnet-4.5", etc.)
- session_id (str): Custom session ID for resuming or identifying sessions
- client_name (str): Client name to identify the application using the SDK. Included in the User-Agent header for API requests.
- reasoning_effort (str): Reasoning effort level for models that support it ("low", "medium", "high", "xhigh"). Use list_models() to check which models support this option.
- tools (list): Custom tools exposed to the CLI
- system_message (dict): System message configuration. Supports three modes:
  - append (default): Appends content after the SDK-managed prompt
  - replace: Replaces the entire prompt with content
  - customize: Selectively override individual sections via a sections dict (keys: "identity", "tone", "tool_efficiency", "environment_context", "code_change_rules", "guidelines", "safety", "tool_instructions", "custom_instructions", "last_instructions"; values: SectionOverride with action and optional content)
- available_tools (list[str]): List of tool names to allow. Takes precedence over excluded_tools.
- excluded_tools (list[str]): List of tool names to disable. Ignored if available_tools is set.
- on_user_input_request (callable): Handler for user input requests from the agent (enables the ask_user tool). See the User Input Requests section.
- hooks (dict): Hook handlers for session lifecycle events. See the Session Hooks section.
- working_directory (str): Working directory for the session. Tool operations are relative to this directory.
- provider (dict): Custom API provider configuration (BYOK). See the Custom Providers section.
- streaming (bool): Enable streaming delta events
- mcp_servers (dict): MCP server configurations for the session
- custom_agents (list): Custom agent configurations for the session
- config_dir (str): Override the default configuration directory location
- skill_directories (list[str]): Directories to load skills from
- disabled_skills (list[str]): List of skill names to disable
- infinite_sessions (dict): Automatic context compaction configuration
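The documented precedence between available_tools and excluded_tools can be sketched as a simple filter (illustrative only; `effective_tools` and its arguments are not SDK API):

```python
def effective_tools(all_tools, available_tools=None, excluded_tools=None):
    # available_tools, when set, is an allowlist and wins outright
    if available_tools is not None:
        return [t for t in all_tools if t in available_tools]
    # otherwise excluded_tools acts as a denylist
    if excluded_tools:
        return [t for t in all_tools if t not in excluded_tools]
    return list(all_tools)

tools = ["read_file", "edit_file", "shell"]
print(effective_tools(tools, excluded_tools=["shell"]))          # ['read_file', 'edit_file']
print(effective_tools(tools, available_tools=["read_file"],
                      excluded_tools=["read_file"]))             # ['read_file']
```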
resume_session Parameters:
- session_id (str): Required. The ID of the session to resume.
The parameters below are keyword-only:
- on_permission_request (callable): Required. Handler called before each tool execution to approve or deny it. Use PermissionHandler.approve_all to allow everything, or provide a custom function for fine-grained control. See the Permission Handling section.
- model (str): Model to use (can change the model when resuming)
- client_name (str): Client name to identify the application using the SDK
- reasoning_effort (str): Reasoning effort level ("low", "medium", "high", "xhigh")
- tools (list): Custom tools exposed to the CLI
- system_message (dict): System message configuration
- available_tools (list[str]): List of tool names to allow. Takes precedence over excluded_tools.
- excluded_tools (list[str]): List of tool names to disable. Ignored if available_tools is set.
- on_user_input_request (callable): Handler for user input requests from the agent (enables the ask_user tool)
- hooks (dict): Hook handlers for session lifecycle events
- working_directory (str): Working directory for the session
- provider (dict): Custom API provider configuration (BYOK)
- streaming (bool): Enable streaming delta events
- mcp_servers (dict): MCP server configurations for the session
- custom_agents (list): Custom agent configurations for the session
- agent (str): Name of the custom agent to activate when the session starts
- config_dir (str): Override the default configuration directory location
- skill_directories (list[str]): Directories to load skills from
- disabled_skills (list[str]): List of skill names to disable
- infinite_sessions (dict): Automatic context compaction configuration
- disable_resume (bool): Skip emitting the session.resume event (default: False)
- on_event (callable): Event handler registered before the session.resume RPC
Session Lifecycle Methods:
# Get the session currently displayed in TUI (TUI+server mode only)
session_id = await client.get_foreground_session_id()
# Request TUI to display a specific session (TUI+server mode only)
await client.set_foreground_session_id("session-123")
# Subscribe to all lifecycle events
def on_lifecycle(event):
    print(f"{event.type}: {event.sessionId}")
unsubscribe = client.on(on_lifecycle)
# Subscribe to specific event type
unsubscribe = client.on("session.foreground", lambda e: print(f"Foreground: {e.sessionId}"))
# Later, to stop receiving events:
unsubscribe()
Lifecycle Event Types:
- session.created - A new session was created
- session.deleted - A session was deleted
- session.updated - A session was updated
- session.foreground - A session became the foreground session in the TUI
- session.background - A session is no longer the foreground session
System Message Customization
Control the system prompt using system_message in session config:
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    system_message={
        "content": "Always check for security vulnerabilities before suggesting changes."
    },
)
The SDK auto-injects environment context, tool instructions, and security guardrails. The default CLI persona is preserved, and your content is appended after SDK-managed sections. To change the persona or fully redefine the prompt, use mode: "replace" or mode: "customize".
Customize Mode
Use mode: "customize" to selectively override individual sections of the prompt while preserving the rest:
from copilot import SYSTEM_PROMPT_SECTIONS
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    system_message={
        "mode": "customize",
        "sections": {
            # Replace the tone/style section
            "tone": {"action": "replace", "content": "Respond in a warm, professional tone. Be thorough in explanations."},
            # Remove coding-specific rules
            "code_change_rules": {"action": "remove"},
            # Append to existing guidelines
            "guidelines": {"action": "append", "content": "\n* Always cite data sources"},
        },
        # Additional instructions appended after all sections
        "content": "Focus on financial analysis and reporting.",
    },
)
Available section IDs: "identity", "tone", "tool_efficiency", "environment_context", "code_change_rules", "guidelines", "safety", "tool_instructions", "custom_instructions", "last_instructions". Use the SYSTEM_PROMPT_SECTIONS dict for descriptions of each section.
Each section override supports four actions:
- replace - Replace the section content entirely
- remove - Remove the section from the prompt
- append - Add content after the existing section
- prepend - Add content before the existing section
Unknown section IDs are handled gracefully: content from replace/append/prepend overrides is appended to additional instructions, and remove overrides are silently ignored.
Tools
Define tools with automatic JSON schema generation using the @define_tool decorator and Pydantic models:
from pydantic import BaseModel, Field
from copilot import CopilotClient, define_tool, PermissionHandler
class LookupIssueParams(BaseModel):
    id: str = Field(description="Issue identifier")

@define_tool(description="Fetch issue details from our tracker")
async def lookup_issue(params: LookupIssueParams) -> str:
    issue = await fetch_issue(params.id)
    return issue.summary

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    tools=[lookup_issue],
)
Note: When using from __future__ import annotations, define Pydantic models at module level (not inside functions).
Low-level API (without Pydantic):
For users who prefer manual schema definition:
from copilot import CopilotClient, Tool, PermissionHandler
async def lookup_issue(invocation):
    issue_id = invocation["arguments"]["id"]
    issue = await fetch_issue(issue_id)
    return {
        "textResultForLlm": issue.summary,
        "resultType": "success",
        "sessionLog": f"Fetched issue {issue_id}",
    }

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    tools=[
        Tool(
            name="lookup_issue",
            description="Fetch issue details from our tracker",
            parameters={
                "type": "object",
                "properties": {
                    "id": {"type": "string", "description": "Issue identifier"},
                },
                "required": ["id"],
            },
            handler=lookup_issue,
        )
    ],
)
The SDK automatically handles tool.call, executes your handler (sync or async), and responds with the final result when the tool completes.
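If several low-level tools share the same result shape, a small helper can build it; this is our own convenience sketch, not SDK API, and it only produces the "success" resultType shown in the example above:

```python
def tool_result(text: str, session_log: str = "") -> dict:
    # Keys match the handler return value documented above:
    # textResultForLlm is what the model sees, sessionLog is for the transcript.
    return {
        "textResultForLlm": text,
        "resultType": "success",
        "sessionLog": session_log,
    }

result = tool_result("4 open issues", session_log="Fetched issue list")
print(result["textResultForLlm"])  # 4 open issues
```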
Overriding Built-in Tools
If you register a tool with the same name as a built-in CLI tool (e.g. edit_file, read_file), the SDK will throw an error unless you explicitly opt in by setting overrides_built_in_tool=True. This flag signals that you intend to replace the built-in tool with your custom implementation.
class EditFileParams(BaseModel):
    path: str = Field(description="File path")
    content: str = Field(description="New file content")

@define_tool(name="edit_file", description="Custom file editor with project-specific validation", overrides_built_in_tool=True)
async def edit_file(params: EditFileParams) -> str:
    ...  # your logic
Skipping Permission Prompts
Set skip_permission=True on a tool definition to allow it to execute without triggering a permission prompt:
@define_tool(name="safe_lookup", description="A read-only lookup that needs no confirmation", skip_permission=True)
async def safe_lookup(params: LookupParams) -> str:
    ...  # your logic
Image Support
The SDK supports image attachments via the attachments parameter. You can attach images by providing their file path, or by passing base64-encoded data directly using a blob attachment:
# File attachment — runtime reads from disk
await session.send(
    "What's in this image?",
    attachments=[
        {
            "type": "file",
            "path": "/path/to/image.jpg",
        }
    ],
)

# Blob attachment — provide base64 data directly
await session.send(
    "What's in this image?",
    attachments=[
        {
            "type": "blob",
            "data": base64_image_data,
            "mimeType": "image/png",
        }
    ],
)
Supported image formats include JPG, PNG, GIF, and other common image types. The agent's view tool can also read images directly from the filesystem, so you can also ask questions like:
await session.send("What does the most recent jpg in this directory portray?")
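Preparing the base64 payload for a blob attachment takes only the standard library; the helper name below is ours, not SDK API:

```python
import base64

def blob_attachment(data: bytes, mime_type: str) -> dict:
    # Encode raw image bytes as base64 text, matching the blob shape above
    return {
        "type": "blob",
        "data": base64.b64encode(data).decode("ascii"),
        "mimeType": mime_type,
    }

att = blob_attachment(b"\x89PNG...", "image/png")  # bytes here are a stand-in
print(att["type"], att["mimeType"])  # blob image/png
```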
Streaming
Enable streaming to receive assistant response chunks as they're generated:
import asyncio
from copilot import CopilotClient, PermissionHandler

async def main():
    client = CopilotClient()
    await client.start()

    session = await client.create_session(
        on_permission_request=PermissionHandler.approve_all,
        model="gpt-5",
        streaming=True,
    )

    # Use asyncio.Event to wait for completion
    done = asyncio.Event()

    def on_event(event):
        if event.type.value == "assistant.message_delta":
            # Streaming message chunk - print incrementally
            delta = event.data.delta_content or ""
            print(delta, end="", flush=True)
        elif event.type.value == "assistant.reasoning_delta":
            # Streaming reasoning chunk (if model supports reasoning)
            delta = event.data.delta_content or ""
            print(delta, end="", flush=True)
        elif event.type.value == "assistant.message":
            # Final message - complete content
            print("\n--- Final message ---")
            print(event.data.content)
        elif event.type.value == "assistant.reasoning":
            # Final reasoning content (if model supports reasoning)
            print("--- Reasoning ---")
            print(event.data.content)
        elif event.type.value == "session.idle":
            # Session finished processing
            done.set()

    session.on(on_event)

    await session.send("Tell me a short story")
    await done.wait()  # Wait for streaming to complete

    await session.disconnect()
    await client.stop()

asyncio.run(main())
When streaming=True:
- assistant.message_delta events are sent with delta_content containing incremental text
- assistant.reasoning_delta events are sent with delta_content for reasoning/chain-of-thought (model-dependent)
- Accumulate delta_content values to build the full response progressively
- The final assistant.message and assistant.reasoning events contain the complete content
Note: assistant.message and assistant.reasoning (final events) are always sent regardless of streaming setting.
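Accumulating delta_content values client-side can be sketched with a tiny helper class (illustrative; not part of the SDK):

```python
class DeltaAccumulator:
    """Collects streamed delta chunks into the full message text."""

    def __init__(self):
        self.parts = []

    def on_delta(self, delta_content):
        # delta_content may be None/empty on some events, so guard it
        if delta_content:
            self.parts.append(delta_content)

    @property
    def text(self) -> str:
        return "".join(self.parts)

acc = DeltaAccumulator()
for chunk in ["Once ", "upon ", None, "a time."]:
    acc.on_delta(chunk)
print(acc.text)  # Once upon a time.
```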
Infinite Sessions
By default, infinite sessions are enabled: the SDK automatically manages context window limits through background compaction and persists state to a workspace directory.
# Default: infinite sessions enabled with default thresholds
session = await client.create_session(on_permission_request=PermissionHandler.approve_all, model="gpt-5")
# Access the workspace path for checkpoints and files
print(session.workspace_path)
# => ~/.copilot/session-state/{session_id}/
# Custom thresholds
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    infinite_sessions={
        "enabled": True,
        "background_compaction_threshold": 0.80,  # Start compacting at 80% context usage
        "buffer_exhaustion_threshold": 0.95,      # Block at 95% until compaction completes
    },
)

# Disable infinite sessions
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    infinite_sessions={"enabled": False},
)
When enabled, sessions emit compaction events:
- session.compaction_start - Background compaction started
- session.compaction_complete - Compaction finished (includes token counts)
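The thresholds are fractions of the model's context window. For a hypothetical 200,000-token window, the example thresholds above work out to:

```python
# Hypothetical context window size in tokens (varies by model)
context_window_tokens = 200_000

background_start = round(context_window_tokens * 0.80)  # background compaction begins here
hard_block = round(context_window_tokens * 0.95)        # sends block past this until compaction completes

print(background_start, hard_block)  # 160000 190000
```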
Custom Providers
The SDK supports custom OpenAI-compatible API providers (BYOK - Bring Your Own Key), including local providers like Ollama. When using a custom provider, you must specify the model explicitly.
ProviderConfig fields:
- type (str): Provider type: "openai", "azure", or "anthropic" (default: "openai")
- base_url (str): API endpoint URL (required)
- api_key (str): API key (optional for local providers like Ollama)
- bearer_token (str): Bearer token for authentication (takes precedence over api_key)
- wire_api (str): API format for OpenAI/Azure: "completions" or "responses" (default: "completions")
- azure (dict): Azure-specific options with api_version (default: "2024-10-21")
Example with Ollama:
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="deepseek-coder-v2:16b",  # Model to use with the custom provider
    provider={
        "type": "openai",
        "base_url": "http://localhost:11434/v1",  # Ollama endpoint
        # api_key not required for Ollama
    },
)
await session.send("Hello!")
Example with custom OpenAI-compatible API:
import os

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-4",
    provider={
        "type": "openai",
        "base_url": "https://my-api.example.com/v1",
        "api_key": os.environ["MY_API_KEY"],
    },
)
Example with Azure OpenAI:
import os

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-4",
    provider={
        "type": "azure",  # Must be "azure" for Azure endpoints, NOT "openai"
        "base_url": "https://my-resource.openai.azure.com",  # Just the host, no path
        "api_key": os.environ["AZURE_OPENAI_KEY"],
        "azure": {
            "api_version": "2024-10-21",
        },
    },
)
Important notes:
- For Azure OpenAI endpoints (*.openai.azure.com), you must use type: "azure", not type: "openai".
- The base_url should be just the host (e.g., https://my-resource.openai.azure.com). Do not include /openai/v1 in the URL; the SDK handles path construction automatically.
Telemetry
The SDK supports OpenTelemetry for distributed tracing. Provide a telemetry config to enable trace export and automatic W3C Trace Context propagation.
from copilot import CopilotClient, SubprocessConfig
client = CopilotClient(SubprocessConfig(
    telemetry={
        "otlp_endpoint": "http://localhost:4318",
    },
))
TelemetryConfig options:
- otlp_endpoint (str): OTLP HTTP endpoint URL
- file_path (str): File path for JSON-lines trace output
- exporter_type (str): "otlp-http" or "file"
- source_name (str): Instrumentation scope name
- capture_content (bool): Whether to capture message content
Trace context (traceparent/tracestate) is automatically propagated between the SDK and CLI on create_session, resume_session, and send calls, and inbound when the CLI invokes tool handlers.
Install with the telemetry extra: pip install github-copilot-sdk[telemetry] (provides opentelemetry-api)
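The propagated trace context travels as a W3C traceparent header of the form version-traceid-spanid-flags. A quick, SDK-independent sketch of what that header carries (using the canonical example value from the W3C spec):

```python
def parse_traceparent(header: str) -> dict:
    # W3C Trace Context: four hex fields separated by dashes
    version, trace_id, span_id, flags = header.split("-")
    return {
        "version": version,
        "trace_id": trace_id,  # 16 bytes, identifies the whole trace
        "span_id": span_id,    # 8 bytes, identifies the parent span
        "flags": flags,        # e.g. "01" = sampled
    }

tp = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(tp["trace_id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```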
Permission Handling
An on_permission_request handler is required whenever you create or resume a session. The handler is called before the agent executes each tool (file writes, shell commands, custom tools, etc.) and must return a decision.
Approve All (simplest)
Use the built-in PermissionHandler.approve_all helper to allow every tool call without any checks:
from copilot import CopilotClient, PermissionHandler
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
)
Custom Permission Handler
Provide your own function to inspect each request and apply custom logic (sync or async):
from copilot import PermissionRequest, PermissionRequestResult
def on_permission_request(request: PermissionRequest, invocation: dict) -> PermissionRequestResult:
    # request.kind — what type of operation is being requested:
    #   "shell"       — executing a shell command
    #   "write"       — writing or editing a file
    #   "read"        — reading a file
    #   "mcp"         — calling an MCP tool
    #   "custom-tool" — calling one of your registered tools
    #   "url"         — fetching a URL
    #   "memory"      — accessing or updating session/workspace memory
    #   "hook"        — invoking a registered hook
    # request.tool_call_id      — the tool call that triggered this request
    # request.tool_name         — name of the tool (for custom-tool / mcp)
    # request.file_name         — file being written (for write)
    # request.full_command_text — full shell command (for shell)
    if request.kind.value == "shell":
        # Deny shell commands
        return PermissionRequestResult(kind="denied-interactively-by-user")
    return PermissionRequestResult(kind="approved")
session = await client.create_session(
    on_permission_request=on_permission_request,
    model="gpt-5",
)
Async handlers are also supported:
async def on_permission_request(request: PermissionRequest, invocation: dict) -> PermissionRequestResult:
    # Simulate an async approval check (e.g., prompting a user over a network)
    await asyncio.sleep(0)
    return PermissionRequestResult(kind="approved")
Permission Result Kinds
| kind value | Meaning |
|---|---|
| "approved" | Allow the tool to run |
| "denied-interactively-by-user" | User explicitly denied the request |
| "denied-no-approval-rule-and-could-not-request-from-user" | No approval rule matched and the user could not be asked (default when no kind is specified) |
| "denied-by-rules" | Denied by a policy rule |
| "denied-by-content-exclusion-policy" | Denied due to a content exclusion policy |
| "no-result" | Leave the request unanswered (not allowed for protocol v2 permission requests) |
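A custom handler's decision logic often reduces to mapping the request kind to a result kind. A minimal, SDK-independent sketch of such a policy (the `decide` helper and its `allow_shell` flag are illustrative, not SDK API):

```python
def decide(kind: str, allow_shell: bool = False) -> str:
    # Illustrative policy: deny shell commands unless explicitly opted in,
    # approve everything else. Result strings come from the table above.
    if kind == "shell" and not allow_shell:
        return "denied-interactively-by-user"
    return "approved"

print(decide("shell"))                   # denied-interactively-by-user
print(decide("write"))                   # approved
print(decide("shell", allow_shell=True)) # approved
```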
Resuming Sessions
Pass on_permission_request when resuming a session too — it is required:
session = await client.resume_session(
    "session-id",
    on_permission_request=PermissionHandler.approve_all,
)
Per-Tool Skip Permission
To let a specific custom tool bypass the permission prompt entirely, set skip_permission=True on the tool definition. See Skipping Permission Prompts under Tools.
User Input Requests
Enable the agent to ask questions to the user using the ask_user tool by providing an on_user_input_request handler:
async def handle_user_input(request, invocation):
    # request["question"]              - The question to ask
    # request.get("choices")           - Optional list of choices for multiple choice
    # request.get("allowFreeform", True) - Whether freeform input is allowed
    print(f"Agent asks: {request['question']}")
    if request.get("choices"):
        print(f"Choices: {', '.join(request['choices'])}")

    # Return the user's response
    return {
        "answer": "User's answer here",
        "wasFreeform": True,  # Whether the answer was freeform (not from choices)
    }

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    on_user_input_request=handle_user_input,
)
Session Hooks
Hook into session lifecycle events by providing handlers in the hooks configuration:
async def on_pre_tool_use(input, invocation):
    print(f"About to run tool: {input['toolName']}")
    # Return permission decision and optionally modify args
    return {
        "permissionDecision": "allow",           # "allow", "deny", or "ask"
        "modifiedArgs": input.get("toolArgs"),   # Optionally modify tool arguments
        "additionalContext": "Extra context for the model",
    }

async def on_post_tool_use(input, invocation):
    print(f"Tool {input['toolName']} completed")
    return {
        "additionalContext": "Post-execution notes",
    }

async def on_user_prompt_submitted(input, invocation):
    print(f"User prompt: {input['prompt']}")
    return {
        "modifiedPrompt": input["prompt"],  # Optionally modify the prompt
    }

async def on_session_start(input, invocation):
    print(f"Session started from: {input['source']}")  # "startup", "resume", "new"
    return {
        "additionalContext": "Session initialization context",
    }

async def on_session_end(input, invocation):
    print(f"Session ended: {input['reason']}")

async def on_error_occurred(input, invocation):
    print(f"Error in {input['errorContext']}: {input['error']}")
    return {
        "errorHandling": "retry",  # "retry", "skip", or "abort"
    }

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    hooks={
        "on_pre_tool_use": on_pre_tool_use,
        "on_post_tool_use": on_post_tool_use,
        "on_user_prompt_submitted": on_user_prompt_submitted,
        "on_session_start": on_session_start,
        "on_session_end": on_session_end,
        "on_error_occurred": on_error_occurred,
    },
)
Available hooks:
- on_pre_tool_use - Intercept tool calls before execution. Can allow/deny or modify arguments.
- on_post_tool_use - Process tool results after execution. Can modify results or add context.
- on_user_prompt_submitted - Intercept user prompts. Can modify the prompt before processing.
- on_session_start - Run logic when a session starts or resumes.
- on_session_end - Cleanup or logging when a session ends.
- on_error_occurred - Handle errors with retry/skip/abort strategies.
Requirements
- Python 3.11+
- GitHub Copilot CLI installed and accessible