
Python SDK for GitHub Copilot CLI

Project description

Copilot Python SDK

Python SDK for programmatic control of GitHub Copilot CLI via JSON-RPC.

Note: This SDK is in public preview and may change in breaking ways.

Installation

pip install -e ".[telemetry,dev]"
# or
uv pip install -e ".[telemetry,dev]"

Run the Sample

Try the interactive chat sample (from the repo root):

cd python/samples
python chat.py

Quick Start

import asyncio

from copilot import CopilotClient
from copilot.generated.session_events import AssistantMessageData, SessionIdleData
from copilot.session import PermissionHandler

async def main():
    # Client automatically starts on enter and cleans up on exit
    async with CopilotClient() as client:
        # Create a session with automatic cleanup (on_permission_request is required)
        async with await client.create_session(
            on_permission_request=PermissionHandler.approve_all,
            model="gpt-5",
        ) as session:
            # Wait for response using session.idle event
            done = asyncio.Event()

            def on_event(event):
                match event.data:
                    case AssistantMessageData() as data:
                        print(data.content)
                    case SessionIdleData():
                        done.set()

            session.on(on_event)

            # Send a message and wait for completion
            await session.send("What is 2+2?")
            await done.wait()

asyncio.run(main())

Manual Resource Management

If you need more control over the lifecycle, you can call start(), stop(), and disconnect() manually:

import asyncio

from copilot import CopilotClient
from copilot.generated.session_events import AssistantMessageData, SessionIdleData
from copilot.session import PermissionHandler

async def main():
    client = CopilotClient()
    await client.start()

    # Create a session (on_permission_request is required)
    session = await client.create_session(
        on_permission_request=PermissionHandler.approve_all,
        model="gpt-5",
    )

    done = asyncio.Event()

    def on_event(event):
        match event.data:
            case AssistantMessageData() as data:
                print(data.content)
            case SessionIdleData():
                done.set()

    session.on(on_event)
    await session.send("What is 2+2?")
    await done.wait()

    # Clean up manually
    await session.disconnect()
    await client.stop()

asyncio.run(main())

Features

  • ✅ Full JSON-RPC protocol support
  • ✅ stdio and TCP transports
  • ✅ Real-time streaming events
  • ✅ Session history with get_messages()
  • ✅ Type hints throughout
  • ✅ Async/await native
  • ✅ Async context manager support for automatic resource cleanup

API Reference

CopilotClient

from copilot import CopilotClient
from copilot.session import PermissionHandler

async with CopilotClient() as client:
    async with await client.create_session(
        on_permission_request=PermissionHandler.approve_all,
        model="gpt-5",
    ) as session:
        def on_event(event):
            print(f"Event: {event.type}")

        session.on(on_event)
        await session.send("Hello!")

        # ... wait for events ...

Note: For manual lifecycle management, see Manual Resource Management above.

from copilot import CopilotClient, ExternalServerConfig

# Connect to an existing CLI server
client = CopilotClient(ExternalServerConfig(url="localhost:3000"))

CopilotClient Constructor:

CopilotClient(
    config=None,        # SubprocessConfig | ExternalServerConfig | None
    *,
    auto_start=True,    # auto-start server on first use
    on_list_models=None, # custom handler for list_models()
)

SubprocessConfig — spawn a local CLI process:

  • cli_path (str | None): Path to CLI executable (default: COPILOT_CLI_PATH env var, or bundled binary)
  • cli_args (list[str]): Extra arguments for the CLI executable
  • cwd (str | None): Working directory for CLI process (default: current dir)
  • use_stdio (bool): Use stdio transport instead of TCP (default: True)
  • port (int): Server port for TCP mode (default: 0 for random)
  • log_level (str): Log level (default: "info")
  • env (dict | None): Environment variables for the CLI process
  • github_token (str | None): GitHub token for authentication. When provided, takes priority over other auth methods.
  • copilot_home (str | None): Base directory for Copilot data (session state, config, etc.). Sets COPILOT_HOME on the spawned CLI process. When None, the CLI defaults to ~/.copilot. Useful in restricted environments where only specific directories are writable. Ignored when using ExternalServerConfig.
  • use_logged_in_user (bool | None): Whether to use logged-in user for authentication (default: True, but False when github_token is provided).
  • telemetry (dict | None): OpenTelemetry configuration for the CLI process. Providing this enables telemetry — no separate flag needed. See Telemetry below.
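As a sketch combining several of these fields (the paths and values below are placeholders, not recommendations):

```python
from copilot import CopilotClient, SubprocessConfig

# Sketch: spawn the CLI with explicit settings (placeholder values)
client = CopilotClient(SubprocessConfig(
    cli_path="/usr/local/bin/copilot",  # or rely on COPILOT_CLI_PATH / the bundled binary
    cwd="/path/to/project",             # working directory for the CLI process
    log_level="debug",
    copilot_home="/tmp/copilot-home",   # writable data dir; defaults to ~/.copilot
))
```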

ExternalServerConfig — connect to an existing CLI server:

  • url (str): Server URL (e.g., "localhost:8080", "http://127.0.0.1:9000", or just "8080").

CopilotClient.create_session():

These are passed as keyword arguments to create_session():

  • model (str): Model to use ("gpt-5", "claude-sonnet-4.5", etc.). Required when using a custom provider.
  • reasoning_effort (str): Reasoning effort level for models that support it ("low", "medium", "high", "xhigh"). Use list_models() to check which models support this option.
  • session_id (str): Custom session ID
  • tools (list): Custom tools exposed to the CLI
  • system_message (SystemMessageConfig): System message configuration
  • streaming (bool): Enable streaming delta events
  • provider (ProviderConfig): Custom API provider configuration (BYOK). See Custom Providers section.
  • infinite_sessions (InfiniteSessionConfig): Automatic context compaction configuration
  • on_permission_request (callable): Required. Handler called before each tool execution to approve or deny it. Use PermissionHandler.approve_all to allow everything, or provide a custom function for fine-grained control. See Permission Handling section.
  • on_user_input_request (callable): Handler for user input requests from the agent (enables ask_user tool). See User Input Requests section.
  • hooks (SessionHooks): Hook handlers for session lifecycle events. See Session Hooks section.

Session Lifecycle Methods:

# Get the session currently displayed in TUI (TUI+server mode only)
session_id = await client.get_foreground_session_id()

# Request TUI to display a specific session (TUI+server mode only)
await client.set_foreground_session_id("session-123")

# Subscribe to all lifecycle events
def on_lifecycle(event):
    print(f"{event.type}: {event.sessionId}")

unsubscribe = client.on(on_lifecycle)

# Subscribe to specific event type
unsubscribe = client.on("session.foreground", lambda e: print(f"Foreground: {e.sessionId}"))

# Later, to stop receiving events:
unsubscribe()

Lifecycle Event Types:

  • session.created - A new session was created
  • session.deleted - A session was deleted
  • session.updated - A session was updated
  • session.foreground - A session became the foreground session in TUI
  • session.background - A session is no longer the foreground session

Tools

Define tools with automatic JSON schema generation using the @define_tool decorator and Pydantic models:

from pydantic import BaseModel, Field
from copilot import CopilotClient, define_tool

class LookupIssueParams(BaseModel):
    id: str = Field(description="Issue identifier")

@define_tool(description="Fetch issue details from our tracker")
async def lookup_issue(params: LookupIssueParams) -> str:
    issue = await fetch_issue(params.id)
    return issue.summary

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    tools=[lookup_issue],
) as session:
    ...

Note: When using from __future__ import annotations, define Pydantic models at module level (not inside functions).

Low-level API (without Pydantic):

For users who prefer manual schema definition:

from copilot import CopilotClient
from copilot.tools import Tool, ToolInvocation, ToolResult
from copilot.session import PermissionHandler

async def lookup_issue(invocation: ToolInvocation) -> ToolResult:
    issue_id = invocation.arguments["id"]
    issue = await fetch_issue(issue_id)
    return ToolResult(
        text_result_for_llm=issue.summary,
        result_type="success",
        session_log=f"Fetched issue {issue_id}",
    )

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    tools=[
        Tool(
            name="lookup_issue",
            description="Fetch issue details from our tracker",
            parameters={
                "type": "object",
                "properties": {
                    "id": {"type": "string", "description": "Issue identifier"},
                },
                "required": ["id"],
            },
            handler=lookup_issue,
        )
    ],
) as session:
    ...

The SDK automatically handles tool.call, executes your handler (sync or async), and responds with the final result when the tool completes.

Overriding Built-in Tools

If you register a tool with the same name as a built-in CLI tool (e.g. edit_file, read_file), the SDK will throw an error unless you explicitly opt in by setting overrides_built_in_tool=True. This flag signals that you intend to replace the built-in tool with your custom implementation.

class EditFileParams(BaseModel):
    path: str = Field(description="File path")
    content: str = Field(description="New file content")

@define_tool(name="edit_file", description="Custom file editor with project-specific validation", overrides_built_in_tool=True)
async def edit_file(params: EditFileParams) -> str:
    ...  # your logic here

Skipping Permission Prompts

Set skip_permission=True on a tool definition to allow it to execute without triggering a permission prompt:

@define_tool(name="safe_lookup", description="A read-only lookup that needs no confirmation", skip_permission=True)
async def safe_lookup(params: LookupParams) -> str:
    ...  # your logic here

Image Support

The SDK supports image attachments via the attachments parameter. You can attach images by providing their file path, or by passing base64-encoded data directly using a blob attachment:

# File attachment — runtime reads from disk
await session.send(
    "What's in this image?",
    attachments=[
        {
            "type": "file",
            "path": "/path/to/image.jpg",
        }
    ],
)

# Blob attachment — provide base64 data directly
await session.send(
    "What's in this image?",
    attachments=[
        {
            "type": "blob",
            "data": base64_image_data,
            "mimeType": "image/png",
        }
    ],
)

Supported image formats include JPG, PNG, GIF, and other common image types. The agent's view tool can also read images directly from the filesystem, so you can simply ask:

await session.send("What does the most recent jpg in this directory portray?")
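The blob form above needs base64 text, not raw bytes. A minimal stdlib sketch for producing base64_image_data (the helper name and paths are ours, purely illustrative):

```python
import base64
from pathlib import Path

def encode_image(path: str) -> str:
    """Read an image file and return its contents as base64 text."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

# Same idea with in-memory bytes instead of a file:
raw = b"\x89PNG\r\n\x1a\n"  # the 8-byte PNG signature
base64_image_data = base64.b64encode(raw).decode("ascii")
print(base64_image_data)  # iVBORw0KGgo=
```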

Streaming

Enable streaming to receive assistant response chunks as they're generated:

import asyncio

from copilot import CopilotClient
from copilot.generated.session_events import (
    AssistantMessageData,
    AssistantMessageDeltaData,
    AssistantReasoningData,
    AssistantReasoningDeltaData,
    SessionIdleData,
)
from copilot.session import PermissionHandler

async def main():
    async with CopilotClient() as client:
        async with await client.create_session(
            on_permission_request=PermissionHandler.approve_all,
            model="gpt-5",
            streaming=True,
        ) as session:
            # Use asyncio.Event to wait for completion
            done = asyncio.Event()

            def on_event(event):
                match event.data:
                    case AssistantMessageDeltaData() as data:
                        # Streaming message chunk - print incrementally
                        delta = data.delta_content or ""
                        print(delta, end="", flush=True)
                    case AssistantReasoningDeltaData() as data:
                        # Streaming reasoning chunk (if model supports reasoning)
                        delta = data.delta_content or ""
                        print(delta, end="", flush=True)
                    case AssistantMessageData() as data:
                        # Final message - complete content
                        print("\n--- Final message ---")
                        print(data.content)
                    case AssistantReasoningData() as data:
                        # Final reasoning content (if model supports reasoning)
                        print("--- Reasoning ---")
                        print(data.content)
                    case SessionIdleData():
                        # Session finished processing
                        done.set()

            session.on(on_event)
            await session.send("Tell me a short story")
            await done.wait()  # Wait for streaming to complete

asyncio.run(main())

When streaming=True:

  • assistant.message_delta events are sent with delta_content containing incremental text
  • assistant.reasoning_delta events are sent with delta_content for reasoning/chain-of-thought (model-dependent)
  • Accumulate delta_content values to build the full response progressively
  • The final assistant.message and assistant.reasoning events contain the complete content

Note: assistant.message and assistant.reasoning (final events) are always sent regardless of streaming setting.

Infinite Sessions

By default, infinite sessions are enabled: the session automatically manages context window limits through background compaction and persists its state to a workspace directory.

# Default: infinite sessions enabled with default thresholds
async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
) as session:
    # Access the workspace path for checkpoints and files
    print(session.workspace_path)
    # => ~/.copilot/session-state/{session_id}/

# Custom thresholds
async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    infinite_sessions={
        "enabled": True,
        "background_compaction_threshold": 0.80,  # Start compacting at 80% context usage
        "buffer_exhaustion_threshold": 0.95,  # Block at 95% until compaction completes
    },
) as session:
    ...

# Disable infinite sessions
async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    infinite_sessions={"enabled": False},
) as session:
    ...

When enabled, sessions emit compaction events:

  • session.compaction_start - Background compaction started
  • session.compaction_complete - Compaction finished (includes token counts)

Custom Providers

The SDK supports custom OpenAI-compatible API providers (BYOK - Bring Your Own Key), including local providers like Ollama. When using a custom provider, you must specify the model explicitly.

ProviderConfig fields:

  • type (str): Provider type - "openai", "azure", or "anthropic" (default: "openai")
  • base_url (str): API endpoint URL (required)
  • api_key (str): API key (optional for local providers like Ollama)
  • bearer_token (str): Bearer token for authentication (takes precedence over api_key)
  • wire_api (str): API format for OpenAI/Azure - "completions" or "responses" (default: "completions")
  • azure (dict): Azure-specific options with api_version (default: "2024-10-21")

Example with Ollama:

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="deepseek-coder-v2:16b",  # Required when using custom provider
    provider={
        "type": "openai",
        "base_url": "http://localhost:11434/v1",  # Ollama endpoint
        # api_key not required for Ollama
    },
) as session:
    await session.send("Hello!")

Example with custom OpenAI-compatible API:

import os

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-4",
    provider={
        "type": "openai",
        "base_url": "https://my-api.example.com/v1",
        "api_key": os.environ["MY_API_KEY"],
    },
) as session:
    ...

Example with Azure OpenAI:

import os

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-4",
    provider={
        "type": "azure",  # Must be "azure" for Azure endpoints, NOT "openai"
        "base_url": "https://my-resource.openai.azure.com",  # Just the host, no path
        "api_key": os.environ["AZURE_OPENAI_KEY"],
        "azure": {
            "api_version": "2024-10-21",
        },
    },
) as session:
    ...

Important notes:

  • When using a custom provider, the model parameter is required. The SDK will throw an error if no model is specified.
  • For Azure OpenAI endpoints (*.openai.azure.com), you must use type: "azure", not type: "openai".
  • The base_url should be just the host (e.g., https://my-resource.openai.azure.com). Do not include /openai/v1 in the URL - the SDK handles path construction automatically.
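The Azure rule above can be expressed as a tiny helper. This function is ours, not part of the SDK; it just illustrates picking the provider type from the endpoint host:

```python
from urllib.parse import urlparse

def provider_type_for(base_url: str) -> str:
    """Illustrative only: use "azure" for *.openai.azure.com hosts, else "openai"."""
    host = urlparse(base_url).hostname or ""
    return "azure" if host.endswith(".openai.azure.com") else "openai"

print(provider_type_for("https://my-resource.openai.azure.com"))  # azure
print(provider_type_for("http://localhost:11434/v1"))             # openai
```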

Telemetry

The SDK supports OpenTelemetry for distributed tracing. Provide a telemetry config to enable trace export and automatic W3C Trace Context propagation.

from copilot import CopilotClient, SubprocessConfig

client = CopilotClient(SubprocessConfig(
    telemetry={
        "otlp_endpoint": "http://localhost:4318",
    },
))

TelemetryConfig options:

  • otlp_endpoint (str): OTLP HTTP endpoint URL
  • file_path (str): File path for JSON-lines trace output
  • exporter_type (str): "otlp-http" or "file"
  • source_name (str): Instrumentation scope name
  • capture_content (bool): Whether to capture message content

Trace context (traceparent/tracestate) is automatically propagated between the SDK and CLI on create_session, resume_session, and send calls, and inbound when the CLI invokes tool handlers.
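For reference, a W3C traceparent header has the shape version-trace_id-parent_id-flags. A stdlib sketch of generating one (purely illustrative; the SDK handles propagation for you):

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context traceparent: version-trace_id-parent_id-flags."""
    trace_id = secrets.token_hex(16)   # 32 hex chars
    parent_id = secrets.token_hex(8)   # 16 hex chars
    return f"00-{trace_id}-{parent_id}-01"  # flags 01 = sampled

header = make_traceparent()
print(header)  # e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```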

Install with telemetry extras: pip install copilot-sdk[telemetry] (provides opentelemetry-api)

Permission Handling

An on_permission_request handler is required whenever you create or resume a session. The handler is called before the agent executes each tool (file writes, shell commands, custom tools, etc.) and must return a decision.

Approve All (simplest)

Use the built-in PermissionHandler.approve_all helper to allow every tool call without any checks:

from copilot import CopilotClient
from copilot.session import PermissionHandler

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
)

Custom Permission Handler

Provide your own function to inspect each request and apply custom logic (sync or async):

from copilot.session import PermissionRequestResult
from copilot.generated.session_events import PermissionRequest

def on_permission_request(
    request: PermissionRequest, invocation: dict
) -> PermissionRequestResult:
    # request.kind — what type of operation is being requested:
    #   "shell"       — executing a shell command
    #   "write"       — writing or editing a file
    #   "read"        — reading a file
    #   "mcp"         — calling an MCP tool
    #   "custom-tool" — calling one of your registered tools
    #   "url"         — fetching a URL
    #   "memory"      — accessing or updating session/workspace memory
    #   "hook"        — invoking a registered hook
    # request.tool_call_id  — the tool call that triggered this request
    # request.tool_name     — name of the tool (for custom-tool / mcp)
    # request.file_name     — file being written (for write)
    # request.full_command_text — full shell command (for shell)

    if request.kind.value == "shell":
        # Deny shell commands
        return PermissionRequestResult(kind="denied-interactively-by-user")

    return PermissionRequestResult(kind="approved")

session = await client.create_session(
    on_permission_request=on_permission_request,
    model="gpt-5",
)

Async handlers are also supported:

async def on_permission_request(
    request: PermissionRequest, invocation: dict
) -> PermissionRequestResult:
    # Simulate an async approval check (e.g., prompting a user over a network)
    await asyncio.sleep(0)
    return PermissionRequestResult(kind="approved")

Permission Result Kinds

  • "approved" - Allow the tool to run
  • "denied-interactively-by-user" - User explicitly denied the request
  • "denied-no-approval-rule-and-could-not-request-from-user" - No approval rule matched and the user could not be asked (default when no kind is specified)
  • "denied-by-rules" - Denied by a policy rule
  • "denied-by-content-exclusion-policy" - Denied due to a content exclusion policy
  • "no-result" - Leave the request unanswered (not allowed for protocol v2 permission requests)
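These kind strings can drive a simple policy table inside a custom handler. The policy below is made up for illustration:

```python
# Illustrative policy: request kind -> permission result kind string
POLICY = {
    "shell": "denied-interactively-by-user",
    "write": "approved",
    "read": "approved",
    "custom-tool": "approved",
}

def decide(kind: str) -> str:
    # Fall back to the "could not ask the user" denial for unlisted kinds.
    return POLICY.get(kind, "denied-no-approval-rule-and-could-not-request-from-user")

print(decide("shell"))  # denied-interactively-by-user
print(decide("url"))    # denied-no-approval-rule-and-could-not-request-from-user
```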

Resuming Sessions

Pass on_permission_request when resuming a session too — it is required:

session = await client.resume_session(
    "session-id",
    on_permission_request=PermissionHandler.approve_all,
)

Per-Tool Skip Permission

To let a specific custom tool bypass the permission prompt entirely, set skip_permission=True on the tool definition. See Skipping Permission Prompts under Tools.

User Input Requests

Enable the agent to ask the user questions via the ask_user tool by providing an on_user_input_request handler:

async def handle_user_input(request, invocation):
    # request["question"] - The question to ask
    # request.get("choices") - Optional list of choices for multiple choice
    # request.get("allowFreeform", True) - Whether freeform input is allowed

    print(f"Agent asks: {request['question']}")
    if request.get("choices"):
        print(f"Choices: {', '.join(request['choices'])}")

    # Return the user's response
    return {
        "answer": "User's answer here",
        "wasFreeform": True,  # Whether the answer was freeform (not from choices)
    }

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    on_user_input_request=handle_user_input,
) as session:
    ...

Session Hooks

Hook into session lifecycle events by providing handlers in the hooks configuration:

async def on_pre_tool_use(input, invocation):
    print(f"About to run tool: {input['toolName']}")
    # Return permission decision and optionally modify args
    return {
        "permissionDecision": "allow",  # "allow", "deny", or "ask"
        "modifiedArgs": input.get("toolArgs"),  # Optionally modify tool arguments
        "additionalContext": "Extra context for the model",
    }

async def on_post_tool_use(input, invocation):
    print(f"Tool {input['toolName']} completed")
    return {
        "additionalContext": "Post-execution notes",
    }

async def on_user_prompt_submitted(input, invocation):
    print(f"User prompt: {input['prompt']}")
    return {
        "modifiedPrompt": input["prompt"],  # Optionally modify the prompt
    }

async def on_session_start(input, invocation):
    print(f"Session started from: {input['source']}")  # "startup", "resume", "new"
    return {
        "additionalContext": "Session initialization context",
    }

async def on_session_end(input, invocation):
    print(f"Session ended: {input['reason']}")

async def on_error_occurred(input, invocation):
    print(f"Error in {input['errorContext']}: {input['error']}")
    return {
        "errorHandling": "retry",  # "retry", "skip", or "abort"
    }

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    hooks={
        "on_pre_tool_use": on_pre_tool_use,
        "on_post_tool_use": on_post_tool_use,
        "on_user_prompt_submitted": on_user_prompt_submitted,
        "on_session_start": on_session_start,
        "on_session_end": on_session_end,
        "on_error_occurred": on_error_occurred,
    },
) as session:
    ...

Available hooks:

  • on_pre_tool_use - Intercept tool calls before execution. Can allow/deny or modify arguments.
  • on_post_tool_use - Process tool results after execution. Can modify results or add context.
  • on_user_prompt_submitted - Intercept user prompts. Can modify the prompt before processing.
  • on_session_start - Run logic when a session starts or resumes.
  • on_session_end - Cleanup or logging when session ends.
  • on_error_occurred - Handle errors with retry/skip/abort strategies.

Commands

Register slash commands that users can invoke from the CLI TUI. When the user types /commandName, the SDK dispatches the event to your handler.

from copilot.session import CommandDefinition, CommandContext, PermissionHandler

async def handle_deploy(ctx: CommandContext) -> None:
    print(f"Deploying with args: {ctx.args}")
    # ctx.session_id  — the session where the command was invoked
    # ctx.command      — full command text (e.g. "/deploy production")
    # ctx.command_name — command name without leading / (e.g. "deploy")
    # ctx.args         — raw argument string (e.g. "production")

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    commands=[
        CommandDefinition(
            name="deploy",
            description="Deploy the app",
            handler=handle_deploy,
        ),
        CommandDefinition(
            name="rollback",
            description="Rollback to previous version",
            handler=lambda ctx: print("Rolling back..."),
        ),
    ],
) as session:
    ...

Commands can also be provided when resuming a session via resume_session(commands=[...]).

UI Elicitation

The session.ui API provides convenience methods for asking the user questions through interactive dialogs. These methods are only available when the CLI host supports elicitation — check session.capabilities before calling.

Capability Check

ui_caps = session.capabilities.get("ui", {})
if ui_caps.get("elicitation"):
    # Safe to call session.ui methods
    ...

Confirm

Shows a yes/no confirmation dialog:

ok = await session.ui.confirm("Deploy to production?")
if ok:
    print("Deploying...")

Select

Shows a selection dialog with a list of options:

env = await session.ui.select("Choose environment:", ["staging", "production", "dev"])
if env:
    print(f"Selected: {env}")

Input

Shows a text input dialog with optional constraints:

name = await session.ui.input("Enter your name:")

# With options
email = await session.ui.input("Enter email:", {
    "title": "Email Address",
    "description": "We'll use this for notifications",
    "format": "email",
})

Custom Elicitation

For full control, use the elicitation() method with a custom JSON schema:

result = await session.ui.elicitation({
    "message": "Configure deployment",
    "requestedSchema": {
        "type": "object",
        "properties": {
            "region": {"type": "string", "enum": ["us-east-1", "eu-west-1"]},
            "replicas": {"type": "number", "minimum": 1, "maximum": 10},
        },
        "required": ["region"],
    },
})

if result["action"] == "accept":
    region = result["content"]["region"]
    replicas = result["content"].get("replicas", 1)

Elicitation Request Handler

When the server (or an MCP tool) needs to ask the end-user a question, it sends an elicitation.requested event. Provide an on_elicitation_request handler to respond:

from copilot.session import ElicitationContext, ElicitationResult, PermissionHandler

async def handle_elicitation(
    context: ElicitationContext,
) -> ElicitationResult:
    # context["session_id"]           — the session ID
    # context["message"]              — what the server is asking
    # context.get("requestedSchema")  — optional JSON schema for form fields
    # context.get("mode")             — "form" or "url"

    print(f"Server asks: {context['message']}")

    # Return the user's response
    return {
        "action": "accept",  # or "decline" or "cancel"
        "content": {"answer": "yes"},
    }

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    on_elicitation_request=handle_elicitation,
) as session:
    ...

When on_elicitation_request is provided, the SDK automatically:

  • Sends requestElicitation: true to the server during session creation/resumption
  • Reports the elicitation capability on the session
  • Dispatches elicitation.requested events to your handler
  • Auto-cancels if your handler throws an error (so the server doesn't hang)

Requirements

  • Python 3.11+
  • GitHub Copilot CLI installed and accessible

Download files

Built Distributions

No source distribution is available for this release. The following wheels are provided:

  • github_copilot_sdk-1.0.0b3-py3-none-win_arm64.whl (69.7 MB) - Python 3, Windows ARM64
  • github_copilot_sdk-1.0.0b3-py3-none-win_amd64.whl (72.0 MB) - Python 3, Windows x86-64
  • github_copilot_sdk-1.0.0b3-py3-none-manylinux_2_28_x86_64.whl (77.5 MB) - Python 3, manylinux (glibc 2.28+) x86-64
  • github_copilot_sdk-1.0.0b3-py3-none-manylinux_2_28_aarch64.whl (78.9 MB) - Python 3, manylinux (glibc 2.28+) ARM64
  • github_copilot_sdk-1.0.0b3-py3-none-macosx_11_0_arm64.whl (73.2 MB) - Python 3, macOS 11.0+ ARM64
  • github_copilot_sdk-1.0.0b3-py3-none-macosx_10_9_x86_64.whl (76.5 MB) - Python 3, macOS 10.9+ x86-64

File details

File hashes for each wheel:

github_copilot_sdk-1.0.0b3-py3-none-win_arm64.whl
  SHA256: 3a12cc1c74c3e136a0188f4d56b86e80b31faea28d84ed61c82480d72b294469
  MD5: 39d2ae34cca8701e281b7a699989fb4c
  BLAKE2b-256: 4c2ac9bc52596e567bf588f594f2779b3c06bbf3b09bf0b4df330436c3e486d0

github_copilot_sdk-1.0.0b3-py3-none-win_amd64.whl
  SHA256: 0491bebb8687a4acb1ee303efca94fcbb5ecce41bc29a7569dfc6ff93e8d299a
  MD5: 221b8b190f41df5454bde2a8c26b5452
  BLAKE2b-256: 2b94137999746e3ec54ef268c75cc0c40ad8c2811389432e56ba20cb105af65c

github_copilot_sdk-1.0.0b3-py3-none-manylinux_2_28_x86_64.whl
  SHA256: 97c285ed5f51aac98b549d75289e058ee3c7cee41159e87186c59f8b8035c71c
  MD5: dd41db715dbb00caf7326ab12abab26d
  BLAKE2b-256: 659ed8d9cb8c8d18757ba07e94bc288ee8c562801e7c79a254bcae589f71089e

github_copilot_sdk-1.0.0b3-py3-none-manylinux_2_28_aarch64.whl
  SHA256: 8649aefb69c126f6ee6e2fb185d4d376e8d5db332e835c3cb4c2eb0ee775f58a
  MD5: b4dd25c6d5e5defffcf195c4db161e40
  BLAKE2b-256: e876946159bb0af0809df143eb895f6dcf9f05251034cabf937bf64e8ad57fae

github_copilot_sdk-1.0.0b3-py3-none-macosx_11_0_arm64.whl
  SHA256: b3115553e6f79e76027bc4e5d6249623bc9b5fdc4c93a692fed9b34fdea608f5
  MD5: e5d4f18c5218470827bc04f50259ff18
  BLAKE2b-256: 9b2aa1ec3a8f0ff9cd78273813336d4bb2228ea433ef77bb8d6981a1cf5303aa

github_copilot_sdk-1.0.0b3-py3-none-macosx_10_9_x86_64.whl
  SHA256: a9a7d6dc948f709591638b6bf6195d8b207d81e56d412763204759d1bd7f54b5
  MD5: 2aa663e5f7f307f3af02370bc3d5e4e2
  BLAKE2b-256: 1992e23c7a0879dc7b888e6c503ebc1ddcd6fc5a264282e1a7c264dbc4ac8ef9

Provenance

Attestation bundles were made for each wheel. Publisher: publish.yml on github/copilot-sdk. Attestation values reflect the state when the release was signed and may no longer be current.
