A lean Claude Code clone in pure Python
Project description
PatchPal — A Claude Code–Style Agent in Python
A lightweight, Claude Code–inspired coding and automation assistant that supports both local and cloud LLMs.
PatchPal is an AI coding agent that helps you build software, debug issues, and automate tasks. Like Claude Code, it supports agent skills, tool use, and executable Python generation, enabling interactive workflows for tasks such as data analysis, visualization, web scraping, API interactions, and research with synthesized findings.
A key goal of this project is to approximate Claude Code's core functionality while remaining lean, accessible, and configurable, enabling learning, experimentation, and broad applicability across use cases.
$ ls ./patchpal
__init__.py agent.py cli.py context.py permissions.py skills.py system_prompt.md tools.py
Installation
Install PatchPal from PyPI:
pip install patchpal
Supported Operating Systems: Linux, macOS, and Windows.
Setup
- Get an API key or a local LLM engine:
  - [Cloud] For Anthropic models (default): Sign up at https://console.anthropic.com/
  - [Cloud] For OpenAI models: Get a key from https://platform.openai.com/
  - [Local] For vLLM: Install from https://docs.vllm.ai/ (free, no API charges). Recommended for local use.
  - [Local] For Ollama: Install from https://ollama.com/ (⚠️ requires OLLAMA_CONTEXT_LENGTH=32768; see the Ollama section below)
  - For other providers: Check the LiteLLM documentation
- Set up your API key as an environment variable:
# For Anthropic (default)
export ANTHROPIC_API_KEY=your_api_key_here
# For OpenAI
export OPENAI_API_KEY=your_api_key_here
# For vLLM - API key required only if configured
export HOSTED_VLLM_API_BASE=http://localhost:8000 # depends on your vLLM setup
export HOSTED_VLLM_API_KEY=token-abc123 # optional depending on your vLLM setup
# For other providers, check LiteLLM docs
- Run PatchPal:
# Use default model (anthropic/claude-sonnet-4-5)
patchpal
# Use a specific model via command-line argument
patchpal --model openai/gpt-4o # or openai/gpt-5, anthropic/claude-opus-4-5 etc.
# Use vLLM (local)
# Note: vLLM server must be started with --tool-call-parser and --enable-auto-tool-choice
# See "Using Local Models (vLLM & Ollama)" section below for details
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
# Use Ollama (local - requires OLLAMA_CONTEXT_LENGTH=32768)
export OLLAMA_CONTEXT_LENGTH=32768
patchpal --model ollama_chat/qwen3:32b
# Or set the model via environment variable
export PATCHPAL_MODEL=openai/gpt-5
patchpal
Features
Tools
The agent has the following tools:
File Operations
- read_file: Read contents of files in the repository
- list_files: List all files in the repository
- get_file_info: Get detailed metadata for file(s): size, modification time, type
  - Supports single files: get_file_info("file.txt")
  - Supports directories: get_file_info("src/")
  - Supports glob patterns: get_file_info("tests/*.py")
- Supports single files:
- find_files: Find files by name pattern using glob-style wildcards
  - Example: find_files("*.py") - all Python files
  - Example: find_files("test_*.py") - all test files
  - Example: find_files("**/*.md") - all markdown files recursively
  - Supports case-insensitive matching
- tree: Show directory tree structure to understand folder organization
  - Example: tree(".") - show tree from current directory
  - Configurable max depth (default: 3, max: 10)
  - Option to show/hide hidden files
- grep_code: Search for patterns in code files (regex support, file filtering)
- edit_file: Edit a file by replacing an exact string (efficient for small changes)
  - Example: edit_file("config.py", "port = 3000", "port = 8080")
  - More efficient than apply_patch for targeted changes
  - Old string must appear exactly once in the file
- apply_patch: Modify files by providing complete new content
- run_shell: Execute shell commands (requires user permission; privilege escalation blocked)
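The exact-match contract of edit_file (the old string must appear exactly once) can be illustrated with a short sketch. This is a hypothetical helper showing the contract, not PatchPal's actual implementation:

```python
from pathlib import Path

def edit_file(path: str, old: str, new: str) -> None:
    """Replace `old` with `new` in `path`, requiring exactly one match.

    Illustrative sketch of the exact-match contract: rejecting zero or
    multiple matches prevents ambiguous or unintended edits.
    """
    file = Path(path)
    text = file.read_text()
    count = text.count(old)
    if count != 1:
        raise ValueError(f"old string must appear exactly once, found {count}")
    file.write_text(text.replace(old, new))
```

Requiring a unique match is what makes this style of edit safer than a blind search-and-replace: if the anchor string is ambiguous, the call fails instead of silently changing the wrong occurrence.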
Git Operations (No Permission Required)
- git_status: Show modified, staged, and untracked files
- git_diff: Show changes in working directory or staged area
  - Optional parameters: path (specific file), staged (show staged changes)
- git_log: Show commit history
  - Optional parameters: max_count (number of commits, max 50), path (specific file history)
Web Capabilities (Requires Permission)
- web_search: Search the web using DuckDuckGo (no API key required!)
- Look up error messages and solutions
- Find current documentation and best practices
- Research library versions and compatibility
- Requires permission to prevent information leakage about your codebase
- web_fetch: Fetch and read content from URLs
- Read documentation pages
- Access API references
- Extract readable text from HTML pages
- Requires permission to prevent information leakage about your codebase
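Extracting readable text from HTML, as web_fetch does, can be approximated with the standard library alone. A simplified sketch (not PatchPal's actual extractor) that skips script and style contents:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""

    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-blank text outside skipped elements
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)
```

A production extractor would also handle entities, block-level spacing, and malformed markup, but the core idea is the same: walk the tags and keep only the visible text.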
Skills System
Skills are reusable workflows and custom commands that can be invoked by name or discovered automatically by the agent.
Creating Your Own Skills:
- Choose a location:
  - Personal skills (all projects): ~/.patchpal/skills/<skill-name>/SKILL.md
  - Project-specific skills: <repo>/.patchpal/skills/<skill-name>/SKILL.md
- Create the skill file:
# Create a personal skill
mkdir -p ~/.patchpal/skills/my-skill
cat > ~/.patchpal/skills/my-skill/SKILL.md <<'EOF'
---
name: my-skill
description: Brief description of what this skill does
---
# Instructions
Your detailed instructions here...
EOF
- Skill File Format:
---
name: skill-name
description: One-line description
---
# Detailed Instructions
- Step 1: Do this
- Step 2: Do that
- Use specific PatchPal tools like git_status, read_file, etc.
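A SKILL.md file of this shape can be parsed with a few lines of standard-library Python. This is a sketch of the format, not PatchPal's actual loader (a real one would use a YAML parser for the frontmatter):

```python
import re

def parse_skill(text: str) -> dict:
    """Split a SKILL.md file into frontmatter fields and body instructions."""
    # Frontmatter is the block between the two --- delimiters
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not match:
        raise ValueError("missing frontmatter delimited by ---")
    meta = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return {
        "name": meta.get("name"),
        "description": meta.get("description"),
        "instructions": match.group(2).strip(),
    }
```

The name and description are what the agent uses to discover a skill; the body is the instructions it follows once the skill is invoked.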
Example Skills:
The PatchPal repository includes example skills you can use as templates:
- commit: Best practices for creating git commits
- review: Comprehensive code review checklist
- add-tests: Add comprehensive pytest tests (includes code block templates)
- slack-gif-creator: Create animated GIFs for Slack (from Anthropic's official skills repo, demonstrates Claude Code compatibility)
- skill-creator: Guide for creating effective skills with bundled scripts and references (from Anthropic's official skills repo, demonstrates full bundled resources support)
After pip install patchpal, get examples:
# Quick way: Download examples directly from GitHub
curl -L https://github.com/amaiya/patchpal/archive/main.tar.gz | tar xz --strip=1 patchpal-main/examples
# Or clone the repository
git clone https://github.com/amaiya/patchpal.git
cd patchpal
# Copy examples to your personal skills directory
cp -r examples/skills/commit ~/.patchpal/skills/
cp -r examples/skills/review ~/.patchpal/skills/
cp -r examples/skills/add-tests ~/.patchpal/skills/
View examples online: Browse the examples/skills/ directory on GitHub to see the skill format and create your own.
You can also try out the example skills at anthropic/skills.
Using Skills:
There are two ways to invoke skills:
- Direct invocation - Type /skillname at the prompt:
$ patchpal
You: /commit Fix authentication bug
- Natural language - Just ask, and the agent discovers the right skill:
You: Help me commit these changes following best practices
# Agent automatically discovers and uses the commit skill
Finding Available Skills:
Ask the agent to list them:
You: list skills
Skill Priority:
Project skills (.patchpal/skills/) override personal skills (~/.patchpal/skills/) with the same name.
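The override rule can be sketched as: scan the personal directory first, then let project skills replace same-named entries. A hypothetical helper, not PatchPal's actual discovery code:

```python
from pathlib import Path

def discover_skills(personal_dir: Path, project_dir: Path) -> dict:
    """Map skill name -> SKILL.md path; project skills shadow personal ones."""
    skills = {}
    # Project directory is scanned last, so its entries win on name clashes
    for base in (personal_dir, project_dir):
        if base.is_dir():
            for skill_md in base.glob("*/SKILL.md"):
                skills[skill_md.parent.name] = skill_md
    return skills
```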
Model Configuration
PatchPal supports any LiteLLM-compatible model. You can configure the model in three ways (in order of priority):
1. Command-line Argument
patchpal --model openai/gpt-5
patchpal --model anthropic/claude-sonnet-4-5
patchpal --model hosted_vllm/openai/gpt-oss-20b # local model - no API charges
2. Environment Variable
export PATCHPAL_MODEL=openai/gpt-5
patchpal
3. Default Model
If no model is specified, PatchPal uses anthropic/claude-sonnet-4-5 (Claude Sonnet 4.5).
Supported Models
PatchPal works with any model supported by LiteLLM, including:
- Anthropic (Recommended): anthropic/claude-sonnet-4-5, anthropic/claude-opus-4-5, anthropic/claude-3-7-sonnet-latest
- OpenAI: openai/gpt-5, openai/gpt-4o
- AWS Bedrock: bedrock/anthropic.claude-sonnet-4-5-v1:0
- vLLM (Local, recommended for local use): See the vLLM section below for setup
- Ollama (Local): See the Ollama section below for setup
- Google: gemini/gemini-pro, vertex_ai/gemini-pro
- Others: Cohere, Azure OpenAI, and many more
See the LiteLLM providers documentation for the complete list.
Using Local Models (vLLM & Ollama)
Run models locally on your machine without needing API keys or internet access.
⚠️ IMPORTANT: For local models, we recommend vLLM.
vLLM provides:
- ✅ Robust multi-turn tool calling
- ✅ 3-10x faster inference than Ollama
- ✅ Production-ready reliability
vLLM (Recommended for Local Models)
vLLM is significantly faster than Ollama due to optimized inference with continuous batching and PagedAttention.
Important: vLLM >= 0.10.2 is required for proper tool calling support.
Using Local vLLM Server:
# 1. Install vLLM (>= 0.10.2)
pip install vllm
# 2. Start vLLM server with tool calling enabled
vllm serve openai/gpt-oss-20b \
--dtype auto \
--api-key token-abc123 \
--tool-call-parser openai \
--enable-auto-tool-choice
# 3. Use with PatchPal (in another terminal)
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
Using Remote/Hosted vLLM Server:
# For remote vLLM servers (e.g., hosted by your organization)
export HOSTED_VLLM_API_BASE=https://your-vllm-server.com
export HOSTED_VLLM_API_KEY=your_api_key_here
patchpal --model hosted_vllm/openai/gpt-oss-20b
Environment Variables:
- Use HOSTED_VLLM_API_BASE and HOSTED_VLLM_API_KEY
Using YAML Configuration (Alternative):
Create a config.yaml:
host: "0.0.0.0"
port: 8000
api-key: "token-abc123"
tool-call-parser: "openai" # Use appropriate parser for your model
enable-auto-tool-choice: true
dtype: "auto"
Then start vLLM:
vllm serve openai/gpt-oss-20b --config config.yaml
# Use with PatchPal
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
Recommended models for vLLM:
- openai/gpt-oss-20b - OpenAI's open-source model (use parser: openai)
Tool Call Parser Reference:
Different models require different parsers. Common parsers include: qwen3_xml, openai, deepseek_v3, llama3_json, mistral, hermes, pythonic, xlam. See vLLM Tool Calling docs for the complete list.
Ollama
Ollama v0.14+ supports tool calling for agentic workflows. However, proper configuration is critical for reliable operation.
Requirements:
- Ollama v0.14.0 or later - Required for tool calling support
- Sufficient context window - Default 4096 tokens is too small; increase to at least 32K
Setup Instructions:
For Native Ollama Installation:
# Set context window size (required!)
export OLLAMA_CONTEXT_LENGTH=32768
# Start Ollama server
ollama serve
# In another terminal, use with PatchPal
patchpal --model ollama_chat/gpt-oss:20b
For Docker:
# Stop existing container (if running)
docker stop ollama
docker rm ollama
# Start with proper configuration
docker run -d \
-e OLLAMA_CONTEXT_LENGTH=32768 \
-v ollama:/root/.ollama \
-p 11434:11434 \
--name ollama \
ollama/ollama
# Verify configuration
docker exec -it ollama ollama run gpt-oss:20b
# In the Ollama prompt, type: /show parameters
# Should show num_ctx much larger than default 4096
# Use with PatchPal
patchpal --model ollama_chat/gpt-oss:20b
Verifying Context Window Size:
# Check your Ollama container configuration
docker inspect ollama | grep OLLAMA_CONTEXT_LENGTH
# Or run a model and check parameters
docker exec -it ollama ollama run gpt-oss:20b
>>> /show parameters
Recommended Models for Tool Calling:
- gpt-oss:20b - OpenAI's open-source model, excellent tool calling
- qwen3:32b - Qwen3 model with good agentic capabilities
- qwen3-coder - Specialized for coding tasks
Performance Note:
While Ollama now works with proper configuration, vLLM is still recommended for production use due to:
- 3-10x faster inference
- More robust tool calling implementation
- Better memory management
Examples:
# Ollama (works with proper configuration)
export OLLAMA_CONTEXT_LENGTH=32768
patchpal --model ollama_chat/qwen3:32b
patchpal --model ollama_chat/gpt-oss:20b
# vLLM (recommended for production)
patchpal --model hosted_vllm/openai/gpt-oss-20b
Air-Gapped and Offline Environments
For environments without internet access (air-gapped, offline, or restricted networks), you can disable web search and fetch tools:
# Disable web tools for air-gapped environment
export PATCHPAL_ENABLE_WEB=false
patchpal
# Or combine with local vLLM for complete offline operation (recommended)
export PATCHPAL_ENABLE_WEB=false
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
When web tools are disabled:
- web_search and web_fetch are removed from the available tools
- With a local model, the agent won't attempt any network requests
- Perfect for secure, isolated, or offline development environments
Viewing Help
patchpal --help
Usage
Simply run the patchpal command and type your requests interactively:
$ patchpal
================================================================================
PatchPal - Claude Code Clone
================================================================================
Using model: anthropic/claude-sonnet-4-5
Type 'exit' to quit.
Use '/status' to check context window usage, '/compact' to manually compact.
Use 'list skills' or /skillname to invoke skills.
Press Ctrl-C during agent execution to interrupt the agent.
You: Add type hints and basic logging to my_module.py
The agent will process your request and show you the results. You can continue with follow-up tasks or type exit to quit.
Interactive Features:
- Path Autocompletion: Press Tab while typing file paths to see suggestions (e.g., ./src/mo + Tab → ./src/models.py)
- Skill Autocompletion: Type / followed by Tab to see available skills (e.g., /comm + Tab → /commit)
- Command History: Use ↑ (up arrow) and ↓ (down arrow) to navigate through previous commands within the current session
- Interrupt Agent: Press Ctrl-C during agent execution to stop the current task without exiting PatchPal
- Exit: Type exit or quit, or press Ctrl-C at the prompt, to exit PatchPal
Example Tasks
Resolve this error message: "UnicodeDecodeError: 'charmap' codec can't decode"
Build a streamlit app to <whatever you want>
Create a bar chart for top 5 downloaded Python packages as of yesterday
Find and implement best practices for async/await in Python
Add GitHub CI/CD for this project
Add type hints and basic logging to mymodule.py
Create unit tests for the utils module
Refactor the authentication code for better security
Add error handling to all API calls
Look up the latest FastAPI documentation and add dependency injection
Safety
The agent operates with a security model inspired by Claude Code:
- Permission system: User approval required for all shell commands and file modifications (can be customized)
- Write boundary enforcement: Write operations restricted to repository (matches Claude Code)
- Read operations allowed anywhere (system files, libraries, debugging, automation)
- Write operations outside repository require explicit permission
- Privilege escalation blocking: Platform-aware blocking of privilege escalation commands
  - Unix/Linux/macOS: sudo, su
  - Windows: runas, psexec
- Dangerous pattern detection: Blocks patterns like > /dev/, rm -rf /, | dd, --force
- Timeout protection: Shell commands time out after 30 seconds
Security Guardrails ✅ FULLY ENABLED
PatchPal includes comprehensive security protections enabled by default:
Critical Security:
- Permission prompts: Agent asks for permission before executing commands or modifying files (like Claude Code)
- Sensitive file protection: Blocks access to .env, credentials, API keys
- File size limits: Prevents OOM with configurable size limits (10MB default)
- Binary file detection: Blocks reading non-text files
- Critical file warnings: Warns when modifying infrastructure files (package.json, Dockerfile, etc.)
- Read-only mode: Optional mode that prevents all modifications
- Command timeout: 30-second timeout on shell commands
- Pattern-based blocking: Blocks dangerous command patterns (> /dev/, --force, etc.)
- Write boundary protection: Requires permission for write operations
Operational Safety:
- Operation audit logging: All file operations and commands logged to ~/.patchpal/<repo-name>/audit.log (enabled by default)
  - Includes user prompts to show what triggered each operation
  - Rotates at 10 MB with 3 backups (40 MB total max)
- Command history: User commands saved to ~/.patchpal/<repo-name>/history.txt (last 1000 commands)
  - Clean, user-friendly format for reviewing past interactions
- Automatic backups: Optional auto-backup of files to ~/.patchpal/<repo-name>/backups/ before modification
- Resource limits: Configurable operation counter prevents infinite loops (10000 operations default)
- Git state awareness: Warns when modifying files with uncommitted changes
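The 10 MB / 3-backup rotation described above matches the behavior of Python's standard-library RotatingFileHandler. A minimal sketch of such an audit logger (hypothetical setup, not PatchPal's exact code; the log directory is taken as a parameter here):

```python
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path

def make_audit_logger(log_dir: Path) -> logging.Logger:
    """Audit logger that rotates at 10 MB, keeping 3 backups (~40 MB max)."""
    log_dir.mkdir(parents=True, exist_ok=True)
    handler = RotatingFileHandler(log_dir / "audit.log",
                                  maxBytes=10 * 1024 * 1024, backupCount=3)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    # Name the logger after the directory so repeat calls don't stack handlers
    logger = logging.getLogger(f"patchpal.audit.{log_dir}")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

With backupCount=3, the handler keeps audit.log plus audit.log.1 through audit.log.3, dropping the oldest file on each rollover.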
Configuration via environment variables:
# Critical Security Controls
export PATCHPAL_REQUIRE_PERMISSION=true # Prompt for permission before executing commands/modifying files (default: true)
# ⚠️ WARNING: Setting to false disables prompts - only use in trusted, controlled environments
# When disabled, the agent can modify files and run commands without asking
export PATCHPAL_MAX_FILE_SIZE=5242880 # Maximum file size in bytes for read/write operations (default: 10485760 = 10MB)
export PATCHPAL_READ_ONLY=true # Prevent all file modifications, analysis-only mode (default: false)
# Useful for: code review, exploration, security audits, CI/CD analysis, or trying PatchPal risk-free
export PATCHPAL_ALLOW_SENSITIVE=true # Allow access to .env, credentials, API keys (default: false - blocked for safety)
# Only enable when working with test/dummy credentials or intentionally managing config files
export PATCHPAL_ALLOW_SUDO=true # Allow sudo commands (default: false - blocked for safety)
# ⚠️ WARNING: Only enable in trusted, controlled environments where sudo is necessary
# When enabled, all privilege escalation blocking is disabled
# Operational Safety Controls
export PATCHPAL_AUDIT_LOG=false # Log all operations to ~/.patchpal/<repo-name>/audit.log (default: true)
export PATCHPAL_ENABLE_BACKUPS=true # Auto-backup files to ~/.patchpal/<repo-name>/backups/ before modification (default: false)
export PATCHPAL_MAX_OPERATIONS=5000 # Maximum operations per session to prevent infinite loops (default: 10000)
export PATCHPAL_MAX_ITERATIONS=150 # Maximum agent iterations per task (default: 100)
# Increase for very complex multi-file tasks, decrease for testing
# Customization
export PATCHPAL_SYSTEM_PROMPT=~/.patchpal/my_prompt.md # Use custom system prompt file (default: built-in prompt)
# The file can use template variables like {current_date}, {platform_info}, etc.
# Useful for: custom agent behavior, team standards, domain-specific instructions
# Web Tool Controls
export PATCHPAL_ENABLE_WEB=false # Enable/disable web search and fetch tools (default: true)
# Set to false for air-gapped or offline environments
export PATCHPAL_WEB_TIMEOUT=60 # Timeout for web requests in seconds (default: 30)
export PATCHPAL_MAX_WEB_SIZE=10485760 # Maximum web content size in bytes (default: 5242880 = 5MB)
export PATCHPAL_MAX_WEB_CHARS=500000 # Maximum characters from web content to prevent context overflow (default: 500000 ≈ 125k tokens)
# Shell Command Controls
export PATCHPAL_SHELL_TIMEOUT=60 # Timeout for shell commands in seconds (default: 30)
Permission System:
When the agent wants to execute a command or modify a file, you'll see a prompt like:
================================================================================
Run Shell
--------------------------------------------------------------------------------
pytest tests/test_cli.py -v
--------------------------------------------------------------------------------
Do you want to proceed?
1. Yes
2. Yes, and don't ask again this session for 'pytest'
3. No, and tell me what to do differently
Choice [1-3]:
- Option 1: Allow this one operation
- Option 2: Allow for the rest of this session (like Claude Code - resets when you restart PatchPal)
- Option 3: Cancel the operation
Advanced: You can manually edit ~/.patchpal/<repo-name>/permissions.json to grant persistent permissions across sessions.
Example permissions.json:
{
"run_shell": ["pytest", "npm", "git"],
"apply_patch": true,
"edit_file": ["config.py", "settings.json"]
}
Format:
- "tool_name": true - Grant all operations for this tool (no more prompts)
- "tool_name": ["pattern1", "pattern2"] - Grant only specific patterns (e.g., specific commands or file names)
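The two grant forms can be checked with a small helper. This is an illustrative sketch of the format above, not PatchPal's actual permission logic:

```python
def is_allowed(permissions: dict, tool: str, target: str) -> bool:
    """True if `tool` may act on `target` without prompting.

    `permissions` follows the permissions.json format: a tool name maps
    either to True (all operations) or to a list of allowed patterns.
    """
    grant = permissions.get(tool, False)
    if grant is True:
        return True                      # blanket grant for this tool
    if isinstance(grant, list):
        # Pattern grant: allow if any pattern occurs in the command/file
        return any(pattern in target for pattern in grant)
    return False                         # no grant: fall back to prompting
```

Anything not covered by a grant falls through to the interactive permission prompt.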
Context Management
PatchPal automatically manages the context window to prevent "input too long" errors during long coding sessions.
Features:
- Automatic token tracking: Monitors context usage in real-time
- Smart pruning: Removes old tool outputs (keeps last 40k tokens) before resorting to full compaction
- Auto-compaction: Summarizes conversation history when approaching 85% capacity
- Manual control: Check status with /status, disable with an environment variable
Commands:
# Check context window usage
You: /status
# Output shows:
# - Messages in history
# - Token usage breakdown
# - Visual progress bar
# - Auto-compaction status
# Manually trigger compaction
You: /compact
# Useful when:
# - You want to free up context space before a large operation
# - Testing compaction behavior
# - Context is getting full but hasn't auto-compacted yet
# Note: Requires at least 5 messages; most effective when context >50% full
Configuration:
# Disable auto-compaction (not recommended for long sessions)
export PATCHPAL_DISABLE_AUTOCOMPACT=true
# Adjust compaction threshold (default: 0.85 = 85%)
export PATCHPAL_COMPACT_THRESHOLD=0.90
# Adjust pruning thresholds
export PATCHPAL_PRUNE_PROTECT=40000 # Keep last 40k tokens (default)
export PATCHPAL_PRUNE_MINIMUM=20000 # Min tokens to prune (default)
# Override context limit for testing (useful for testing compaction with small values)
export PATCHPAL_CONTEXT_LIMIT=10000 # Force 10k token limit instead of model default
Testing Context Management:
You can test the context management system with small values to trigger compaction quickly:
# Set up small context window for testing
export PATCHPAL_CONTEXT_LIMIT=10000 # Force 10k token limit (instead of 200k for Claude)
export PATCHPAL_COMPACT_THRESHOLD=0.75 # Trigger at 75% (instead of 85%)
# Note: System prompt + output reserve = ~6.4k tokens baseline
# So 75% of 10k = 7.5k, leaving ~1k for conversation
export PATCHPAL_PRUNE_PROTECT=500 # Keep only last 500 tokens of tool outputs
export PATCHPAL_PRUNE_MINIMUM=100 # Prune if we can save 100+ tokens
# Start PatchPal and watch it compact quickly
patchpal
# Generate context with tool calls (tool outputs consume tokens)
You: list all python files
You: read patchpal/agent.py
You: read patchpal/tools.py
# Check status - should show compaction happening
You: /status
# Continue - should see pruning messages
You: search for "context" in all files
# You should see:
# ⚠️ Context window at 85% capacity. Compacting...
# Pruned old tool outputs (saved ~400 tokens)
# ✓ Compaction complete. Saved 850 tokens (85% → 68%)
How It Works:
- Phase 1 - Pruning: When context fills up, old tool outputs are pruned first
- Keeps last 40k tokens of tool outputs protected (only tool outputs, not conversation)
- Only prunes if it saves >20k tokens
- Pruning is transparent and fast
- Requires at least 5 messages in history
- Phase 2 - Compaction: If pruning isn't enough, full compaction occurs
- Requires at least 5 messages to be effective
- LLM summarizes the entire conversation
- Summary replaces old messages, keeping last 2 complete conversation turns
- Work continues seamlessly from the summary
- Preserves complete tool call/result pairs (important for Bedrock compatibility)
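The decision between the two phases can be sketched with simple token-count bookkeeping, using the thresholds documented above. Illustrative logic only; the real implementation tracks actual messages and tool outputs:

```python
def manage_context(total_tokens: int, tool_output_tokens: int,
                   limit: int, threshold: float = 0.85,
                   protect: int = 40_000, minimum: int = 20_000) -> str:
    """Decide which phase applies, mirroring the documented pruning rules."""
    if total_tokens < threshold * limit:
        return "ok"                       # under the compaction threshold
    # Phase 1: only tool output beyond the protected window is prunable,
    # and pruning happens only if it saves more than `minimum` tokens
    prunable = tool_output_tokens - protect
    if prunable > minimum:
        return "prune"
    # Phase 2: pruning wouldn't save enough, so summarize the conversation
    return "compact"
```

Pruning is preferred because it is fast and transparent; compaction requires an LLM call to summarize, so it is the fallback.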
Example:
Context Window Status
======================================================================
Model: anthropic/claude-sonnet-4-5
Messages in history: 47
System prompt: 15,234 tokens
Conversation: 142,567 tokens
Output reserve: 4,096 tokens
Total: 161,897 / 200,000 tokens
Usage: 80%
[████████████████████████████████████████░░░░░░░░░]
Auto-compaction: Enabled (triggers at 85%)
======================================================================
The system ensures you can work for extended periods without hitting context limits.
Troubleshooting
Error: "maximum iterations reached"
- The default number of iterations is 100.
- You can increase it by setting the PATCHPAL_MAX_ITERATIONS environment variable.
Error: "Context Window Error - Input is too long"
- PatchPal includes automatic context management (compaction) to prevent this error.
- Use /status to check your context window usage.
- If auto-compaction is disabled, re-enable it: unset PATCHPAL_DISABLE_AUTOCOMPACT
- Context is automatically managed at 85% capacity through pruning and compaction.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file patchpal-0.1.5.tar.gz.
File metadata
- Download URL: patchpal-0.1.5.tar.gz
- Upload date:
- Size: 84.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4ebd48ae23a566abeaa1623eff1664c13785f54a90423755e976e4600b2f11ae |
| MD5 | 64d7be89725d5c1e3918fb02cd77a3f9 |
| BLAKE2b-256 | 87067f84d1dc09323572c21e5627d50cec777d076dca18523bfdcee03c1211e2 |
Provenance
The following attestation bundles were made for patchpal-0.1.5.tar.gz:
Publisher: release.yml on amaiya/patchpal
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: patchpal-0.1.5.tar.gz
- Subject digest: 4ebd48ae23a566abeaa1623eff1664c13785f54a90423755e976e4600b2f11ae
- Sigstore transparency entry: 842894966
- Sigstore integration time:
- Permalink: amaiya/patchpal@227c10c748db8606f74bb28cbf6324b1a6d3d954
- Branch / Tag: refs/tags/0.1.5
- Owner: https://github.com/amaiya
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@227c10c748db8606f74bb28cbf6324b1a6d3d954
- Trigger Event: release
File details
Details for the file patchpal-0.1.5-py3-none-any.whl.
File metadata
- Download URL: patchpal-0.1.5-py3-none-any.whl
- Upload date:
- Size: 58.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3bc80e48aed1f743426333509ee37e23e35027376dbf64a36f3b7fabfe0b7582 |
| MD5 | d03f4291818653444e3b36f10cec90e4 |
| BLAKE2b-256 | d634c6abbaeb22c4851b4d12c7502dc5d91bb5e7939f3922a1e29658f9d69c24 |
Provenance
The following attestation bundles were made for patchpal-0.1.5-py3-none-any.whl:
Publisher: release.yml on amaiya/patchpal
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: patchpal-0.1.5-py3-none-any.whl
- Subject digest: 3bc80e48aed1f743426333509ee37e23e35027376dbf64a36f3b7fabfe0b7582
- Sigstore transparency entry: 842894988
- Sigstore integration time:
- Permalink: amaiya/patchpal@227c10c748db8606f74bb28cbf6324b1a6d3d954
- Branch / Tag: refs/tags/0.1.5
- Owner: https://github.com/amaiya
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@227c10c748db8606f74bb28cbf6324b1a6d3d954
- Trigger Event: release