Review Bot Automator
Universal AI-powered automation for GitHub code review bots
Intelligent suggestion application and conflict resolution for CodeRabbit, GitHub Copilot, and custom review bots
📋 Table of Contents
- Problem Statement
- Quick Start
- Features
- Architecture
- Use Cases
- Environment Variables
- Documentation
- Contributing
- Project Status
- License
🎯 Problem Statement
When multiple PR review comments suggest overlapping changes to the same file, traditional automation tools either:
- Skip all conflicting changes (losing valuable suggestions)
- Apply changes sequentially without conflict awareness (potentially breaking code)
- Require tedious manual resolution for every conflict
Review Bot Automator provides intelligent, semantic-aware conflict resolution that:
- ✅ Understands code structure (JSON, YAML, TOML, Python, TypeScript)
- ✅ Uses priority-based resolution (user selections, security fixes, syntax errors)
- ✅ Supports semantic merging (combining non-conflicting changes automatically)
- ✅ Learns from your decisions to improve over time
- ✅ Provides detailed conflict analysis and actionable suggestions
🚀 Quick Start
Installation
pip install review-bot-automator
Basic Usage
# Set your GitHub token (required)
export GITHUB_PERSONAL_ACCESS_TOKEN="your_token_here"
# Analyze conflicts in a PR
pr-resolve analyze --owner VirtualAgentics --repo my-repo --pr 123
# Apply suggestions with conflict resolution
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --strategy priority
# Apply only conflicting changes
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --mode conflicts-only
# Simulate without applying changes (dry-run mode)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --mode dry-run
# Use parallel processing for large PRs
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --parallel --max-workers 8
# Load configuration from file
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 --config config.yaml
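For reference, a minimal config.yaml sketch. The keys below simply mirror the CLI flags shown above and are illustrative assumptions, so check the Configuration Reference for the supported schema:

```yaml
# Hypothetical config.yaml mirroring the CLI flags above (keys are assumptions)
mode: all            # all | conflicts-only | non-conflicts-only | dry-run
strategy: priority
parallel: true
max_workers: 8
log_level: INFO
```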
LLM Provider Setup (Optional)
Enable AI-powered features with your choice of LLM provider using zero-config presets:
# ✨ NEW: Zero-config presets for instant setup
# Option 1: Codex CLI (free with GitHub Copilot subscription)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
--llm-preset codex-cli-free
# Option 2: Local Ollama 🔒 (free, private)
./scripts/setup_ollama.sh # One-time install
./scripts/download_ollama_models.sh # Download model
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
--llm-preset ollama-local
# 🔒 Reduces third-party LLM vendor exposure (OpenAI/Anthropic never see comments)
# ✅ Simpler compliance (one fewer data processor for GDPR, HIPAA, SOC2)
# ⚠️ Note: GitHub/CodeRabbit still have access (required for PR workflow)
# See docs/ollama-setup.md for setup | docs/privacy-architecture.md for privacy details
# Option 3: Claude CLI (requires Claude subscription)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
--llm-preset claude-cli-sonnet
# Option 4: OpenAI API (pay-per-use, ~$0.01 per PR)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
--llm-preset openai-api-mini \
--llm-api-key sk-...
# Option 5: Anthropic API (balanced cost/performance)
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123 \
--llm-preset anthropic-api-balanced \
--llm-api-key sk-ant-...
Available presets: codex-cli-free, ollama-local 🔒, claude-cli-sonnet, openai-api-mini, anthropic-api-balanced
Privacy Note: Ollama (ollama-local) reduces third-party LLM vendor exposure by processing review comments locally. OpenAI/Anthropic never see your code, simplifying compliance. Note: GitHub and CodeRabbit still have access (required for PR workflow). See Privacy Architecture for details.
Alternative: Use environment variables
# Anthropic (recommended - 50-90% cost savings with caching)
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="anthropic"
export CR_LLM_API_KEY="sk-ant-..." # Get from https://console.anthropic.com/
# OpenAI
export CR_LLM_ENABLED="true"
export CR_LLM_PROVIDER="openai"
export CR_LLM_API_KEY="sk-..." # Get from https://platform.openai.com/api-keys
# Then use as normal
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123
Documentation:
- LLM Configuration Guide - All provider options and setup
- Privacy Architecture - Privacy comparison and compliance
- Local LLM Operation Guide - Local LLM setup with Ollama
- Privacy FAQ - Common privacy questions
Python API
from review_bot_automator import ConflictResolver
from review_bot_automator.config import PresetConfig
resolver = ConflictResolver(config=PresetConfig.BALANCED)
results = resolver.resolve_pr_conflicts(
    owner="VirtualAgentics",
    repo="my-repo",
    pr_number=123,
)
print(f"Applied: {results.applied_count}")
print(f"Conflicts: {results.conflict_count}")
print(f"Success rate: {results.success_rate}%")
🎨 Features
Intelligent Conflict Analysis
- Semantic Understanding: Analyzes JSON, YAML, TOML structure, not just text
- Conflict Categorization: Exact, major, partial, minor, disjoint-keys, semantic-duplicate
- Impact Assessment: Evaluates scope, risk level, and criticality of changes
- Actionable Suggestions: Provides specific guidance for each conflict
Smart Resolution Strategies
- Priority-Based: User selections > Security fixes > Syntax errors > Regular suggestions
- Semantic Merging: Combines non-conflicting changes in structured files
- Sequential Application: Applies compatible changes in optimal order
- Defer to User: Escalates complex conflicts for manual review
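As an illustration of the priority-based strategy, here is a self-contained sketch (not the project's actual implementation) that keeps the highest-priority suggestion in each group of overlapping changes:

```python
from dataclasses import dataclass

# Priority order from the strategy above: user selections outrank security
# fixes, which outrank syntax fixes, which outrank regular suggestions.
PRIORITY = {"user-selected": 3, "security-fix": 2, "syntax-error": 1, "regular": 0}

@dataclass
class Suggestion:
    path: str
    start_line: int
    end_line: int
    kind: str  # one of the PRIORITY keys

def overlaps(a: Suggestion, b: Suggestion) -> bool:
    """Two suggestions conflict when they touch the same lines of the same file."""
    return a.path == b.path and a.start_line <= b.end_line and b.start_line <= a.end_line

def resolve(suggestions: list[Suggestion]) -> list[Suggestion]:
    """Keep the highest-priority suggestion in each overlapping group."""
    winners: list[Suggestion] = []
    for s in sorted(suggestions, key=lambda s: PRIORITY[s.kind], reverse=True):
        if not any(overlaps(s, w) for w in winners):
            winners.append(s)
    return winners
```

For example, given a regular suggestion and a security fix on overlapping lines, `resolve` keeps only the security fix.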
File-Type Handlers
- JSON: Duplicate key detection, key-level merging
- YAML: Comment preservation, structure-aware merging
- TOML: Section merging, format preservation
- Python/TypeScript: AST-aware analysis (planned)
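The disjoint-keys idea behind the JSON handler can be sketched as follows. This is a deliberate simplification: the real handler also detects duplicate keys and flags genuinely conflicting edits, while this sketch just lets later suggestions win on a shared key:

```python
import json

def merge_json_suggestions(base: str, *suggestions: str) -> str:
    """Key-level merge: suggestions touching disjoint top-level keys
    combine cleanly; a later suggestion wins on a shared key."""
    merged = json.loads(base)
    for s in suggestions:
        merged.update(json.loads(s))
    return json.dumps(merged, indent=2)

base = '{"name": "demo", "version": "1.0.0"}'
a = '{"scripts": {"test": "pytest"}}'   # adds a new key
b = '{"version": "1.1.0"}'              # bumps an existing key
print(merge_json_suggestions(base, a, b))
```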
Multi-Provider LLM Support ✅ (Phase 2 Complete - All 5 Providers Production-Ready)
- 5 Provider Types: OpenAI API, Anthropic API, Claude CLI, Codex CLI, Ollama (all production-ready)
- GPU Acceleration: Ollama supports NVIDIA CUDA, AMD ROCm, Apple Metal with automatic detection
- HTTP Connection Pooling: Optimized for concurrent requests (10 connections per provider)
- Auto-Download: Ollama can automatically download models when not available
- Cost Optimization: Prompt caching reduces Anthropic costs by 50-90%
- Retry Logic: Exponential backoff for transient failures (all providers)
- Flexible Deployment: API-based, CLI-based, or local inference
- Provider Selection: Choose based on cost, privacy, or performance needs
- Health Checks: Automatic provider validation before use
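The retry behavior described above can be sketched as generic exponential backoff with jitter (illustrative only; the actual per-provider logic differs):

```python
import random
import time

def with_retries(call, max_attempts: int = 4, base_delay: float = 0.5):
    """Retry a flaky provider call with exponential backoff plus jitter,
    in the spirit of the per-provider retry logic described above."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # exhausted: surface the transient failure
            # 0.5s, 1s, 2s, ... plus jitter to avoid thundering herds
            time.sleep(base_delay * 2**attempt + random.uniform(0, 0.1))
```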
Learning & Optimization
- ML-Assisted Priority: Learns from your resolution decisions
- Metrics Tracking: Monitors success rates, resolution times, strategy effectiveness
- Conflict Caching: Reuses analysis for similar conflicts
- Performance: Parallel processing for large PRs
Configuration & Presets
- Conservative: Skip all conflicts, manual review required
- Balanced: Priority system + semantic merging (default)
- Aggressive: Maximize automation, user selections always win
- Semantic: Focus on structure-aware merging for config files
Application Modes
- all: Apply both conflicting and non-conflicting changes (default)
- conflicts-only: Apply only changes that have conflicts
- non-conflicts-only: Apply only changes without conflicts
- dry-run: Analyze and report without applying any changes
Rollback & Safety Features
- Automatic Rollback: Git-based checkpointing with automatic rollback on failure
- Pre-Application Validation: Validates changes before applying (optional)
- File Integrity Checks: Verifies file safety and containment
- Detailed Logging: Comprehensive logging for debugging and audit trails
Runtime Configuration
Configure via multiple sources with precedence chain: CLI flags > Environment variables > Config file > Defaults
- Configuration Files: Load settings from YAML or TOML files
- Environment Variables: Set options using CR_*-prefixed variables
- CLI Overrides: Override any setting via command-line flags
See .env.example for available environment variables.
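The precedence chain can be sketched for a single option (illustrative; the CR_* prefix matches the project's environment variables):

```python
import os

DEFAULTS = {"mode": "all", "max_workers": "4"}

def effective_setting(name: str, cli_value=None, file_config=None):
    """Resolve one option through the precedence chain:
    CLI flag > CR_* environment variable > config file > default."""
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(f"CR_{name.upper()}")
    if env_value is not None:
        return env_value
    if file_config and name in file_config:
        return file_config[name]
    return DEFAULTS[name]
```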
📚 Documentation
User Guides
- Getting Started Guide - Installation, setup, and first steps
- Configuration Reference - Complete configuration options
- LLM Configuration Guide - LLM providers, presets, and advanced configuration
- Ollama Setup Guide - Comprehensive Ollama installation and setup
- Rollback System - Automatic rollback and recovery
- Parallel Processing - Performance tuning guide
- Migration Guide - Upgrading from earlier versions
- Troubleshooting - Common issues and solutions
Reference Documentation
- API Reference - Python API documentation
- Conflict Types Explained - Understanding conflict categories
- Resolution Strategies - Strategy selection guide
- Performance Benchmarks - LLM provider performance comparison
Architecture & Development
- Architecture Overview - System design and components
- Contributing Guide - How to contribute
Security
- Security Policy - Vulnerability reporting, security features
- Security Architecture - Design principles, threat model
- Threat Model - STRIDE analysis, risk assessment
- Incident Response - Security incident procedures
- Compliance - GDPR, OWASP, SOC2, OpenSSF
- Security Testing - Testing guide, fuzzing, SAST
🏗️ Architecture
┌──────────────────────────────────────────────────────────────┐
│                     GitHub PR Comments                       │
│                  (CodeRabbit, Review Bot)                    │
└───────────────────────────┬──────────────────────────────────┘
                            ▼
┌──────────────────────────────────────────────────────────────┐
│                 Comment Parser & Extractor                   │
│         (Suggestions, Diffs, Codemods, Multi-Options)        │
└───────────────────────────┬──────────────────────────────────┘
                            ▼
┌──────────────────────────────────────────────────────────────┐
│                 Conflict Detection Engine                    │
│    • Fingerprinting  • Overlap Analysis  • Semantic Check    │
└───────────────────────────┬──────────────────────────────────┘
                ┌───────────┴───────────┐
                ▼                       ▼
     ┌────────────────────┐   ┌────────────────────┐
     │   File Handlers    │   │  Priority System   │
     │   • JSON           │   │  • User Selected   │
     │   • YAML           │   │  • Security Fix    │
     │   • TOML           │   │  • Syntax Error    │
     │   • Python         │   │  • Regular         │
     └──────────┬─────────┘   └─────────┬──────────┘
                └───────────┬───────────┘
                            ▼
┌──────────────────────────────────────────────────────────────┐
│                Resolution Strategy Selector                  │
│      • Skip  • Override  • Merge  • Sequential  • Defer      │
└───────────────────────────┬──────────────────────────────────┘
                            ▼
┌──────────────────────────────────────────────────────────────┐
│                     Application Engine                       │
│         • Backup  • Apply  • Validate  • Rollback            │
└───────────────────────────┬──────────────────────────────────┘
                            ▼
┌──────────────────────────────────────────────────────────────┐
│                    Reporting & Metrics                       │
│      • Conflict Summary  • Visual Diff  • Success Rate       │
└──────────────────────────────────────────────────────────────┘
🔧 Use Cases
1. CodeRabbit Multi-Option Selections
Problem: User selects "Option 2" but it conflicts with another suggestion.
Solution: Priority system ensures user selections override lower-priority changes.
2. Overlapping Configuration Changes
Problem: Two suggestions modify different keys in package.json
Solution: Semantic merging combines both changes automatically
3. Security Fix vs. Formatting
Problem: Security fix conflicts with formatting suggestion.
Solution: Priority system applies the security fix and skips the formatting change.
4. Large PR with 50+ Comments
Problem: Manual conflict resolution is time-consuming.
Solution: Parallel processing + caching resolves conflicts in seconds.
🔧 Environment Variables
Configure the tool using environment variables (see .env.example for all options):
| Variable | Description | Default |
|---|---|---|
| GITHUB_PERSONAL_ACCESS_TOKEN | GitHub API token (required) | None |
| CR_MODE | Application mode (all, conflicts-only, non-conflicts-only, dry-run) | all |
| CR_ENABLE_ROLLBACK | Enable automatic rollback on failure | true |
| CR_VALIDATE | Enable pre-application validation | true |
| CR_PARALLEL | Enable parallel processing | false |
| CR_MAX_WORKERS | Number of parallel workers | 4 |
| CR_LOG_LEVEL | Logging level (DEBUG, INFO, WARNING, ERROR) | INFO |
| CR_LOG_FILE | Log file path (optional) | None |
🤝 Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Development Setup
git clone https://github.com/VirtualAgentics/review-bot-automator.git
cd review-bot-automator
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pre-commit install
Running Tests
This project uses pytest 9.0 with native subtests support for comprehensive testing. We maintain >80% test coverage with 1,445 tests including unit, integration, security, and property-based fuzzing tests.
# Run standard tests with coverage
pytest tests/ --cov=src --cov-report=html
# Run property-based fuzzing tests
make test-fuzz # Dev profile: 50 examples
make test-fuzz-ci # CI profile: 100 examples
make test-fuzz-extended # Extended: 1000 examples
# Run all tests (standard + fuzzing)
make test-all
For more details, see:
- Testing Guide - Comprehensive testing documentation
- Subtests Guide - Writing tests with subtests
- CONTRIBUTING.md - Contribution guidelines including testing practices
📄 License
MIT License - see LICENSE for details.
🙏 Acknowledgments
- Inspired by the sophisticated code review capabilities of CodeRabbit AI
- Built with experience from ContextForge Memory project
- Community feedback and contributions
📊 Project Status
Current Version: 2.0.0
Roadmap:
- ✅ Phase 0: Security Foundation (COMPLETE)
  - ✅ 0.1: Security Architecture Design
  - ✅ 0.2: Input Validation & Sanitization
  - ✅ 0.3: Secure File Handling
  - ✅ 0.4: Secret Detection (14+ patterns)
  - ✅ 0.5: Security Testing Suite (95%+ coverage)
  - ✅ 0.6: Security Configuration
  - ✅ 0.7: CI/CD Security Scanning (7+ tools)
  - ✅ 0.8: Security Documentation
- ✅ Phase 1: Core Features (COMPLETE)
  - ✅ Core conflict detection and analysis
  - ✅ File handlers (JSON, YAML, TOML)
  - ✅ Priority system
  - ✅ Rollback system with git-based checkpointing
- ✅ Phase 2: CLI & Configuration (COMPLETE)
  - ✅ CLI with comprehensive options
  - ✅ Runtime configuration system
  - ✅ Application modes (all, conflicts-only, non-conflicts-only, dry-run)
  - ✅ Parallel processing support
  - ✅ Multiple configuration sources (file, env, CLI)
- 🚧 Phase 3: Documentation & Examples (IN PROGRESS)
  - 🚧 Comprehensive documentation updates
  - 🚧 Example configurations and use cases
- ✅ V2.0 Phase 0: LLM Foundation (COMPLETE) - PR #121
  - ✅ Core LLM data models and infrastructure
  - ✅ Universal comment parser with LLM + regex fallback
  - ✅ LLM provider protocol for polymorphic support
  - ✅ Structured prompt engineering system
  - ✅ Confidence threshold filtering
- ✅ V2.0 Phase 1: LLM-Powered Parsing (COMPLETE) - PR #122
  - ✅ OpenAI API provider implementation
  - ✅ Automatic retry logic with exponential backoff
  - ✅ Token counting and cost tracking
  - ✅ Comprehensive error handling
  - ✅ Integration with ConflictResolver
- ✅ V2.0 Phase 2: Multi-Provider Support (COMPLETE) - Closed Nov 9, 2025
  - ✅ All 5 LLM providers implemented: OpenAI API, Anthropic API, Claude CLI, Codex CLI, Ollama
  - ✅ Provider factory pattern with automatic selection
  - ✅ HTTP connection pooling and retry logic
  - ✅ Provider health checks and validation
  - ✅ Cost tracking across all API-based providers
- ✅ V2.0 Phase 3: CLI Integration Polish (COMPLETE) - Closed Nov 11, 2025
  - ✅ Zero-config presets for instant LLM setup (5 presets available)
  - ✅ Configuration precedence chain: CLI > Environment > File > Defaults
  - ✅ Enhanced error messages with actionable resolution steps
  - ✅ Support for YAML/TOML configuration files
  - ✅ Security: API keys must use ${VAR} syntax in config files
- ✅ V2.0 Phase 4: Local Model Support (COMPLETE) - Closed Nov 2025
  - ✅ Ollama provider with GPU acceleration (NVIDIA, AMD ROCm, Apple Metal)
  - ✅ Automatic GPU detection and hardware info display
  - ✅ HTTP connection pooling for concurrent requests
  - ✅ Model auto-download feature
  - ✅ Performance benchmarking (local vs API models) - Issue #170
  - ✅ Privacy documentation (local LLM operation guide) - Issue #171
  - ✅ Integration tests with privacy verification - Issue #172
- ✅ V2.0 Phase 5: Optimization & Production Readiness (COMPLETE) - PR #250 (Nov 26, 2025)
  - ✅ Rate limit retry with exponential backoff
  - ✅ Cache warming for cold start optimization
  - ✅ Fallback rate tracking, confidence threshold CLI option
  - ✅ fsync for atomic write durability
- 🚧 V2.0 Phase 6: Documentation & Migration (IN PROGRESS) - ~90% complete
V2.0 Milestone Progress: ~95% complete (Phases 0-5 complete, Phase 6 finalizing)
Security Highlights
- ClusterFuzzLite: Continuous fuzzing (3 fuzz targets, ASan + UBSan)
- Test Coverage: 82.35% overall, 95%+ for security modules
- Security Scanning: CodeQL, Trivy, TruffleHog, Bandit, pip-audit, OpenSSF Scorecard
- Secret Detection: 14+ pattern types (GitHub tokens, AWS keys, API keys, etc.)
- Documentation: Comprehensive security documentation (threat model, incident response, compliance)
🚀 LLM Features (v2.0 Architecture)
✅ Core v2.0 LLM features are production-ready! Phases 0-5 are complete (~95% of the v2.0 milestone), and all 5 LLM providers are fully functional. See Roadmap for current status.
Vision: Major architecture upgrade to parse 95%+ of CodeRabbit comments (up from 20%)
The Problem We're Solving
The current system parses only ```suggestion blocks, missing:
- ❌ Diff blocks (```diff) - 60% of CodeRabbit comments
- ❌ Natural language suggestions - 20% of comments
- ❌ Multi-option suggestions
- ❌ Multiple diff blocks per comment
Result: Only 1 out of 5 CodeRabbit comments is currently parsed.
The Solution: LLM-First Parsing
┌──────────────────────────────────────────────────────────┐
│           LLM Parser (Primary - All Formats)             │
│   • Diff blocks         • Suggestion blocks              │
│   • Natural language    • Multi-options                  │
│   • 95%+ coverage       • Intelligent understanding      │
└──────────────────────────┬───────────────────────────────┘
                  ┌────────┴────────┐
                  │   Fallback if   │
                  │    LLM fails    │
                  └────────┬────────┘
                           ▼
┌──────────────────────────────────────────────────────────┐
│        Regex Parser (Fallback - Suggestion Blocks)       │
│   • 100% reliable       • Zero cost                      │
│   • Legacy support      • Always available               │
└──────────────────────────────────────────────────────────┘
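The fallback flow above can be sketched as follows (illustrative; `llm_parse` stands in for whichever configured provider parses a comment into change strings):

```python
import re

# Legacy path: extract only ```suggestion blocks (always available, zero cost).
SUGGESTION_RE = re.compile(r"```suggestion\n(.*?)```", re.DOTALL)

def regex_parse(comment: str) -> list[str]:
    return [m.strip() for m in SUGGESTION_RE.findall(comment)]

def parse_comment(comment: str, llm_parse) -> list[str]:
    """LLM-first parsing with automatic regex fallback: try the LLM,
    fall back if it raises or finds nothing."""
    try:
        changes = llm_parse(comment)
        if changes:
            return changes
    except Exception:
        pass  # provider down, timeout, malformed response, ...
    return regex_parse(comment)
```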
Multi-Provider Support (User Choice)
Choose your preferred LLM provider:
| Provider | Cost Model | Best For | Est. Cost (1000 comments) |
|---|---|---|---|
| Claude CLI | Subscription ($20/mo) | Best quality + zero marginal cost | $0 (covered) |
| Codex CLI | Subscription ($20/mo) | Cost-effective, OpenAI quality | $0 (covered) |
| Ollama | Free (local) | Privacy, offline, no API costs | $0 |
| OpenAI API | Pay-per-token | Pay-as-you-go, low volume | $0.07 (with caching) |
| Anthropic API | Pay-per-token | Best quality, willing to pay | $0.22 (with caching) |
Quick Preview
# Current (v1.x) - regex-only
pr-resolve apply --owner VirtualAgentics --repo my-repo --pr 123
# Parses: 1/5 comments (20%)
# v2.0 - LLM-powered (opt-in)
pr-resolve apply --llm --llm-provider claude-cli --owner VirtualAgentics --repo my-repo --pr 123
# Parses: 5/5 comments (100%)
# Use presets for quick config
pr-resolve apply --llm-preset claude-cli-sonnet --owner VirtualAgentics --repo my-repo --pr 123
pr-resolve apply --llm-preset ollama-local --owner VirtualAgentics --repo my-repo --pr 123 # Privacy-first
Backward Compatibility Guarantee
✅ Runtime Behavior Preserved - v2.0 maintains full compatibility for CLI and API usage
- LLM parsing disabled by default (opt-in via --llm flag)
- Automatic fallback to regex if LLM fails
- v1.x CLI commands work identically
- v1.x Python API behavior unchanged
⚠️ Package Rename: v2.0 renamed the package from pr-conflict-resolver to review-bot-automator. Update your imports and dependencies:
- Import: from review_bot_automator import ... (was pr_conflict_resolver)
- Dependency: review-bot-automator in requirements.txt (was pr-conflict-resolver)
See Migration Guide for details.
Enhanced Change Metadata
# v2.0: Changes include AI-powered insights
change = Change(
    path="src/module.py",
    start_line=10,
    end_line=12,
    content="new code",
    # NEW in v2.0 (optional fields)
    llm_confidence=0.95,         # How confident the LLM is
    llm_provider="claude-cli",   # Which provider parsed it
    parsing_method="llm",        # "llm" or "regex"
    change_rationale="Improves error handling",  # Why the change was suggested
    risk_level="low",            # "low", "medium", "high"
)
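Downstream, the confidence threshold filtering mentioned in the roadmap can be sketched like this (illustrative; plain dicts stand in for Change objects):

```python
def apply_candidates(changes, threshold=0.8):
    """Keep regex-parsed changes (deterministic) plus LLM-parsed
    changes whose confidence clears the threshold."""
    return [
        c for c in changes
        if c.get("parsing_method") != "llm" or c.get("llm_confidence", 0.0) >= threshold
    ]
```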
Documentation
Comprehensive planning documentation available:
- LLM Refactor Roadmap (15K words) - Full implementation plan
- LLM Architecture (8K words) - Technical specification
- Migration Guide (3K words) - v1.x → v2.0 upgrade path
Timeline
- Phase 0-6: 10-12 weeks implementation
- Estimated Release: Q2 2025
- GitHub Milestone: v2.0 - LLM-First Architecture
- GitHub Issues: #114-#120 (Phases 0-6)
🔗 Related Projects
- ContextForge Memory - Original implementation
- CodeRabbit AI - AI-powered code review
Made with ❤️ by VirtualAgentics
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file review_bot_automator-2.0.1.tar.gz.
File metadata
- Download URL: review_bot_automator-2.0.1.tar.gz
- Upload date:
- Size: 995.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f14d1c78efcb33542992257096a3a22b52789ecd62c359a92357d3d998578006 |
| MD5 | 7b79a12904c5d8142cb07f9c2b0602bc |
| BLAKE2b-256 | 1452e833c23e63fcd274c091f55dc4d7f9d20fe2d9418092253a1efa620702bd |
Provenance
The following attestation bundles were made for review_bot_automator-2.0.1.tar.gz:
Publisher: release.yml on VirtualAgentics/review-bot-automator
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: review_bot_automator-2.0.1.tar.gz
- Subject digest: f14d1c78efcb33542992257096a3a22b52789ecd62c359a92357d3d998578006
- Sigstore transparency entry: 729745933
- Sigstore integration time:
- Permalink: VirtualAgentics/review-bot-automator@a952a931010a1bdf12b113208a5e43f3d0c2cc49
- Branch / Tag: refs/tags/v2.0.1
- Owner: https://github.com/VirtualAgentics
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a952a931010a1bdf12b113208a5e43f3d0c2cc49
- Trigger Event: push
File details
Details for the file review_bot_automator-2.0.1-py3-none-any.whl.
File metadata
- Download URL: review_bot_automator-2.0.1-py3-none-any.whl
- Upload date:
- Size: 215.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 13cbb184031d453f558dea9d79e225973fe61af84f7173183f4100e9c0c78406 |
| MD5 | 66755b734eeba76427b47ca450358fbb |
| BLAKE2b-256 | 119c28e15e2d4c39afbc4a2176816fd589ec70094c6e207f2319cc1b4d170ac6 |
Provenance
The following attestation bundles were made for review_bot_automator-2.0.1-py3-none-any.whl:
Publisher: release.yml on VirtualAgentics/review-bot-automator
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: review_bot_automator-2.0.1-py3-none-any.whl
- Subject digest: 13cbb184031d453f558dea9d79e225973fe61af84f7173183f4100e9c0c78406
- Sigstore transparency entry: 729745936
- Sigstore integration time:
- Permalink: VirtualAgentics/review-bot-automator@a952a931010a1bdf12b113208a5e43f3d0c2cc49
- Branch / Tag: refs/tags/v2.0.1
- Owner: https://github.com/VirtualAgentics
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a952a931010a1bdf12b113208a5e43f3d0c2cc49
- Trigger Event: push