
Segmentor

Production-grade segmentation tool powered by Segment Anything Models (SAM v1 & v2).

Features

  • 🚀 Easy to use: Simple Python SDK and REST API
  • 🔌 Pluggable backends: PyTorch and ONNX Runtime support
  • 📦 Multiple models: SAM v1 and SAM v2
  • 🎯 Flexible prompts: Bounding boxes, points, or both
  • 📤 Multiple outputs: RLE, PNG, polygons, numpy arrays
  • ⚡ Performance: GPU acceleration, caching, optional tiling
  • 🐳 Ready for production: Docker images, metrics, structured logging

Installation

# Basic installation with PyTorch backend
pip install segmentor[torch]

# With FastAPI server
pip install segmentor[server,torch]

# All features
pip install segmentor[all]

Quick Start

Python SDK

from perceptra_seg import Segmentor
import numpy as np

# Initialize
segmentor = Segmentor(
    backend="torch",
    model="sam_v1",
    device="cuda"
)

# Load your image
image = np.array(...)  # or PIL.Image, path, URL

# Segment from bounding box
result = segmentor.segment_from_box(
    image,
    box=(100, 100, 400, 400),
    output_formats=["rle", "png", "polygons"]
)

print(f"Score: {result.score}, Area: {result.area} pixels")
print(f"Mask shape: {result.mask.shape}")

# Segment from points
result = segmentor.segment_from_points(
    image,
    points=[(250, 200, 1), (300, 250, 1)],  # (x, y, label)
    output_formats=["numpy"]
)

segmentor.close()

REST API

Start the server:

# Using CLI
segmentor-cli serve --config config.yaml

# Or with uvicorn
uvicorn service.main:app --host 0.0.0.0 --port 8080

Make requests:

# Segment from box
curl -X POST http://localhost:8080/v1/segment/box \
  -H "Content-Type: application/json" \
  -d '{
    "image": "base64_string_or_url",
    "box": [100, 100, 400, 400],
    "output_formats": ["rle", "png"]
  }'

# Segment from points
curl -X POST http://localhost:8080/v1/segment/points \
  -H "Content-Type: application/json" \
  -d '{
    "image": "base64_string_or_url",
    "points": [{"x": 250, "y": 200, "label": 1}],
    "output_formats": ["rle"]
  }'
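The same requests can be made from Python using only the standard library. A sketch (the endpoint shape is taken from the examples above; the host URL is an assumption for a locally running server):

```python
import base64
import json
from urllib import request


def build_box_payload(image_bytes: bytes, box, output_formats=("rle", "png")) -> dict:
    """Assemble the JSON body expected by POST /v1/segment/box."""
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "box": list(box),
        "output_formats": list(output_formats),
    }


def segment_box(image_bytes: bytes, box, host="http://localhost:8080") -> dict:
    """POST the payload and decode the JSON response (needs a running server)."""
    body = json.dumps(build_box_payload(image_bytes, box)).encode()
    req = request.Request(
        host + "/v1/segment/box",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```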

Docker

# Build CPU image
docker build -t segmentor:cpu -f Dockerfile .

# Build GPU image
docker build -t segmentor:gpu -f Dockerfile.gpu .

# Run
docker run -p 8080:8080 segmentor:cpu

# With GPU
docker run --gpus all -p 8080:8080 segmentor:gpu

Configuration

Edit config.yaml or use environment variables:

model:
  name: "sam_v1"  # sam_v1 | sam_v2
  encoder_variant: "vit_h"  # vit_h | vit_l | vit_b
  checkpoint_path: null  # Auto-download if null

runtime:
  backend: "torch"  # torch | onnx
  device: "cuda"  # cuda | cpu
  precision: "fp32"  # fp16 | bf16 | fp32

server:
  host: "0.0.0.0"
  port: 8080
  api_keys: []  # Add keys for authentication

Environment overrides:

export SEGMENTOR_RUNTIME_DEVICE=cpu
export SEGMENTOR_MODEL_NAME=sam_v2
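One way such overrides can work, mapping SEGMENTOR_<SECTION>_<KEY> onto the nested YAML keys — a sketch of the idea, not necessarily the shipped loader's exact parsing rules:

```python
import os


def apply_env_overrides(config: dict, prefix: str = "SEGMENTOR_") -> dict:
    """Override nested config values from variables like SEGMENTOR_RUNTIME_DEVICE.

    The first underscore after the prefix separates the section from the key,
    so SEGMENTOR_MODEL_ENCODER_VARIANT maps to config["model"]["encoder_variant"].
    """
    for name, value in os.environ.items():
        if not name.startswith(prefix):
            continue
        section, _, key = name[len(prefix):].lower().partition("_")
        if section in config and key in config[section]:
            config[section][key] = value
    return config
```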

CLI Usage

# Segment from bounding box
segmentor-cli segment-box \
  --image path/to/image.jpg \
  --box 10 20 200 240 \
  --out mask.png \
  --backend torch \
  --model sam_v1

# Start server
segmentor-cli serve --config config.yaml

Model Weights & Licenses

This tool uses Meta's Segment Anything Models. Model weights are licensed under Apache 2.0.

SAM v1 and SAM v2 checkpoints are auto-downloaded to ~/.cache/segmentor/ on first use; set checkpoint_path in config.yaml to use a local copy instead.

Important: Review Meta's license terms before commercial use.

Development

# Clone repository
git clone https://github.com/yourusername/segmentor.git
cd segmentor

# Install in development mode
pip install -e .[dev,all]

# Install pre-commit hooks
pre-commit install

# Run tests
pytest tests/ -v --cov=segmentor

# Run linters
black segmentor/ service/
isort segmentor/ service/
ruff check segmentor/ service/
mypy segmentor/ service/

# Build documentation
cd docs && mkdocs serve

Architecture

┌──────────────────────────────────────────────────────┐
│                    Segmentor SDK                     │
│  ┌────────────────────────────────────────────────┐  │
│  │    segment_from_box / segment_from_points      │  │
│  └────────────────────────┬───────────────────────┘  │
│                           │                          │
│  ┌────────────────────────▼───────────────────────┐  │
│  │           Backend Abstraction Layer            │  │
│  │  ┌──────────┬──────────┬──────────┬──────────┐ │  │
│  │  │ Torch    │ Torch    │  ONNX    │  ONNX    │ │  │
│  │  │ SAM v1   │ SAM v2   │  SAM v1  │  SAM v2  │ │  │
│  │  └──────────┴──────────┴──────────┴──────────┘ │  │
│  └────────────────────────┬───────────────────────┘  │
│                           │                          │
│  ┌────────────────────────▼───────────────────────┐  │
│  │      Utilities: Image I/O, Mask Utils,         │  │
│  │      Tiling, Caching, Postprocessing           │  │
│  └────────────────────────────────────────────────┘  │
└───────────────────────────┬──────────────────────────┘
                            │ REST API
                            ▼
┌──────────────────────────────────────────────────────┐
│                   FastAPI Service                    │
│  ┌────────────────────────────────────────────────┐  │
│  │  /v1/segment/box  │  /v1/segment/points        │  │
│  │  /v1/segment      │  /v1/healthz  │  /metrics  │  │
│  └────────────────────────────────────────────────┘  │
│                                                      │
│  Auth • CORS • Logging • Metrics • Rate Limiting     │
└──────────────────────────────────────────────────────┘

Key Design Decisions

  1. Backend Protocol Pattern: Uses Python's Protocol for type-safe backend abstraction, allowing new backends to be added without modifying core logic.

  2. Configuration-Driven: Single YAML config controls all aspects (model, runtime, outputs), with environment variable overrides for deployment flexibility.

  3. Separation of Concerns:

    • core.py: High-level API and orchestration
    • backends/: Model-specific inference logic
    • utils/: Reusable image/mask operations
    • service/: HTTP layer completely separate from SDK

  4. Output Flexibility: Supports multiple output formats (RLE, PNG, polygons, numpy) generated on-demand to minimize memory usage.

  5. Caching Strategy: LRU cache for image embeddings (expensive to compute), keyed by image hash for exact-match speedups.

  6. Error Handling: Custom exception hierarchy maps to appropriate HTTP status codes in the service layer.

  7. ONNX Placeholder: ONNX backends are stubs requiring pre-exported models, as SAM's official ONNX export is complex and model-specific.
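The Protocol pattern in decision 1 can be illustrated in a few lines. The method names below are illustrative, not the library's actual base class; the point is that a backend conforms structurally, with no inheritance required:

```python
from typing import Optional, Protocol, runtime_checkable

import numpy as np


@runtime_checkable
class SegmentationBackend(Protocol):
    """Structural interface a backend must satisfy (illustrative names)."""

    def load(self, checkpoint_path: Optional[str]) -> None: ...
    def predict_box(self, image: np.ndarray, box: tuple) -> np.ndarray: ...
    def close(self) -> None: ...


class DummyBackend:
    """Trivial backend showing that conformance is purely structural:
    it never subclasses SegmentationBackend, yet satisfies it."""

    def load(self, checkpoint_path=None) -> None:
        pass

    def predict_box(self, image: np.ndarray, box: tuple) -> np.ndarray:
        return np.zeros(image.shape[:2], dtype=bool)  # empty HxW mask

    def close(self) -> None:
        pass
```

With @runtime_checkable, isinstance() checks only that the methods exist, which is what lets new backends plug in without touching core logic.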

API Reference

Python SDK

Segmentor

Main class for segmentation operations.

Constructor:

Segmentor(
    config: SegmentorConfig | None = None,
    **kwargs
)

Methods:

  • segment_from_box(image, box, *, output_formats, return_overlay) โ†’ SegmentationResult
  • segment_from_points(image, points, *, output_formats, return_overlay) โ†’ SegmentationResult
  • segment(image, boxes, points, *, strategy, output_formats, return_overlay) โ†’ list[SegmentationResult]
  • warmup(image_size) โ†’ None
  • set_backend(backend_name) โ†’ None
  • close() โ†’ None

SegmentationResult

Result object containing:

  • mask: numpy array (HxW) if 'numpy' in output_formats
  • rle: COCO RLE dict if 'rle' in output_formats
  • polygons: List of polygon contours if 'polygons' in output_formats
  • png_bytes: PNG-encoded mask if 'png' in output_formats
  • score: Confidence score (0-1)
  • area: Number of pixels in mask
  • bbox: Bounding box (x1, y1, x2, y2)
  • latency_ms: Processing time
  • model_info: Dict with model metadata
  • request_id: Unique request identifier

REST API

POST /v1/segment/box

Segment from bounding box.

Request:

{
  "image": "base64_string_or_url",
  "box": [x1, y1, x2, y2],
  "output_formats": ["rle", "png", "polygons"],
  "strategy": "largest"
}

Response:

{
  "rle": {"size": [H, W], "counts": [...]},
  "png_base64": "...",
  "polygons": [[[x1, y1], [x2, y2], ...]],
  "score": 0.95,
  "area": 12345,
  "bbox": [x1, y1, x2, y2],
  "latency_ms": 123.4,
  "model_info": {"name": "sam_v1", "backend": "torch"},
  "request_id": "uuid"
}
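A response in this shape can be unpacked directly; a small helper (field names as documented above):

```python
import base64


def summarize_response(resp: dict) -> str:
    """Format the headline fields of a /v1/segment/box response."""
    png_bytes = base64.b64decode(resp["png_base64"]) if resp.get("png_base64") else b""
    return (
        f"request {resp['request_id']}: score={resp['score']:.2f}, "
        f"area={resp['area']} px, png={len(png_bytes)} bytes"
    )
```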

POST /v1/segment/points

Segment from point prompts.

Request:

{
  "image": "base64_string_or_url",
  "points": [
    {"x": 100, "y": 200, "label": 1},
    {"x": 150, "y": 220, "label": 1}
  ],
  "output_formats": ["rle"]
}

POST /v1/segment

General segmentation supporting boxes and/or points.

Request:

{
  "image": "base64_string_or_url",
  "boxes": [[x1, y1, x2, y2], ...],
  "points": [{"x": 100, "y": 200, "label": 1}, ...],
  "strategy": "merge",
  "output_formats": ["rle"]
}

Strategies:

  • "largest": Return only the largest mask
  • "merge": Union all masks into one
  • "all": Return all masks as separate results
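In NumPy terms, the three strategies reduce a list of per-prompt masks roughly like this (an illustrative sketch, not the service's code):

```python
import numpy as np


def combine_masks(masks: list, strategy: str = "largest"):
    """Reduce per-prompt boolean masks according to the requested strategy."""
    if strategy == "all":
        return masks                                     # one result per mask
    if strategy == "merge":
        return np.logical_or.reduce(masks)               # union into one mask
    if strategy == "largest":
        return max(masks, key=lambda m: int(m.sum()))    # biggest pixel area wins
    raise ValueError(f"unknown strategy: {strategy!r}")
```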

GET /v1/healthz

Health check endpoint.

Response: {"status": "ok"}

GET /metrics

Prometheus metrics endpoint.

Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=segmentor --cov-report=html

# Run specific test file
pytest tests/test_core.py -v

# Run with markers
pytest -m "not slow"

Test coverage includes:

  • ✅ Core segmentation logic
  • ✅ Backend switching
  • ✅ Input validation
  • ✅ Output format conversion
  • ✅ REST API endpoints
  • ✅ Error handling
  • ✅ Configuration loading

Performance Tips

  1. Use GPU: Set device: "cuda" for 10-50x speedup
  2. Enable caching: Keep cache.enabled: true for repeated images
  3. Batch processing: Use segment() with multiple boxes instead of separate calls
  4. FP16 precision: Set precision: "fp16" on GPU for 2x speedup with minimal quality loss
  5. Warm up: Call warmup() before processing to avoid first-call overhead
  6. Tiling: Enable for very large images (>4K) to avoid OOM
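Tip 2's embedding cache can be pictured as a small LRU map keyed by an image content hash — a sketch of the idea described here, not the library's implementation:

```python
import hashlib
from collections import OrderedDict


class EmbeddingCache:
    """Tiny LRU cache for expensive image embeddings, keyed by content hash."""

    def __init__(self, max_items: int = 32):
        self.max_items = max_items
        self._store: OrderedDict = OrderedDict()

    @staticmethod
    def key(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()  # exact-match key

    def get(self, image_bytes: bytes):
        k = self.key(image_bytes)
        if k in self._store:
            self._store.move_to_end(k)  # mark as recently used
            return self._store[k]
        return None                      # miss: caller must recompute

    def put(self, image_bytes: bytes, embedding) -> None:
        k = self.key(image_bytes)
        self._store[k] = embedding
        self._store.move_to_end(k)
        if len(self._store) > self.max_items:
            self._store.popitem(last=False)  # evict least recently used
```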

Troubleshooting

CUDA out of memory

  • Reduce runtime.batch_size
  • Enable tiling.enabled: true
  • Use smaller model variant (vit_b instead of vit_h)
  • Use precision: "fp16"

Slow inference

  • Ensure GPU is being used: check torch.cuda.is_available()
  • Warm up the model first
  • Enable caching for repeated images
  • Use FP16 precision

Import errors

  • Ensure correct extras installed: pip install segmentor[torch]
  • For SAM v1: pip install git+https://github.com/facebookresearch/segment-anything.git
  • For SAM v2: pip install git+https://github.com/facebookresearch/segment-anything-2.git

Model download fails

  • Check internet connection
  • Manually download from URLs in README and set checkpoint_path in config
  • Verify disk space in ~/.cache/segmentor/

Roadmap

  • HQ-SAM and MobileSAM backend support
  • Complete ONNX backend implementation
  • Video segmentation support (SAM 2 temporal)
  • Automatic mask quality filtering
  • Batch API endpoint
  • WebSocket streaming API
  • Triton Inference Server backend
  • Model quantization (INT8)
  • Multi-GPU support

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Ensure all tests pass and coverage >80%
  5. Run pre-commit hooks
  6. Submit a pull request

License

Apache License 2.0 - see LICENSE file.

This project uses SAM models from Meta, which are also licensed under Apache 2.0.

Citation

If you use this tool in research, please cite the original SAM papers:

@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}

@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
  journal={arXiv:2408.00714},
  year={2024}
}

Built with ❤️ by the Segmentor team

Project Structure

segmentor/
├── pyproject.toml
├── README.md
├── config.yaml
├── Dockerfile
├── Dockerfile.gpu
├── .pre-commit-config.yaml
├── .github/
│   └── workflows/
│       └── ci.yml
├── segmentor/
│   ├── __init__.py
│   ├── core.py
│   ├── config.py
│   ├── models.py
│   ├── exceptions.py
│   ├── backends/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── torch_sam_v1.py
│   │   ├── torch_sam_v2.py
│   │   ├── onnx_sam_v1.py
│   │   └── onnx_sam_v2.py
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── image_io.py
│   │   ├── mask_utils.py
│   │   ├── tiling.py
│   │   └── cache.py
│   ├── cli.py
│   └── quickstart.py
├── service/
│   ├── __init__.py
│   ├── main.py
│   ├── routes.py
│   └── middleware.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_core.py
│   ├── test_backends.py
│   ├── test_utils.py
│   └── test_service.py
└── docs/
    ├── index.md
    ├── quickstart.md
    ├── api.md
    └── config.md
For Package Developers

Installation for Development

# Clone the repository
git clone https://github.com/yourusername/segmentor.git
cd segmentor

# Install in editable mode with all dependencies
pip install -e .[all]

# Install pre-commit hooks
pre-commit install

Using Segmentor in Your Project

Install from PyPI (when published):

pip install segmentor[torch]

Install from GitHub:

pip install git+https://github.com/yourusername/segmentor.git

Install specific version:

pip install segmentor[torch]==0.1.0

Add to requirements.txt:

segmentor[torch]>=0.1.0

Add to pyproject.toml:

dependencies = [
    "segmentor[torch]>=0.1.0",
]

Quick Integration Example

# Add to your project
from perceptra_seg import Segmentor

class MyImageProcessor:
    def __init__(self):
        self.segmentor = Segmentor(backend="torch", device="cuda")
    
    def process(self, image, box):
        result = self.segmentor.segment_from_box(image, box)
        return result.mask

API Stability

  • Stable: Core API (Segmentor, SegmentationResult, SegmentorConfig)
  • Beta: Service endpoints may change in minor versions
  • Experimental: ONNX backends, tiling features

Version Compatibility

Segmentor Version  Python  PyTorch  NumPy
0.1.x              3.10+   2.0+     1.24+
