Perceptra Seg

Production-grade segmentation tool powered by Segment Anything Models (SAM v1 & v2).

Features

  • 🚀 Easy to use: Simple Python SDK and REST API
  • 🔌 Pluggable backends: PyTorch and ONNX Runtime support
  • 📦 Multiple models: SAM v1 and SAM v2
  • 🎯 Flexible prompts: Bounding boxes, points, or both
  • 📤 Multiple outputs: RLE, PNG, polygons, numpy arrays
  • ⚡ Performance: GPU acceleration, caching, optional tiling
  • 🐳 Ready for production: Docker images, metrics, structured logging

Installation

# Basic installation with PyTorch backend
pip install perceptra-seg[torch]

# With FastAPI server
pip install perceptra-seg[server,torch]

# All features
pip install perceptra-seg[all]

Quick Start

Python SDK

from perceptra_seg import Segmentor
import numpy as np

# Initialize
segmentor = Segmentor(
    backend="torch",
    model="sam_v1",
    device="cuda"
)

# Load your image
image = np.array(...)  # or PIL.Image, path, URL

# Segment from bounding box
result = segmentor.segment_from_box(
    image,
    box=(100, 100, 400, 400),
    output_formats=["rle", "png", "polygons"]
)

print(f"Score: {result.score}, Area: {result.area} pixels")
print(f"Mask shape: {result.mask.shape}")

# Segment from points
result = segmentor.segment_from_points(
    image,
    points=[(250, 200, 1), (300, 250, 1)],  # (x, y, label)
    output_formats=["numpy"]
)

segmentor.close()

REST API

Start the server:

# Using CLI
segmentor-cli serve --config config.yaml

# Or with uvicorn
uvicorn service.main:app --host 0.0.0.0 --port 8080

Make requests:

# Segment from box
curl -X POST http://localhost:8080/v1/segment/box \
  -H "Content-Type: application/json" \
  -d '{
    "image": "",
    "box": [100, 100, 400, 400],
    "output_formats": ["rle", "png"]
  }'

# Segment from points
curl -X POST http://localhost:8080/v1/segment/points \
  -H "Content-Type: application/json" \
  -d '{
    "image": "",
    "points": [{"x": 250, "y": 200, "label": 1}],
    "output_formats": ["rle"]
  }'

Docker

# Build CPU image
docker build -t segmentor:cpu -f Dockerfile .

# Build GPU image
docker build -t segmentor:gpu -f Dockerfile.gpu .

# Run
docker run -p 8080:8080 segmentor:cpu

# With GPU
docker run --gpus all -p 8080:8080 segmentor:gpu

Configuration

Edit config.yaml or use environment variables:

model:
  name: "sam_v1"  # sam_v1 | sam_v2
  encoder_variant: "vit_h"  # vit_h | vit_l | vit_b
  checkpoint_path: null  # Auto-download if null

runtime:
  backend: "torch"  # torch | onnx
  device: "cuda"  # cuda | cpu
  precision: "fp32"  # fp16 | bf16 | fp32

server:
  host: "0.0.0.0"
  port: 8080
  api_keys: []  # Add keys for authentication

Environment overrides:

export SEGMENTOR_RUNTIME_DEVICE=cpu
export SEGMENTOR_MODEL_NAME=sam_v2
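A rough sketch of how such overrides could map onto the nested YAML config (`SEGMENTOR_RUNTIME_DEVICE` → `runtime.device`). The actual logic lives in `perceptra_seg/config.py`; this simplified version assumes single-word section and key names:

```python
def apply_env_overrides(config: dict, env: dict, prefix: str = "SEGMENTOR_") -> dict:
    """Map PREFIX_SECTION_KEY variables onto config[section][key]."""
    for name, value in env.items():
        if not name.startswith(prefix):
            continue
        # e.g. SEGMENTOR_RUNTIME_DEVICE -> ["runtime", "device"]
        path = name[len(prefix):].lower().split("_")
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config
```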

CLI Usage

# Segment from bounding box
segmentor-cli segment-box \
  --image path/to/image.jpg \
  --box 10 20 200 240 \
  --out mask.png \
  --backend torch \
  --model sam_v1

# Start server
segmentor-cli serve --config config.yaml

Model Weights & Licenses

This tool uses Meta's Segment Anything Models. Model weights are licensed under Apache 2.0.

SAM v1 checkpoints (auto-downloaded):

SAM v2 checkpoints (auto-downloaded):

Weights are downloaded to ~/.cache/segmentor/ on first use.

Important: Review Meta's license terms before commercial use.

Development

# Clone repository
git clone https://github.com/tannousgeagea/perceptra-seg.git
cd perceptra-seg

# Install in development mode
pip install -e .[dev,all]

# Install pre-commit hooks
pre-commit install

# Run tests
pytest tests/ -v --cov=perceptra_seg

# Run linters
black perceptra_seg/ service/
isort perceptra_seg/ service/
ruff check perceptra_seg/ service/
mypy perceptra_seg/ service/

# Build documentation
cd docs && mkdocs serve

Architecture

┌──────────────────────────────────────────────────────────┐
│                    Segmentor SDK                         │
│  ┌──────────────────────────────────────────────────┐    │
│  │     segment_from_box / segment_from_points       │    │
│  └────────────────────────┬─────────────────────────┘    │
│                           │                              │
│  ┌────────────────────────▼─────────────────────────┐    │
│  │           Backend Abstraction Layer              │    │
│  │  ┌──────────┬──────────┬──────────┬──────────┐   │    │
│  │  │ Torch    │ Torch    │  ONNX    │  ONNX    │   │    │
│  │  │ SAM v1   │ SAM v2   │  SAM v1  │  SAM v2  │   │    │
│  │  └──────────┴──────────┴──────────┴──────────┘   │    │
│  └────────────────────────┬─────────────────────────┘    │
│                           │                              │
│  ┌────────────────────────▼─────────────────────────┐    │
│  │      Utilities: Image I/O, Mask Utils,           │    │
│  │      Tiling, Caching, Postprocessing             │    │
│  └──────────────────────────────────────────────────┘    │
└──────────────────────────────────────────────────────────┘
                            │
                            │ REST API
                            ▼
┌──────────────────────────────────────────────────────────┐
│                  FastAPI Service                         │
│  ┌──────────────────────────────────────────────────┐    │
│  │  /v1/segment/box   │  /v1/segment/points         │    │
│  │  /v1/segment       │  /v1/healthz  │  /metrics   │    │
│  └──────────────────────────────────────────────────┘    │
│                                                          │
│  Auth • CORS • Logging • Metrics • Rate Limiting         │
└──────────────────────────────────────────────────────────┘

Key Design Decisions

  1. Backend Protocol Pattern: Uses Python's Protocol for type-safe backend abstraction, allowing new backends to be added without modifying core logic.

  2. Configuration-Driven: Single YAML config controls all aspects (model, runtime, outputs), with environment variable overrides for deployment flexibility.

  3. Separation of Concerns:

    • core.py: High-level API and orchestration
    • backends/: Model-specific inference logic
    • utils/: Reusable image/mask operations
    • service/: HTTP layer completely separate from SDK

  4. Output Flexibility: Supports multiple output formats (RLE, PNG, polygons, numpy) generated on-demand to minimize memory usage.

  5. Caching Strategy: LRU cache for image embeddings (expensive to compute), keyed by image hash for exact-match speedups.

  6. Error Handling: Custom exception hierarchy maps to appropriate HTTP status codes in the service layer.

  7. ONNX Placeholder: ONNX backends are stubs requiring pre-exported models, as SAM's official ONNX export is complex and model-specific.
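The backend protocol pattern from point 1 can be illustrated with a small sketch. The class and method names here are illustrative stand-ins, not the actual signatures in `perceptra_seg/backends/base.py`:

```python
from typing import Any, Protocol, Sequence

class SegmentationBackend(Protocol):
    """Structural type: any class with these methods is a valid
    backend; no inheritance from a base class is required."""
    def predict_from_box(self, image: Any, box: Sequence[int]) -> dict: ...
    def close(self) -> None: ...

class EchoBackend:
    """Toy backend that just echoes the prompt back as a result."""
    def predict_from_box(self, image, box):
        return {"bbox": list(box), "score": 1.0}
    def close(self) -> None:
        pass

def run_backend(backend: SegmentationBackend, image, box) -> dict:
    # Core logic depends only on the Protocol, so new backends
    # plug in without modifying this function.
    result = backend.predict_from_box(image, box)
    backend.close()
    return result
```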

API Reference

Python SDK

Segmentor

Main class for segmentation operations.

Constructor:

Segmentor(
    config: SegmentorConfig | None = None,
    **kwargs
)

Methods:

  • segment_from_box(image, box, *, output_formats, return_overlay) → SegmentationResult
  • segment_from_points(image, points, *, output_formats, return_overlay) → SegmentationResult
  • segment(image, boxes, points, *, strategy, output_formats, return_overlay) → list[SegmentationResult]
  • warmup(image_size) → None
  • set_backend(backend_name) → None
  • close() → None

SegmentationResult

Result object containing:

  • mask: numpy array (HxW) if 'numpy' in output_formats
  • rle: COCO RLE dict if 'rle' in output_formats
  • polygons: List of polygon contours if 'polygons' in output_formats
  • png_bytes: PNG-encoded mask if 'png' in output_formats
  • score: Confidence score (0-1)
  • area: Number of pixels in mask
  • bbox: Bounding box (x1, y1, x2, y2)
  • latency_ms: Processing time
  • model_info: Dict with model metadata
  • request_id: Unique request identifier
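For illustration, an uncompressed COCO RLE (`{"size": [H, W], "counts": [...]}`) alternates run lengths of background and foreground pixels in column-major order, starting with background. A minimal sketch of decoding one (production code would use `pycocotools.mask.decode` instead):

```python
def decode_uncompressed_rle(rle: dict) -> list:
    """Expand {"size": [H, W], "counts": [...]} into a flat list of
    0/1 pixel values in column-major order."""
    h, w = rle["size"]
    flat, value = [], 0  # runs start with background (0)
    for run in rle["counts"]:
        flat.extend([value] * run)
        value = 1 - value  # alternate 0 -> 1 -> 0 ...
    assert len(flat) == h * w, "counts must cover exactly H*W pixels"
    return flat
```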

REST API

POST /v1/segment/box

Segment from bounding box.

Request:

{
  "image": "base64_string_or_url",
  "box": [x1, y1, x2, y2],
  "output_formats": ["rle", "png", "polygons"],
  "strategy": "largest"
}

Response:

{
  "rle": {"size": [H, W], "counts": [...]},
  "png_base64": "...",
  "polygons": [[[x1, y1], [x2, y2], ...]],
  "score": 0.95,
  "area": 12345,
  "bbox": [x1, y1, x2, y2],
  "latency_ms": 123.4,
  "model_info": {"name": "sam_v1", "backend": "torch"},
  "request_id": "uuid"
}

POST /v1/segment/points

Segment from point prompts.

Request:

{
  "image": "base64_string_or_url",
  "points": [
    {"x": 100, "y": 200, "label": 1},
    {"x": 150, "y": 220, "label": 1}
  ],
  "output_formats": ["rle"]
}

POST /v1/segment

General segmentation supporting boxes and/or points.

Request:

{
  "image": "base64_string_or_url",
  "boxes": [[x1, y1, x2, y2], ...],
  "points": [{"x": 100, "y": 200, "label": 1}, ...],
  "strategy": "merge",
  "output_formats": ["rle"]
}

Strategies:

  • "largest": Return only the largest mask
  • "merge": Union all masks into one
  • "all": Return all masks as separate results

GET /v1/healthz

Health check endpoint.

Response: {"status": "ok"}

GET /metrics

Prometheus metrics endpoint.

Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=perceptra_seg --cov-report=html

# Run specific test file
pytest tests/test_core.py -v

# Run with markers
pytest -m "not slow"

Test coverage includes:

  • ✅ Core segmentation logic
  • ✅ Backend switching
  • ✅ Input validation
  • ✅ Output format conversion
  • ✅ REST API endpoints
  • ✅ Error handling
  • ✅ Configuration loading

Performance Tips

  1. Use GPU: Set device: "cuda" for 10-50x speedup
  2. Enable caching: Keep cache.enabled: true for repeated images
  3. Batch processing: Use segment() with multiple boxes instead of separate calls
  4. FP16 precision: Set precision: "fp16" on GPU for 2x speedup with minimal quality loss
  5. Warm up: Call warmup() before processing to avoid first-call overhead
  6. Tiling: Enable for very large images (>4K) to avoid OOM

Troubleshooting

CUDA out of memory

  • Reduce runtime.batch_size
  • Enable tiling.enabled: true
  • Use smaller model variant (vit_b instead of vit_h)
  • Use precision: "fp16"

Slow inference

  • Ensure GPU is being used: check torch.cuda.is_available()
  • Warm up the model first
  • Enable caching for repeated images
  • Use FP16 precision

Import errors

  • Ensure correct extras installed: pip install perceptra-seg[torch]
  • For SAM v1: pip install git+https://github.com/facebookresearch/segment-anything.git
  • For SAM v2: pip install git+https://github.com/facebookresearch/segment-anything-2.git

Model download fails

  • Check internet connection
  • Manually download from URLs in README and set checkpoint_path in config
  • Verify disk space in ~/.cache/segmentor/

Roadmap

  • HQ-SAM and MobileSAM backend support
  • Complete ONNX backend implementation
  • Video segmentation support (SAM 2 temporal)
  • Automatic mask quality filtering
  • Batch API endpoint
  • WebSocket streaming API
  • Triton Inference Server backend
  • Model quantization (INT8)
  • Multi-GPU support

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Ensure all tests pass and coverage >80%
  5. Run pre-commit hooks
  6. Submit a pull request

License

Apache License 2.0 - see LICENSE file.

This project uses SAM models from Meta, which are also licensed under Apache 2.0.

Citation

If you use this tool in research, please cite the original SAM papers:

@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}

@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
  journal={arXiv:2408.00714},
  year={2024}
}

Contact


Built with ❤️ by the Segmentor team

# Segmentor: Production-Grade Segmentation Tool

A modular, high-performance segmentation library and microservice powered by Segment Anything Models (SAM v1 & v2).

Project Structure

perceptra-seg/
โ”œโ”€โ”€ pyproject.toml
โ”œโ”€โ”€ README.md
โ”œโ”€โ”€ config.yaml
โ”œโ”€โ”€ Dockerfile
โ”œโ”€โ”€ Dockerfile.gpu
โ”œโ”€โ”€ .pre-commit-config.yaml
โ”œโ”€โ”€ .github/
โ”‚   โ””โ”€โ”€ workflows/
โ”‚       โ””โ”€โ”€ ci.yml
โ”œโ”€โ”€ perceptra_seg/
โ”‚   โ”œโ”€โ”€ __init__.py
โ”‚   โ”œโ”€โ”€ core.py
โ”‚   โ”œโ”€โ”€ config.py
โ”‚   โ”œโ”€โ”€ models.py
โ”‚   โ”œโ”€โ”€ exceptions.py
โ”‚   โ”œโ”€โ”€ backends/
โ”‚   โ”‚   โ”œโ”€โ”€ __init__.py
โ”‚   โ”‚   โ”œโ”€โ”€ base.py
โ”‚   โ”‚   โ”œโ”€โ”€ torch_sam_v1.py
โ”‚   โ”‚   โ”œโ”€โ”€ torch_sam_v2.py
โ”‚   โ”‚   โ”œโ”€โ”€ onnx_sam_v1.py
โ”‚   โ”‚   โ””โ”€โ”€ onnx_sam_v2.py
โ”‚   โ”œโ”€โ”€ utils/
โ”‚   โ”‚   โ”œโ”€โ”€ __init__.py
โ”‚   โ”‚   โ”œโ”€โ”€ image_io.py
โ”‚   โ”‚   โ”œโ”€โ”€ mask_utils.py
โ”‚   โ”‚   โ”œโ”€โ”€ tiling.py
โ”‚   โ”‚   โ””โ”€โ”€ cache.py
โ”‚   โ”œโ”€โ”€ cli.py
โ”‚   โ””โ”€โ”€ quickstart.py
โ”œโ”€โ”€ service/
โ”‚   โ”œโ”€โ”€ __init__.py
โ”‚   โ”œโ”€โ”€ main.py
โ”‚   โ”œโ”€โ”€ routes.py
โ”‚   โ””โ”€โ”€ middleware.py
โ”œโ”€โ”€ tests/
โ”‚   โ”œโ”€โ”€ __init__.py
โ”‚   โ”œโ”€โ”€ conftest.py
โ”‚   โ”œโ”€โ”€ test_core.py
โ”‚   โ”œโ”€โ”€ test_backends.py
โ”‚   โ”œโ”€โ”€ test_utils.py
โ”‚   โ””โ”€โ”€ test_service.py
โ””โ”€โ”€ docs/
    โ”œโ”€โ”€ index.md
    โ”œโ”€โ”€ quickstart.md
    โ”œโ”€โ”€ api.md
    โ””โ”€โ”€ config.md
## For Package Developers

### Installation for Development

```bash
# Clone the repository
git clone https://github.com/tannousgeagea/perceptra-seg.git
cd perceptra-seg

# Install in editable mode with all dependencies
pip install -e .[all]

# Install pre-commit hooks
pre-commit install
```

Using perceptra-seg in Your Project

Install from PyPI (when published):

pip install perceptra-seg[torch]

Install from GitHub:

pip install git+https://github.com/tannousgeagea/perceptra-seg.git

Install specific version:

pip install perceptra-seg[torch]==0.1.0

Add to requirements.txt:

perceptra-seg[torch]>=0.1.0

Add to pyproject.toml:

dependencies = [
    "perceptra-seg[torch]>=0.1.0",
]

Quick Integration Example

# Add to your project
from perceptra_seg import Segmentor

class MyImageProcessor:
    def __init__(self):
        self.segmentor = Segmentor(backend="torch", device="cuda")
    
    def process(self, image, box):
        result = self.segmentor.segment_from_box(image, box)
        return result.mask

API Stability

  • Stable: Core API (Segmentor, SegmentationResult, SegmentorConfig)
  • Beta: Service endpoints may change in minor versions
  • Experimental: ONNX backends, tiling features

Version Compatibility

| Segmentor Version | Python | PyTorch | NumPy |
|-------------------|--------|---------|-------|
| 0.1.x             | 3.10+  | 2.0+    | 1.24+ |
