Opinionated Inference Framework for Remote Sensing Deep Learning Applications.

Project description

SatVision


An opinionated framework for deploying computer vision models on remote sensing imagery.

🚀 Getting Started

The framework supports both local Python inference and scalable deployment via NVIDIA Triton Inference Server.

1. 💻 Local Inference

Best for development and direct integration into Python applications.

Installation:

pip install SatVision
# For TensorRT support (requires NVIDIA GPU and drivers):
pip install "SatVision[tensorrt-cu12]"

Usage Example:

import satvis
from pathlib import Path

# Initialize model (supports 'torch' or 'tensorrt' backends)
# Note: 'tensorrt' backend requires the optional dependency installed above.
model = satvis.get_inference_model(
    model_name="torchvision::resnet50", 
    # override_args={"backend_type": "tensorrt"} # Uncomment to use TensorRT
)

image_path = Path("path/to/your/image.jpg") # Replace with your image path

predictions = model.predict(
    image=image_path, 
    apply_transform=True
)

best_prediction = max(predictions.items(), key=lambda item: item[1])

print(f"Best prediction: {best_prediction}")
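Since `predict` returns a label-to-score mapping, the same pattern extends to ranking all classes. A small sketch, using a stand-in `predictions` dictionary in place of real model output:

```python
# Rank predictions by score and print the top 3
# (assumes `predictions` is a dict mapping label -> confidence).
predictions = {"forest": 0.71, "water": 0.08, "urban": 0.18, "cropland": 0.03}

top3 = sorted(predictions.items(), key=lambda item: item[1], reverse=True)[:3]
for label, score in top3:
    print(f"{label}: {score:.2f}")
```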

2. 🏗️ Server Deployment

Deploy models using NVIDIA Triton Inference Server for high-performance serving.

Setup:

  1. Clone the repository:

    git clone https://github.com/liopeer/SatVis.git
    cd SatVis
    
  2. Install just (command runner): Follow the instructions in the Just documentation to install just, which runs the project's predefined commands.

  3. Install Dependencies: This will install uv (package manager) and project dependencies.

    just install-uv
    just install-dev
    source .venv/bin/activate
    

    This installs tensorrt==10.9.0.34, which is compatible with the NVIDIA Triton Server image nvcr.io/nvidia/tritonserver:25.03-py3 and driver >= 570.

  4. Generate Models: Create the optimized ONNX/TensorRT models for Triton Server.

    python server/generate_models.py
    
  5. Launch Triton Server: Requirements: NVIDIA Driver >= 570, Docker with CUDA Toolkit.

    docker compose -f server/docker-compose.yml up -d
    
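Before sending requests, it can help to wait until the server reports ready on Triton's standard readiness endpoint (/v2/health/ready). A minimal probe, assuming the compose file maps Triton's default HTTP port 8000 to the host:

```python
import urllib.request


def triton_ready(base_url: str = "http://localhost:8000") -> bool:
    """Return True once Triton reports ready on /v2/health/ready."""
    try:
        with urllib.request.urlopen(f"{base_url}/v2/health/ready", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout: not ready yet.
        return False
```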
  6. Run Inference Client: Send an inference request to the running server.

    uv run scripts/predict_resnet_http.py
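The client script is part of the repository; for reference, Triton's HTTP endpoint follows the KServe v2 protocol, where an inference request is a JSON body POSTed to /v2/models/<name>/infer. A sketch of that payload (the input name, shape, and datatype below are illustrative assumptions, not taken from this project's models):

```python
import json


def build_infer_request(input_name: str, data: list, shape: list,
                        datatype: str = "FP32") -> str:
    """Serialize a KServe v2 inference request body for Triton's HTTP API."""
    payload = {
        "inputs": [
            {"name": input_name, "shape": shape, "datatype": datatype, "data": data}
        ]
    }
    return json.dumps(payload)


# Hypothetical single-image request: a 1x3x2x2 tensor, flattened row-major.
body = build_infer_request("input__0", [0.0] * 12, [1, 3, 2, 2])
```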
    

🗺️ Roadmap

Q4 2025

  • Core PyTorch Framework with Inference and ONNX Export:
    • Classification
    • Image Embeddings
    • Language Embeddings
  • TensorRT Export
    • FP32/FP16 export
    • Verification
  • Model Serving
    • NVIDIA Triton server with KServe API
    • Model Zoo for classification and embeddings
  • Documentation of REST API
  • Basic CI/CD
    • Inference server to container registry
    • Unit tests with pytest and coverage

Q1 2026

  • Model Training
    • Panoptic Segmentation
    • CLIP-style training for image and language embeddings
  • Multi-Spectral / Hyperspectral support
    • I/O for various formats (GeoTIFF, HDF5, NetCDF)
    • Data Augmentation for multi-spectral data
    • Models
  • Documentation for Python API

Q2 2026

  • Model Quantization
    • Int8 PTQ (Post Training Quantization) with calibration

Small Projects

  • PostGIS sampler for PyTorch DataLoader

👤 Maintainers

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

satvision-0.0.3.tar.gz (292.4 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

satvision-0.0.3-py3-none-any.whl (77.2 kB view details)

Uploaded Python 3

File details

Details for the file satvision-0.0.3.tar.gz.

File metadata

  • Download URL: satvision-0.0.3.tar.gz
  • Upload date:
  • Size: 292.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.10.11 (publish subcommand, Ubuntu 20.04 "focal", CI)

File hashes

Hashes for satvision-0.0.3.tar.gz
  • SHA256: 48acf1dabfa1f954540e7c3a667c7592ca99b9871a2d41bf4137bfd359327540
  • MD5: fae8d92e7e5812ddd8e288d41f613510
  • BLAKE2b-256: 15ad560fa6c1a9131a313918bd0a2074fd6461449d58f2d54be053dab373f4f9

See more details on using hashes here.

File details

Details for the file satvision-0.0.3-py3-none-any.whl.

File metadata

  • Download URL: satvision-0.0.3-py3-none-any.whl
  • Upload date:
  • Size: 77.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.10.11 (publish subcommand, Ubuntu 20.04 "focal", CI)

File hashes

Hashes for satvision-0.0.3-py3-none-any.whl
  • SHA256: cb0f20d92cd61d5625e8c7451520f848b59c97616a27fc8b25a06ca5dab0a405
  • MD5: 4d31aa50aac3e1255e6996f23c9bb7dc
  • BLAKE2b-256: 9da3fe17ff4fece546e14d63c5b8911873a3d6cbb57fda0024afa96a96a92bc2

See more details on using hashes here.
