
A package for running inference with models developed at phosphene

Project description

Torch CUDA install

If the identity_clustering package installs the CPU-only build of PyTorch, you can get the correct CUDA build either by installing it manually or by running the following command:

python -m model_inference.torch_install

Sphinx Documentation

To build the Sphinx documentation, clone this repository and run the following commands:

cd pkg-faceswap-inference

# install Sphinx and a theme
pip install sphinx
pip install sphinx-rtd-theme

# generate docs
sphinx-build -b html docs/ build/

Open the build folder; you will find index.html there. Open it in a browser.


Inference Class Documentation

The Inference class provides functionalities for inferencing video files with models. It includes utilities for timing and tracking nested function calls to aid in performance analysis.

Class Attributes

  • device (str): Specifies the device (cpu or cuda) used for computation.
  • shape (tuple): Dimensions for resizing clustered faces, default is (224, 224).
  • classes (list): The categories the model can predict, such as ["Real", "Fake"].
  • timings (dict): Stores timing data for functions with nested call relationships.
  • _clusters (None or dict): Stores clusters of faces post-clustering.

Methods

__init__(self, device: str, shape=(224, 224))

  • Initializes the Inference class with specified device and shape attributes.

generate_video_data(self, video_path: str, print_timings=True)

  • Processes a video to detect, crop, and cluster faces, and converts them to RGB format.

get_data(self, video_path: str, print_timings=True)

  • Retrieves essential data for frames, bounding boxes, images, FPS, and clustered identities from a video.

get_predictions(self, model, images: torch.Tensor, device='cuda')

  • Runs predictions on clustered face images using a model and returns logits and labels.
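For intuition, the logits-to-labels step can be sketched as an argmax over the class dimension. This is a minimal, dependency-free illustration (not the package's actual code, which operates on a torch.Tensor), assuming classes such as ["Real", "Fake"]:

```python
def logits_to_labels(logits, classes):
    """Map each logit vector to the name of its highest-scoring class.

    Plain lists keep the sketch self-contained; the real method works
    on torch tensors.
    """
    labels = []
    for row in logits:
        # argmax over the class dimension picks the winning class index
        best = max(range(len(row)), key=lambda i: row[i])
        labels.append(classes[best])
    return labels
```

For example, `logits_to_labels([[0.2, 1.5], [2.0, -1.0]], ["Real", "Fake"])` yields `["Fake", "Real"]`.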

__cvt_to_rgb(self, faces: tuple) -> torch.Tensor

  • Converts images from BGR to RGB format.
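BGR-to-RGB conversion amounts to reversing the channel axis. A minimal sketch using plain nested lists (the actual method works on torch tensors, where reversing or reindexing the channel dimension achieves the same effect):

```python
def bgr_to_rgb(image):
    """Reverse the channel order of an H x W x 3 image given as nested lists."""
    return [[pixel[::-1] for pixel in row] for row in image]

# A pure-blue pixel in BGR, [255, 0, 0], becomes [0, 0, 255] in RGB.
```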

__plot_images_grid(self, tensor: torch.Tensor, images_per_row=4)

  • Plots a grid of images from a 4D tensor.
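The grid arithmetic behind such a plot is simple. As an illustrative sketch (not the package's code), the subplot layout for N images at a given images_per_row could be computed as:

```python
import math

def grid_shape(num_images, images_per_row=4):
    """Return (rows, cols) for tiling num_images into a grid."""
    rows = math.ceil(num_images / images_per_row)
    cols = min(num_images, images_per_row)
    return rows, cols
```

With 10 images and 4 per row this gives 3 rows of 4 columns, the last row only partially filled.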

__print_result(self, result: dict, image_data: List[torch.Tensor])

  • Displays results, including images, based on the model’s predictions.

__print_timings(self, timings: dict)

  • Outputs timing information for functions with nested call relationships.

__create_sequence_dict(self, identity_data)

  • Organizes identity information into a sequence dictionary.
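The exact schema is internal to the package, but a sequence dictionary of this kind typically groups per-frame detections by identity. A hypothetical sketch, with illustrative record fields:

```python
def create_sequence_dict(identity_data):
    """Group (identity, frame_idx, bbox) records by identity.

    Illustrative only: the real method's input format may differ.
    """
    sequences = {}
    for identity, frame_idx, bbox in identity_data:
        sequences.setdefault(identity, []).append((frame_idx, bbox))
    return sequences
```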

__draw_bounding_boxes(self, video_path: str, sequence_dict, result_video_path: str)

  • Draws bounding boxes on faces in the video and saves the processed output.

Decorator timeit(func)

  • Times functions and records their nested relationships in the timings attribute.

Usage

Here's a simple example of using the Inference class:

from inference import Inference
import torch
from models.models_list import ModelList

# Initialize Inference class
device = "cuda" if torch.cuda.is_available() else "cpu"
inference = Inference(device=device)

# Load your model from ModelList
model = ModelList().load_model("face_detection_model", device)

# Process video
video_path = "/path/to/video.mp4"
output_data, num_clusters = inference.generate_video_data(video_path)

# Get predictions for each cluster
for cluster_images in output_data:
    predictions = inference.get_predictions(model, cluster_images, device)
    # __print_result is private (name-mangled), so it cannot be called from
    # outside the class; inspect the returned logits and labels directly
    print(predictions)

License

This project is licensed under the MIT License. See the LICENSE file for details.


Download files

Download the file for your platform.

Source Distribution

model_inference-0.0.7.1.tar.gz (10.7 kB)


Built Distribution


model_inference-0.0.7.1-py3-none-any.whl (11.7 kB)


File details

Details for the file model_inference-0.0.7.1.tar.gz.

File metadata

  • Download URL: model_inference-0.0.7.1.tar.gz
  • Upload date:
  • Size: 10.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.0.1 CPython/3.12.9 Linux/6.8.0-1021-azure

File hashes

Hashes for model_inference-0.0.7.1.tar.gz

  • SHA256: d1cfeb32aa7fc9c881a3ffe9679bca0c410afd72ee9629d73afee3d604fb53d6
  • MD5: 796bd0fd8c917a1156d7ce66f34b6a42
  • BLAKE2b-256: 46a2b96cba0beea737233833b35d2daf93db693a356a581c434c710a8d0c5d1d


File details

Details for the file model_inference-0.0.7.1-py3-none-any.whl.

File metadata

  • Download URL: model_inference-0.0.7.1-py3-none-any.whl
  • Upload date:
  • Size: 11.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.0.1 CPython/3.12.9 Linux/6.8.0-1021-azure

File hashes

Hashes for model_inference-0.0.7.1-py3-none-any.whl

  • SHA256: d4b24e02e4013dc85bbc210378ffce8ac1057b1bc68f9dfb01bb140a2836143f
  • MD5: d5bbd877a0b7750aa39a5f896f6c0195
  • BLAKE2b-256: 7fe302dcc56a6e9e12e3eb8bafd7670b1b29e1520007dc00bed9e41eb759f460

