
A package for running inference with models developed at phosphene

Project description

Torch CUDA install

If the identity_clustering package installs the CPU-only build of torch, you can switch to the correct CUDA build either by installing it manually or by running the following command:

python -m model_inference.torch_install
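To confirm which build ended up installed, a quick check can report whether torch sees a CUDA device. This helper is an assumption for illustration, not part of the package:

```python
# Sanity check: which device will inference actually run on?
def pick_device():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    # cuda is only reported when the CUDA build of torch is installed
    # AND a compatible GPU/driver is present.
    return "cuda" if torch.cuda.is_available() else "cpu"

print(pick_device())
```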

Sphinx Documentation

To view the Sphinx documentation, clone this repository and run the following commands:

cd pkg-faceswap-inference

# install Sphinx and a theme
pip install sphinx sphinx-rtd-theme

# generate docs
sphinx-build -b html docs/ build/

Open the build folder; you will find index.html, which you can open in a browser.


Inference Class Documentation

The Inference class provides functionalities for inferencing video files with models. It includes utilities for timing and tracking nested function calls to aid in performance analysis.

Class Attributes

  • device (str): Specifies the device (cpu or cuda) used for computation.
  • shape (tuple): Dimensions for resizing clustered faces, default is (224, 224).
  • classes (list): The categories the model can predict, such as ["Real", "Fake"].
  • timings (dict): Stores timing data for functions with nested call relationships.
  • _clusters (None or dict): Stores clusters of faces post-clustering.

Methods

__init__(self, device: str, shape=(224, 224))

  • Initializes the Inference class with specified device and shape attributes.
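Based on the attribute list above, the constructor plausibly looks like the following sketch. The defaults beyond device and shape are inferred from the documented attributes, not confirmed against the source:

```python
# Hypothetical sketch of the Inference constructor, inferred from the
# documented class attributes; the real __init__ may set more state.
class Inference:
    def __init__(self, device: str, shape=(224, 224)):
        self.device = device             # "cpu" or "cuda"
        self.shape = shape               # resize target for clustered faces
        self.classes = ["Real", "Fake"]  # prediction categories
        self.timings = {}                # nested-call timing data
        self._clusters = None            # populated after clustering
```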

generate_video_data(self, video_path: str, print_timings=True)

  • Processes a video to detect, crop, and cluster faces, and converts them to RGB format.

get_data(self, video_path: str, print_timings=True)

  • Retrieves essential data for frames, bounding boxes, images, FPS, and clustered identities from a video.

get_predictions(self, model, images: torch.Tensor, device='cuda')

  • Runs predictions on clustered face images using a model and returns logits and labels.
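The returned logits presumably map onto the classes attribute via a softmax and argmax. Below is a pure-Python illustration of that mapping; the real method operates on torch tensors, and the function name here is illustrative:

```python
import math

def logits_to_labels(logits, classes=("Real", "Fake")):
    """Map per-cluster logit rows to class labels via softmax + argmax."""
    labels = []
    for row in logits:
        # numerically stable softmax
        m = max(row)
        exps = [math.exp(v - m) for v in row]
        total = sum(exps)
        probs = [e / total for e in exps]
        labels.append(classes[probs.index(max(probs))])
    return labels
```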

__cvt_to_rgb(self, faces: tuple) -> torch.Tensor

  • Converts images from BGR to RGB format.
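Face crops from OpenCV-style pipelines arrive in BGR channel order, so this method presumably reverses the channel axis. A toy illustration on nested lists (the actual method takes a tuple of faces and returns a torch.Tensor):

```python
def bgr_to_rgb(image):
    """Reverse the channel order of an H x W x 3 image given as nested
    lists with channels ordered (B, G, R)."""
    return [[pixel[::-1] for pixel in row] for row in image]
```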

__plot_images_grid(self, tensor: torch.Tensor, images_per_row=4)

  • Plots a grid of images from a 4D tensor.

__print_result(self, result: dict, image_data: List[torch.Tensor])

  • Displays results, including images, based on the model’s predictions.

__print_timings(self, timings: dict)

  • Outputs timing information for functions with nested call relationships.

__create_sequence_dict(self, identity_data)

  • Organizes identity information into a sequence dictionary.

__draw_bounding_boxes(self, video_path: str, sequence_dict, result_video_path: str)

  • Draws bounding boxes on faces in the video and saves the processed output.

Decorator timeit(func)

  • Times functions and records their nested relationships in the timings attribute.
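One plausible implementation of such a decorator, shown here as a standalone sketch: the package's version records into the instance's timings attribute, and the parent-tracking scheme below (a module-level call stack) is an assumption:

```python
import time
import functools

timings = {}      # func name -> {"elapsed": float, "parent": str or None}
_call_stack = []  # names of functions currently executing

def timeit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # the caller (if any) is whatever is currently on top of the stack
        parent = _call_stack[-1] if _call_stack else None
        _call_stack.append(func.__name__)
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            _call_stack.pop()
            timings[func.__name__] = {
                "elapsed": time.perf_counter() - start,
                "parent": parent,
            }
    return wrapper

@timeit
def inner():
    time.sleep(0.01)

@timeit
def outer():
    inner()
```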

Usage

Here's a simple example of using the Inference class:

from inference import Inference
import torch
from models.models_list import ModelList

# Initialize Inference class
device = "cuda" if torch.cuda.is_available() else "cpu"
inference = Inference(device=device)

# Load your model from ModelList
model = ModelList().load_model("face_detection_model", device)

# Process video
video_path = "/path/to/video.mp4"
output_data, num_clusters = inference.generate_video_data(video_path)

# Get predictions for each cluster
for cluster_images in output_data:
    logits, labels = inference.get_predictions(model, cluster_images, device)
    # __print_result is private (name-mangled), so it cannot be called
    # from outside the class; inspect the returned values instead.
    print(labels)

License

This project is licensed under the MIT License. See the LICENSE file for details.


Download files

Download the file for your platform.

Source Distribution

model_inference-0.0.7.tar.gz (10.7 kB)

Uploaded Source

Built Distribution


model_inference-0.0.7-py3-none-any.whl (11.7 kB)

Uploaded Python 3

File details

Details for the file model_inference-0.0.7.tar.gz.

File metadata

  • Download URL: model_inference-0.0.7.tar.gz
  • Upload date:
  • Size: 10.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.0.1 CPython/3.12.9 Linux/6.8.0-1021-azure

File hashes

Hashes for model_inference-0.0.7.tar.gz
  • SHA256: 7aac5de942256c350d4e4bd648577babe6f4cb80706a4af5291cb2d7190bd4fe
  • MD5: b18d96faf1130351d0bed9c76bcf3378
  • BLAKE2b-256: 8fa3fbf669f167ac1b2e46d80ddd08129a3195becbc52e0214d0c0d5a89cfb80


File details

Details for the file model_inference-0.0.7-py3-none-any.whl.

File metadata

  • Download URL: model_inference-0.0.7-py3-none-any.whl
  • Upload date:
  • Size: 11.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.0.1 CPython/3.12.9 Linux/6.8.0-1021-azure

File hashes

Hashes for model_inference-0.0.7-py3-none-any.whl
  • SHA256: 590853a4776b602dad0bd9be03a46d464b084cf2d0ba7b10e3a34f78488b50dc
  • MD5: 458922f433bc2756de731ed901b8b7b5
  • BLAKE2b-256: f52c2692bd2b61f5e5b2f4341764b16fbbdd58202941ad87f07453129fa977cd

