
A package for running inference with models developed at phosphene

Project description

Torch CUDA install

If the identity_clustering package installs the CPU-only build of PyTorch, you can switch to the correct CUDA build either by installing it manually or by running the following command:

python -m model_inference.torch_install
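For the manual route, CUDA-enabled PyTorch wheels can be pulled from PyTorch's own package index; the `cu121` tag below is only an example and should match your installed CUDA driver version:

```shell
# install the CUDA 12.1 build of PyTorch (adjust cu121 to match your driver)
pip install torch --index-url https://download.pytorch.org/whl/cu121
```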

Sphinx Documentation

To view the Sphinx documentation, clone this repository and run the following commands:

cd pkg-faceswap-inference

# install Sphinx and a theme
pip install sphinx
pip install sphinx-rtd-theme

# generate docs
sphinx-build -b html docs/ build/

Open the build folder; there you will find index.html, which you can open in a browser.


Inference Class Documentation

The Inference class provides functionalities for inferencing video files with models. It includes utilities for timing and tracking nested function calls to aid in performance analysis.

Class Attributes

  • device (str): Specifies the device (cpu or cuda) used for computation.
  • shape (tuple): Dimensions for resizing clustered faces, default is (224, 224).
  • classes (list): The categories the model can predict, such as ["Real", "Fake"].
  • timings (dict): Stores timing data for functions with nested call relationships.
  • _clusters (None or dict): Stores clusters of faces post-clustering.

Methods

__init__(self, device: str, shape=(224, 224))

  • Initializes the Inference class with specified device and shape attributes.
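A minimal sketch of the constructor implied by the attribute list above (the names come from that list; the real implementation may initialize more state):

```python
class Inference:
    """Sketch of the attribute layout described in this document."""

    def __init__(self, device: str, shape=(224, 224)):
        self.device = device              # "cpu" or "cuda"
        self.shape = shape                # target size for clustered faces
        self.classes = ["Real", "Fake"]   # categories the model predicts
        self.timings = {}                 # nested timing records, filled by @timeit
        self._clusters = None             # populated after clustering
```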

generate_video_data(self, video_path: str, print_timings=True)

  • Processes a video to detect, crop, and cluster faces, and converts them to RGB format.

get_data(self, video_path: str, print_timings=True)

  • Retrieves frames, bounding boxes, images, FPS, and clustered identities from a video.

get_predictions(self, model, images: torch.Tensor, device='cuda')

  • Runs predictions on clustered face images using a model and returns logits and labels.
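The mapping from logits to labels can be sketched without the model: an argmax over each row of scores, indexed into the `classes` list. This is a pure-Python illustration; the actual method runs the model on the configured device and operates on tensors.

```python
def logits_to_labels(logits, classes=("Real", "Fake")):
    """Map one row of raw scores per image to a class name via argmax."""
    labels = []
    for row in logits:
        best = max(range(len(row)), key=lambda i: row[i])
        labels.append(classes[best])
    return labels

print(logits_to_labels([[2.3, -0.7], [-1.1, 0.4]]))  # → ['Real', 'Fake']
```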

__cvt_to_rgb(self, faces: tuple) -> torch.Tensor

  • Converts images from BGR to RGB format.
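The channel swap at the heart of this conversion can be shown without torch. This sketch works on an H x W x 3 image given as nested lists; the real method presumably does the equivalent flip on a tensor's channel axis.

```python
def bgr_to_rgb(pixel_rows):
    """Swap the B and R channels of an H x W x 3 image given as nested lists."""
    return [[[px[2], px[1], px[0]] for px in row] for row in pixel_rows]

image = [[[255, 0, 0]]]   # one blue pixel in BGR order
print(bgr_to_rgb(image))  # → [[[0, 0, 255]]]
```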

__plot_images_grid(self, tensor: torch.Tensor, images_per_row=4)

  • Plots a grid of images from a 4D tensor.

__print_result(self, result: dict, image_data: List[torch.Tensor])

  • Displays results, including images, based on the model’s predictions.

__print_timings(self, timings: dict)

  • Outputs timing information for functions with nested call relationships.

__create_sequence_dict(self, identity_data)

  • Organizes identity information into a sequence dictionary.

__draw_bounding_boxes(self, video_path: str, sequence_dict, result_video_path: str)

  • Draws bounding boxes on faces in the video and saves the processed output.

Decorator timeit(func)

  • Times functions and records their nested relationships in the timings attribute.
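A standalone sketch of such a decorator, assuming a call stack is used to attribute each timed function to its timed caller (the real decorator stores its records in the class's `timings` attribute; module-level state is used here for simplicity):

```python
import time
from functools import wraps

_call_stack = []  # names of timed functions currently executing
timings = {}      # {name: {"parent": caller name or None, "seconds": float}}

def timeit(func):
    """Record how long func runs and which timed function called it."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        parent = _call_stack[-1] if _call_stack else None
        _call_stack.append(func.__name__)
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            _call_stack.pop()
            timings[func.__name__] = {
                "parent": parent,
                "seconds": time.perf_counter() - start,
            }
    return wrapper

@timeit
def inner():
    time.sleep(0.01)

@timeit
def outer():
    inner()

outer()
print(timings["inner"]["parent"])  # → outer
```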

Usage

Here's a simple example of using the Inference class:

from inference import Inference
import torch
from models.models_list import ModelList

# Initialize Inference class
device = "cuda" if torch.cuda.is_available() else "cpu"
inference = Inference(device=device)

# Load your model from ModelList
model = ModelList().load_model("face_detection_model", device)

# Process video
video_path = "/path/to/video.mp4"
output_data, num_clusters = inference.generate_video_data(video_path)

# Get predictions for each cluster
for cluster_images in output_data:
    logits, labels = inference.get_predictions(model, cluster_images, device)
    print(labels)

License

This project is licensed under the MIT License. See the LICENSE file for details.


