PyNetsPresso
Use PyNetsPresso for a seamless model optimization process. PyNetsPresso resolves AI-related constraints in business use cases, enabling cost efficiency and enhanced performance: it removes the requirement for high-spec servers and constant network connectivity, and it helps prevent high latency and personal data breaches.

PyNetsPresso is a Python interface to the NetsPresso web application and REST API.

Easily compress various models with our resources. Please browse the Docs for details, and join our Discussion Forum to provide feedback or share your use cases.

To get started with PyNetsPresso, sign up at either NetsPresso or PyNetsPresso.




Steps (Types): Description

• Train (Model Zoo): Build and train models.
  - Image Classification: PyTorch-CIFAR-Models
  - Object Detection: YOLO Fastest, YOLOX, YOLOv5, YOLOv7
  - Semantic Segmentation: PIDNet
  - Pose Estimation: YOLOv8
• Compress (np.compressor): Compress and optimize the user's pre-trained model.
• Convert (np.launcher): Convert AI models to run efficiently on the desired hardware, and provide easy installation for seamless usage of the converted AI models.

Installation

There are two ways to install PyNetsPresso: from PyPI with pip, or manually from our project's GitHub repository.

This package requires Python 3.8 or higher.

From PyPI (Recommended)

pip install netspresso

From GitHub

git clone https://github.com/nota-netspresso/pynetspresso.git
cd pynetspresso
pip install -e .

Quick Start

⭐⭐⭐ (New Feature) Train ⭐⭐⭐

from loguru import logger
from netspresso.trainer import ModelTrainer, Task

trainer = ModelTrainer(task=Task.OBJECT_DETECTION)
logger.info(trainer.available_models)  # ['EfficientFormer', 'YOLOX']

trainer.set_dataset_config(
    name="traffic_sign_config_example",
    root_path="/root/traffic-sign",
    train_image="images/train",
    train_label="labels/train",
    valid_image="images/valid",
    valid_label="labels/valid",
    id_mapping=["prohibitory", "danger", "mandatory", "other"],
)
trainer.set_model_config(model_name="YOLOX")
trainer.set_training_config(epochs=40, batch_size=16, lr=6e-3, opt="adamw", warmup_epochs=10)

trainer.train(gpus="0, 1")

Train with yaml configuration files

from netspresso.trainer import ModelTrainer, Task

trainer = ModelTrainer(task=Task.IMAGE_CLASSIFICATION)

trainer.set_dataset_config_with_yaml(yaml_path="config/data/beans.yaml")
trainer.set_model_config_with_yaml(yaml_path="config/model/resnet50-classification.yaml")

trainer.train(gpus="0, 1")

Download config folder from netspresso-trainer

If you want to configure training with yaml files, download the config folder and use it.

python tools/github_download.py --repo Nota-NetsPresso/netspresso-trainer --path config

Login

To use PyNetsPresso, enter the email and password you registered with NetsPresso.

from netspresso.client import SessionClient

session = SessionClient(email='YOUR_EMAIL', password='YOUR_PASSWORD')
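To avoid hardcoding credentials in scripts, a common pattern is to read them from environment variables. The sketch below is our own convention, not part of the netspresso library; the variable names are illustrative.

```python
import os

def load_netspresso_credentials():
    """Read NetsPresso credentials from environment variables.

    NETSPRESSO_EMAIL / NETSPRESSO_PASSWORD are our own naming choice,
    not names the library itself looks for.
    """
    email = os.environ.get("NETSPRESSO_EMAIL")
    password = os.environ.get("NETSPRESSO_PASSWORD")
    if not email or not password:
        raise RuntimeError("Set NETSPRESSO_EMAIL and NETSPRESSO_PASSWORD first.")
    return email, password
```

The returned pair can then be passed to `SessionClient(email=..., password=...)` as in the example above.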

Automatic Compression

Automatically compress a model by setting a compression ratio for it.

Enter the model name, task, framework, input shapes, the path of the pre-trained model, the output path for the compressed model, and the compression ratio.

from netspresso.compressor import ModelCompressor, Task, Framework

compressor = ModelCompressor(user_session=session)
compressed_model = compressor.automatic_compression(
    model_name="YOUR_MODEL_NAME",
    task=Task.IMAGE_CLASSIFICATION,
    framework=Framework.TENSORFLOW_KERAS,
    input_shapes="YOUR_MODEL_INPUT_SHAPES",  # ex) [{"batch": 1, "channel": 3, "dimension": [32, 32]}]
    input_path="YOUR_MODEL_PATH",  # ex) "./examples/sample_models/mobilenetv1.h5"
    output_path="OUTPUT_PATH",  # ex) ./outputs/compressed/compressed_model.h5,
    compression_ratio=0.5,
)
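The `input_shapes` argument follows the structure shown in the inline comment above. Here is a minimal sketch of building and sanity-checking it in plain Python (the validation helper is our own illustration, not part of the library):

```python
# Input shape description for a 32x32 RGB classification model,
# matching the format shown in the inline comment above.
input_shapes = [{"batch": 1, "channel": 3, "dimension": [32, 32]}]

def validate_input_shapes(shapes):
    """Basic sanity check of the shape description before submitting
    a compression request (illustrative helper, not a library API)."""
    for shape in shapes:
        assert shape["batch"] >= 1, "batch must be positive"
        assert shape["channel"] >= 1, "channel must be positive"
        assert len(shape["dimension"]) == 2, "expected [height, width]"
    return True
```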

Convert Model and Benchmark the Converted Model

Convert an ONNX model into a TensorRT model, and benchmark the converted model on a Jetson AGX Orin.

from loguru import logger
from netspresso.launcher import ModelConverter, ModelBenchmarker, ModelFramework, DeviceName, SoftwareVersion

converter = ModelConverter(user_session=session)
conversion_task = converter.convert_model(
    model_path="YOUR_MODEL_PATH",  # ex) "./examples/sample_models/test.onnx"
    target_framework=ModelFramework.TENSORRT,
    target_device_name=DeviceName.JETSON_AGX_ORIN,
    target_software_version=SoftwareVersion.JETPACK_5_0_1,
    output_path="CONVERTED_MODEL_PATH"  # ex) "./outputs/converted/converted_model.trt"
)
logger.info(conversion_task)

benchmarker = ModelBenchmarker(user_session=session)
benchmark_task = benchmarker.benchmark_model(
    model_path="CONVERTED_MODEL_PATH",  # ex) "./outputs/converted/converted_model.trt"
    target_device_name=DeviceName.JETSON_AGX_ORIN,
    target_software_version=SoftwareVersion.JETPACK_5_0_1,
)
logger.info(f"model inference latency: {benchmark_task.latency} ms")
logger.info(f"model gpu memory footprint: {benchmark_task.memory_footprint_gpu} MB")
logger.info(f"model cpu memory footprint: {benchmark_task.memory_footprint_cpu} MB")

Available Options for Launcher (Convert, Benchmark)

Available Target Frameworks for Conversion with Source Models

| Target / Source Model | ONNX | TENSORFLOW_KERAS | TENSORFLOW |
|-----------------------|------|------------------|------------|
| TENSORRT              | ✔️   |                  |            |
| DRPAI                 | ✔️   |                  |            |
| OPENVINO              | ✔️   |                  |            |
| TENSORFLOW_LITE       | ✔️   | ✔️               | ✔️         |
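The matrix above can be expressed as a simple lookup table. The sketch below is purely illustrative plain Python (not a NetsPresso API) using the framework names as strings:

```python
# Supported source frameworks per conversion target, per the table above.
CONVERSION_TARGETS = {
    "TENSORRT": {"ONNX"},
    "DRPAI": {"ONNX"},
    "OPENVINO": {"ONNX"},
    "TENSORFLOW_LITE": {"ONNX", "TENSORFLOW_KERAS", "TENSORFLOW"},
}

def can_convert(source, target):
    """Return True if a model in `source` framework can be
    converted to the `target` framework."""
    return source in CONVERSION_TARGETS.get(target, set())
```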

Available Devices for Framework

| Device / Framework           | ONNX | TENSORRT | TENSORFLOW_LITE | DRPAI | OPENVINO |
|------------------------------|------|----------|-----------------|-------|----------|
| RASPBERRY_PI_4B              | ✔️   |          | ✔️              |       |          |
| RASPBERRY_PI_3B_PLUS         | ✔️   |          | ✔️              |       |          |
| RASPBERRY_PI_ZERO_W          | ✔️   |          | ✔️              |       |          |
| RASPBERRY_PI_ZERO_2W         | ✔️   |          | ✔️              |       |          |
| RENESAS_RZ_V2L               | ✔️   |          |                 | ✔️    |          |
| RENESAS_RZ_V2M               | ✔️   |          |                 | ✔️    |          |
| RENESAS_RA8D1                |      |          | ✔️ (only INT8)  |       |          |
| ALIF_ENSEMBLE_E7_DEVKIT_GEN2 |      |          | ✔️ (only INT8)  |       |          |
| JETSON_NANO                  | ✔️   | ✔️       |                 |       |          |
| JETSON_TX2                   | ✔️   | ✔️       |                 |       |          |
| JETSON_XAVIER                | ✔️   | ✔️       |                 |       |          |
| JETSON_NX                    | ✔️   | ✔️       |                 |       |          |
| JETSON_AGX_ORIN              | ✔️   | ✔️       |                 |       |          |
| AWS_T4                       | ✔️   | ✔️       |                 |       |          |
| Intel_XEON_W_2233            |      |          |                 |       | ✔️       |

Available Software Versions for Jetson Devices

A software version is required only for Jetson devices. If you are using a different device, you do not need to enter it.

| Software Version / Device | JETSON_NANO | JETSON_TX2 | JETSON_XAVIER | JETSON_NX | JETSON_AGX_ORIN |
|---------------------------|-------------|------------|---------------|-----------|-----------------|
| JETPACK_4_4_1             |             |            |               | ✔️        |                 |
| JETPACK_4_6               | ✔️          | ✔️         | ✔️            | ✔️        |                 |
| JETPACK_5_0_1             |             |            |               |           | ✔️              |
| JETPACK_5_0_2             |             |            |               |           | ✔️              |
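The rule above (software version needed only for Jetson devices) can be sketched as a small helper. This is plain Python with string names instead of the library's enums, and the helper itself is our own illustration, not a NetsPresso API:

```python
JETSON_DEVICES = {
    "JETSON_NANO", "JETSON_TX2", "JETSON_XAVIER",
    "JETSON_NX", "JETSON_AGX_ORIN",
}

def benchmark_options(device_name, software_version=None):
    """Build keyword arguments for a benchmark request, attaching a
    software version only when the target is a Jetson device."""
    options = {"target_device_name": device_name}
    if device_name in JETSON_DEVICES:
        if software_version is None:
            raise ValueError("Jetson devices require a software version.")
        options["target_software_version"] = software_version
    return options
```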

NetsPresso Model Compressor Best Practice

If you want to try Model Compressor online without any installation, please refer to the NetsPresso-Model-Compressor-ModelZoo repo, which runs on Google Colab.

Contact

Join our Discussion Forum to provide feedback or share your use cases. If you want to talk more with Nota, please contact us here, or reach us by email (netspresso@nota.ai) or phone (+82 2-555-8659).
