Neural network inference on accelerators simplified
Project description
Please refer to the project's documentation.
What is it
nnio is a lightweight Python package for easily running neural networks.
It supports inference on the CPU as well as on several edge devices:
- Google USB Accelerator
- Intel Compute Stick
- Intel integrated GPUs
Each of these devices comes with its own library and model format; nnio wraps them all in a single, well-defined Python package.
Look at this simple example:
import nnio
# Create model and put it on a Google Coral Edge TPU device
model = nnio.EdgeTPUModel(
    model_path='path/to/model_quant_edgetpu.tflite',
    device='TPU',
)
# Create preprocessor
preproc = nnio.Preprocessing(
    resize=(224, 224),
    batch_dimension=True,
)
# Preprocess your numpy image
image = preproc(image_rgb)
# Make prediction
class_scores = model(image)
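In this example, image_rgb is expected to be an RGB NumPy array. Below is a minimal sketch of running a real image through the same preprocessor and model, assuming OpenCV is used for loading and that the model returns a single array of per-class scores:
import cv2
import numpy as np

# Load an image; OpenCV returns BGR, so convert it to RGB
image_bgr = cv2.imread('path/to/image.jpg')
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

# Reuse the preprocessor and model created above
image = preproc(image_rgb)
class_scores = model(image)

# Assuming a single array of class scores, the prediction is the argmax
predicted_class = int(np.argmax(class_scores))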
nnio was developed for the Fast Sense X microcomputer. It has six neural accelerators, which are all supported by nnio:
- 3 x Google Coral Edge TPU
- 2 x Intel VPU
- 1 x integrated Intel GPU
More usage examples can be found in the documentation.
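For instance, pinning models to specific accelerators might look like the sketch below; the device strings ('TPU:1', 'MYRIAD') and the nnio.OpenVINOModel arguments are assumptions here, so check the documentation for the exact API:
import nnio

# Put a quantized TFLite model on the second Edge TPU
# (indexing devices as 'TPU:0', 'TPU:1', ... is an assumption)
tpu_model = nnio.EdgeTPUModel(
    model_path='path/to/model_quant_edgetpu.tflite',
    device='TPU:1',
)

# Put an OpenVINO model on an Intel VPU; the argument names follow the
# usual OpenVINO .xml/.bin convention and may differ in nnio
vpu_model = nnio.OpenVINOModel(
    model_bin='path/to/model.bin',
    model_xml='path/to/model.xml',
    device='MYRIAD',
)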
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
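In most cases the package is installed from PyPI with pip rather than by downloading the source distribution manually:
pip install nnio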
Source Distribution
nnio-0.2.4.4.tar.gz (13.7 kB)
File details
Details for the file nnio-0.2.4.4.tar.gz.
File metadata
- Download URL: nnio-0.2.4.4.tar.gz
- Upload date:
- Size: 13.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/53.1.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.9.1
File hashes
Algorithm | Hash digest
---|---
SHA256 | 3153373fad3b0f07710216e215328c7cd1a5e1660e41bb87044dc1132484dccc
MD5 | 38d8af4d1796e54f9b42adc1f0eceb26
BLAKE2b-256 | 122fb1d47b731821c4c9ed5c0413b07e82a134260e60b1322bd60b9d9f7049db