# instmodel
instmodel is a Python package for building instruction-based neural network models with either a PyTorch or TensorFlow/Keras backend. Build, train, and export models into a compact JSON "instruction" format for lightweight, backend-agnostic inference.
## Features
- **Dual Backend**: Build models with PyTorch (`instmodel.torch`) or TensorFlow/Keras (`instmodel.tf`); both are optional dependencies.
- **Instruction Model Export**: Convert trained models into a JSON-based instruction format that captures architecture, weights, and activations.
- **Backend-Agnostic Inference**: Run exported instruction models with pure NumPy via `instmodel.instruction_model`; no framework is required at inference time.
- **Validation**: Verify that the instruction model produces the same outputs as the original trained model.
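To illustrate what a backend-agnostic instruction format makes possible, the sketch below replays a small dense-layer stack with nothing but NumPy. The instruction list and its key names (`weights`, `bias`, `activation`) are hypothetical, chosen for the example; the real instmodel JSON schema may differ.

```python
import numpy as np

# Hypothetical instruction list: one entry per dense layer, holding weights,
# bias, and an activation name. This mirrors the idea of a JSON export; it is
# not the actual instmodel schema.
rng = np.random.default_rng(0)
instructions = [
    {"weights": rng.normal(size=(4, 8)).tolist(),
     "bias": np.zeros(8).tolist(),
     "activation": "relu"},
    {"weights": rng.normal(size=(8, 1)).tolist(),
     "bias": np.zeros(1).tolist(),
     "activation": "sigmoid"},
]

ACTIVATIONS = {
    "relu": lambda z: np.maximum(z, 0.0),
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
    "linear": lambda z: z,
}

def run_instructions(instructions, x):
    """Replay a dense-layer instruction list on a batch of inputs."""
    out = np.asarray(x, dtype=float)
    for layer in instructions:
        out = ACTIVATIONS[layer["activation"]](
            out @ np.asarray(layer["weights"]) + np.asarray(layer["bias"])
        )
    return out

preds = run_instructions(instructions, rng.random((10, 4)))
print(preds.shape)  # (10, 1)
```

Because everything is plain lists and NumPy ops, such a model can be serialized as JSON and executed anywhere NumPy is available.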
## Installation
Install the core package (NumPy inference only):

```shell
pip install instmodel
```

Install with a training backend:

```shell
pip install "instmodel[pytorch]"     # PyTorch backend
pip install "instmodel[tensorflow]"  # TensorFlow/Keras backend
```
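Since both training backends are optional extras, a script can check which one is installed before importing it. This is a generic Python pattern, not an instmodel API:

```python
import importlib.util

def backend_available(name: str) -> bool:
    """Return True if the given top-level package can be imported."""
    return importlib.util.find_spec(name) is not None

has_torch = backend_available("torch")
has_tf = backend_available("tensorflow")
print(f"torch: {has_torch}, tensorflow: {has_tf}")
```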
## Quick Example — PyTorch
```python
import numpy as np

from instmodel.torch import (
    Dense,
    InputBuffer,
    ModelGraph,
    ff_model,
    validate_torch_model,
)
from instmodel.instruction_model import validate_instruction_model

# 1. Define a simple feed-forward model.
input_buffer = InputBuffer(4, name="simple_input")
hidden = Dense(8, activation="relu", name="hidden_relu_1")(input_buffer)
hidden = Dense(6, activation="relu", name="hidden_relu_2")(hidden)
output = Dense(1, activation="sigmoid", name="output_sigmoid")(hidden)

model_graph = ModelGraph(input_buffer, output)
model_graph.compile(optimizer="adam", loss="binary_crossentropy")

# 2. Train on dummy data.
x_data = np.random.random((10, 4))
y_data = np.random.randint(0, 2, size=(10, 1))
model_graph.fit(x_data, y_data, epochs=1, verbose=0)

# 3. Export to instruction model.
instruction_model = model_graph.create_instruction_model()

# 4. Validate.
torch_pred = model_graph.predict(x_data)
instruction_model["validation_data"] = {
    "inputs": x_data.tolist(),
    "expected_outputs": torch_pred.tolist(),
}
validate_instruction_model(instruction_model)
validate_torch_model(model_graph.get_torch(), instruction_model["validation_data"])
```
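The validation step above checks that the exported instruction model reproduces the original model's outputs. Conceptually, that comparison reduces to a shape check plus an element-wise tolerance check. The `outputs_match` helper below is an illustrative sketch of that idea, not part of the instmodel API:

```python
import numpy as np

def outputs_match(expected, actual, atol=1e-6):
    """Shape and element-wise tolerance comparison, as a validator might do."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return expected.shape == actual.shape and np.allclose(expected, actual, atol=atol)

expected = [[0.42], [0.58]]
actual = [[0.42000001], [0.58]]
print(outputs_match(expected, actual))  # True
```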
## Quick Example — TensorFlow/Keras
```python
import numpy as np

from instmodel.tf import (
    Dense,
    InputBuffer,
    ModelGraph,
    ff_model,
    validate_keras_model,
)
from instmodel.instruction_model import validate_instruction_model

# 1. Define a simple feed-forward model.
input_buffer = InputBuffer(4, name="simple_input")
hidden = Dense(8, activation="relu", name="hidden_relu_1")(input_buffer)
hidden = Dense(6, activation="relu", name="hidden_relu_2")(hidden)
output = Dense(1, activation="sigmoid", name="output_sigmoid")(hidden)

model_graph = ModelGraph(input_buffer, output)
model_graph.compile(optimizer="adam", loss="binary_crossentropy")

# 2. Train on dummy data.
x_data = np.random.random((10, 4))
y_data = np.random.randint(0, 2, size=(10, 1))
model_graph.fit(x_data, y_data, epochs=1, verbose=0)

# 3. Export to instruction model.
instruction_model = model_graph.create_instruction_model()

# 4. Validate.
keras_pred = model_graph.predict(x_data, verbose=0)
instruction_model["validation_data"] = {
    "inputs": x_data.tolist(),
    "expected_outputs": keras_pred.tolist(),
}
validate_instruction_model(instruction_model)
validate_keras_model(model_graph.get_keras(), instruction_model["validation_data"])
```
## API Overview
Both backends expose the same model-building API:
| Layer / Op | Description |
|---|---|
| `InputBuffer` | Model input |
| `Dense` | Fully connected layer |
| `Attention` | Attention mechanism |
| `Concatenate` | Concatenate buffers |
| `ReduceSum` | Sum reduction |
| `Add` | Element-wise addition |
| `Multiply` | Element-wise multiplication |
| `MultiplyHeads` | Head-wise broadcast multiply |
| `AddHeads` | Head-wise broadcast add |
| `ScaleVectorized` | Learnable per-element scale |
| `ShiftVectorized` | Learnable per-element shift |
| `SingleIdEmbeddings` | Single-ID embedding lookup |
| `MultiIdEmbeddings` | Multi-ID embedding lookup |
| `ModelGraph` | Compiles the computation graph for training and export |
| `ff_model` | Helper to build a feed-forward stack |
| `validate_model` | Backend-specific validator (alias) |
Backend-specific validators:

- `instmodel.tf.validate_keras_model`
- `instmodel.torch.validate_torch_model`

Backend-agnostic inference:

- `instmodel.instruction_model.instruction_model_inference`
- `instmodel.instruction_model.validate_instruction_model`
## GPU Testing

For running PyTorch tests on RTX 50-series GPUs (CUDA 13.1), a custom Dockerfile is provided at `custom_cuda_builds/Dockerfile.torch.cuda13`.
## License
This project is licensed under the MIT License.