Solv AI
Solv AI is a high-performance AI model optimization library designed to enhance the efficiency and speed of running large-scale models on local, resource-constrained hardware. By leveraging advanced techniques such as custom quantization, pruning, mixed precision inference, and efficient memory management, Solv AI enables seamless deployment and inference of sophisticated AI models even on slower machines.
Key Features
- Custom Quantization: Reduce model precision for faster computation without significant loss of accuracy.
- Pruning: Remove less important parameters to reduce model size and computational load.
- Layer-wise Loading: Load model layers incrementally to optimize memory usage.
- Mixed Precision Inference: Utilize both 16-bit and 32-bit operations to accelerate processing.
- Efficient Memory Management: Advanced techniques to reuse memory allocations and minimize overhead.
- Lightweight Architectures: Optimized model architectures for resource efficiency.
- Layer Fusion: Combine multiple layers into a single operation to reduce memory access and improve computational efficiency.
- Batch Normalization Folding: Fold batch normalization into the preceding convolution layer to reduce the number of operations at inference time (see the sketch after this list).
- Weight Sharing: Share weights across different layers to reduce the model size.
- Knowledge Distillation: Use a smaller "student" model trained to mimic a larger "teacher" model.
- Dynamic Quantization: Apply quantization dynamically during inference to adapt to different input data distributions.
- Distributed Computing: Utilize multiple devices to accelerate model training and inference.
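As an example of one of these techniques, batch normalization folding can be sketched in a few lines of PyTorch. This is a generic illustration of the idea, not Solv AI's internal implementation; the helper name fold_bn_into_conv is hypothetical.

import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # Fold y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    # into a single convolution with adjusted weights and bias.
    scale = bn.weight.data / torch.sqrt(bn.running_var + bn.eps)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused

# Folding is only valid in eval mode, where BN uses its running statistics.
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
conv.eval(); bn.eval()
x = torch.randn(1, 3, 16, 16)
print(torch.allclose(bn(conv(x)), fold_bn_into_conv(conv, bn)(x), atol=1e-5))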
Installation
To install Solv AI, use pip:
pip install solv-ai
Usage
Custom Quantization
from solv_ai import QuantizedLayer
import numpy as np
weights = np.random.rand(5, 10)
quant_layer = QuantizedLayer(weights, bits=8)
x = np.random.rand(3, 10)
output = quant_layer.forward(x)
print("Quantized output:")
print(output)
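For context, the sketch below shows what 8-bit quantization does in principle: values are mapped to integers with a scale factor and mapped back at compute time. This is a generic NumPy illustration of the idea, not how QuantizedLayer is implemented internally.

import numpy as np

weights = np.random.rand(5, 10)
bits = 8
# Symmetric quantization: map values to signed integers in [-127, 127].
scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
q_weights = np.round(weights / scale).astype(np.int8)
dequantized = q_weights.astype(np.float32) * scale
print("max quantization error:", np.max(np.abs(weights - dequantized)))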
Pruning
from solv_ai import PrunedLayer
import numpy as np
weights = np.random.rand(5, 10)
prune_layer = PrunedLayer(weights, pruning_rate=0.5)
x = np.random.rand(3, 10)
output = prune_layer.forward(x)
print("Pruned output:")
print(output)
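Conceptually, magnitude pruning zeroes out the weights with the smallest absolute values. The NumPy sketch below shows that general idea; it is not the PrunedLayer implementation.

import numpy as np

weights = np.random.rand(5, 10)
pruning_rate = 0.5
# Zero out the fraction of weights with the smallest magnitudes.
threshold = np.quantile(np.abs(weights), pruning_rate)
mask = np.abs(weights) >= threshold
pruned_weights = weights * mask
print("fraction of weights kept:", mask.mean())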
Mixed Precision Inference
import torch
import torch.nn as nn
import torch.optim as optim
from solv_ai import MixedPrecisionLayer
class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.fc = nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)
model = SimpleModel()
optimizer = optim.Adam(model.parameters(), lr=0.001)
mp_layer = MixedPrecisionLayer(model, optimizer)
x = torch.randn(3, 10)
output = mp_layer.forward(x)
loss = output.sum()
mp_layer.backward(loss)
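For reference, PyTorch's built-in automatic mixed precision follows a similar forward/backward pattern. The sketch below uses torch.cuda.amp rather than Solv AI's MixedPrecisionLayer and assumes a CUDA device is available.

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 5).cuda()
optimizer = optim.Adam(model.parameters(), lr=0.001)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(3, 10, device="cuda")
optimizer.zero_grad()
with torch.cuda.amp.autocast():   # run the forward pass in float16 where safe
    loss = model(x).sum()
scaler.scale(loss).backward()     # scale the loss to avoid float16 underflow
scaler.step(optimizer)
scaler.update()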
Efficient Models
from solv_ai import EfficientNet
import torch
model = EfficientNet()
x = torch.randn(3, 3, 224, 224)
output = model(x)
print("EfficientNet output:")
print(output)
Distributed Computing
import torch
import torch.nn as nn
from solv_ai import ModelParallelLayer, DataParallelLayer, PipelineParallelLayer
class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.fc = nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)
model = SimpleModel()
device_ids = [0, 1]
mp_layer = ModelParallelLayer(model, device_ids)
dp_layer = DataParallelLayer(model, device_ids)
pipeline_layer = PipelineParallelLayer([model], chunks=2)
x = torch.randn(3, 10)
output_mp = mp_layer(x)
output_dp = dp_layer(x)
output_pipeline = pipeline_layer(x)
print("Model Parallel output:")
print(output_mp)
print("Data Parallel output:")
print(output_dp)
print("Pipeline Parallel output:")
print(output_pipeline)
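For comparison, plain PyTorch data parallelism looks like the sketch below. It uses the standard nn.DataParallel wrapper rather than Solv AI's layers and assumes two visible GPUs.

import torch
import torch.nn as nn

model = nn.Linear(10, 5)
# Replicate the model on GPUs 0 and 1 and split each batch between them.
dp_model = nn.DataParallel(model.cuda(), device_ids=[0, 1])
x = torch.randn(8, 10, device="cuda:0")
output = dp_model(x)  # results are gathered back on device 0
print(output.shape)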
Profiling
from solv_ai import profile_model
import torch.nn as nn
class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.fc = nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)
model = SimpleModel()
profile_model(model, (3, 10))
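If you also want an operator-level view, PyTorch's built-in profiler can be used alongside profile_model. A minimal sketch using the standard torch.profiler API:

import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

model = nn.Linear(10, 5)
x = torch.randn(3, 10)
# Record operator-level CPU timings for a single forward pass.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(x)
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))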
License
Solv AI is licensed under the MIT License. See the LICENSE file for more details.