
Neural-Scope

PyPI · License: MIT · Python 3.7+

Neural-Scope is a comprehensive tool for analyzing and optimizing machine learning models. It provides robust CI/CD integration that enables automated model optimization as part of ML workflows.

Features

  • Automated Model Optimization: Analyze, optimize, and validate ML models automatically
  • CI/CD Integration: Seamlessly integrate with GitHub Actions, GitLab CI, Jenkins, and Azure DevOps
  • Pre-trained Model Support: Analyze models from PyTorch Hub, TensorFlow Hub, Hugging Face, SageMaker, and more
  • Advanced Security Analysis: Identify model vulnerabilities with sophisticated detection algorithms
  • Adversarial Robustness: Test model resilience against FGSM, PGD, and other attacks
  • Model Versioning: Track model versions and promote models through staging and production
  • MLflow Integration: Track optimization results and model performance over time

Installation

# Basic installation
pip install neural-scope

# With PyTorch support
pip install neural-scope[pytorch]

# With TensorFlow support
pip install neural-scope[tensorflow]

# With MLflow integration
pip install neural-scope[mlflow]

# With security and robustness testing
pip install neural-scope[security]

# With all dependencies
pip install neural-scope[all]

Quick Start

Analyze a Model

from neural_scope import NeuralScope

# Load your model (PyTorch example)
import torch
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)

# Create Neural-Scope instance
neural_scope = NeuralScope()

# Analyze the model
results = neural_scope.analyze_model(
    model=model,
    model_name="resnet18",
    framework="pytorch"
)

# Print results
print(f"Parameters: {results['parameters']}")
print(f"Layers: {results['layers']}")
print(f"Memory usage: {results['memory_usage_mb']} MB")
print(f"Inference time: {results['inference_time_ms']} ms")
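
As a sanity check on those figures, the parameter count is just the sum of the products of the weight shapes, and float32 weight memory is roughly 4 bytes per parameter. A framework-agnostic sketch (the layer shapes below are invented for illustration, not resnet18's real architecture):

```python
# Rough parameter/memory estimate from layer weight shapes
# (illustrative shapes only, not the actual resnet18 layers).
from math import prod

layer_shapes = [
    (64, 3, 7, 7),    # conv kernel: out_ch x in_ch x kH x kW
    (128, 64, 3, 3),  # another conv kernel
    (1000, 512),      # final fully connected layer
]

n_params = sum(prod(shape) for shape in layer_shapes)
memory_mb = n_params * 4 / (1024 ** 2)  # float32 = 4 bytes per parameter

print(f"Parameters: {n_params}")
print(f"Approx. weight memory: {memory_mb:.2f} MB")
```

Real tooling also counts biases, buffers, and activation memory, so a reported figure is typically higher than this lower bound.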

Optimize a Model

from neural_scope import NeuralScope
import torch

# Load your model
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)

# Create Neural-Scope instance
neural_scope = NeuralScope()

# Optimize the model
optimized_model, results = neural_scope.optimize_model(
    model=model,
    model_name="resnet18",
    framework="pytorch",
    techniques=["quantization", "pruning"]
)

# Print results
print(f"Original size: {results['original_size']} MB")
print(f"Optimized size: {results['optimized_size']} MB")
print(f"Size reduction: {results['size_reduction_percentage']}%")
print(f"Inference speedup: {results['inference_speedup']}x")
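
The reported metrics follow directly from before/after measurements; a minimal sketch of the arithmetic (the raw numbers below are invented for illustration):

```python
# Derive size-reduction and speedup metrics from raw before/after
# measurements (figures invented for illustration).
original_size_mb = 44.7
optimized_size_mb = 11.2
original_latency_ms = 30.0
optimized_latency_ms = 12.0

size_reduction_pct = (1 - optimized_size_mb / original_size_mb) * 100
inference_speedup = original_latency_ms / optimized_latency_ms

print(f"Size reduction: {size_reduction_pct:.1f}%")
print(f"Inference speedup: {inference_speedup:.2f}x")
```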

Security Analysis

from neural_scope import NeuralScope
import torch

# Load your model
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)

# Create Neural-Scope instance
neural_scope = NeuralScope()

# Analyze security
security_results = neural_scope.analyze_security(
    model=model,
    model_name="resnet18",
    framework="pytorch"
)

# Print results
print(f"Security score: {security_results['security_score']}/100")
print(f"Vulnerabilities: {security_results['total_vulnerabilities']}")
for severity in ['critical', 'high', 'medium', 'low']:
    vulns = security_results['vulnerabilities'][severity]
    if vulns:
        print(f"{severity.capitalize()} severity: {len(vulns)}")
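
The loop above assumes the security report groups findings by severity. A self-contained sketch of working with that shape (the structure mirrors the example above; the findings themselves are invented):

```python
# Summarize a severity-grouped vulnerability report (report structure
# assumed from the example above; the findings are invented).
security_results = {
    "vulnerabilities": {
        "critical": [],
        "high": ["unsafe pickle deserialization"],
        "medium": ["unpinned dependency versions"],
        "low": [],
    }
}

total = sum(len(v) for v in security_results["vulnerabilities"].values())
print(f"Vulnerabilities: {total}")
for severity in ["critical", "high", "medium", "low"]:
    vulns = security_results["vulnerabilities"][severity]
    if vulns:
        print(f"{severity.capitalize()} severity: {len(vulns)}")
```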

MLflow Integration

from neural_scope import NeuralScope
import torch

# Load your model
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)

# Create Neural-Scope instance with MLflow tracking
neural_scope = NeuralScope(
    mlflow_tracking_uri="http://localhost:5000",
    mlflow_experiment_name="model-analysis"
)

# Analyze the model
results = neural_scope.analyze_model(
    model=model,
    model_name="resnet18",
    framework="pytorch"
)

# Results are automatically tracked in MLflow
print(f"Results tracked in MLflow run: {neural_scope.mlflow_run_id}")

Command Line Interface

# Analyze a model
neural-scope analyze \
    --model-path models/model.pt \
    --framework pytorch \
    --output-dir results

# Optimize a model
neural-scope optimize \
    --model-path models/model.pt \
    --framework pytorch \
    --output-dir results \
    --techniques quantization,pruning

# Test security
neural-scope security \
    --model-path models/model.pt \
    --framework pytorch \
    --output-dir results

# Test robustness
neural-scope robustness \
    --model-path models/model.pt \
    --framework pytorch \
    --output-dir results \
    --attack-types fgsm,pgd
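
FGSM, referenced in the --attack-types flag and the features list, perturbs each input feature by a small step eps in the direction of the sign of the loss gradient. A minimal dependency-free sketch using a linear score, whose gradient with respect to the input is simply the weight vector:

```python
# Minimal FGSM sketch: for a linear score s = w . x, the gradient of s
# with respect to x is w, so the adversarial step is eps * sign(w).
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, grad, eps):
    """Shift each feature by eps in the direction that increases the loss."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [0.5, -1.2, 0.0]  # weights double as the input gradient here
x = [1.0, 1.0, 1.0]
x_adv = fgsm_perturb(x, w, eps=0.1)
print(x_adv)  # [1.1, 0.9, 1.0]
```

In practice the gradient comes from backpropagation through the full model, and PGD applies this step iteratively with projection back into an eps-ball; this sketch only shows the single-step update rule.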

Documentation

For more detailed documentation, visit https://neural-scope.readthedocs.io/

Why Neural-Scope?

Neural-Scope addresses critical challenges in ML model deployment:

  1. Performance Optimization: Reduce model size and improve inference speed
  2. Security Analysis: Identify vulnerabilities with sophisticated detection algorithms
  3. Adversarial Robustness: Test model resilience against various attack types
  4. CI/CD Integration: Automate optimization as part of your ML workflow
  5. Model Versioning: Track model versions and promote them through stages
  6. MLflow Integration: Track experiments and compare model performance

Use Cases

  • ML Engineers: Optimize models for deployment and test robustness
  • DevOps Engineers: Integrate model optimization into CI/CD pipelines
  • Security Teams: Assess model vulnerabilities and adversarial robustness
  • Data Scientists: Analyze model performance and track experiments
  • MLOps Teams: Implement model versioning and promotion workflows

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
