
NeuralMagic Admin

Username: neuralmagic

19 projects

compressed-tensors-nightly

Library for utilization of compressed safetensors of neural network models

nm-vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

sparseml-nightly

Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models

sparseml

Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models

compressed-tensors

Library for utilization of compressed safetensors of neural network models

sparsezoo

Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes

nm-magic-wand-nightly

SparseLinear layers

deepsparse-nightly

An inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application

sparsezoo-nightly

Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes

nm-magic-wand

SparseLinear layers

deepsparse

An inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application

deepsparse-ent

An inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application

nm-transformers

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

nm-yolov5

nm-yolov5-nightly

nm-transformers-nightly

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

sparsify-nightly

Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint

sparsify

Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint

optimum-deepsparse

Optimum DeepSparse is an extension of the Hugging Face Transformers library that integrates the DeepSparse inference runtime. DeepSparse offers GPU-class performance on CPUs, making it possible to run Transformers and other deep learning models on commodity hardware with sparsity. Optimum DeepSparse provides a framework for developers to easily integrate DeepSparse into their applications, regardless of the hardware platform.
