Plug-and-play modules to optimize the performance of your AI systems
Documentation: docs.nebuly.com/
Nebullvm is an ecosystem of plug-and-play modules to optimize the performance of your AI systems. The optimization modules are stack-agnostic and work with any library. They are designed to be easily integrated into your system, providing a quick and seamless boost to its performance. Simply plug them in to start realizing the benefits of optimized performance right away.
If you like the idea, give us a star to show your support for the project ⭐
What can this help with?
We currently provide several modules to boost the performance of your AI systems:
✅ Speedster: Automatically apply the best set of SOTA optimization techniques to achieve the maximum inference speed-up on your hardware (see the usage sketch after this list).
✅ Nos: Automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas.
✅ ChatLLaMA: Create hyper-personalized ChatGPT-like assistants using your custom data and the least amount of compute possible.
✅ OpenAlphaTensor: Increase the computational performance of an AI model with custom-generated matrix-multiplication algorithms fine-tuned for your specific hardware.
✅ Forward-Forward: The Forward-Forward algorithm is a method for training deep neural networks that replaces the forward and backward passes of backpropagation with two forward passes (a toy sketch follows this list).
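To make the plug-and-play idea concrete, here is a minimal sketch of how Speedster is typically used on a PyTorch model. It follows the optimize_model entry point from Speedster's documentation, but treat the exact parameter names (input_data, optimization_time, metric_drop_ths) and the dummy ResNet-50 calibration data as illustrative assumptions that may differ across versions.

```python
# Minimal Speedster sketch: optimize a PyTorch model for faster inference.
# Assumes `pip install speedster` plus torch/torchvision; APIs may vary by version.
import torch
import torchvision.models as models
from speedster import optimize_model

model = models.resnet50(weights=None)

# Dummy calibration data: a list of (inputs, label) samples used to benchmark
# and validate the candidate optimizations.
input_data = [((torch.randn(1, 3, 224, 224),), torch.tensor([0])) for _ in range(100)]

# Speedster tries the available compilers and compression techniques and keeps
# the fastest variant whose accuracy drop stays within the given threshold.
optimized_model = optimize_model(
    model,
    input_data=input_data,
    optimization_time="constrained",
    metric_drop_ths=0.05,
)

# Use it like the original model.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    prediction = optimized_model(x)
```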
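The Forward-Forward bullet above describes the training mechanism; the following toy sketch illustrates it in PyTorch. It is not the implementation shipped in the Forward-Forward module: the layer class, goodness threshold, and optimizer choice are assumptions made only to show how each layer is trained with a positive and a negative forward pass instead of a global backward pass.

```python
# Toy sketch of the Forward-Forward idea (illustrative, not the module's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FFLayer(nn.Module):
    """A layer trained locally: high 'goodness' (sum of squared activations)
    on positive data, low goodness on negative data."""

    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only the direction of the input carries information.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness on the positive pass
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness on the negative pass
        # Push positive goodness above the threshold, negative goodness below it.
        loss = F.softplus(torch.cat([
            self.threshold - g_pos,
            g_neg - self.threshold,
        ])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs so no gradient flows between layers: each layer
        # optimizes only its own local objective, using two forward passes.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```

Layers like this can be stacked, with each layer's detached outputs feeding the next; the negative examples are typically real inputs paired with incorrect labels.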
Next modules and roadmap
We are actively working on incorporating the following modules, as requested by members of our community, in upcoming releases:
- GPToptimizer: Effortlessly optimize large generative models served via APIs from OpenAI, Cohere and HF.
- CloudSurfer: Automatically discover the optimal cloud configuration and hardware on AWS, GCP and Azure to run your AI models.
- OptiMate: Interactive tool guiding savvy users in achieving the best inference performance out of a given model / hardware setup.
- TrainingSim: Easily simulate the training of large AI models on a distributed infrastructure to predict training behaviours without actual implementation.
Contributing
As an open source project in a rapidly evolving field, we welcome contributions of all kinds, including new features, improved infrastructure, and better documentation. If you're interested in contributing, please see the linked page for more information on how to get involved.