Neptune Client
What is neptune.ai?
neptune.ai makes it easy to log, store, organize, compare, register, and share all your ML model metadata in a single place.
- Automate and standardize as your modeling team grows.
- Collaborate on models and results with your team and across the org.
- Use hosted, deploy on-premises, or in a private cloud. Integrate with any MLOps stack.
Play with a live neptune.ai app →
Getting started
Step 1: Create a free account
Step 2: Install the Neptune client library
pip install neptune
Step 3: Add experiment tracking snippet to your code
import neptune
run = neptune.init_run(project="Me/MyProject")
run["parameters"] = {"lr": 0.1, "dropout": 0.4}
run["test_accuracy"] = 0.84
Core features
Log and display
Add a snippet to any step of your ML pipeline once. Decide what and how you want to log. Run a million times.
- Any framework: any code, PyTorch, PyTorch Lightning, TensorFlow/Keras, scikit-learn, LightGBM, XGBoost, Optuna, Kedro.
- Any metadata type: metrics, parameters, dataset and model versions, images, interactive plots, videos, hardware (GPU, CPU, memory), code state.
- From anywhere in your ML pipeline: multinode pipelines, distributed computing, log during or after execution, log offline, and sync when you are back online (see the offline-logging sketch after this list).
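A sketch of the offline workflow, assuming the mode argument of init_run and the neptune sync CLI command behave as in recent client versions (project name is a placeholder):

import neptune

# Log locally when there is no network connection
run = neptune.init_run(project="Me/MyProject", mode="offline")
run["parameters"] = {"lr": 0.01}
run["train/loss"].append(0.37)
run.stop()

# Later, back online, upload the locally stored data from the terminal:
#   neptune sync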
Organize experiments
Organize logs in a fully customizable nested structure. Display model metadata in user-defined dashboard templates.
- Nested metadata structure: the flexible API lets you customize the metadata logging structure however you want. Talk to a dictionary at the code level. See the folder structure in the app. Organize nested parameter configs or the results of k-fold validation splits the way they should be (see the sketch after this list).
- Custom dashboards: combine different metadata types in one view. Define it for one run. Use it anywhere. Look at GPU, memory consumption, and load times to debug training speed. See learning curves, image predictions, and a confusion matrix to debug model quality.
- Table views: create different views of the runs table and save them for later. You can have separate table views for debugging, comparing parameter sets, or best experiments.
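A minimal sketch of the nested structure, using dictionary-style assignment and slash-separated namespaces (the field names below are illustrative):

import neptune

run = neptune.init_run(project="Me/MyProject")

# A nested dict becomes a folder-like structure in the app
run["parameters"] = {
    "optimizer": {"name": "Adam", "lr": 0.001},
    "batch_size": 64,
}

# Slash-separated paths address the same hierarchy directly
run["data/version"] = "v2"
run["cv/fold_0/accuracy"] = 0.91
run["cv/fold_1/accuracy"] = 0.89

run.stop()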
Compare results
Visualize training live in the neptune.ai web app. See how different parameters and configs affect the results. Optimize models quicker.
- Compare: learning curves, parameters, images, datasets.
- Search, sort, and filter: experiments by any field you logged. Use our query language to filter runs based on parameter values, metrics, execution times, or anything else (see the sketch after this list).
- Visualize and display: runs table, interactive display, folder structure, dashboards.
- Monitor live: hardware consumption metrics, GPU, CPU, memory.
- Group by: dataset versions, parameters.
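For programmatic search and filtering, a sketch using the project-level API (fetch_runs_table and its tag filter are assumed to be available in your client version; the project name and metric column are placeholders):

import neptune

# Open the project in read-only mode
project = neptune.init_project(project="Me/MyProject", mode="read-only")

# Fetch runs tagged "best" as a pandas DataFrame and sort by a logged metric
runs_df = project.fetch_runs_table(tag="best").to_pandas()
print(runs_df.sort_values("test_accuracy", ascending=False).head())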
Register models
Version, review, and access production-ready models and the metadata associated with them in a single place.
- Version models: register models, create model versions, version external model artifacts.
- Review and change stages: look at the validation and test metrics and other model metadata. You can move models between the None/Staging/Production/Archived stages (see the sketch after this list).
- Access and share models: every model and model version is accessible via the neptune.ai web app or through the API.
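A sketch of the model registry calls, assuming a project where you can create a model with the key MOD (all identifiers and file names below are placeholders):

import neptune

# Register a model (done once per model)
model = neptune.init_model(key="MOD", project="Me/MyProject")
model["signature/framework"] = "sklearn"
model.stop()

# Create a new version of the registered model and attach metadata
model_version = neptune.init_model_version(model="MYPROJ-MOD", project="Me/MyProject")
model_version["metrics/test_accuracy"] = 0.84
model_version["model/binary"].upload("model.pkl")

# Move the version through the lifecycle stages
model_version.change_stage("staging")
model_version.stop()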
Share results
Have a single place where your team can see the results and access all models and experiments.
- Send a link: share every chart, dashboard, table view, or anything else you see in the neptune.ai app by copying and sending persistent URLs.
- Query API: access all model metadata via the neptune.ai API. Whatever you logged, you can query the same way (see the sketch after this list).
- Manage users and projects: create different projects, add users to them, and grant different permission levels.
- Add your entire org: get unlimited users on every paid plan, so you can invite your entire organization, including product managers and subject matter experts, at no extra cost.
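A sketch of querying logged metadata back through the API, assuming an existing run with the placeholder ID MYPROJ-123:

import neptune

# Reopen an existing run without modifying it
run = neptune.init_run(project="Me/MyProject", with_id="MYPROJ-123", mode="read-only")

# Fetch single values and whole namespaces the same way they were logged
accuracy = run["test_accuracy"].fetch()
params = run["parameters"].fetch()
print(accuracy, params)

run.stop()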
Integrate with any MLOps stack
neptune.ai integrates with 25+ frameworks: PyTorch, PyTorch Lightning, TensorFlow/Keras, LightGBM, scikit-learn, XGBoost, Optuna, Kedro, 🤗 Transformers, fastai, Prophet, and more.
PyTorch Lightning
Example:
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import NeptuneLogger
from neptune import ANONYMOUS_API_TOKEN

# Create the NeptuneLogger instance
neptune_logger = NeptuneLogger(
    api_key=ANONYMOUS_API_TOKEN,
    project="common/pytorch-lightning-integration",
    tags=["training", "resnet"],  # optional
)

# Pass the logger to the Trainer
trainer = Trainer(max_epochs=10, logger=neptune_logger)

# Run the Trainer
trainer.fit(my_model, my_dataloader)
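Anything not covered by the Lightning hooks can still be logged through the underlying run; a short sketch, assuming the logger exposes it via its experiment attribute (field names are illustrative):

# Log extra metadata next to what the Trainer reports automatically
neptune_logger.experiment["data/version"] = "v2"
neptune_logger.experiment["notes"] = "ResNet baseline"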
neptune.ai is trusted by great companies
Read how various customers use Neptune to improve their workflow.
Support
If you get stuck or simply want to talk to us about something, here are your options:
- Check our FAQ page.
- Take a look at our resource center.
- Chat! In the app, click the blue message icon in the bottom-right corner and send a message. A real person will talk to you ASAP (typically very ASAP).
- You can just shoot us an email at support@neptune.ai.
People behind
Created with :heart: by the neptune.ai team:
Piotr, Paulina, Chaz, Prince, Parth, Kshitij, Siddhant, Jakub, Patrycja, Dominika, Karolina, Stephen, Artur, Aleksiej, Martyna, Małgorzata, Magdalena, Karolina, Marcin, Michał, Tymoteusz, Rafał, Aleksandra, Sabine, Tomek, Piotr, Adam, Rafał, Hubert, Marcin, Jakub, Paweł, Jakub, Franciszek, Bartosz, Aleksander, Dawid, Patryk, Krzysztof, Aurimas, and you?