Lightly Purple is a lightweight, fast, and easy-to-use data exploration tool for data scientists and engineers.


The open-source tool for curating datasets



🚀 Aloha!

We at Lightly created an open-source tool that supercharges your data curation workflows by enabling you to explore datasets, analyze data quality, and improve your machine learning pipelines more efficiently than ever before. Embark with us on this adventure of building better datasets.

💻 Installation

Please use Python 3.8 or higher with venv.

The library is not OS-dependent and should work on Windows, Linux, and macOS.

# Create a virtual environment
# On Linux/macOS:
python3 -m venv venv
source venv/bin/activate

# On Windows:
python -m venv venv
.\venv\Scripts\activate

# Install library
pip install lightly-purple

Quickstart

Download the example dataset and run the quickstart script to load it and launch the app.

Here is a quick example using the YOLO8 dataset:

The YOLO format details:
dataset/
├── train/
│   ├── images/
│   │   ├── image1.jpg
│   │   ├── image2.jpg
│   │   └── ...
│   └── labels/
│       ├── image1.txt
│       ├── image2.txt
│       └── ...
├── valid/  (optional)
│   ├── images/
│   │   └── ...
│   └── labels/
│       └── ...
└── data.yaml

Each label file should contain YOLO format annotations (one per line):

<class> <x_center> <y_center> <width> <height>

Where coordinates are normalized between 0 and 1.
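To make the annotation line concrete, here is a minimal sketch that converts one label line into a pixel-space bounding box. The helper name is ours for illustration and is not part of the lightly_purple API:

```python
# Sketch: convert one YOLO-format label line into a pixel-space bounding box.
# The helper name is illustrative, not part of the lightly_purple API.

def yolo_line_to_pixels(line, img_w, img_h):
    """Parse '<class> <x_center> <y_center> <width> <height>' into
    (class_id, x_min, y_min, x_max, y_max) in pixel coordinates."""
    parts = line.split()
    class_id = int(parts[0])
    x_c, y_c, w, h = (float(v) for v in parts[1:5])
    # Normalized center/size -> pixel corners.
    x_min = (x_c - w / 2) * img_w
    y_min = (y_c - h / 2) * img_h
    x_max = (x_c + w / 2) * img_w
    y_max = (y_c + h / 2) * img_h
    return class_id, x_min, y_min, x_max, y_max
```

For example, the line `0 0.5 0.5 0.5 0.5` on a 100×100 image describes a box from (25, 25) to (75, 75) for class 0.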

# Download and extract dataset
export DATASET_PATH=$(pwd)/example-dataset && \
    bash <(curl -sL https://raw.githubusercontent.com/lightly-ai/gists/refs/heads/main/fetch-dataset.sh) \
 https://universe.roboflow.com/ds/nToYP9Q1ix\?key\=pnjUGTjjba \
        $DATASET_PATH

# Download example script
curl -sL https://raw.githubusercontent.com/lightly-ai/gists/refs/heads/main/example-yolo8.py > example.py

# Run the example script
python example.py
Quickstart commands explanation
  1. Setting up the dataset path:
  export DATASET_PATH=$(pwd)/example-dataset

This creates an environment variable DATASET_PATH pointing to an 'example-dataset' folder in your current directory.

  2. Downloading and extracting the dataset:
  bash <(curl -sL https://raw.githubusercontent.com/lightly-ai/gists/refs/heads/main/fetch-dataset.sh)
  • Downloads a shell script that handles dataset fetching
  • The script downloads a YOLO-format dataset from Roboflow
  • Automatically extracts the dataset to your specified DATASET_PATH
  3. Getting the example code:
  curl -sL https://raw.githubusercontent.com/lightly-ai/gists/refs/heads/main/example-yolo8.py > example.py

Downloads a Python script that demonstrates how to:

  • Load the YOLO dataset
  • Process the images and annotations
  • Launch the Lightly Purple UI for exploration
  4. Running the example:
  python example.py

Executes the downloaded script, which will:

  • Initialize the dataset processor
  • Load and analyze your data
  • Start a local server
  • Open the UI in your default web browser

Example explanation

Let's break down the example.py script to explore the dataset:

# We import os to access the DATASET_PATH environment variable
import os

# We import the DatasetLoader class from the lightly_purple module
from lightly_purple import DatasetLoader

# We create a DatasetLoader instance
loader = DatasetLoader()

# We load the YOLO dataset from the path defined in DATASET_PATH.
# We point to data.yaml and select the 'train' subset of the dataset.
# The train subset is defined in the data.yaml file like `train: ./train/images`

# The dataset is processed here so that it is available to the UI application
# and for further operations.
# You can select a different subset of the dataset by changing the input_split parameter.
loader.from_yolo(
    f"{os.getenv('DATASET_PATH')}/data.yaml",
    input_split='train',
)

# We launch the UI application
loader.launch()

Here is an example using the COCO dataset:

The COCO format details:
dataset/
├── images/                   # Image files
│   ├── image1.jpg
│   ├── image2.jpg
│   └── ...
└── annotations.json          # Single JSON file containing all annotations

COCO uses a single JSON file containing all annotations. The format consists of three main components:

  • Images: Defines metadata for each image in the dataset.
  • Categories: Defines the object classes.
  • Annotations: Defines object instances.
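For orientation, here is a minimal skeleton of such a file built as a Python dict. The field names follow the usual COCO convention; all concrete values are made up:

```python
import json

# Minimal COCO-style annotations structure; values are illustrative only.
annotations = {
    "images": [
        # Metadata for each image in the dataset.
        {"id": 1, "file_name": "image1.jpg", "width": 640, "height": 480},
    ],
    "categories": [
        # Object classes.
        {"id": 1, "name": "car"},
    ],
    "annotations": [
        # Object instances: bbox is [x_min, y_min, width, height] in pixels.
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [10, 20, 100, 50]},
    ],
}

# Serialize to the single annotations.json file COCO expects.
coco_json = json.dumps(annotations, indent=2)
```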
# Download example script
curl -sL https://raw.githubusercontent.com/lightly-ai/gists/refs/heads/main/example-coco.py > example.py

# Run the example script
python example.py

Example explanation

Let's break down the example-coco.py script to explore the dataset:

import os

from lightly_purple import DatasetLoader

# Create a DatasetLoader instance
loader = DatasetLoader()

# Define the path to the dataset (folder containing annotations.json)
dataset_path = os.getenv("DATASET_PATH")

# We load the COCO dataset using the defined DATASET_PATH
# We point to annotations.json and the input image folder.
# The image folder can be an absolute path or relative to the annotations.json file.

# The dataset is processed here so that it is available to the UI application
# and for further operations.
coco_loader, dataset_id = loader.from_coco(
    f"{dataset_path}/annotations.json",
    input_images_folder="image_folder"
)

loader.launch()

๐Ÿ” How it works

Let's describe in a little more detail what is happening under the hood:

In our library, we built a full-fledged environment to process your data and make it available to the UI application.

  • Dataset Loader: The Python module responsible for processing the dataset.

    • Processes the given dataset.
    • Stores it in the persistent data storage layer.
    • Handles various data formats and annotation types.
  • Data Storage Layer: Stores information about the dataset:

    • After the dataset is processed, its information is stored in the persistent database.
    • We use a DuckDB database as the persistent storage layer; you will see a purple.db file after the dataset is processed.
  • Backend API: A Python web server that serves the dataset to the UI application.

    • Uses the persistent data storage layer to serve the dataset to the UI application.
    • Manages user interactions with the data.
  • UI Application: A responsive web interface:

    • Runs on your local machine on port 8001 and is available at http://localhost:8001/. The port cannot be changed for now.
    • Opens automatically after the dataset is processed.
    • Consumes the local API endpoints.
    • Visualizes your dataset and analysis results.
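If you script around the app, a small stdlib sketch like the following can poll the UI until it is reachable. The fixed port 8001 comes from the description above; the helper itself is illustrative and not part of the lightly_purple API:

```python
# Sketch: wait for the locally served Lightly Purple UI to come up.
# Port 8001 is the documented fixed port; the helper is illustrative only.
import time
import urllib.error
import urllib.request

UI_URL = "http://localhost:8001/"

def wait_for_ui(url=UI_URL, timeout=30.0, interval=1.0):
    """Poll the local UI until it responds with HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            # Server not up yet; retry after a short pause.
            time.sleep(interval)
    return False
```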

📦 Dataset Formats

Our library supports the following dataset formats:

  • YOLO8
  • COCO

📚 FAQ

Are the datasets persistent?

Yes, dataset information is persistent and stored in the db file, which you can see after the dataset is processed. If you rerun the loader, it creates a new dataset entry representing the same data and leaves the previous entry untouched.

Can I launch in another Python script or do I have to do it in the same script?

Only one script can run at a time, because we lock the db file for the duration of the script.

Can I process datasets that do not have annotations?

No, we currently support only datasets with annotations.

