
Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any dependencies related to PyTorch or TensorFlow, in order to preserve your development environment. However, leap-ie requires that the following version requirements are met:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0

TensorFlow

Library Version
tensorflow >=2.12.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('torchvision.resnet18')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all image classification torchvision models via leap_ie.vision.models.get_model('torchvision.[name of model]'). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier').
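For example, a minimal sketch (assuming the Hugging Face model id resolves to an image classification model that get_model can wrap):

from leap_ie.vision.models import get_model

# Pull an image classifier from Hugging Face by its model id
preprocessing_fn, model, class_list = get_model("nateraw/vit-age-classifier")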

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie and pass your model to our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

# Wraps a model whose forward pass returns a dictionary, so that only the logits are returned.
class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        return out["logits"]  # return raw logits, not probabilities

model = ModelWrapper(your_model)

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
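If you'd rather inspect the raw results yourself, here is a minimal sketch (the exact dataframe columns and dictionary keys depend on your run; nothing below assumes leap-ie-specific field names):

# assuming df_results and dict_results come from engine.generate() as above
print(df_results.columns.tolist())  # see which result columns were returned
print(df_results.head())

for key, array in dict_results.items():
    print(key, array.shape, array.dtype)  # each entry is a numpy array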

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels last, e.g. [1, height, width, channels].
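As a concrete illustration (shapes only; a single 224x224 RGB image in each layout):

import numpy as np

pt_batch = np.zeros((1, 3, 224, 224), dtype=np.float32)  # PyTorch: channels first
tf_batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # TensorFlow: channels last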

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [num_images, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for pytorch or 'tf' for tensorflow. Default is 'pt'.

    • Required: No
    • Default: pt

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.
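For example, a minimal sketch of both forms (the file path is illustrative; the JSON file should contain the same keys you would put in the dictionary):

# as a dictionary:
config = {"leap_api_key": "YOUR_LEAP_API_KEY", "hf_weight": 0}

# or as a path to a JSON file containing the same keys:
config = "leap_config.json"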

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500

Here are all of the config options currently available:

config = {
    "alpha_mask": False,           # bool
    "alpha_only": False,           # bool
    "alpha_weight": 1,             # int
    "baseline_init": 0,            # int or str
    "diversity_weight": 0,         # int
    "find_lr_steps": 500,          # int
    "hf_weight": 0,                # int
    "input_dim": [3, 224, 224],    # list; [224, 224, 3] if mode is "tf"
    "isolate_classes": None,       # list
    "isolation": True,             # bool
    "isolation_hf_weight": 1,      # int
    "isolation_lr": 0.05,          # float
    "log_freq": 100,               # int
    "lr": 0.05,                    # float
    "max_isolate_classes": 3,      # int
    "max_lr": 1.0,                 # float
    "max_steps": 1500,             # int
    "min_lr": 0.0001,              # float
    "mode": "pt",                  # str
    "num_lr_windows": 50,          # int
    "project_name": ...,           # str, required (no default)
    "samples": None,               # list
    "seed": 0,                     # int
    "stop_lr_early": True,         # bool
    "transform": "xl",             # str
    "use_alpha": False,            # bool
    "use_baseline": False,         # bool
    "use_hipe": False,             # bool
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in the generation of prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001

  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
