Leap Labs Interpretability Engine

Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any PyTorch or TensorFlow dependencies, so your development environment is preserved. However, leap-ie requires the following minimum versions:

PyTorch

Library       Version
torch         >=1.13.0
torchvision   >=0.14.0
timm          >=0.9.12

TensorFlow

Library       Version
tensorflow    >=2.4.0
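
If you want to check whether your current environment already satisfies these constraints before installing, here is a quick sketch using Python's importlib.metadata (this helper is illustrative and not part of leap-ie; you only need the rows for the framework you actually use):

from importlib.metadata import version, PackageNotFoundError

# Minimum versions required by leap-ie (from the tables above).
minimums = {
    "torch": "1.13.0",
    "torchvision": "0.14.0",
    "timm": "0.9.12",
    "tensorflow": "2.4.0",
}

for package, minimum in minimums.items():
    try:
        print(f"{package}: installed {version(package)} (needs >={minimum})")
    except PackageNotFoundError:
        print(f"{package}: not installed (needs >={minimum})")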

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('resnet18', source='torchvision')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all torchvision image classification models via leap_ie.vision.models.get_model("model_name", source="torchvision"). We can also pull image classification models from Hugging Face automatically - just use the model id: get_model('nateraw/vit-age-classifier', source='huggingface').
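
For example, a sketch pulling the Hugging Face classifier mentioned above and generating a prototype for its first output class (the project name and target class here are arbitrary choices):

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model("nateraw/vit-age-classifier", source="huggingface")

df_results, dict_results = engine.generate(
    project_name="age-classifier",
    model=model,
    class_list=class_list,
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
    target_classes=[0],
    preprocessing=preprocessing_fn,
)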

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie, and wrap your model in our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # Unpack the logits from the underlying model's dictionary output.
        out = self.model(x)
        return out["logits"]

model = ModelWrapper(your_model)
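
Similarly, if your model returns probabilities rather than logits, a minimal sketch of a wrapper that maps them back to log space (assuming the output is a [batch, num_classes] tensor of probabilities; the epsilon is only there for numerical safety):

import torch
import torch.nn as nn

class ProbToLogitWrapper(nn.Module):
    def __init__(self, model, eps=1e-8):
        super().__init__()
        self.model = model
        self.eps = eps

    def forward(self, x):
        probs = self.model(x)                # [batch, num_classes] probabilities
        return torch.log(probs + self.eps)   # log-probabilities behave like logits up to a constant

model = ProbToLogitWrapper(your_model)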

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a Jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend heading to the leap app, or logging directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
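
The exact columns and keys depend on what was generated, so a quick, purely illustrative way to see what a run returned is to inspect the shapes:

# df_results is a pandas dataframe; dict_results maps names to numpy arrays.
print(df_results.columns.tolist())
print({key: array.shape for key, array in dict_results.items()})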

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].
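
If your images are stored in the other layout, you can permute the axes before passing them in. A minimal sketch in PyTorch (the tensors here are random placeholders):

import torch

image_chw = torch.rand(1, 3, 224, 224)      # channels-first, as PyTorch models expect
image_hwc = image_chw.permute(0, 2, 3, 1)   # channels-last, as TensorFlow models expect
image_back = image_hwc.permute(0, 3, 1, 2)  # and back again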

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)
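
If you already know which classes you care about, you can pass their indices as target_classes instead of relying on the top-3 predictions. A sketch continuing from the example above (the class names are just illustrative lookups; use whatever strings appear in your class_list):

# isolate features for specific classes rather than the top 3 predictions
target_classes = [class_list.index("hammer"), class_list.index("screwdriver")]

df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)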

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [1, channels, height, width] if using PyTorch (see the TensorFlow sketch after this list).

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow. Default is 'pt'.

    • Required: No
    • Default: pt
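
For a TensorFlow model the call looks much the same, apart from mode and the channels-last sample shape. A sketch (your_tf_model is a placeholder for your own TensorFlow/Keras classifier, and the random sample is purely illustrative):

import numpy as np

from leap_ie.vision import engine

samples = np.random.rand(1, 224, 224, 3).astype("float32")  # [num_images, height, width, channels]

df_results, dict_results = engine.generate(
    project_name="tf-example",
    model=your_tf_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
    samples=samples,
    mode="tf",
)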

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
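
Because config can also be a path to a JSON file, you can keep these settings alongside your project and reuse them across runs. A sketch reusing the model, class_list and preprocessing_fn from the prototype generation example above (the file name and values are just examples):

import json

with open("config.json", "w") as f:
    json.dump({"leap_api_key": "YOUR_LEAP_API_KEY", "hf_weight": 0, "max_steps": 1500}, f)

df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config="config.json",
    preprocessing=preprocessing_fn,
)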

Here are all of the config options currently available:

config = {
    "alpha_mask": False,           # bool
    "alpha_only": False,           # bool
    "alpha_weight": 1,             # int
    "baseline_init": 0,            # int or str
    "diversity_weight": 0,         # int
    "find_lr_steps": 500,          # int
    "hf_weight": 0,                # int
    "input_dim": [3, 224, 224],    # list
    "isolate_classes": None,       # list
    "isolation": True,             # bool
    "isolation_hf_weight": 1,      # int
    "isolation_lr": 0.05,          # float
    "log_freq": 100,               # int
    "lr": 0.05,                    # float
    "max_isolate_classes": 3,      # int
    "max_lr": 1.0,                 # float
    "max_steps": 1500,             # int
    "min_lr": 0.0001,              # float
    "mode": "pt",                  # str
    "num_lr_windows": 50,          # int
    "project_name": "...",         # str (required, no default)
    "samples": None,               # list
    "seed": 0,                     # int
    "stop_lr_early": True,         # bool
    "transform": "xl",             # str
    "use_alpha": False,            # bool
    "use_baseline": False,         # bool
    "use_hipe": False,             # bool
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in the generation of prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here (see the sketch after this list).

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for the learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for the learning rate finder.

    • Default: 0.0001

  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl'], or set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
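
Putting a few of these options together: a sketch of a manually tuned configuration that disables the learning-rate finder and supplies the learning rate directly (the values are illustrative starting points, not recommendations, and model, class_list and preprocessing_fn are reused from the examples above):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "find_lr_steps": 0,     # disable the automatic learning-rate finder...
    "lr": 0.05,             # ...and provide the learning rate yourself
    "hf_weight": 0,         # start at zero; increase if prototypes look noisy
    "max_steps": 1500,
    "use_baseline": True,   # slower, but removes input-initialisation bias
    "seed": 42,
}

df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[0, 1, 2],
    preprocessing=preprocessing_fn,
)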

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).
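
For intuition only, the naive version of this idea is gradient ascent on the input to maximise one class logit. This sketch is not what the engine actually does under the hood (the engine adds transformations, regularisation, a learning-rate finder and more), but it illustrates the core principle:

import torch

def naive_prototype(model, class_index, steps=200, lr=0.05, input_dim=(1, 3, 224, 224)):
    model.eval()
    x = torch.zeros(input_dim, requires_grad=True)  # start from a blank input
    optimiser = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        logits = model(x)
        loss = -logits[0, class_index]              # ascend the target class logit
        loss.backward()
        optimiser.step()
    return x.detach()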

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
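
Again purely for intuition: the engine's feature isolation is mask-based and more robust than this, but a vanilla gradient saliency map captures the flavour of the second use case. A sketch (model and image as in the sample feature isolation example above):

def gradient_saliency(model, image, class_index):
    # image: [1, channels, height, width]
    model.eval()
    image = image.detach().clone().requires_grad_(True)
    logits = model(image)
    logits[0, class_index].backward()
    # Per-pixel importance: max absolute gradient across the channel dimension.
    return image.grad.abs().max(dim=1).values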


Download files

Source Distributions

No source distribution files are available for this release.

Built Distributions

leap_ie-0.2.4-cp312-cp312-win_arm64.whl (960.5 kB) - CPython 3.12, Windows ARM64
leap_ie-0.2.4-cp312-cp312-win_amd64.whl (1.1 MB) - CPython 3.12, Windows x86-64
leap_ie-0.2.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (8.0 MB) - CPython 3.12, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.4-cp312-cp312-macosx_11_0_arm64.whl (1.3 MB) - CPython 3.12, macOS 11.0+ ARM64
leap_ie-0.2.4-cp312-cp312-macosx_10_9_x86_64.whl (1.3 MB) - CPython 3.12, macOS 10.9+ x86-64
leap_ie-0.2.4-cp311-cp311-win_arm64.whl (989.5 kB) - CPython 3.11, Windows ARM64
leap_ie-0.2.4-cp311-cp311-win_amd64.whl (1.2 MB) - CPython 3.11, Windows x86-64
leap_ie-0.2.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.7 MB) - CPython 3.11, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.4-cp311-cp311-macosx_11_0_arm64.whl (1.3 MB) - CPython 3.11, macOS 11.0+ ARM64
leap_ie-0.2.4-cp311-cp311-macosx_10_9_x86_64.whl (1.4 MB) - CPython 3.11, macOS 10.9+ x86-64
leap_ie-0.2.4-cp310-cp310-win_arm64.whl (986.2 kB) - CPython 3.10, Windows ARM64
leap_ie-0.2.4-cp310-cp310-win_amd64.whl (1.2 MB) - CPython 3.10, Windows x86-64
leap_ie-0.2.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.9 MB) - CPython 3.10, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.4-cp310-cp310-macosx_11_0_arm64.whl (1.3 MB) - CPython 3.10, macOS 11.0+ ARM64
leap_ie-0.2.4-cp310-cp310-macosx_10_9_x86_64.whl (1.4 MB) - CPython 3.10, macOS 10.9+ x86-64
leap_ie-0.2.4-cp39-cp39-win_arm64.whl (987.6 kB) - CPython 3.9, Windows ARM64
leap_ie-0.2.4-cp39-cp39-win_amd64.whl (1.2 MB) - CPython 3.9, Windows x86-64
leap_ie-0.2.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.9 MB) - CPython 3.9, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.4-cp39-cp39-macosx_11_0_arm64.whl (1.3 MB) - CPython 3.9, macOS 11.0+ ARM64
leap_ie-0.2.4-cp39-cp39-macosx_10_9_x86_64.whl (1.4 MB) - CPython 3.9, macOS 10.9+ x86-64
leap_ie-0.2.4-cp38-cp38-win_amd64.whl (1.2 MB) - CPython 3.8, Windows x86-64
leap_ie-0.2.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.1 MB) - CPython 3.8, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.4-cp38-cp38-macosx_11_0_arm64.whl (1.3 MB) - CPython 3.8, macOS 11.0+ ARM64
leap_ie-0.2.4-cp38-cp38-macosx_10_9_x86_64.whl (1.4 MB) - CPython 3.8, macOS 10.9+ x86-64
