Leap Labs Interpretability Engine

Project description

Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any PyTorch or TensorFlow dependencies, in order to preserve your development environment. However, leap-ie requires the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0
timm >=0.9.12

TensorFlow

Library Version
tensorflow >=2.4.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('resnet18', source='torchvision')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all torchvision image classification models via leap_ie.vision.models.get_model("model_name", source="torchvision"). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier', source='huggingface').
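
For example, here is a minimal sketch of pulling a Hugging Face classifier and generating prototypes for it. It assumes the same (preprocessing_fn, model, class_list) return signature shown above; the project name and target class index are just placeholders.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

# Pull an image classification model from Hugging Face by its model id.
preprocessing_fn, model, class_list = get_model(
    "nateraw/vit-age-classifier", source="huggingface"
)

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

df_results, dict_results = engine.generate(
    project_name="age-classifier",  # placeholder project name
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[0],  # placeholder: first output class
    preprocessing=preprocessing_fn,
)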

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie and pass your model to our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # Unpack the logits from the dictionary returned by the wrapped model.
        out = self.model(x)
        return out["logits"]

model = ModelWrapper(your_model)
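
If your model returns probabilities rather than logits, a similar wrapper can map them back into log space. This is a minimal sketch, assuming the model ends in a softmax; ProbToLogitWrapper is a hypothetical name, not part of leap-ie.

import torch
import torch.nn as nn

class ProbToLogitWrapper(nn.Module):
    def __init__(self, model, eps=1e-8):
        super().__init__()
        self.model = model
        self.eps = eps

    def forward(self, x):
        probs = self.model(x)
        # The log of softmax probabilities recovers the logits up to an
        # additive constant, which is enough for class-wise comparisons.
        return torch.log(probs + self.eps)

model = ProbToLogitWrapper(your_model)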

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
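
As a quick sanity check, you can also inspect both return values directly. The exact dataframe columns and dictionary keys depend on your run, so this sketch only prints whatever came back:

# Inspect the dataframe of results.
print(df_results.head())
print(df_results.columns.tolist())

# Inspect the dictionary of numpy arrays (keys depend on your run).
for name, array in dict_results.items():
    print(name, getattr(array, "shape", type(array)))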

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].
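
For example, here is a minimal sketch of turning a single height x width x channels image array into a batch for each framework (your own loading and preprocessing pipeline may differ):

import numpy as np
import torch

image_hwc = np.random.rand(224, 224, 3).astype("float32")  # placeholder image

# PyTorch: channels first, with a leading batch dimension -> [1, 3, 224, 224]
pt_batch = torch.from_numpy(image_hwc).permute(2, 0, 1).unsqueeze(0)

# TensorFlow: channels last, with a leading batch dimension -> [1, 224, 224, 3]
tf_batch = image_hwc[np.newaxis, ...]

print(pt_batch.shape, tf_batch.shape)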

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input and return a batch of logits (NOT probabilities). If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [1, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for pytorch or 'tf' for tensorflow. Default is 'pt'.

    • Required: No
    • Default: pt
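
For reference, here is a minimal sketch of calling generate with a TensorFlow/Keras classifier. your_tf_model and the class list are placeholders; the key difference from the PyTorch examples is mode="tf" and channels-last inputs.

from leap_ie.vision import engine

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# your_tf_model is a placeholder for a Keras image classifier that takes
# [batch, height, width, channels] inputs and returns logits.
df_results, dict_results = engine.generate(
    project_name="my-tf-project",
    model=your_tf_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
    mode="tf",
)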

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
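
Since the config can also be a path to a .json file, here is a minimal sketch of that pattern (the file name, model and class list are placeholders):

import json

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 0,
    "max_steps": 1500,
}

# Save the config and pass the file path instead of the dictionary.
with open("leap_config.json", "w") as f:
    json.dump(config, f)

df_results, dict_results = engine.generate(
    project_name="my-project",
    model=your_model,
    class_list=class_list,
    config="leap_config.json",
)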

Here are all of the config options currently available:

config = {
    "alpha_mask": False,          # bool
    "alpha_only": False,          # bool
    "alpha_weight": 1,            # int
    "baseline_init": 0,           # int or str
    "diversity_weight": 0,        # int
    "find_lr_steps": 500,         # int
    "hf_weight": 0,               # int
    "input_dim": [3, 224, 224],   # list
    "isolate_classes": None,      # list
    "isolation": True,            # bool
    "isolation_hf_weight": 1,     # int
    "isolation_lr": 0.05,         # float
    "log_freq": 100,              # int
    "lr": 0.05,                   # float
    "max_isolate_classes": 3,     # int
    "max_lr": 1.0,                # float
    "max_steps": 1500,            # int
    "min_lr": 0.0001,             # float
    "mode": "pt",                 # str
    "num_lr_windows": 50,         # int
    "project_name": "your_project_name",  # str (required)
    "samples": None,              # list
    "seed": 0,                    # int
    "stop_lr_early": True,        # bool
    "transform": "xl",            # str
    "use_alpha": False,           # bool
    "use_baseline": False,        # bool
    "use_hipe": False,            # bool
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, only an alpha channel is optimised during the prototype generation process. This results in prototypes containing shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for the learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for the learning rate finder.

    • Default: 0.0001

  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
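
For example, here is a minimal sketch of turning off the automatic learning rate finder and setting the learning rate manually, as described under find_lr_steps and lr above (the model, class list and values are placeholders):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "find_lr_steps": 0,  # disable the automatic learning rate finder
    "lr": 0.05,          # provide your own learning rate instead
    "max_steps": 2000,   # run for longer if prototypes are indistinct
}

df_results, dict_results = engine.generate(
    project_name="manual-lr",
    model=your_model,
    class_list=class_list,
    config=config,
)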

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).
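
To make the idea concrete, here is a highly simplified, generic sketch of the underlying optimisation: gradient ascent on an input to maximise one class logit. This is not the Leap engine's implementation, which adds the transformations, regularisation and other machinery described in the Config section.

import torch

def toy_prototype(model, class_index, steps=200, lr=0.05, input_dim=(3, 224, 224)):
    # Toy activation maximisation: optimise an input to maximise one class logit.
    x = torch.zeros(1, *input_dim, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    model.eval()
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, class_index]  # maximise the target class logit
        loss.backward()
        optimizer.step()
    return x.detach()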

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
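
For intuition only, here is a generic input-gradient saliency sketch in the same spirit. This is not the Leap feature isolation method; it is just the simplest related technique, highlighting pixels whose change would most affect the chosen class logit.

import torch

def toy_saliency(model, image_batch, class_index):
    # Toy saliency map: gradient of one class logit w.r.t. the input pixels.
    image_batch = image_batch.clone().requires_grad_(True)
    model.eval()
    logits = model(image_batch)
    logits[0, class_index].backward()
    # Aggregate absolute gradients over the channel dimension -> [height, width]
    return image_batch.grad[0].abs().sum(dim=0)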

Download files

Source Distributions

No source distribution files are available for this release.

Built Distributions

leap_ie-0.2.2-cp312-cp312-win_arm64.whl (921.6 kB) - CPython 3.12, Windows ARM64
leap_ie-0.2.2-cp312-cp312-win_amd64.whl (1.1 MB) - CPython 3.12, Windows x86-64
leap_ie-0.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.7 MB) - CPython 3.12, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.2-cp312-cp312-macosx_11_0_arm64.whl (1.2 MB) - CPython 3.12, macOS 11.0+ ARM64
leap_ie-0.2.2-cp312-cp312-macosx_10_9_x86_64.whl (1.3 MB) - CPython 3.12, macOS 10.9+ x86-64
leap_ie-0.2.2-cp311-cp311-win_arm64.whl (947.3 kB) - CPython 3.11, Windows ARM64
leap_ie-0.2.2-cp311-cp311-win_amd64.whl (1.1 MB) - CPython 3.11, Windows x86-64
leap_ie-0.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.4 MB) - CPython 3.11, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.2-cp311-cp311-macosx_11_0_arm64.whl (1.3 MB) - CPython 3.11, macOS 11.0+ ARM64
leap_ie-0.2.2-cp311-cp311-macosx_10_9_x86_64.whl (1.3 MB) - CPython 3.11, macOS 10.9+ x86-64
leap_ie-0.2.2-cp310-cp310-win_arm64.whl (944.0 kB) - CPython 3.10, Windows ARM64
leap_ie-0.2.2-cp310-cp310-win_amd64.whl (1.1 MB) - CPython 3.10, Windows x86-64
leap_ie-0.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.6 MB) - CPython 3.10, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.2-cp310-cp310-macosx_11_0_arm64.whl (1.3 MB) - CPython 3.10, macOS 11.0+ ARM64
leap_ie-0.2.2-cp310-cp310-macosx_10_9_x86_64.whl (1.3 MB) - CPython 3.10, macOS 10.9+ x86-64
leap_ie-0.2.2-cp39-cp39-win_arm64.whl (946.4 kB) - CPython 3.9, Windows ARM64
leap_ie-0.2.2-cp39-cp39-win_amd64.whl (1.1 MB) - CPython 3.9, Windows x86-64
leap_ie-0.2.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.6 MB) - CPython 3.9, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.2-cp39-cp39-macosx_11_0_arm64.whl (1.3 MB) - CPython 3.9, macOS 11.0+ ARM64
leap_ie-0.2.2-cp39-cp39-macosx_10_9_x86_64.whl (1.3 MB) - CPython 3.9, macOS 10.9+ x86-64
leap_ie-0.2.2-cp38-cp38-win_amd64.whl (1.1 MB) - CPython 3.8, Windows x86-64
leap_ie-0.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.8 MB) - CPython 3.8, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.2-cp38-cp38-macosx_11_0_arm64.whl (1.2 MB) - CPython 3.8, macOS 11.0+ ARM64
leap_ie-0.2.2-cp38-cp38-macosx_10_9_x86_64.whl (1.3 MB) - CPython 3.8, macOS 10.9+ x86-64
