Leap Labs Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any of your PyTorch or TensorFlow dependencies, in order to preserve your development environment. However, leap-ie requires that the following minimum versions are already installed:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0
timm >=0.9.12

TensorFlow

Library Version
tensorflow >=2.4.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('resnet18', source='torchvision')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all torchvision image classification models via leap_ie.vision.models.get_model("model_name", source="torchvision"). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier', source='huggingface').
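
For example, here's a sketch of generating a prototype from that Hugging Face model (the project name and target class index are illustrative):

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

# Pull an image classification model from Hugging Face by its model id.
preprocessing_fn, model, class_list = get_model(
    "nateraw/vit-age-classifier", source="huggingface"
)

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Generate a prototype for the class at output index 0.
results_df, results_dict = engine.generate(
    project_name="age-classifier",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[0],
    preprocessing=preprocessing_fn,
)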

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie, and wrap your model in our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    """Unwraps a model whose forward pass returns a dictionary containing logits."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # Return only the logits tensor that engine.generate() expects.
        return self.model(x)["logits"]

model = ModelWrapper(your_model)
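
If your model returns probabilities rather than logits, one workaround (a sketch, not part of the leap-ie API) is to pass on their logarithm, which behaves like logits for softmax-based objectives:

import torch
import torch.nn as nn

class ProbsToLogitsWrapper(nn.Module):
    """Hypothetical wrapper for a model whose forward pass returns probabilities."""

    def __init__(self, model, eps=1e-12):
        super().__init__()
        self.model = model
        self.eps = eps

    def forward(self, x):
        probs = self.model(x)
        # Log-probabilities differ from the original logits only by an additive
        # constant per sample, which the softmax ignores.
        return torch.log(probs + self.eps)

model = ProbsToLogitsWrapper(your_model)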

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a jupyter notebook, you can view this dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].
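
For example, a small sketch (the placeholder image is illustrative) converting a single channels-last image into the layout each framework expects:

import numpy as np
import torch

# A single image in height x width x channels (channels-last) layout.
image = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder image

# PyTorch (mode="pt") expects channels-first: [1, channels, height, width].
pt_batch = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).float() / 255.0

# TensorFlow (mode="tf") expects channels-last: [1, height, width, channels].
tf_batch = image[np.newaxis, ...].astype("float32") / 255.0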

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)
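
To isolate features for several images at once, you can stack preprocessed images into a single batch and pass it as samples. Here's a sketch continuing the example above (the second file name is hypothetical):

import torch
from PIL import Image
from torchvision import transforms

tt = transforms.ToTensor()
image_paths = ["tools.jpeg", "hammer.jpeg"]  # second file name is hypothetical

# Stack preprocessed images along the batch dimension, giving samples of
# shape [num_images, channels, height, width] for pytorch.
batch = torch.cat(
    [preprocessing_fn[0](tt(Image.open(p)).unsqueeze(0)) for p in image_paths],
    dim=0,
)

# Pass the batch as `samples`, exactly as in the single-image call above:
# engine.generate(..., samples=batch, mode="pt")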

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input and return a batch of logits (NOT probabilities). If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [num_images, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow.

    • Required: No
    • Default: pt

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide the config as a dictionary, or as a path to a .json file (see the sketch after the options below).

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
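
Because config can also be a path to a .json file, here's a minimal sketch of saving a configuration to disk and passing the path instead of a dictionary (the file name is hypothetical; engine, model and class_list are assumed from the earlier examples):

import json

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 0,
    "max_steps": 1500,
}

# Save the configuration to disk.
with open("leap_config.json", "w") as f:
    json.dump(config, f)

# Pass the path wherever a config dictionary is accepted.
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config="leap_config.json",
)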

Here are all of the config options currently available:

config = {
    "alpha_mask": False,          # bool
    "alpha_only": False,          # bool
    "alpha_weight": 1,            # int
    "baseline_init": 0,           # int
    "diversity_weight": 0,        # int
    "find_lr_steps": 500,         # int
    "hf_weight": 0,               # int
    "input_dim": [3, 224, 224],   # list
    "isolate_classes": None,      # list
    "isolation": True,            # bool
    "isolation_hf_weight": 1,     # int
    "isolation_lr": 0.05,         # float
    "log_freq": 100,              # int
    "lr": 0.05,                   # float
    "max_isolate_classes": 3,     # int
    "max_lr": 1.0,                # float
    "max_steps": 1500,            # int
    "min_lr": 0.0001,             # float
    "mode": "pt",                 # str
    "num_lr_windows": 50,         # int
    "project_name": "your_project_name",  # str
    "samples": None,              # list
    "seed": 0,                    # int
    "stop_lr_early": True,        # bool
    "transform": "xl",            # str
    "use_alpha": False,           # bool
    "use_baseline": False,        # bool
    "use_hipe": False,            # bool
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, only an alpha channel is optimised during the prototype generation process. This results in prototypes that capture shape and texture only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here (see the sketch after this list).

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for the learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for the learning rate finder.

    • Default: 0.0001

  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your weights and biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
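
As referenced under lr above, here's a sketch of manual learning rate tuning (the values are illustrative; model, class_list and preprocessing_fn are assumed from the earlier examples):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "find_lr_steps": 0,  # disable the automatic learning rate finder
    "lr": 0.01,          # manually chosen learning rate (illustrative)
}

df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[0],
    preprocessing=preprocessing_fn,
)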

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
