Leap Labs Interpretability Engine

Project description

Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any existing PyTorch or TensorFlow dependencies, in order to preserve your development environment. However, it does require the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0
timm >=0.9.12

TensorFlow

Library Version
tensorflow >=2.4.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the Leap app - you'll need it to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('resnet18', source='torchvision')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all image classification torchvision models via leap_ie.vision.models.get_model("model_name", source="torchvision"). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier', source='huggingface').
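
For instance, pulling that Hugging Face model and generating a prototype looks almost identical to the torchvision example above (a sketch; downloading the weights requires network access, and the target class index is arbitrary):

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model(
    "nateraw/vit-age-classifier", source="huggingface"
)

results_df, results_dict = engine.generate(
    project_name="age-classifier",
    model=model,
    class_list=class_list,
    config={"leap_api_key": "YOUR_API_KEY"},
    target_classes=[0],
    preprocessing=preprocessing_fn,
)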

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie and pass your model to our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

# Wrapper that unpacks a dictionary output and returns the raw logits tensor.
class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        x = self.model(x)
        return x["logits"]

model = ModelWrapper(your_model)

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the Leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
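
If you'd rather inspect the raw outputs directly, here's a quick sketch (the exact columns and keys depend on your run, so this just prints whatever is returned):

# Inspect the returned dataframe and dictionary (contents vary by run).
print(df_results.columns.tolist())
for key, value in dict_results.items():
    print(key, getattr(value, "shape", type(value)))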

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels last, e.g. [1, height, width, channels].
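
As a quick illustration of the two layouts (plain numpy, purely to show the expected shapes):

import numpy as np

image = np.random.rand(224, 224, 3).astype("float32")  # height, width, channels

# PyTorch ('pt'): channels first, plus a leading batch dimension -> [1, 3, 224, 224]
pt_batch = np.transpose(image, (2, 0, 1))[None, ...]

# TensorFlow ('tf'): channels last, plus a leading batch dimension -> [1, 224, 224, 3]
tf_batch = image[None, ...]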

Weights & Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [num_images, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow (see the TensorFlow sketch after this list). Default is 'pt'.

    • Required: No
    • Default: pt
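
For reference, here is a minimal sketch of a TensorFlow-mode call. The Keras ResNet50 and placeholder class list are our own illustration (not from the Leap docs); the important details are that the model outputs logits and expects channels-last inputs:

import tensorflow as tf
from leap_ie.vision import engine

# classifier_activation=None keeps the Keras model's outputs as logits, not probabilities.
model = tf.keras.applications.ResNet50(weights="imagenet", classifier_activation=None)
class_list = [str(i) for i in range(1000)]  # replace with real ImageNet class names

df_results, dict_results = engine.generate(
    project_name="tf-example",
    model=model,
    class_list=class_list,
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
    target_classes=[0],
    mode="tf",
)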

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide the config as a dictionary or as a path to a .json file.
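
For example, a sketch of writing the config to a .json file and passing its path instead of the dictionary (the file name is arbitrary):

import json

config = {"leap_api_key": "YOUR_LEAP_API_KEY", "hf_weight": 0, "max_steps": 1500}
with open("leap_config.json", "w") as f:
    json.dump(config, f)

# Later, pass the path rather than the dict:
# engine.generate(..., config="leap_config.json")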

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr (see the example after this list).

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
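
Putting those options together, an illustrative config (the specific values below are arbitrary starting points, not recommendations):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 1,       # increase gradually if prototypes look noisy
    "find_lr_steps": 0,   # turn off the automatic learning rate finder...
    "lr": 0.05,           # ...and set the learning rate manually instead
    "max_steps": 3000,    # run longer if prototypes or isolations look indistinct
}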

Here are all of the config options currently available:

config = {
    alpha_mask: bool = False
    alpha_only: bool = False
    alpha_weight: int = 1
    baseline_init: int = 0
    diversity_weight: int = 0
    find_lr_steps: int = 500
    hf_weight: int = 0
    input_dim: tuple = [3, 224, 224]
    isolate_classes: list = None
    isolation: bool = True
    isolation_hf_weight: int = 1
    isolation_lr: float = 0.05
    log_freq: int = 100
    lr: float = 0.05
    max_isolate_classes: int = 3
    max_lr: float = 1.0
    max_steps: int = 1500
    min_lr: float = 0.0001
    mode: str = "pt"
    num_lr_windows: int = 50
    project_name: str
    samples: list = None
    seed: int = 0
    stop_lr_early: bool = True
    transform: str = "xl"
    use_alpha: bool = False
    use_baseline: bool = False
    use_hipe: bool = False
    }
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in generating prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001
  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

leap_ie-0.2.1-cp312-cp312-win_arm64.whl (919.9 kB): CPython 3.12, Windows ARM64

leap_ie-0.2.1-cp312-cp312-win_amd64.whl (1.1 MB): CPython 3.12, Windows x86-64

leap_ie-0.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.7 MB): CPython 3.12, manylinux: glibc 2.17+ x86-64

leap_ie-0.2.1-cp312-cp312-macosx_11_0_arm64.whl (1.2 MB): CPython 3.12, macOS 11.0+ ARM64

leap_ie-0.2.1-cp312-cp312-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.12, macOS 10.9+ x86-64

leap_ie-0.2.1-cp311-cp311-win_arm64.whl (946.1 kB): CPython 3.11, Windows ARM64

leap_ie-0.2.1-cp311-cp311-win_amd64.whl (1.1 MB): CPython 3.11, Windows x86-64

leap_ie-0.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.4 MB): CPython 3.11, manylinux: glibc 2.17+ x86-64

leap_ie-0.2.1-cp311-cp311-macosx_11_0_arm64.whl (1.2 MB): CPython 3.11, macOS 11.0+ ARM64

leap_ie-0.2.1-cp311-cp311-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.11, macOS 10.9+ x86-64

leap_ie-0.2.1-cp310-cp310-win_arm64.whl (942.6 kB): CPython 3.10, Windows ARM64

leap_ie-0.2.1-cp310-cp310-win_amd64.whl (1.1 MB): CPython 3.10, Windows x86-64

leap_ie-0.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.6 MB): CPython 3.10, manylinux: glibc 2.17+ x86-64

leap_ie-0.2.1-cp310-cp310-macosx_11_0_arm64.whl (1.2 MB): CPython 3.10, macOS 11.0+ ARM64

leap_ie-0.2.1-cp310-cp310-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.10, macOS 10.9+ x86-64

leap_ie-0.2.1-cp39-cp39-win_arm64.whl (944.9 kB): CPython 3.9, Windows ARM64

leap_ie-0.2.1-cp39-cp39-win_amd64.whl (1.1 MB): CPython 3.9, Windows x86-64

leap_ie-0.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.6 MB): CPython 3.9, manylinux: glibc 2.17+ x86-64

leap_ie-0.2.1-cp39-cp39-macosx_11_0_arm64.whl (1.3 MB): CPython 3.9, macOS 11.0+ ARM64

leap_ie-0.2.1-cp39-cp39-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.9, macOS 10.9+ x86-64

leap_ie-0.2.1-cp38-cp38-win_amd64.whl (1.1 MB): CPython 3.8, Windows x86-64

leap_ie-0.2.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.8 MB): CPython 3.8, manylinux: glibc 2.17+ x86-64

leap_ie-0.2.1-cp38-cp38-macosx_11_0_arm64.whl (1.2 MB): CPython 3.8, macOS 11.0+ ARM64

leap_ie-0.2.1-cp38-cp38-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.8, macOS 10.9+ x86-64

File details

Details for the file leap_ie-0.2.1-cp312-cp312-win_arm64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp312-cp312-win_arm64.whl
  • Upload date:
  • Size: 919.9 kB
  • Tags: CPython 3.12, Windows ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp312-cp312-win_arm64.whl
Algorithm Hash digest
SHA256 d8c371f78e12da2c658b305a10e72941ccd41e6ac4307d82abbba60e4ddd5521
MD5 6b4e5cd834837035458f9d82a5fd9f03
BLAKE2b-256 45d45f37b39a3fb2dfe303beaf9fb1de6dc6b1f177bf45b272d36ca8bfb39424

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp312-cp312-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp312-cp312-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.12, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 1344435c97798ff5bf0ad431e66d3b576fd75a59e2abac6225c22290bb18c468
MD5 3e624b65b04dfc56ace462f4b0d153ce
BLAKE2b-256 58ba412b3f5a9052a18d062fed2c120a350c587f1737c07bddf2111800a37b03

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 38e581b7f997672d9ceec074465f35d9db8b9d127c633788180b8fad365b9ce0
MD5 2015d5da1559b7be07edd30ce4cf3bc1
BLAKE2b-256 3a65fe5c17c98a6e9609ff122cd51c39143d83c3e629f809461efe528cedacc9

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp312-cp312-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp312-cp312-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 b3e734fbf37c3e538b2be61c2cef453166bd8095084b5617cb7f55a6b54aecd6
MD5 1aa48db1fcd2cd0b6df9be153d5ddd5d
BLAKE2b-256 ae08b03726354dfd06321c2b2ae3379f36e08dc7442b696c4ab0e022c1b641b7

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp312-cp312-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp312-cp312-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 145482e503f09fb07638fa5b3e422fde87fcfb33fe4961007522a86cfc0af2ca
MD5 4cf78fbf785c95688a2a0ee03da1ed98
BLAKE2b-256 72da300a19f01945495ca917ba85fe99c5cbae884305381b6a42a9e827c1254a

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp311-cp311-win_arm64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp311-cp311-win_arm64.whl
  • Upload date:
  • Size: 946.1 kB
  • Tags: CPython 3.11, Windows ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp311-cp311-win_arm64.whl
Algorithm Hash digest
SHA256 83714628d8e990995a8962d01adbb3f4cc8e41b7832cafe50077ba089d89d43b
MD5 b08f1183c363727a34d3968c6620f3ec
BLAKE2b-256 16362275f7eb6ba4fb352247f046aa32e730037d125df93730c82313e34764bd

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp311-cp311-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp311-cp311-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.11, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 82e27bb46a31a5f66bc344bc8e0dd00cdfb871ef453ba06d1c727c462de5cc67
MD5 b647cad55e5f432f7855f96943720849
BLAKE2b-256 d68908b5332b1772d677de3c335a956a3b911cb09a6e440f073e1e8117844b9e

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 3bc535d67725cc7b5fe796e40e0bcae128ae41956134e76fac8c30b8c5754740
MD5 e2ff48b1d78f5085577927e9848d1eab
BLAKE2b-256 cde90f14d23d4a09425016f8b3d5cd76dac271e1de84903e5361a410596f6dd2

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp311-cp311-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp311-cp311-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 deda01b93d6013dba425123481f53e4b2e6e04c04383c1c199342bac7f7f7e62
MD5 78ee9bbdf1b62a31830fbd76d51f444d
BLAKE2b-256 8fc0b8aba34098ccb5ed52c1350cff30cebdef8d922f236470e60d651a6b879a

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp311-cp311-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp311-cp311-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 c5e14103f00bd148b9b43faff07a38b836aa36baf0c28c54377f654b4d4d91a0
MD5 e2cf71483e4b1d0e88ecccef13ec2ced
BLAKE2b-256 acd8d23a9b86c8f87b6061c5daeda2fd430b814b684cc2e4279c3c20c7456319

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp310-cp310-win_arm64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp310-cp310-win_arm64.whl
  • Upload date:
  • Size: 942.6 kB
  • Tags: CPython 3.10, Windows ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp310-cp310-win_arm64.whl
Algorithm Hash digest
SHA256 30e9c9b9c2f433be6a49ff526dbeb294d15cc9069a49eee6457ec984c5b33831
MD5 aabaf0425d8f9c721514ac716212dd70
BLAKE2b-256 92060fd2f93efba8f65d7fd68ec091bcbe7ff60346b866fc90cc43f1ec10948a

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp310-cp310-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp310-cp310-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.10, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 cf5adea739a6510ae47e3fe34613701d82bce7e5a4a9799ff322657f53eb3fd1
MD5 04e7f40ff6a0d0c68c2656d8b2e45a7e
BLAKE2b-256 46ee32dc7888a8ad8f6efe07ac0f4d36f63780fcbde3d84c50805ff854ec7d12

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 4c3c4af846405dffc8df3722292ec55b213246d94742c8358673f12a6e810bc0
MD5 5bbd7d6e38282ff922595ccda61b8b65
BLAKE2b-256 f3f9273850d6ade923890db9710ec3a68ec016739b52e6c09fea0e92773b3f83

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp310-cp310-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp310-cp310-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 663202a6d388640e2d568854585f034d8cbba4db024042eba8a515a4a04ce6bb
MD5 d4a18555b92a49070c0c1f99fb91d61a
BLAKE2b-256 cc4738c9baaa10381859d806a26b62dd44a1bfe7d040f295ed8f0c245e4b1a8a

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp310-cp310-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp310-cp310-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 b2ce8e499d62057438cd8e200e73196bf1d5ff49a800781833a9acf1f74caeac
MD5 54b1d3bd8932ef08e9eabb8c636e87db
BLAKE2b-256 a9012e71936b09bd2987dcfda2a5d999113f3c699139f68dc9440c77fd0e853f

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp39-cp39-win_arm64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp39-cp39-win_arm64.whl
  • Upload date:
  • Size: 944.9 kB
  • Tags: CPython 3.9, Windows ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp39-cp39-win_arm64.whl
Algorithm Hash digest
SHA256 2982d918ae4ee71904db28cdd912829c47e11c0d5dc469a902f34ef20fa3b450
MD5 4d2daa93f8e12eb01cf4daffa291a7a4
BLAKE2b-256 a7bee9cc3b8c44e8262d8f799a1d7183de574936dd71ef9dc81c3f87968e661e

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp39-cp39-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp39-cp39-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.9, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp39-cp39-win_amd64.whl
Algorithm Hash digest
SHA256 76cf56829d7a4fcd5961d30fbbcc8f914909774c6789c6f0cf4b2c9ec35238a4
MD5 37e1c65cd660961ed36fda72d21457f4
BLAKE2b-256 add55e0e170c24e5a408af353069894d9df67060a8756ab7f18d12a80684d0cf

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 5b4492b1553a208d4e977ad586711bcda16933ed3927300dd31ea68f013583f0
MD5 7aafe55d5928b7c55f0048cfe4415894
BLAKE2b-256 f2a0bd23db2f7302a61890cf69ca00c9cea5a2516c8e21ddc28f5a23731e7225

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp39-cp39-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp39-cp39-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 6f6fd1fda86d5246e27582c4496cec69b228cd629a4da56c8b18146c98f67397
MD5 15ec52c2927d4edc94c3302cb0af44fd
BLAKE2b-256 85d4c50abe7a4d937c2a2c1ae8fe1864ae0221e05ea24e2f1e57f4c00df9e6ca

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp39-cp39-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp39-cp39-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 44b04092ba6a88697e48bb1397c69d858ef393b734f2db68f67f50ab1e537a5f
MD5 945cfdf664df265f0747444dbc978bcd
BLAKE2b-256 05ac79e711574181d47c379facd08d144361dfc1882034458d774bcb405b5a2b

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp38-cp38-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.1-cp38-cp38-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.8, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.1-cp38-cp38-win_amd64.whl
Algorithm Hash digest
SHA256 d585775e38f26f94dd9ea48657c183d77fdc39903d11bd74ffe79dea09c5cdc4
MD5 19a279948ea3cf283ae44997f9577af3
BLAKE2b-256 8af23b9aeffab7b5dded7f19534367853e6a220f08261dea504705cb4f9ca17f

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 184706adcdc5bcb6fe76d87579427f8e01baec4e88d2cfac1c81cafedbe0e1f8
MD5 da5cf9d0a6f41e14fbcb960f20f9f60e
BLAKE2b-256 b1551b203e4409b7864e02811c501027aab0fe3387aaa2e5fc435f6b73ab8833

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp38-cp38-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp38-cp38-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 34d63d75a2257a9a0b5c21a216c3e62707595ee90ee4ecb884ce18b5d59703c3
MD5 04bdef9d1c2abf24c7a179d83285673f
BLAKE2b-256 b0515213fedbc20784ba027e21d8d091bd7ac3bbd0279a3c68efa1da25c375d7

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.1-cp38-cp38-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.1-cp38-cp38-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 5535bf9518e23bd8695ac37776cfbefa620db28877cc1b04b9b5436625372210
MD5 fe92f4f6992bd6beda3e4b0f82b60d8f
BLAKE2b-256 47607b36717111682d7f6ee7db1f6fefcf3fb792db9ff12f13eecbf4547e3ebf

See more details on using hashes here.
