Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

To preserve your development environment, leap-ie does not modify any PyTorch or Tensorflow dependencies during installation. However, leap-ie does require the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0

Tensorflow

Library Version
tensorflow >=2.12.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

Tensorflow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('torchvision.resnet18')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all image classification torchvision models via leap_ie.vision.models.get_model('torchvision.[model name]'). We can also automatically pull image classification models from Hugging Face - just use the model id, e.g. get_model('nateraw/vit-age-classifier'), as in the sketch below.
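
For example, here's a hedged sketch pulling that Hugging Face classifier and generating a prototype for its first class; the project name and target index are placeholders:

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

# Pull an image classification model directly from Hugging Face by model id.
preprocessing_fn, model, class_list = get_model("nateraw/vit-age-classifier")

df_results, dict_results = engine.generate(
    project_name="age-classifier",   # hypothetical project name
    model=model,
    class_list=class_list,
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
    target_classes=[0],              # generate a prototype for the first class
    preprocessing=preprocessing_fn,
)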

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie, and wrap your model in our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    # Wraps a model whose forward pass returns a dict, so that it returns raw logits.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        return out["logits"]

model = ModelWrapper(your_model)

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights and Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
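
If you also want to keep the raw arrays locally, the returned dictionary can be saved with plain numpy. A minimal sketch, assuming only that dict_results maps string keys to numpy arrays (the exact key names depend on your run):

import os
import numpy as np

os.makedirs("leap_results", exist_ok=True)
for key, array in dict_results.items():
    # Key names come from engine.generate; we simply persist whatever was returned.
    np.save(os.path.join("leap_results", f"{key}.npy"), np.asarray(array))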

Supported Frameworks

We support both pytorch and tensorflow! Specify your framework with the mode parameter, using 'tf' for tensorflow and 'pt' for pytorch.

If you are using pytorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If you are using tensorflow, channels-last, e.g. [1, height, width, channels].
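
If your images are stored channels-last (as most image loaders return them) but you're running a pytorch model, a quick permute before calling engine.generate is usually all that's needed. A minimal sketch, assuming a NumPy batch of shape [N, height, width, channels]:

import numpy as np
import torch

# Hypothetical channels-last batch, e.g. freshly loaded images: [N, H, W, C]
images_hwc = np.random.rand(1, 224, 224, 3).astype(np.float32)

# Rearrange to the channels-first layout expected with mode="pt": [N, C, H, W]
samples_pt = torch.from_numpy(images_hwc).permute(0, 3, 1, 2)

# With mode="tf", the original channels-last array can be passed as-is.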

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input and return a batch of logits (NOT probabilities). If using pytorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If tensorflow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using tensorflow, or [num_images, channels, height, width] if using pytorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for pytorch or 'tf' for tensorflow. Default is 'pt'.

    • Required: No
    • Default: pt

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide the config as a dictionary or as a path to a .json file.
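
For instance, a minimal sketch of the .json route; the file name and the option values below are placeholders:

import json

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 0,
    "max_steps": 1500,
}

# Write the options to disk once...
with open("leap_config.json", "w") as f:
    json.dump(config, f, indent=2)

# ...then pass the path instead of the dictionary:
# engine.generate(project_name="leap!", model=model, class_list=class_list, config="leap_config.json")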

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500

Here are all of the config options currently available:

config = {
    "alpha_mask": False,           # bool
    "alpha_only": False,           # bool
    "alpha_weight": 1,             # int
    "baseline_init": 0,            # int or str
    "diversity_weight": 0,         # int
    "find_lr_steps": 500,          # int
    "hf_weight": 0,                # int
    "input_dim": [3, 224, 224],    # list ([224, 224, 3] if mode is "tf")
    "isolate_classes": None,       # list
    "isolation": True,             # bool
    "isolation_hf_weight": 1,      # int
    "isolation_lr": 0.05,          # float
    "log_freq": 100,               # int
    "lr": 0.05,                    # float
    "max_isolate_classes": 3,      # int
    "max_lr": 1.0,                 # float
    "max_steps": 1500,             # int
    "min_lr": 0.0001,              # float
    "mode": "pt",                  # str
    "num_lr_windows": 50,          # int
    "project_name": "your_project_name",  # str (required; no default)
    "samples": None,               # list
    "seed": 0,                     # int
    "stop_lr_early": True,         # bool
    "transform": "xl",             # str
    "use_alpha": False,            # bool
    "use_baseline": False,         # bool
    "use_hipe": False,             # bool
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in the generation of prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here as one list per target class, e.g. [[2, 7, 8], [2, 3]] (see the sketch after this list).

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for the learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for the learning rate finder.

    • Default: 0.0001
  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl'], or set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your weights and biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
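
To tie a few of these options together, here's a hedged example that isolates features for specific classes per target (the nested-list format described under isolate_classes) and logs to WandB; all values are illustrative:

from leap_ie.vision import engine

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 0,                          # start at zero; increase if prototypes look noisy
    "isolate_classes": [[2, 7, 8], [2, 3]],  # one list of class indices per target class
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
}

df_results, dict_results = engine.generate(
    project_name="config-example",   # hypothetical project name
    model=your_model,
    class_list=class_list,
    config=config,
    target_classes=[0, 1],           # two targets, matching the two isolate_classes lists
)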

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
