Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any PyTorch or Tensorflow dependencies, in order to preserve your development environment. However, leap-ie requires the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0
timm >=0.9.12

Tensorflow

Library Version
tensorflow >=2.4.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

Tensorflow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('resnet18', source='torchvision')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all image classification torchvision models via leap_ie.vision.models.get_model("model_name", source="torchvision"). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier', source='huggingface').
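
For example, the same flow works with a Hugging Face model id (a hedged sketch, reusing the imports and config from the snippet above; the target class index is just for illustration):

# Pull an image classification model from Hugging Face by its model id.
preprocessing_fn, model, class_list = get_model(
    "nateraw/vit-age-classifier", source="huggingface"
)

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)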

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie, and wrap your model in our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

# Example wrapper: unpack a dictionary output so the model returns raw logits.
class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        x = self.model(x)
        return x["logits"]

model = ModelWrapper(your_model)

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
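
As a minimal, hedged sketch of inspecting the returned objects (continuing from the generate() calls above; the exact column names and dictionary keys depend on your run):

# df_results is a pandas dataframe; dict_results maps names to numpy arrays.
print(df_results.columns.tolist())
print({key: arr.shape for key, arr in dict_results.items()})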

Supported Frameworks

We support both pytorch and tensorflow! Specify your framework with the mode parameter, using 'tf' for tensorflow and 'pt' for pytorch.

If using pytorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If using tensorflow, channels last, e.g. [1, height, width, channels].
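
For example (a sketch of the expected input shapes only; the zero tensors are placeholders):

import numpy as np
import torch

# PyTorch (mode="pt"): channels first, e.g. one 224x224 RGB image.
pt_batch = torch.zeros(1, 3, 224, 224)

# TensorFlow (mode="tf"): channels last.
tf_batch = np.zeros((1, 224, 224, 3), dtype=np.float32)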

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using pytorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If tensorflow, channels last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a JSON file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}. See the example after this parameter list.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using tensorflow, or [num_images, channels, height, width] if using pytorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for pytorch or 'tf' for tensorflow. Default is 'pt'.

    • Required: No
    • Default: pt
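
As noted for the config parameter, you can pass a path to a JSON file instead of a dictionary. A hedged sketch (the filename is arbitrary, and your_model is a placeholder as in the Usage example):

import json

from leap_ie.vision import engine

# Write a minimal config file containing only the required Leap API key.
with open("leap_config.json", "w") as f:
    json.dump({"leap_api_key": "YOUR_LEAP_API_KEY"}, f)

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config="leap_config.json",  # path to the JSON file instead of a dict
)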

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr (see the example after this list).

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
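
For example, to skip the automatic learning rate finder and set the learning rate yourself (a hedged sketch; the lr and max_steps values here are illustrative, not recommendations):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "find_lr_steps": 0,   # disable the automatic learning rate finder
    "lr": 0.01,           # hypothetical manually-chosen learning rate
    "max_steps": 2000,    # run generation for longer if results look indistinct
}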

Here are all of the config options currently available:

config = {
    "alpha_mask": False,           # bool
    "alpha_only": False,           # bool
    "alpha_weight": 1,             # int
    "baseline_init": 0,            # int or str
    "diversity_weight": 0,         # int
    "find_lr_steps": 500,          # int
    "hf_weight": 0,                # int
    "input_dim": [3, 224, 224],    # list ([224, 224, 3] for tensorflow)
    "isolate_classes": None,       # list
    "isolation": True,             # bool
    "isolation_hf_weight": 1,      # int
    "isolation_lr": 0.05,          # float
    "log_freq": 100,               # int
    "lr": 0.05,                    # float
    "max_isolate_classes": 3,      # int
    "max_lr": 1.0,                 # float
    "max_steps": 1500,             # int
    "min_lr": 0.0001,              # float
    "mode": "pt",                  # str
    "num_lr_windows": 50,          # int
    "project_name": ...,           # str (no default)
    "samples": None,               # list
    "seed": 0,                     # int
    "stop_lr_early": True,         # bool
    "transform": "xl",             # str
    "use_alpha": False,            # bool
    "use_baseline": False,         # bool
    "use_hipe": False,             # bool
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in the generation of prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0

  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001

  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, allowing it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
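
Putting a few of these options together (a hedged sketch; the class indices, seed, and entity name are placeholders):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "wandb_api_key": "YOUR_WANDB_API_KEY",   # optional: also log to your WandB dashboard
    "wandb_entity": "your_wandb_entity",
    "isolate_classes": [[2, 7, 8], [2, 3]],  # one list of class indices per target class
    "use_baseline": True,                    # generate an equidistant baseline input first
    "seed": 42,                              # random seed for initialisation
}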

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
