
Leap Labs Interpretability Engine

Reason this release was yanked:

For some models, input validation can fail when checking for a valid number of classes.

Project description

Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any dependencies related to PyTorch or TensorFlow, in order to preserve your development environment. However, leap-ie requires the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0
timm >=0.9.12

TensorFlow

Library Version
tensorflow >=2.4.0
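
If you're not sure which versions you already have, you can check them quickly from Python (a sketch for the PyTorch stack; only import the framework you actually use):

import torch
import torchvision
import timm

# Print installed versions to compare against the requirements above.
print(torch.__version__, torchvision.__version__, timm.__version__)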

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]
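
Note that some shells (e.g. zsh) treat square brackets specially, so you may need to quote the extra:

pip install "leap-ie[with-torch]"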

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('resnet18', source='torchvision')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all torchvision image classification models via leap_ie.vision.models.get_model("model_name", source="torchvision"). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier', source='huggingface').

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie, and wrap your model in our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    # Example wrapper for a model whose forward() returns a dictionary:
    # pass the input through and return only the logits.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        outputs = self.model(x)
        return outputs["logits"]

model = ModelWrapper(your_model)
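
Similarly, if your model applies a softmax internally and returns probabilities, one option is a wrapper that maps them back to log space before they reach engine.generate(). This is a minimal sketch (hypothetical, assuming the engine only needs scores up to a per-sample additive constant, which is what log-probabilities provide):

import torch
import torch.nn as nn

class LogitWrapper(nn.Module):
    # Hypothetical wrapper for a model that returns probabilities:
    # log(probabilities) equals the original logits up to a per-sample constant.
    def __init__(self, model, eps=1e-12):
        super().__init__()
        self.model = model
        self.eps = eps  # avoids log(0)

    def forward(self, x):
        probs = self.model(x)
        return torch.log(probs + self.eps)

model = LogitWrapper(your_model)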

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a Jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'pt' for PyTorch and 'tf' for TensorFlow.

If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].
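
For example, the same dummy batch in each layout (torch tensors used purely to illustrate the shapes):

import torch

batch_pt = torch.zeros(1, 3, 224, 224)    # channels-first, for mode="pt"
batch_tf = batch_pt.permute(0, 2, 3, 1)   # channels-last, for mode="tf"
print(batch_pt.shape)  # torch.Size([1, 3, 224, 224])
print(batch_tf.shape)  # torch.Size([1, 224, 224, 3])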

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a JSON file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}. For loading from a file, see the sketch after this parameter list.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, channels, height, width] if using PyTorch, or [num_images, height, width, channels] if using TensorFlow.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow.

    • Required: No
    • Default: pt
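
As noted for the config parameter, you can pass a path to a JSON file instead of a dictionary. A minimal sketch (the file name here is illustrative):

import json

from leap_ie.vision import engine

# Write a minimal configuration file - at minimum it needs your Leap API key.
with open("leap_config.json", "w") as f:
    json.dump({"leap_api_key": "YOUR_LEAP_API_KEY"}, f)

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config="leap_config.json",
)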

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500

Here are all of the config options currently available:

config = {
    "alpha_mask": False,
    "alpha_only": False,
    "alpha_weight": 1,
    "baseline_init": 0,
    "diversity_weight": 0,
    "find_lr_steps": 500,
    "hf_weight": 0,
    "input_dim": [3, 224, 224],
    "isolate_classes": None,
    "isolation": True,
    "isolation_hf_weight": 1,
    "isolation_lr": 0.05,
    "log_freq": 100,
    "lr": 0.05,
    "max_isolate_classes": 3,
    "max_lr": 1.0,
    "max_steps": 1500,
    "min_lr": 0.0001,
    "mode": "pt",
    "num_lr_windows": 50,
    "project_name": "your_project_name",  # no default - set this to your project
    "samples": None,
    "seed": 0,
    "stop_lr_early": True,
    "transform": "xl",
    "use_alpha": False,
    "use_baseline": False,
    "use_hipe": False,
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in the generation of prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0

  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001

  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl'], or set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
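
Putting a few of these options together, a tuned configuration might look like this (the specific values are illustrative starting points, not recommendations):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 1,                           # increase gradually if prototypes look noisy
    "max_steps": 3000,                        # run longer if prototypes or isolations look indistinct
    "isolate_classes": [[2, 7, 8], [2, 3]],   # class indices to isolate, one list per target
    "baseline_init": "r",                     # random initialisation for more varied results across seeds
    "wandb_api_key": "YOUR_WANDB_API_KEY",    # optional: log results to your WandB dashboard
    "wandb_entity": "your_wandb_entity",
}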

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.
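
To make that concrete, here is a bare-bones sketch of the general technique (plain activation maximisation by gradient ascent). It is only an illustration of the underlying idea, not the Leap engine itself, which additionally uses frequency penalties, transforms, learning-rate finding and baselines as described in the Config section:

import torch

def naive_prototype(model, class_idx, steps=200, lr=0.05, shape=(1, 3, 224, 224)):
    x = torch.zeros(shape, requires_grad=True)  # start from a blank input
    optimiser = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        logits = model(x)
        loss = -logits[0, class_idx]            # push the target class score up
        loss.backward()
        optimiser.step()
    return x.detach()                           # a crude "prototype" for class_idx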

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See the tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

leap_ie-0.2.0-cp312-cp312-win_arm64.whl (919.6 kB): CPython 3.12, Windows ARM64
leap_ie-0.2.0-cp312-cp312-win_amd64.whl (1.1 MB): CPython 3.12, Windows x86-64
leap_ie-0.2.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.6 MB): CPython 3.12, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.0-cp312-cp312-macosx_11_0_arm64.whl (1.2 MB): CPython 3.12, macOS 11.0+ ARM64
leap_ie-0.2.0-cp312-cp312-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.12, macOS 10.9+ x86-64
leap_ie-0.2.0-cp311-cp311-win_arm64.whl (945.9 kB): CPython 3.11, Windows ARM64
leap_ie-0.2.0-cp311-cp311-win_amd64.whl (1.1 MB): CPython 3.11, Windows x86-64
leap_ie-0.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.4 MB): CPython 3.11, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.0-cp311-cp311-macosx_11_0_arm64.whl (1.2 MB): CPython 3.11, macOS 11.0+ ARM64
leap_ie-0.2.0-cp311-cp311-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.11, macOS 10.9+ x86-64
leap_ie-0.2.0-cp310-cp310-win_arm64.whl (942.6 kB): CPython 3.10, Windows ARM64
leap_ie-0.2.0-cp310-cp310-win_amd64.whl (1.1 MB): CPython 3.10, Windows x86-64
leap_ie-0.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.6 MB): CPython 3.10, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.0-cp310-cp310-macosx_11_0_arm64.whl (1.3 MB): CPython 3.10, macOS 11.0+ ARM64
leap_ie-0.2.0-cp310-cp310-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.10, macOS 10.9+ x86-64
leap_ie-0.2.0-cp39-cp39-win_arm64.whl (944.9 kB): CPython 3.9, Windows ARM64
leap_ie-0.2.0-cp39-cp39-win_amd64.whl (1.1 MB): CPython 3.9, Windows x86-64
leap_ie-0.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.6 MB): CPython 3.9, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.0-cp39-cp39-macosx_11_0_arm64.whl (1.3 MB): CPython 3.9, macOS 11.0+ ARM64
leap_ie-0.2.0-cp39-cp39-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.9, macOS 10.9+ x86-64
leap_ie-0.2.0-cp38-cp38-win_amd64.whl (1.1 MB): CPython 3.8, Windows x86-64
leap_ie-0.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.8 MB): CPython 3.8, manylinux: glibc 2.17+ x86-64
leap_ie-0.2.0-cp38-cp38-macosx_11_0_arm64.whl (1.2 MB): CPython 3.8, macOS 11.0+ ARM64
leap_ie-0.2.0-cp38-cp38-macosx_10_9_x86_64.whl (1.3 MB): CPython 3.8, macOS 10.9+ x86-64

File details

Details for the file leap_ie-0.2.0-cp312-cp312-win_arm64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp312-cp312-win_arm64.whl
  • Upload date:
  • Size: 919.6 kB
  • Tags: CPython 3.12, Windows ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp312-cp312-win_arm64.whl
Algorithm Hash digest
SHA256 34b5fec195b73b80a0a7acf566a8b5e397913e193cdedd39d4a4393c5b5e37a2
MD5 3a1e89df672e8f1687bf5da20acb23e8
BLAKE2b-256 f4a8facab42f1f66897dc8c41d5dba583bc4ef5f926bae7aa047c19280656408

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp312-cp312-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp312-cp312-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.12, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 bba0dca8576b39c9f94889ace05eef26154bd689d33af0f663f7a26c0f813190
MD5 e386de8bfb968d81db7d36d62e079348
BLAKE2b-256 101e6832d0198e7040f442f4696c725954d6f4b39b10ab0221e197c6127b6cb1

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 fdb23ba435515ff4562f74dc2885f959383e791893d2e34883ed96be0eb6fa90
MD5 c8e88fc169a06eb36bb4f8a8901a826b
BLAKE2b-256 31aacdff9bb7b09f0c2d0979c0969fa96315be4c8ac4aed5ed92ef561793503c

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp312-cp312-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp312-cp312-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 6edbd41d5c3a96f103cfccf64b4a2782aa194f0567f963fc3670f5c952336c55
MD5 04696fae58e63259d832f50ea30b46f4
BLAKE2b-256 3cd559dfe9d59614a1e19f288169914896361c211f7271e0e168c891f9e00268

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp312-cp312-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp312-cp312-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 e54c7bdf778c8b6f159af8d389b145f06e759b1ba034f9fe122a7e1680e627c1
MD5 87d6d4b832f259d8bdb428d794f697b3
BLAKE2b-256 aa49d8aaa90540de477842ec01ef2aa0e38e45b629545c07d29bfacd3d8fc75d

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp311-cp311-win_arm64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp311-cp311-win_arm64.whl
  • Upload date:
  • Size: 945.9 kB
  • Tags: CPython 3.11, Windows ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp311-cp311-win_arm64.whl
Algorithm Hash digest
SHA256 6c2d3a75f074e708b1daad856a07f0c4d08f79c8f6f8abf50837e4dad0d8cb09
MD5 8c893533be44a1d7549cf628acd0ea2f
BLAKE2b-256 8e2edae15c439de45e3292d08b442c4fc1a5860871537c57b5bd0b9f7ad51d2f

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp311-cp311-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp311-cp311-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.11, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 44aff2413cfaaeb3636481a1399d734581d38795134982d5c8bbcdb237635a31
MD5 41fc42b1e78fec26cc640dd5527fe9d5
BLAKE2b-256 7e250f92faba928feb36228bec49a13ec3d566c28209ef2727231e00c8613c4d

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 36078bab57d20d057e048c5db9d7c6f0a5d8ee217926fdbe26c54b800f31ebab
MD5 0962a4ee8999a94da98b4ba4d323a4fb
BLAKE2b-256 5b405971c046a272688a615bf80290cfc335fb9fae87d83a214d95376218619e

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp311-cp311-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp311-cp311-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 c73f960046d9bd9a914f5707bb9127cbc42ba932b9e391bf5ada742aa0f3f5cf
MD5 95895dae2029f9982057177898529cd7
BLAKE2b-256 830529f6cad830f7c46e9789151d877e23342f488e7bcdff4ee3b72cd6bc101c

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp311-cp311-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp311-cp311-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 98773a8c2ac8ced8ae47575c9191bf784d6f74264a0ede0d3d9adbc50166a520
MD5 3f9b584e3528052664ef7dffd581d1cd
BLAKE2b-256 c4d880edaa8bf347c28f33e1c90b60f433d072e725f08ed16a030d7369b83a9c

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp310-cp310-win_arm64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp310-cp310-win_arm64.whl
  • Upload date:
  • Size: 942.6 kB
  • Tags: CPython 3.10, Windows ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp310-cp310-win_arm64.whl
Algorithm Hash digest
SHA256 423854741046d2659f9c3480e8579148a9c2a2ed2df427a34305113ddc1a3f00
MD5 20df0191996c7a7e2812907c50d88a08
BLAKE2b-256 bcd29a38be64ef754f8d6ffc6f13ad00ad6f049d68cca4bd9d2e2010ccfeee66

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp310-cp310-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp310-cp310-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.10, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 1b515fe4928be8e89377750bccb4c7666c173bc16e1ae09ea8c9da772a3c9d05
MD5 ec040996f5df7ae840729ccecf20afd8
BLAKE2b-256 9838dfab907cee88655c699372da114fdc41d546fcf7599254c08b049602d48e

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 21c63ed76ab7b97b01714e02a2dbd38476fdfbd65f643084e778365cc0402eb0
MD5 b6b622dc9b77ffe99ee6a6882c4b24da
BLAKE2b-256 34c059ad2e888731e5f62922b7e985b390c8dfc478452121b66ab940a1aee332

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp310-cp310-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp310-cp310-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 63a926a9b35b4b4e1640f3a1aac373140e0fcf71f47cff733f3a153e4a73d817
MD5 25f34d499c7712e3be6c01a7fea0b449
BLAKE2b-256 b5280e4378f1e573f16a70ed6769b0fe7bb9bcbf2990102f0df4ee80aad10d73

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp310-cp310-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp310-cp310-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 fd383a41b71c764100664d95831bb1002527474088faf862708e09eb2c9deca2
MD5 167bf6af53ea69c7d877ac79f5c5aa4f
BLAKE2b-256 caa1f8d7993f6f0674a84506103d44654369b4f41662901e8f8f6ba06e1d984a

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp39-cp39-win_arm64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp39-cp39-win_arm64.whl
  • Upload date:
  • Size: 944.9 kB
  • Tags: CPython 3.9, Windows ARM64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp39-cp39-win_arm64.whl
Algorithm Hash digest
SHA256 3b2b270e9f89015de4fa08e298cb31c60f5a2f852c2ed757b13efef12b796fc0
MD5 da22d51891abfa0707d9683e59e67b1c
BLAKE2b-256 198c5e61073deacf2d9a804f1f6190531b21bbe22ba9258d7b68348d246be967

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp39-cp39-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp39-cp39-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.9, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp39-cp39-win_amd64.whl
Algorithm Hash digest
SHA256 abc97420c623809238c482400f27f2b1ec74a1428e8ae2fbe772659de960621d
MD5 6ed45d0a7dccec0892467317d7f812ea
BLAKE2b-256 9130ed83ae912ef52d8990b7c7384f5f17d7b21b263a2e19cf6ad006be9a7671

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 532cac233880beb616107d941bd1a5227d3558005e1663060b1787a2611673ef
MD5 9dcdc42efb66f4bcd7108dafbaa700a6
BLAKE2b-256 57542281087fd48b600ebffa60bbd484e447e9c5dbf47dbcf407689749bf1b36

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp39-cp39-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp39-cp39-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 f82cd8241b19c29ee36e3691040a73a3c72068873a700bbaf8536840b3732bf7
MD5 a0ed0853855c6cc43be8cbfc0332a9ab
BLAKE2b-256 00e25853b9c845e4c6ca1dcb6b4f8bedd04e0c2970015e8d2b124da376c58257

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp39-cp39-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp39-cp39-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 b4961831d65a8846d152054bb7776215e249349560ebbaa06f0a0cf985632051
MD5 046d8b7addc6e3ff2d73c660c5505850
BLAKE2b-256 0f5ea9bc0c918fc1059da49caf20c296f81b389d5679dca7a0903ca3cb2bae92

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp38-cp38-win_amd64.whl.

File metadata

  • Download URL: leap_ie-0.2.0-cp38-cp38-win_amd64.whl
  • Upload date:
  • Size: 1.1 MB
  • Tags: CPython 3.8, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for leap_ie-0.2.0-cp38-cp38-win_amd64.whl
Algorithm Hash digest
SHA256 34573ae99e8ef904b2613372e3ce3ed5afadf28522dc8e6230a1da3a1f0a27fc
MD5 ea3fb942412030fc0908ba7e4541d672
BLAKE2b-256 528e00a698c5d790565a696b9034d412c3e01a9fdd97c83cc3f5d81b4cdaa605

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 39b7955bd11094a527fc82627ba46b93fcc88bc18ca6ca802cb34cf2dca742ad
MD5 4a437b430d2244b576602b6c76a711cb
BLAKE2b-256 9394590628a5e41ca89b10b3244a2a4e602d576b472b1be4527c8fc1c559a8a2

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp38-cp38-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp38-cp38-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 e3a3126c5354f15f9fa863d2840a99da1ae6373b5a70114db82f250f58cacf2e
MD5 9ad2cac76ec8233a478eb1c0961bc2b4
BLAKE2b-256 68d4bfdeaef9600c86f7a3646450e4ebeda7bc271e97baaa42686f15b9d0166d

See more details on using hashes here.

File details

Details for the file leap_ie-0.2.0-cp38-cp38-macosx_10_9_x86_64.whl.

File metadata

File hashes

Hashes for leap_ie-0.2.0-cp38-cp38-macosx_10_9_x86_64.whl
Algorithm Hash digest
SHA256 cd043769bd1bd78b43f223cd984770ea75df86751243cdf69e2733c25b7567a5
MD5 799e81656ed96f5542f5b4b5878c0fee
BLAKE2b-256 77bdcad82273de98cad9d82388d9730fd15b53d335cd5a88fddbcc1d6af51781

See more details on using hashes here.
