Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

To preserve your development environment, leap-ie does not modify any PyTorch or TensorFlow dependencies during installation. However, it does require that the following minimum versions are met:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0

TensorFlow

Library Version
tensorflow >=2.12.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('torchvision.resnet18')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all torchvision image classification models via leap_ie.vision.models.get_model('torchvision.[name of model]'). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier').
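
For example, the same generate call works with a model pulled from Hugging Face. This is a sketch, assuming get_model returns the same (preprocessing_fn, model, class_list) triple for Hugging Face models as it does for torchvision ones; the project name and target class here are arbitrary placeholders:

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

# Pull an image classification model from Hugging Face by its model id.
preprocessing_fn, model, class_list = get_model("nateraw/vit-age-classifier")

df_results, dict_results = engine.generate(
    project_name="age-classifier",
    model=model,
    class_list=class_list,
    config={"leap_api_key": "YOUR_API_KEY"},
    target_classes=[0],
    preprocessing=preprocessing_fn,
)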

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie, and wrap your model in our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

# Wrap a model whose forward pass returns a dictionary so that it returns raw logits.
class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        x = self.model(x)
        return x["logits"]

model = ModelWrapper(your_model)
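
If your model returns probabilities instead of logits, a similar wrapper can recover logit-like scores by taking the log of the probabilities (identical to the true logits up to a per-sample additive constant). This is a sketch, not something leap-ie provides:

import torch
import torch.nn as nn

class ProbsToLogitsWrapper(nn.Module):
    def __init__(self, model, eps=1e-12):
        super().__init__()
        self.model = model
        self.eps = eps

    def forward(self, x):
        probs = self.model(x)
        # log(p_i) = z_i - logsumexp(z), so differences between logits are preserved.
        return torch.log(probs + self.eps)

model = ProbsToLogitsWrapper(your_model)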

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a Jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
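
Outside the app, a quick way to sanity-check the returned objects is to print their shapes; the exact columns and keys depend on your run, so treat this as a sketch:

# Inspect the dataframe and the dictionary of numpy arrays returned by engine.generate.
print(df_results.columns.tolist())
print(df_results.head())

for key, value in dict_results.items():
    print(key, getattr(value, "shape", type(value)))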

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If using TensorFlow, channels-last, e.g. [1, height, width, channels].
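
If your images are stored channels-last (as NumPy arrays typically are) and you're running in PyTorch mode, you may need to permute them first; a minimal sketch:

import numpy as np
import torch

# A channels-last batch, e.g. loaded from disk: [batch, height, width, channels].
images_np = np.random.rand(1, 224, 224, 3).astype(np.float32)

# PyTorch mode ('pt') expects channels first: [batch, channels, height, width].
samples_pt = torch.from_numpy(images_np).permute(0, 3, 1, 2)
print(samples_pt.shape)  # torch.Size([1, 3, 224, 224])

# TensorFlow mode ('tf') expects channels last, so the array can be passed as-is.
samples_tf = images_np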

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]; if TensorFlow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference (see the sketch after this list).

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [num_images, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow. Default is 'pt'.

    • Required: No
    • Default: pt
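
As an illustration of the preprocessing argument referenced above, here is a sketch of a torchvision preprocessing function for an ImageNet-style PyTorch model; the resize and normalisation values are the standard ImageNet ones, not something leap-ie requires:

from torchvision import transforms

# Applied to inputs before they reach the model; use whatever you used at inference time.
preprocessing_fn = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Then pass it via engine.generate(..., preprocessing=preprocessing_fn).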

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.
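
For example, you could save your settings to a file once and pass the path instead of the dictionary; a sketch assuming a local file name of your choosing:

import json

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 0,
    "max_steps": 1500,
}

with open("leap_config.json", "w") as f:
    json.dump(config, f)

# Later:
# engine.generate(project_name="my_project", model=model, class_list=class_list, config="leap_config.json")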

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500

Here are all of the config options currently available:

config = {
    "alpha_mask": False,          # bool
    "alpha_only": False,          # bool
    "alpha_weight": 1,            # int
    "baseline_init": 0,           # int or str
    "diversity_weight": 0,        # int
    "find_lr_steps": 500,         # int
    "hf_weight": 0,               # int
    "input_dim": [3, 224, 224],   # list; [224, 224, 3] if mode is "tf"
    "isolate_classes": None,      # list
    "isolation": True,            # bool
    "isolation_hf_weight": 1,     # int
    "isolation_lr": 0.05,         # float
    "log_freq": 100,              # int
    "lr": 0.05,                   # float
    "max_isolate_classes": 3,     # int
    "max_lr": 1.0,                # float
    "max_steps": 1500,            # int
    "min_lr": 0.0001,             # float
    "mode": "pt",                 # str
    "num_lr_windows": 50,         # int
    "project_name": "your_project_name",  # str
    "samples": None,              # list
    "seed": 0,                    # int
    "stop_lr_early": True,        # bool
    "transform": "xl",            # str
    "use_alpha": False,           # bool
    "use_baseline": False,        # bool
    "use_hipe": False,            # bool
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in generating prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for the learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for the learning rate finder.

    • Default: 0.0001
  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
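
For instance, the lr and find_lr_steps options interact as described above: to set the learning rate manually, you disable the finder. A sketch of such a config (the values are illustrative):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "find_lr_steps": 0,   # skip the automatic learning rate finder
    "lr": 0.02,           # and provide a learning rate manually
    "isolation": False,   # prototypes only, no feature isolation
    "hf_weight": 0,       # start at zero and gradually increase if prototypes are noisy
}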

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
