
Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any PyTorch or Tensorflow dependencies, so your existing development environment is left untouched. However, leap-ie does require that the following minimum versions are met:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0

Tensorflow

Library Version
tensorflow >=2.12.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

Tensorflow

pip install leap-ie[with-tensorflow]
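
If you're not sure which versions you already have, a quick check like the one below (just a convenience snippet, not part of leap-ie) will tell you:

from importlib.metadata import PackageNotFoundError, version

for package, minimum in [("torch", "1.13.0"), ("torchvision", "0.14.0"), ("tensorflow", "2.12.0")]:
    try:
        print(f"{package}: {version(package)} installed (needs >={minimum})")
    except PackageNotFoundError:
        print(f"{package}: not installed")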

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('torchvision.resnet18')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all image classification torchvision models via leap_ie.vision.models.get_model('torchvision.[name of model]'). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier').
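
For example (resnet50 here is just one of the torchvision options):

from leap_ie.vision.models import get_model

# Any torchvision image classifier, by name:
preprocessing_fn, model, class_list = get_model("torchvision.resnet50")

# Or an image classification model from the Hugging Face hub, by its model id:
preprocessing_fn, model, class_list = get_model("nateraw/vit-age-classifier")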

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie and pass your model to our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    # Unwraps a dictionary output so that generate() receives raw logits.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        return out["logits"]

model = ModelWrapper(your_model)

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a Jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
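
If you'd rather poke at the raw outputs first, something like this works (the exact keys in dict_results depend on your run, so we simply loop over whatever is returned):

engine.display_df(df_results)  # inline view in a Jupyter notebook

# dict_results maps names to numpy arrays
for name, array in dict_results.items():
    print(name, array.shape)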

Supported Frameworks

We support both pytorch and tensorflow! Specify your framework with the mode parameter, using 'tf' for tensorflow and 'pt' for pytorch.

If using pytorch, we expect images to be in channels-first format, e.g. of shape [1, channels, height, width]. If using tensorflow, channels-last, e.g. [1, height, width, channels].
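
If your images are in the other layout, a transpose before calling engine.generate keeps the shapes consistent. A minimal sketch (the random array here just stands in for your own data):

import numpy as np
import torch

# Suppose you have a channels-last image batch, e.g. from an image loader:
image_hwc = np.random.rand(1, 224, 224, 3).astype(np.float32)

# PyTorch (mode="pt") expects channels-first: [batch, channels, height, width]
samples_pt = torch.from_numpy(image_hwc).permute(0, 3, 1, 2)

# TensorFlow (mode="tf") expects channels-last: [batch, height, width, channels]
samples_tf = image_hwc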

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those; if not, we'll isolate features for the three highest-probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)
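
To isolate features for several images at once, you can stack them into a single batch first. A minimal sketch, reusing the preprocessing_fn returned by get_model above (the second file name is just a placeholder):

import torch
from PIL import Image
from torchvision import transforms

tt = transforms.ToTensor()
image_paths = ["tools.jpeg", "another_image.jpeg"]  # placeholder file names

# Preprocess each image as above and stack into one [num_images, C, H, W] batch,
# then pass it to engine.generate(samples=batch, ...) exactly as in the example.
batch = torch.cat(
    [preprocessing_fn[0](tt(Image.open(p)).unsqueeze(0)) for p in image_paths],
    dim=0,
)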

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using pytorch, we expect images to be in channels-first format, e.g. of shape [1, channels, height, width]. If using tensorflow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using tensorflow, or [num_images, channels, height, width] if using pytorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for pytorch or 'tf' for tensorflow. Default is 'pt'.

    • Required: No
    • Default: pt

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.
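
For example, the same configuration can be passed either way (hf_weight and max_steps here are just illustrative overrides of options documented below):

import json

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",  # required
    "hf_weight": 0,
    "max_steps": 1500,
}

# Either pass the dictionary directly ...
with open("leap_config.json", "w") as f:
    json.dump(config, f)

# ... or point generate() at the file instead:
# engine.generate(..., config="leap_config.json")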

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500

Here are all of the config options currently available:

config = {
    alpha_mask: bool = False
    alpha_only: bool = False
    alpha_weight: int = 1
    baseline_init: int = 0
    diversity_weight: int = 0
    find_lr_steps: int = 500
    hf_weight: int = 0
    input_dim: tuple = [3, 224, 224]
    isolate_classes: list = None
    isolation: bool = True
    isolation_hf_weight: int = 1
    isolation_lr: float = 0.05
    log_freq: int = 100
    lr: float = 0.05
    max_isolate_classes: int = 3
    max_lr: float = 1.0
    max_steps: int = 1500
    min_lr: float = 0.0001
    mode: str = "pt"
    num_lr_windows: int = 50
    project_name: str
    samples: list = None
    seed: int = 0
    stop_lr_early: bool = True
    transform: str = "xl"
    use_alpha: bool = False
    use_baseline: bool = False
    use_hipe: bool = False
    }
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in generating prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001
  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your weights and biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).
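
For intuition only, here is a deliberately naive sketch of that idea (plain gradient ascent on a class logit in PyTorch). This is not the engine's implementation, which adds the transformations, frequency penalties and other safeguards described in the Config section:

import torch

def naive_prototype(model, class_index, input_dim=(3, 224, 224), steps=200, lr=0.05):
    # Start from a blank input and ascend the gradient of the target class logit.
    x = torch.zeros(1, *input_dim, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    model.eval()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = -model(x)[0, class_index]  # maximise the target logit
        loss.backward()
        optimizer.step()
    return x.detach()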

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
