
Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any PyTorch or TensorFlow dependencies, in order to preserve your development environment. However, leap-ie requires the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0

TensorFlow

Library Version
tensorflow >=2.12.0
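
If you're not sure which versions you already have, a quick optional check like the sketch below prints whatever is installed; only the framework you actually use needs to be present.

import importlib
import importlib.util

# Print the installed version of each supported framework, skipping any that are absent.
for pkg in ("torch", "torchvision", "tensorflow"):
    if importlib.util.find_spec(pkg) is not None:
        print(pkg, importlib.import_module(pkg).__version__)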

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('torchvision.resnet18')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all image classification torchvision models via leap_ie.vision.models.get_model('torchvision.[name of model]'). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier').
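
For example, loading the Hugging Face model mentioned above uses exactly the same interface:

from leap_ie.vision.models import get_model

# Same call signature as for torchvision models, just with a Hugging Face model id.
preprocessing_fn, model, class_list = get_model("nateraw/vit-age-classifier")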

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie and pass your model to our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    """Unwrap a dictionary output so that engine.generate() receives raw logits."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        return out["logits"]

model = ModelWrapper(your_model)
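
For the other case mentioned above (a model that returns probabilities), a minimal sketch is to map the outputs back to log-space, assuming they come from a softmax; the wrapper name here is just illustrative:

import torch
import torch.nn as nn

class LogitsFromProbs(nn.Module):
    """Hypothetical wrapper: convert probability outputs into logit-like values."""

    def __init__(self, model, eps=1e-12):
        super().__init__()
        self.model = model
        self.eps = eps

    def forward(self, x):
        probs = self.model(x)
        # Log-probabilities differ from the original logits only by a per-sample constant.
        return torch.log(probs.clamp_min(self.eps))

model = LogitsFromProbs(your_model)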

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a Jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
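
The exact columns and keys depend on your run; a quick, purely illustrative way to see what came back is:

# Enumerate the dataframe columns and the named numpy arrays returned by generate().
print(df_results.columns.tolist())
for name, array in dict_results.items():
    print(name, getattr(array, "shape", type(array)))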

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels-last, e.g. [1, height, width, channels].
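
If your images are stored in the other layout, a single permute converts between the two; this sketch assumes a PyTorch tensor:

import torch

# Hypothetical channels-last batch ([batch, height, width, channels]) permuted
# to channels-first ([batch, channels, height, width]) for a PyTorch model.
images_hwc = torch.rand(1, 224, 224, 3)
images_chw = images_hwc.permute(0, 3, 1, 2)
print(images_chw.shape)  # torch.Size([1, 3, 224, 224])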

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using PyTorch, we expect images in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [num_images, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow. Default is 'pt'.

    • Required: No
    • Default: pt

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide the config as a dictionary or as a path to a .json file.
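
For example, the same options can be written to a .json file and the file path passed in place of the dictionary (the file name here is arbitrary):

import json

# A small illustrative config saved to disk.
config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 0,
    "max_steps": 1500,
}
with open("leap_config.json", "w") as f:
    json.dump(config, f)

# Then pass the path instead of the dict:
# engine.generate(..., config="leap_config.json")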

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500

Here are all of the config options currently available:

config = {
    alpha_mask: bool = False
    alpha_only: bool = False
    alpha_weight: int = 1
    baseline_init: int = 0
    diversity_weight: int = 0
    find_lr_steps: int = 500
    hf_weight: int = 0
    input_dim: tuple = [3, 224, 224]
    isolate_classes: list = None
    isolation: bool = True
    isolation_hf_weight: int = 1
    isolation_lr: float = 0.05
    log_freq: int = 100
    lr: float = 0.05
    max_isolate_classes: int = 3
    max_lr: float = 1.0
    max_steps: int = 1500
    min_lr: float = 0.0001
    mode: str = "pt"
    num_lr_windows: int = 50
    project_name: str
    samples: list = None
    seed: int = 0
    stop_lr_early: bool = True
    transform: str = "xl"
    use_alpha: bool = False
    use_baseline: bool = False
    use_hipe: bool = False
    }
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in generating prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001
  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
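
Putting a few of these options together, a config might look like the sketch below; the values are illustrative starting points rather than recommendations:

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 0,              # start at zero; increase if prototypes come out noisy
    "isolation": True,           # also isolate features for entangled classes
    "max_isolate_classes": 3,
    "max_steps": 1500,
    "seed": 0,
    # Optional: also log results to Weights & Biases.
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
}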

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).
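
As a rough intuition only - this is not the engine's actual procedure, which adds regularisation, random transformations and learning-rate finding - the core idea can be sketched in a few lines of PyTorch:

import torch

def naive_prototype(model, class_idx, steps=200, lr=0.05, shape=(1, 3, 224, 224)):
    """Toy activation maximisation: optimise an input to maximise one class logit."""
    x = torch.zeros(shape, requires_grad=True)
    optimiser = torch.optim.Adam([x], lr=lr)
    model.eval()
    for _ in range(steps):
        optimiser.zero_grad()
        logits = model(x)
        loss = -logits[0, class_idx]  # push the target class logit up
        loss.backward()
        optimiser.step()
    return x.detach()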

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.

