
Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any PyTorch or TensorFlow dependencies, in order to preserve your development environment. However, leap-ie requires the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0

TensorFlow

Library Version
tensorflow >=2.12.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('torchvision.resnet18')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all torchvision image classification models via leap_ie.vision.models.get_model('torchvision.[model name]'). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier').
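
For example (we assume the Hugging Face loader returns the same (preprocessing_fn, model, class_list) tuple as the torchvision one):

from leap_ie.vision.models import get_model

# Any torchvision image classification model, by name:
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# Or an image classification model from Hugging Face, by model id:
preprocessing_fn, model, class_list = get_model("nateraw/vit-age-classifier")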

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie and pass your model to our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities), you might have to edit it or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    # Unwraps a model whose forward pass returns a dictionary,
    # so that engine.generate() receives a plain batch of logits.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        return out["logits"]

model = ModelWrapper(your_model)
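
If your model instead applies a softmax and returns probabilities, the cleanest fix is to remove the final softmax layer. As a rough fallback, the hypothetical wrapper below takes the log of the probabilities, which recovers the logits up to a per-sample additive constant:

import torch
import torch.nn as nn

class LogitsFromProbs(nn.Module):
    # Hypothetical wrapper: converts probabilities back into logit-like scores.
    def __init__(self, model, eps=1e-12):
        super().__init__()
        self.model = model
        self.eps = eps

    def forward(self, x):
        probs = self.model(x)
        return torch.log(probs + self.eps)

model = LogitsFromProbs(your_model)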

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a Jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
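
For a quick sanity check of what came back, you can inspect the dictionary directly (the key names vary depending on whether you generated prototypes or isolated features on samples):

# df_results is a pandas dataframe; dict_results maps names to numpy arrays.
for name, array in dict_results.items():
    print(name, array.shape)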

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect images to be in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels-last, e.g. [1, height, width, channels].
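
For example, to get a single RGB image into the expected layout for each framework (a minimal sketch using a random array in place of real data):

import numpy as np
import torch

image_hwc = np.random.rand(224, 224, 3).astype(np.float32)  # height, width, channels

# PyTorch: channels first, with a leading batch dimension -> [1, 3, 224, 224]
pt_batch = torch.from_numpy(image_hwc).permute(2, 0, 1).unsqueeze(0)

# TensorFlow: channels last, with a leading batch dimension -> [1, 224, 224, 3]
tf_batch = image_hwc[None, ...]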

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input and return a batch of logits (NOT probabilities). If using PyTorch, we expect images to be in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels-last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [num_images, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow. Default is 'pt'.

    • Required: No
    • Default: pt
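
For reference, a minimal TensorFlow call might look like the sketch below (the toy Keras model and class list are purely illustrative; the final Dense layer has no activation, so it returns logits):

import tensorflow as tf
from leap_ie.vision import engine

toy_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),  # logits, no softmax
])

df_results, dict_results = engine.generate(
    project_name="tf-example",
    model=toy_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
    mode="tf",
)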

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
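
Because config can be supplied either as a dictionary or as a path to a .json file, you can keep your settings alongside your code. A minimal sketch (the filename and values are illustrative):

import json
from leap_ie.vision import engine

with open("leap_config.json", "w") as f:
    json.dump({"leap_api_key": "YOUR_LEAP_API_KEY", "hf_weight": 1, "max_steps": 2000}, f)

df_results, dict_results = engine.generate(
    project_name="my-project",
    model=your_model,
    class_list=class_list,
    config="leap_config.json",
)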

Here are all of the config options currently available:

config = {
    alpha_mask: bool = False
    alpha_only: bool = False
    alpha_weight: int = 1
    baseline_init: int = 0
    diversity_weight: int = 0
    find_lr_steps: int = 500
    hf_weight: int = 0
    input_dim: tuple = [3, 224, 224]
    isolate_classes: list = None
    isolation: bool = True
    isolation_hf_weight: int = 1
    isolation_lr: float = 0.05
    log_freq: int = 100
    lr: float = 0.05
    max_isolate_classes: int = 3
    max_lr: float = 1.0
    max_steps: int = 1500
    min_lr: float = 0.0001
    mode: str = "pt"
    num_lr_windows: int = 50
    project_name: str
    samples: list = None
    seed: int = 0
    stop_lr_early: bool = True
    transform: str = "xl"
    use_alpha: bool = False
    use_baseline: bool = False
    use_hipe: bool = False
    }
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in generating prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0

  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500

  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001

  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your weights and biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
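
Putting a few of these together: you only need to set the keys you want to change, and the rest should fall back to the defaults above. The values below are illustrative:

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "hf_weight": 2,
    "isolation": True,
    "isolate_classes": [[2, 7, 8], [2, 3]],  # one list of class indices per target class
    "max_steps": 2000,
    "seed": 42,
}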

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).
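
To make the idea concrete, here is a deliberately naive sketch of the underlying principle - gradient ascent on a class logit with respect to the input. It is not the engine's actual procedure, which adds regularisation and transformations (see hf_weight and transform in the config) to avoid adversarial noise:

import torch

def naive_prototype(model, class_index, steps=200, lr=0.05, input_dim=(3, 224, 224)):
    # Start from a blank input and nudge it to maximise one class logit.
    x = torch.zeros(1, *input_dim, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, class_index]  # ascend on the target logit
        loss.backward()
        optimizer.step()
    return x.detach()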

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
