
Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any dependencies related to PyTorch or TensorFlow, in order to preserve your development environment. However, leap-ie requires the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0

TensorFlow

Library Version
tensorflow >=2.12.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('torchvision.resnet18')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all image classification torchvision models via leap_ie.vision.models.get_model('torchvision.[name of model]'). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier').
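
For example, a minimal sketch of the same flow using the Hugging Face model id mentioned above (we assume get_model returns the same (preprocessing_fn, model, class_list) triple for Hub models as it does for torchvision models):

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

# pull an image classification model from the Hugging Face Hub by model id
preprocessing_fn, model, class_list = get_model('nateraw/vit-age-classifier')

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config={"leap_api_key": "YOUR_API_KEY"},
    target_classes=[1],
    preprocessing=preprocessing_fn,
)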

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie and pass your model to our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

class ModelWrapper(nn.Module):
    # unwraps a dictionary output so that generate() receives raw logits
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        x = self.model(x)
        return x["logits"]  # return only the logits tensor

model = ModelWrapper(your_model)
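
If your model instead applies a softmax and returns probabilities, one workaround (a sketch, not part of leap-ie) is to take the log of the probabilities: this equals the logits up to a per-sample constant, which does not affect softmax-based generation.

import torch
import torch.nn as nn

class LogitWrapper(nn.Module):
    # hypothetical wrapper for a model whose forward() returns probabilities
    def __init__(self, model, eps=1e-12):
        super().__init__()
        self.model = model
        self.eps = eps

    def forward(self, x):
        probs = self.model(x)
        return torch.log(probs + self.eps)  # logits up to an additive per-sample constant

model = LogitWrapper(your_model)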

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see the FAQ sections on prototypes, entanglement, and feature isolation below. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
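
If you want to inspect the raw arrays programmatically, a minimal sketch (the exact keys depend on your run, so we simply iterate over whatever is returned):

# dict_results maps result names to numpy arrays; keys depend on the run
for name, array in dict_results.items():
    print(name, array.shape, array.dtype)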

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels last, e.g. [1, height, width, channels].
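
For example, a hedged sketch of running feature isolation on a TensorFlow model (your_tf_model stands in for your own Keras classifier; the zero image is just a placeholder input):

import numpy as np
from leap_ie.vision import engine

# TensorFlow models take channels-last inputs: [num_images, height, width, channels]
samples = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder image batch

df_results, dict_results = engine.generate(
    project_name="tf-example",
    model=your_tf_model,  # your own tf.keras image classifier
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
    samples=samples,
    mode="tf",
)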

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("torchvision.resnet18")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [1, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow. Default is 'pt'.

    • Required: No
    • Default: pt
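
As a quick illustration of combining these arguments, a sketch that isolates features for two specific class indices (chosen arbitrarily here) on the preprocessed image batch from the Sample Feature Isolation example above:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="target-class-isolation",
    model=model,
    class_list=class_list,
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
    target_classes=[3, 7],   # arbitrary example indices
    preprocessing=preprocessing_fn,
    samples=image,           # preprocessed image batch, as in the example above
    mode="pt",
)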

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide the config as a dictionary or as a path to a .json file. The options you're most likely to want to adjust are:

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
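
For example, a minimal sketch of a config that skips the automatic learning-rate search and sets the learning rate by hand (the values here are illustrative, not recommendations):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "find_lr_steps": 0,   # disable the automatic learning-rate search
    "lr": 0.02,           # illustrative manual learning rate
    "hf_weight": 0,       # start at zero; increase gradually if prototypes are noisy
    "max_steps": 1500,
}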

Here are all of the config options currently available:

config = {
    alpha_mask: bool = False
    alpha_only: bool = False
    alpha_weight: int = 1
    baseline_init: int = 0
    diversity_weight: int = 0
    find_lr_steps: int = 500
    hf_weight: int = 0
    input_dim: tuple = [3, 224, 224]
    isolate_classes: list = None
    isolation: bool = True
    isolation_hf_weight: int = 1
    isolation_lr: float = 0.05
    log_freq: int = 100
    lr: float = 0.05
    max_isolate_classes: int = 3
    max_lr: float = 1.0
    max_steps: int = 1500
    min_lr: float = 0.0001
    mode: str = "pt"
    num_lr_windows: int = 50
    project_name: str
    samples: list = None
    seed: int = 0
    stop_lr_early: bool = True
    transform: str = "xl"
    use_alpha: bool = False
    use_baseline: bool = False
    use_hipe: bool = False
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in the generation of prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001
  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl'], or set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allows the generation process to express ambivalence about the values of pixels that don't change the model's prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
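
For instance, a sketch of a config that isolates features for hand-picked class indices per target, rather than the top predicted classes (indices are illustrative):

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "isolate_classes": [[2, 7, 8], [2, 3]],  # one list of class indices per target class
    "isolation_hf_weight": 1,                # see hf_weight; applies to the isolation mask
}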

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
