
Leap Interpretability Engine

Congratulations on being a very early adopter of our interpretability engine! Not sure what's going on? Check out the FAQ.

Installation

Use the package manager pip to install leap-ie.

pip install leap-ie

During installation, leap-ie does not modify any PyTorch or TensorFlow dependencies, in order to preserve your development environment. However, leap-ie requires the following minimum versions:

PyTorch

Library Version
torch >=1.13.0
torchvision >=0.14.0
timm >=0.9.12

TensorFlow

Library Version
tensorflow >=2.4.0

If you do not have the required libraries installed, you can quickly install them by specifying them as extras:

PyTorch

pip install leap-ie[with-torch]

TensorFlow

pip install leap-ie[with-tensorflow]

Generating an API Key

Sign in and generate your API key in the leap app - you'll need this to get started.

Get started!

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

preprocessing_fn, model, class_list = get_model('resnet18', source='torchvision')

config = {"leap_api_key": "YOUR_API_KEY"}

results_df, results_dict = engine.generate(
    project_name="leap!",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[1],
    preprocessing=preprocessing_fn,
)

We provide easy access to all torchvision image classification models via leap_ie.vision.models.get_model("model_name", source="torchvision"). We can also automatically pull image classification models from Hugging Face - just use the model id: get_model('nateraw/vit-age-classifier', source='huggingface').
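
For example, here's a minimal sketch of running the engine on a Hugging Face model - this assumes get_model returns the same (preprocessing_fn, model, class_list) triple for Hugging Face models as it does for torchvision ones:

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Pull an image classification model from Hugging Face by its model id.
preprocessing_fn, model, class_list = get_model(
    "nateraw/vit-age-classifier", source="huggingface"
)

results_df, results_dict = engine.generate(
    project_name="age-classifier",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=[0],
    preprocessing=preprocessing_fn,
)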

Usage

Using the interpretability engine with your own models is really easy! All you need to do is import leap_ie and pass your model to our generate function:

from leap_ie.vision import engine

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},
)

Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to engine.generate().

import torch.nn as nn

# Wrapper that unwraps a dictionary output so that only the logits are returned.
class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        x = self.model(x)
        return x["logits"]

model = ModelWrapper(your_model)
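
If your model returns probabilities rather than logits, a similar wrapper can convert them back before generation. A minimal sketch, assuming the probabilities come from a softmax (taking the log recovers the logits up to a per-sample constant, which the softmax is invariant to):

import torch
import torch.nn as nn

# Wrapper that converts softmax probabilities back into (shifted) logits.
class LogitWrapper(nn.Module):
    def __init__(self, model, eps=1e-12):
        super().__init__()
        self.model = model
        self.eps = eps  # avoids log(0)

    def forward(self, x):
        probs = self.model(x)
        return torch.log(probs + self.eps)

model = LogitWrapper(your_model)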

Results

The generate function returns a pandas dataframe and a dictionary of numpy arrays. If you're in a jupyter notebook, you can view the dataframe inline using engine.display_df(df_results), but for the best experience we recommend you head to the leap app, or log directly to your Weights & Biases dashboard.

For more information about the data we return, see prototypes, entanglements, and feature isolations. If used with samples (see Sample Feature Isolation), the dataframe contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.

Supported Frameworks

We support both PyTorch and TensorFlow! Specify your framework with the mode parameter, using 'tf' for TensorFlow and 'pt' for PyTorch.

If using PyTorch, we expect images to be in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels last, e.g. [1, height, width, channels].
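
For example, if your images are loaded channels-last (e.g. as numpy arrays) but you're running a PyTorch model, you can permute them before passing them in as samples - a small sketch with a random placeholder array:

import numpy as np
import torch

# A channels-last batch of shape [1, height, width, channels]...
image_np = np.random.rand(1, 224, 224, 3).astype(np.float32)

# ...permuted to the channels-first shape [1, channels, height, width] that PyTorch models expect.
image_pt = torch.from_numpy(image_np).permute(0, 3, 1, 2)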

Weights and Biases Integration

We can also log results directly to your WandB projects. To do this, set project_name to the name of the WandB project where you'd like the results to be logged, and add your WandB API key and entity name to the config dictionary:

config = {
    "wandb_api_key": "YOUR_WANDB_API_KEY",
    "wandb_entity": "your_wandb_entity",
    "leap_api_key": "YOUR_LEAP_API_KEY",
}
df_results, dict_results = engine.generate(
    project_name="your_wandb_project_name",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config=config,
)

Prototype Generation

Given your model, we generate prototypes and entanglements. We also isolate entangled features in your prototypes.

from leap_ie.vision import engine
from leap_ie.vision.models import get_model

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# indexes of classes to generate prototypes for. In this case, ['tench', 'goldfish', 'great white shark'].
target_classes = [0, 1, 2]

# generate prototypes
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=target_classes,
    preprocessing=preprocessing_fn,
    samples=None,
    device=None,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

Sample Feature Isolation

Given some input image, we can show you which features your model thinks belong to each class. If you specify target classes, we'll isolate features for those, or if not, we'll isolate features for the three highest probability classes.

from torchvision import transforms
from leap_ie.vision import engine
from leap_ie.vision.models import get_model
from PIL import Image

config = {"leap_api_key": "YOUR_LEAP_API_KEY"}

# Replace this model with your own, or explore any imagenet classifier from torchvision (https://pytorch.org/vision/stable/models.html).
preprocessing_fn, model, class_list = get_model("resnet18", source="torchvision")

# load an image
image_path = "tools.jpeg"
tt = transforms.ToTensor()
image = preprocessing_fn[0](tt(Image.open(image_path)).unsqueeze(0))

# to isolate features:
df_results, dict_results = engine.generate(
    project_name="resnet18",
    model=model,
    class_list=class_list,
    config=config,
    target_classes=None,
    preprocessing=preprocessing_fn,
    samples=image,
    mode="pt",
)

# For the best experience, head to https://app.leap-labs.com/ to explore your prototypes and feature isolations in the browser!
# Or, if you're in a jupyter notebook, you can display your results inline:
engine.display_df(df_results)

engine.generate()

The generate function is used for both prototype generation directly from the model, and for feature isolation on your input samples.

leap_ie.vision.engine.generate(
    project_name,
    model,
    class_list,
    config,
    target_classes=None,
    preprocessing=None,
    samples=None,
    device=None,
    mode="pt",
)
  • project_name (str): Name of your project. Used for logging.

    • Required: Yes
    • Default: None
  • model (object): Model for interpretation. Currently we support image classification models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities). If using PyTorch, we expect images to be in channels-first format, e.g. of shape [1, channels, height, width]. If TensorFlow, channels last, e.g. [1, height, width, channels].

    • Required: Yes
    • Default: None
  • class_list (list): List of class names corresponding to your model's output classes, e.g. ['hotdog', 'not hotdog', ...].

    • Required: Yes
    • Default: None
  • config (dict or str): Configuration dictionary, or path to a json file containing your configuration. At minimum, this must contain {"leap_api_key": "YOUR_LEAP_API_KEY"}.

    • Required: Yes
    • Default: None
  • target_classes (list, optional): List of target class indices to generate prototypes or isolations for, e.g. [0,1]. If None, prototypes will be generated for the class at output index 0 only, e.g. 'hotdog', and feature isolations will be generated for the top 3 predicted classes.

    • Required: No
    • Default: None
  • preprocessing (function, optional): Preprocessing function to be used for generation. This can be None, but for best results, use the preprocessing function used on inputs for inference.

    • Required: No
    • Default: None
  • samples (array, optional): None, or a batch of images to perform feature isolation on. If provided, only feature isolation is performed (not prototype generation). We expect samples to be of shape [num_images, height, width, channels] if using TensorFlow, or [num_images, channels, height, width] if using PyTorch.

    • Required: No
    • Default: None
  • device (str, optional): Device to be used for generation. If None, we will try to find a device.

    • Required: No
    • Default: None
  • mode (str, optional): Framework to use, either 'pt' for PyTorch or 'tf' for TensorFlow. Default is 'pt'.

    • Required: No
    • Default: pt

Config

Leap provides a number of configuration options to fine-tune the interpretability engine's performance with your models. You can provide it as a dictionary or a path to a .json file.
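
For example, here's a sketch that points generate at a file instead of an inline dictionary (the file name config.json and the values in it are illustrative):

# config.json:
# {
#     "leap_api_key": "YOUR_LEAP_API_KEY",
#     "hf_weight": 1,
#     "max_steps": 2000
# }

df_results, dict_results = engine.generate(
    project_name="interpretability",
    model=your_model,
    class_list=["hotdog", "not_hotdog"],
    config="config.json",
)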

  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing a microscope. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500

Here are all of the config options currently available:

config = {
    "alpha_mask": False,
    "alpha_only": False,
    "alpha_weight": 1,
    "baseline_init": 0,
    "diversity_weight": 0,
    "find_lr_steps": 500,
    "hf_weight": 0,
    "input_dim": [3, 224, 224],
    "isolate_classes": None,
    "isolation": True,
    "isolation_hf_weight": 1,
    "isolation_lr": 0.05,
    "log_freq": 100,
    "lr": 0.05,
    "max_isolate_classes": 3,
    "max_lr": 1.0,
    "max_steps": 1500,
    "min_lr": 0.0001,
    "mode": "pt",
    "num_lr_windows": 50,
    "project_name": "your_project_name",  # required (str)
    "samples": None,
    "seed": 0,
    "stop_lr_early": True,
    "transform": "xl",
    "use_alpha": False,
    "use_baseline": False,
    "use_hipe": False,
}
  • alpha_mask (bool): If True, applies a mask during prototype generation which encourages the resulting prototypes to be minimal, centered and concentrated. Experimental.

    • Default: False
  • alpha_only (bool): If True, during the prototype generation process, only an alpha channel is optimised. This results in the generation of prototypical shapes and textures only, with no colour information.

    • Default: False
  • baseline_init (int or str): How to initialise the input. A sensible option is the mean of your expected input data, if you know it. Use 'r' to initialise with random noise for more varied results with different random seeds.

    • Default: 0
  • diversity_weight (int): When generating multiple prototypes for the same class, we can apply a diversity objective to push for more varied inputs. The higher this number, the harder the optimisation process will push for different inputs. Experimental.

    • Default: 0
  • find_lr_steps (int): How many steps to tune the learning rate over at the start of the generation process. We do this automatically for you, but if you want to tune the learning rate manually, set this to zero and provide a learning rate with lr.

    • Default: 500
  • hf_weight (int): How much to penalise high-frequency patterns in the input. If you are generating very blurry and indistinct prototypes, decrease this. If you are getting very noisy prototypes, increase it. This depends on your model architecture and is hard for us to predict, so you might want to experiment. It's a bit like focussing binoculars. Best practice is to start with zero, and gradually increase.

    • Default: 0
  • input_dim (list): The dimensions of the input that your model expects.

    • Default: [224, 224, 3] if mode is "tf" else [3, 224, 224]
  • isolate_classes (list): If you'd like to isolate features for specific classes, rather than the top n, specify their indices here for EACH target, e.g. [[2,7,8], [2,3]].

    • Default: None
  • isolation (bool): Whether to isolate features for entangled classes. Set to False if you want prototypes only.

    • Default: True
  • isolation_hf_weight (int): How much to penalise high-frequency patterns in the feature isolation mask. See hf_weight.

    • Default: 1
  • isolation_lr (float): How much to update the isolation mask at each step during the feature isolation process.

    • Default: 0.05
  • log_freq (int): Interval at which to log images.

    • Default: 100
  • lr (float): How much to update the prototype at each step during the prototype generation process. We find this for you automatically between max_lr and min_lr, but if you would like to tune it manually, set find_lr_steps to zero and provide it here.

    • Default: 0.05
  • max_isolate_classes (int): How many classes to isolate features for, if isolate_classes is not provided.

    • Default: min(3, len(class_list))
  • max_lr (float): Maximum learning rate for learning rate finder.

    • Default: 1.0
  • max_steps (int): How many steps to run the prototype generation/feature isolation process for. If you get indistinct prototypes or isolations, try increasing this number.

    • Default: 1500
  • min_lr (float): Minimum learning rate for learning rate finder.

    • Default: 0.0001
  • seed (int): Random seed for initialisation.

    • Default: 0
  • transform (str): Random affine transformation applied during generation to guard against adversarial noise. You can experiment with the following options: ['s', 'm', 'l', 'xl']. You can also set this to None and provide your own transformation via engine.generate(preprocessing=your_transformation).

    • Default: xl
  • use_alpha (bool): If True, adds an alpha channel to the prototype. This results in the prototype generation process returning semi-transparent prototypes, which allow it to express ambivalence about the values of pixels that don't change the model prediction.

    • Default: False
  • use_baseline (bool): Whether to generate an equidistant baseline input prior to the prototype generation process. It takes a bit longer, but setting this to True will ensure that all prototypes generated for a model are not biased by input initialisation.

    • Default: False
  • wandb_api_key (str): Provide your Weights & Biases API key here to enable logging results directly to your WandB dashboard.

    • Default: None
  • wandb_entity (str): If logging to WandB, make sure to provide your WandB entity name here.

    • Default: None
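
As a concrete example, here's a config sketch that turns on a few of these options - the values are illustrative, not recommendations:

config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "baseline_init": "r",            # random-noise initialisation for more varied results
    "use_alpha": True,               # semi-transparent prototypes
    "hf_weight": 1,                  # lightly penalise high-frequency noise
    "max_steps": 2000,               # run the optimisation for longer
    "isolate_classes": [[2, 7, 8]],  # isolate these class indices for the single target
}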

FAQ

What is a prototype?

Prototype generation is a global interpretability method. It provides insight into what a model has learned without looking at its performance on test data, by extracting learned features directly from the model itself. This is important, because there's no guarantee that your test data covers all potential failure modes. It's another way of understanding what your model has learned, and helping you to predict how it will behave in deployment, on unseen data.

So what is a prototype? For each class that your model has been trained to predict, we can generate an input that maximises the probability of that output – this is the model's prototype for that class. It's a representation of what the model 'thinks' that class is.

For example, if you have a model trained to diagnose cancer from biopsy slides, prototype generation can show you what the model has learned to look for - what it 'thinks' malignant cells look like. This means you can check to see if it's looking for the right stuff, and ensure that it hasn't learned any spurious correlations from its training data that would cause dangerous mistakes in deployment (e.g. looking for lab markings on the slides, rather than at cell morphology).

What is entanglement?

During the prototype generation process we extract a lot of information from the model, including which other classes share features with the class prototype that we're generating. Depending on your domain, some entanglement may be expected - for example, an animal classifier is likely to have significant entanglement between 'cat' and 'dog', because those classes share (at least) the 'fur' feature. However, entanglement - especially unexpected entanglement, that doesn't make sense in your domain - can also be a very good indicator of where your model is likely to make misclassifications in deployment.

What is feature isolation?

Feature isolation does what it says on the tin - it isolates which features in the input the model is using to make its prediction.

We can apply feature isolation in two ways:

    1. On a prototype that we've generated, to isolate which features are shared between entangled classes, and so help explain how those classes are entangled; and
    2. On some input data, to explain individual predictions that your model makes, by isolating the features in the input that correspond to the predicted class (similar to saliency mapping).

So, you can use it to both understand properties of your model as a whole, and to better understand the individual predictions it makes.
