Project description

Evaluate attacks

A package for evaluating adversarial attacks on deep learning models.

Installation

pip install attack_evaluation
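
This page does not document the package's API, so the following is a generic sketch of the kind of evaluation such a package automates: a plain-PyTorch FGSM attack and a clean-versus-adversarial accuracy comparison. All function and variable names here are illustrative assumptions, not taken from attack_evaluation.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Fast Gradient Sign Method: perturb inputs along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the gradient-sign direction and keep pixels in the valid [0, 1] range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def evaluate_attack(model, loader, epsilon=0.03):
    """Report clean vs. adversarial accuracy over a data loader."""
    model.eval()
    clean_correct = adv_correct = total = 0
    for x, y in loader:
        with torch.no_grad():
            clean_correct += (model(x).argmax(dim=1) == y).sum().item()
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            adv_correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return clean_correct / total, adv_correct / total

Called as evaluate_attack(model, test_loader), this returns the accuracy on clean inputs and on FGSM-perturbed inputs; the gap between the two is the usual headline number when evaluating an attack.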

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.
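
For reference, the standard packaging-tutorial workflow for producing both a source distribution and a wheel uses the build frontend (generic Python packaging commands, not specific to this project):

python -m pip install build
python -m build   # writes an sdist (.tar.gz) and a wheel (.whl) into dist/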

Built Distribution

attack_evaluation-0.1.1-py3-none-any.whl (2.6 kB), uploaded for Python 3

File details

Details for the file attack_evaluation-0.1.1-py3-none-any.whl.

File hashes

Hashes for attack_evaluation-0.1.1-py3-none-any.whl

Algorithm    Hash digest
SHA256       1f749a9a80ca26063458329f0866b7ef63fa28e6e54a67f34828e7776ab98f9b
MD5          8e6393f607b4ead56364ee9ecd01b59e
BLAKE2b-256  0c608b4fbb2173f9ff7e8c4e4898746a8613195e792fa82932f2b62ce01b24b3

For more details on using hashes, see the pip documentation.
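
To have pip verify the wheel against the SHA256 digest above, you can pin it in a requirements file and install with hash checking enabled (a standard pip feature, independent of this package):

# requirements.txt
attack_evaluation==0.1.1 \
    --hash=sha256:1f749a9a80ca26063458329f0866b7ef63fa28e6e54a67f34828e7776ab98f9b

pip install --require-hashes -r requirements.txt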
