Toxic
=====

|license|

Intro
-----

**Toxic** is an open source software library for machine learning
security. It contains tools for adversarial example generation and
provides a framework for building new types of attack methods.

The project is currently in an early stage of development.
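
Toxic is published on PyPI, so it should be installable with ``pip``
(the package name below matches this release; dependencies such as a
deep learning backend are not listed on this page and may need to be
installed separately):

.. code:: bash

    pip install toxic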

Attacks
-------

Available attack algorithms implemented in Toxic:

- Fast Gradient Methods (FGM/FGSM)
  `Tutorial </tutorial/source/fgsm.ipynb>`__ (see the sketch after this list)
- Basic Iterative
  `Tutorial </tutorial/source/basic_iterative.ipynb>`__
- Momentum Iterative
  `Tutorial </tutorial/source/momentum_iterative.ipynb>`__
- DeepFool
- Universal Adversarial Perturbation (UAP)
- Jacobian-based Saliency Map Approach (JSMA)
- One Pixel Attack
- LBFGS
- Carlini Wagner L2
- Carlini Wagner L-inf
- Feature Adversaries
- Boundary Attack
- Elastic Net
- Natural Adversarial Examples (NAE)
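
This page does not document Toxic's own call signatures, so the
following is a minimal sketch of the first two attacks above written
directly in PyTorch rather than through the Toxic API; ``model``,
``eps``, ``alpha``, and the ``[0, 1]`` input range are illustrative
assumptions. FGSM takes a single step of size ``eps`` along the sign
of the gradient of the loss with respect to the input; the Basic
Iterative Method repeats smaller steps and clips the accumulated
perturbation back into the ``eps``-ball after each one:

.. code:: python

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        # One-step FGSM: x_adv = x + eps * sign(grad_x L(model(x), y)).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    def basic_iterative(model, x, y, eps=0.03, alpha=0.005, steps=10):
        # Basic Iterative Method: repeated small FGSM steps, with the
        # perturbation clipped to the eps-ball after every step.
        x_orig = x.clone().detach()
        x_adv = x_orig.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)  # keep inputs in a valid image range
        return x_adv.detach()

Both functions return a perturbed copy of ``x``; an attack succeeds
when ``model(x_adv).argmax(1)`` no longer matches ``y``.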

The Team
~~~~~~~~

Toxic is a community-driven project. It was initiated by the machine
learning security team at `KakaoBrain <kakaobrain.com>`__.

.. |license| image:: https://img.shields.io/github/license/mashape/apistatus.svg

