Performance hacking for your deep learning models

Project description

Darkon: Performance hacking for your deep learning models

Darkon is an open source toolkit for improving and debugging deep learning models. Deep neural networks are often treated as black boxes: feed them a large dataset and expect the learning algorithm to return a well-performing model. However, trained models often fail in real-world use, and such failures are hard to fix precisely because of this black-box nature. We are developing Darkon to reduce the effort required to improve the performance of deep learning models.

In this first release, we provide influence score calculation that is easily applicable to existing TensorFlow models (other frameworks will be supported later). Influence scores can be used to filter out bad training samples that hurt test performance, to prioritize potentially mislabeled examples for correction, and to debug distribution mismatch between training and test samples.
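
For context, the influence score from [1] estimates how the loss at a test point z_test changes when a training point z is upweighted. In the notation of [1], with empirical risk minimizer θ̂, per-sample loss L, and training-set Hessian H_θ̂ = (1/n) Σᵢ ∇²_θ L(zᵢ, θ̂):

I_up,loss(z, z_test) = -∇_θ L(z_test, θ̂)ᵀ H_θ̂⁻¹ ∇_θ L(z, θ̂)

Under this convention, a large positive score means that upweighting z increases the test loss, which is what makes the score useful for spotting harmful or mislabeled training samples.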

Darkon will gradually provide performance hacking methods that are easily applicable to existing projects, based on the following technologies:

  • Dataset inspection/filtering/management

  • Continual learning

  • Meta/transfer learning

  • Interpretable ML

  • Hyperparameter optimization

  • Network architecture search

More features will be released soon. Feedback and feature requests are always welcome and help us set priorities. Please keep an eye on Darkon.

Dependencies

  • TensorFlow

Installation

pip install darkon

Usage

import darkon

# Create an influence inspector for an existing TensorFlow model.
# YourDataFeeder() is your implementation of darkon's data feeder interface;
# the loss ops and placeholders come from your existing graph.
inspector = darkon.Influence(workspace_path,    # directory for intermediate results
                             YourDataFeeder(),
                             loss_op_train,     # loss op used for training
                             loss_op_test,      # loss op used for evaluation
                             x_placeholder,
                             y_placeholder)

# Compute influence scores of training samples on the selected test samples.
scores = inspector.upweighting_influence_batch(sess,             # tf.Session with the trained model loaded
                                               test_indices,     # indices of test samples to inspect
                                               test_batch_size,
                                               approx_params,    # approximation settings for the influence calculation
                                               train_batch_size,
                                               train_iterations)
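
Once the scores are computed, they can be ranked to surface training samples worth inspecting. Below is a minimal sketch (not part of the Darkon API), assuming scores is a 1-D NumPy array with one value per training sample, ordered as the data feeder provides them, and assuming the sign convention of [1], in which a large positive score means upweighting that sample increases the test loss:

import numpy as np

# Rank training samples by influence score (assumption: higher means more harmful,
# following the convention in [1]; verify the sign convention for your setup).
ranked = np.argsort(scores)[::-1]

# Inspect the top candidates for label noise or train/test distribution mismatch.
for idx in ranked[:20]:
    print('training sample %d: influence %.6f' % (idx, scores[idx]))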

Examples

API Documentation

Communication

Authors

Neosapience, Inc.

License

Apache License 2.0

References

[1] Pang Wei Koh and Percy Liang, "Understanding Black-box Predictions via Influence Functions," ICML 2017.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distribution

darkon-0.0.3-py2.py3-none-any.whl (19.2 kB)

Uploaded: Python 2, Python 3

File details

Details for the file darkon-0.0.3-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for darkon-0.0.3-py2.py3-none-any.whl:

  • SHA256: 1d0a449bf54bb6da115df663915d278003cb4e03ed35989361d630e6ed1c0df8
  • MD5: 0f2fac010dad43bd2c98daedce160ca8
  • BLAKE2b-256: 41560d56b0e0ef8b1dd44f2266a83c6e9d83b7a5a8b3eda73898315d5175361d

See more details on using hashes here.
