
Useckit: An Open-Source Deep Learning Toolkit for Behavioral Biometrics

Useckit is an open-source Python toolkit for developing, evaluating, and deploying deep learning-based user authentication systems. The toolkit bundles algorithms for evaluating behavioral biometrics in both user verification and identification tasks. It offers a high-level API for quick experiments and a low-level API for custom model implementations. A full description can be found in our publication.

Overview

  • Useckit bundles several deep learning paradigms for user authentication, including time-series classification, distance metric learning, and anomaly detection.
  • Automatically computes key metrics like accuracy, F1-score, equal error rate (EER), receiver operating characteristic (ROC) curves, and more.
  • Supports open-set and closed-set user identification as well as user verification.
  • Customizable neural network models for advanced users.
  • Extensible with custom datasets, preprocessing functions, and evaluation methods.

Usage

Installation

Use pip to install useckit and dependencies:

pip install useckit

Features

  • Useckit offers a high-level API and a low-level API.
  • The high-level API allows the quick application of predefined models that are grounded in the literature.
    • We provide implementations, for example, for the approaches of Fawaz et al. (time-series classification), Chen et al. (autoencoder-based authentication), and Schroff et al. (two-stream networks).
  • The low-level API sits behind the facade of the high-level API and can also be used directly to integrate custom models into useckit.
  • All results are automatically serialized to the filesystem.

Basic Usage (High-Level API)

To evaluate a dataset using default models:

import numpy as np
import useckit
from useckit.Evaluators import TSCEvaluator, DistanceLearningEvaluator

# Prepare dataset (toy example: random data for 4 users with 10 training
# and 5 test samples each, 100 time steps per sample; replace with your
# own behavioral recordings)
rng = np.random.default_rng(0)
x_train = rng.normal(size=(40, 100))
y_train = np.repeat(np.arange(4), 10)
x_test = rng.normal(size=(20, 100))
y_test = np.repeat(np.arange(4), 5)

dataset = useckit.Dataset(
    trainset_data=x_train,
    trainset_labels=y_train,
    testset_enrollment_data=x_train,
    testset_enrollment_labels=y_train,
    testset_matching_data=x_test,
    testset_matching_labels=y_test
)

# E.g., run time series classification evaluator
tse = TSCEvaluator(dataset, epochs=1000, verbose=False)
tse.evaluate()

# E.g., distance learning evaluator
dle = DistanceLearningEvaluator(dataset, epochs=1000, verbose=False)
dle.evaluate()

A comprehensive example can also be found in examples/useckit-high-level.ipynb.

Advanced Usage (Low-Level API)

You can customize models or extend Useckit with your own deep learning architectures. Examples and documentation are provided in examples/useckit-low-level.ipynb.

API Documentation

Sphinx-based documentation is a work in progress and will be released soon. For now, we recommend checking the comprehensive examples and tests.

Features in Detail

Evaluation Methods

Useckit supports several evaluation paradigms, including:

  • Verification Mode: For user verification tasks where a claim of identity is verified against a reference sample.
  • Closed-Set Identification: For identifying users from a predefined set of identities.
  • Open-Set Identification: Identifies known users and rejects unknown users.
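The distinction between closed-set and open-set identification comes down to a rejection step: an open-set system must be able to answer "none of the enrolled users". The following sketch illustrates that decision logic with a simple nearest-template rule; it is a generic illustration, not useckit's internal implementation, and the function and variable names are made up for this example.

```python
import numpy as np

def open_set_identify(probe, enrolled, threshold):
    """Identify `probe` against per-user templates, or reject it.

    enrolled: dict mapping user id -> template vector (e.g., a mean embedding).
    Returns the closest user's id, or None if even the closest template is
    farther away than `threshold` (the open-set rejection case).
    """
    best_user, best_dist = None, np.inf
    for user, template in enrolled.items():
        dist = np.linalg.norm(probe - template)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= threshold else None

# Toy templates for two enrolled users
enrolled = {"alice": np.array([0.0, 0.0]), "bob": np.array([1.0, 1.0])}
print(open_set_identify(np.array([0.1, 0.0]), enrolled, threshold=0.5))  # alice
print(open_set_identify(np.array([5.0, 5.0]), enrolled, threshold=0.5))  # None (unknown user)
```

Closed-set identification corresponds to dropping the threshold check and always returning the nearest enrolled user.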

Models

Useckit provides the following model architectures:

  • Time Series Classification: FCN, Inception, ResNet, Encoder, TWIESN, MCDCNN, MLP, CNN (valid/same padding), MCNN, and t-LeNet from Fawaz et al.
  • Distance Learning: Two-stream networks with contrastive and triplet loss from Schroff et al.
  • Outlier Detection: AutoEncoders from Chen et al.
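To make the distance-learning paradigm concrete, the triplet loss from Schroff et al. pulls an anchor sample toward a positive sample of the same user and pushes it away from a negative sample of a different user by at least a margin. Below is a minimal NumPy sketch of the loss itself (useckit's actual two-stream models train neural embeddings; this only illustrates the objective):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss as in Schroff et al.:
    max(0, d(a, p)^2 - d(a, n)^2 + margin)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor
p = np.array([0.1, 0.0])   # same user, close to the anchor
n = np.array([2.0, 2.0])   # different user, far away
print(triplet_loss(a, p, n))  # 0.0 -> already separated by more than the margin
```

The contrastive loss mentioned above is the pairwise analogue: it operates on one pair at a time instead of a triplet.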

Dataset Preparation

Datasets need to be provided in the following format:

  • Train Set: Data and labels for model training.
  • Validation Set: (Optional) Data and labels for model validation.
  • Test Enrollment Set: Data and labels for system enrollment during testing.
  • Test Matching Set: Data and labels for matching during testing.

Useckit supports k-fold cross-validation if your dataset is small.
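The four sets above can be produced from a single labeled collection with a simple random split. The sketch below shows one way to derive training, test-enrollment, and test-matching indices from toy data; the proportions and variable names are illustrative choices, not requirements of useckit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 users x 10 samples each, 50 time steps per sample
n_users, n_samples, seq_len = 4, 10, 50
data = rng.normal(size=(n_users * n_samples, seq_len))
labels = np.repeat(np.arange(n_users), n_samples)

# Shuffle, then split: 60% training, 20% test enrollment, 20% test matching
idx = rng.permutation(len(data))
n_train = int(0.6 * len(data))
n_enroll = int(0.2 * len(data))
train_idx = idx[:n_train]
enroll_idx = idx[n_train:n_train + n_enroll]
match_idx = idx[n_train + n_enroll:]

print(len(train_idx), len(enroll_idx), len(match_idx))  # 24 8 8
```

The resulting index arrays select the data and label slices that are then passed to useckit's dataset constructor, as shown in the high-level usage example above.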

Preprocessing Functions

The toolkit provides several preprocessing methods, including:

  • Window Slicing: Sliding window for time-series data augmentation.
  • Majority Voting: Aggregates predictions from window-sliced data.
  • Normalization: Ensures data is normalized between [-1, +1].
  • Data Checks: Verifies data integrity and format.
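Window slicing and majority voting work as a pair: slicing multiplies the number of training samples, and voting collapses the per-window predictions back into one decision per recording. A minimal NumPy sketch of both steps (a generic illustration with hypothetical function names, not useckit's API):

```python
import numpy as np

def slice_windows(series, width, stride):
    """Sliding-window slicing: returns overlapping windows of `series`."""
    starts = range(0, len(series) - width + 1, stride)
    return np.stack([series[s:s + width] for s in starts])

def majority_vote(predictions):
    """Aggregate per-window predictions into one label by majority vote."""
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]

series = np.arange(10, dtype=float)
windows = slice_windows(series, width=4, stride=2)
print(windows.shape)                       # (4, 4): four overlapping windows
print(majority_vote(np.array([1, 2, 1])))  # 1
```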

Evaluation Metrics

Useckit automatically computes the following key metrics:

  • Accuracy
  • Precision, Recall, F1-score
  • Equal Error Rate (EER)
  • Receiver Operating Characteristic (ROC) Curve
  • Area Under the ROC Curve (AUC/AUROC)
  • Confusion Matrix

Metrics are saved in both human-readable text format and machine-readable JSON format.
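Of the metrics above, the EER is the one most specific to biometrics: it is the operating point where the false-acceptance rate (impostors accepted) equals the false-rejection rate (genuine users rejected). The sketch below approximates it from raw match scores; it is a simplified stand-in for useckit's internal computation, assuming higher scores mean better matches.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Approximate the EER: sweep candidate thresholds and return the
    average of FAR and FRR at the threshold where they are closest."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)  # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)    # genuine users wrongly rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.6])
impostor = np.array([0.3, 0.4, 0.2, 0.65, 0.1])
print(equal_error_rate(genuine, impostor))  # 0.2
```

A lower EER means a better-separated system; an EER of 0.0 indicates perfectly separable genuine and impostor score distributions.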

Authors

License

See LICENSE.
