
Useckit: An Open-Source Deep Learning Toolkit for Behavioral Biometrics

Useckit is an open-source Python toolkit for developing, evaluating, and deploying deep-learning-based user authentication systems. The toolkit bundles algorithms for evaluating behavioral biometrics in both user verification and identification tasks. It offers a high-level API for quick experiments and a low-level API for custom model implementations. A full description can be found in our publication.

Overview

  • Useckit bundles several deep learning paradigms for user authentication, including time-series classification, distance metric learning, and anomaly detection.
  • Automatically computes key metrics like accuracy, F1-score, equal error rate (EER), receiver operating characteristic (ROC) curves, and more.
  • Supports open-set and closed-set user identification as well as user verification.
  • Customizable neural network models for advanced users.
  • Extensible with custom datasets, preprocessing functions, and evaluation methods.

Usage

Installation

Use pip to install useckit and its dependencies:

pip install useckit

Features

  • Useckit offers a high-level API and a low-level API.
  • The high-level API allows the quick application of predefined models that are grounded in literature.
  • We provide implementations, for example, of the approaches of Fawaz et al. (Time-Series Classification), Chen et al. (AutoEncoder-based authentication), and Schroff et al. (Two-Stream Networks).
  • The low-level API sits behind the facade of the high-level API and can also be used to integrate custom models into useckit.
  • All results are automatically serialized to the filesystem.

Basic Usage (High-Level API)

To evaluate a dataset using default models:

import numpy as np
import useckit
from useckit.Evaluators import TSCEvaluator, DistanceLearningEvaluator

# Prepare dataset
x_train, y_train = np.array([...]), np.array([...])
x_test, y_test = np.array([...]), np.array([...])

dataset = useckit.Dataset(
    trainset_data=x_train,
    trainset_labels=y_train,
    testset_enrollment_data=x_train,
    testset_enrollment_labels=y_train,
    testset_matching_data=x_test,
    testset_matching_labels=y_test
)

# E.g., run time series classification evaluator
tse = TSCEvaluator(dataset, epochs=1000, verbose=False)
tse.evaluate()

# E.g., distance learning evaluator
dle = DistanceLearningEvaluator(dataset, epochs=1000, verbose=False)
dle.evaluate()

A comprehensive example can also be found in examples/useckit-high-level.ipynb.

Advanced Usage (Low-Level API)

You can customize models or extend Useckit with your own deep learning architectures. Examples and documentation are provided in examples/useckit-low-level.ipynb.

API Documentation

Sphinx-based documentation is a work in progress and will be released soon. For now, we recommend checking the comprehensive examples and tests.


Evaluation Methods

Useckit supports several evaluation paradigms, including:

  • Verification Mode: For user verification tasks where a claim of identity is verified against a reference sample.
  • Closed-Set Identification: For identifying users from a predefined set of identities.
  • Open-Set Identification: Identifies known users and rejects unknown users.
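
These three paradigms differ mainly in the decision rule applied to similarity scores. The sketch below illustrates them with cosine similarity over embedding vectors; the function names, gallery structure, and thresholds are illustrative and not part of useckit's API.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, reference, threshold=0.8):
    # Verification: accept the identity claim if the probe is close
    # enough to the claimed user's enrolled reference sample.
    return cosine_sim(probe, reference) >= threshold

def identify(probe, gallery, threshold=None):
    # Closed-set identification: return the best-matching enrolled user.
    # Open-set identification: additionally reject probes whose best
    # score falls below the threshold (returns None for unknown users).
    scores = {user: cosine_sim(probe, ref) for user, ref in gallery.items()}
    best = max(scores, key=scores.get)
    if threshold is not None and scores[best] < threshold:
        return None  # open-set rejection
    return best

gallery = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
probe = np.array([0.9, 0.1])
print(verify(probe, gallery["alice"]))   # verification of the claim "alice"
print(identify(probe, gallery))          # closed-set identification
print(identify(np.array([-1.0, -1.0]), gallery, threshold=0.5))  # open-set
```
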

Models

Useckit provides the following model architectures:

  • Time Series Classification: FCN, Inception, ResNet, Encoder, TWIESN, MCDCNN, MLP, CNN (valid/same padding), MCNN, t-LeNet from Fawaz et al.
  • Distance Learning: Two-stream networks with contrastive and triplet loss from Schroff et al.
  • Outlier Detection: AutoEncoders from Chen et al.

Dataset Preparation

Datasets need to be provided in the following format:

  • Train Set: Data and labels for model training.
  • Validation Set: (Optional) Data and labels for model validation.
  • Test Enrollment Set: Data and labels for system enrollment during testing.
  • Test Matching Set: Data and labels for matching during testing.
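
As a simplified illustration (plain NumPy, not useckit's API), the four sets can be derived from a pool of labeled recordings like this, reusing the training data for enrollment as in the high-level example above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 4 users, 10 time-series samples each,
# every sample has 50 timesteps with 3 channels.
n_users, n_samples, timesteps, channels = 4, 10, 50, 3
data = rng.normal(size=(n_users * n_samples, timesteps, channels))
labels = np.repeat(np.arange(n_users), n_samples)

# Shuffle, then hold out a fraction of the samples for matching.
perm = rng.permutation(len(data))
data, labels = data[perm], labels[perm]

split = int(0.8 * len(data))
trainset_data, trainset_labels = data[:split], labels[:split]

# Enroll with the training data and match against the held-out samples.
testset_enrollment_data, testset_enrollment_labels = trainset_data, trainset_labels
testset_matching_data, testset_matching_labels = data[split:], labels[split:]

print(trainset_data.shape, testset_matching_data.shape)  # (32, 50, 3) (8, 50, 3)
```
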

Useckit supports k-fold cross-validation if your dataset is small.
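
A minimal sketch of k-fold index splitting with NumPy, where each fold serves once as the held-out set (useckit's own cross-validation interface may differ):

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    # Shuffle sample indices and split them into k roughly equal folds;
    # yield (train, test) index pairs, one per fold.
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

for train_idx, test_idx in k_fold_indices(20, k=4):
    print(len(train_idx), len(test_idx))  # 15 5 for each of the 4 folds
```
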

Preprocessing Functions

The toolkit provides several preprocessing methods, including:

  • Window Slicing: Sliding window for time-series data augmentation.
  • Majority Voting: Aggregates predictions from window-sliced data.
  • Normalization: Scales data to the range [-1, +1].
  • Data Checks: Verifies data integrity and format.
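
The first three preprocessing steps can be sketched as follows; these are simplified stand-ins to show the idea, not useckit's actual implementations:

```python
import numpy as np

def slice_windows(series, window, stride):
    # Window slicing: cut one long recording into overlapping
    # fixed-length windows for data augmentation.
    return np.stack([series[i:i + window]
                     for i in range(0, len(series) - window + 1, stride)])

def majority_vote(predictions):
    # Majority voting: aggregate per-window predictions into a single
    # label for the whole recording.
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]

def normalize(x):
    # Min-max normalization to the range [-1, +1].
    x_min, x_max = x.min(), x.max()
    return 2 * (x - x_min) / (x_max - x_min) - 1

series = np.arange(10.0)
windows = slice_windows(series, window=4, stride=2)
print(windows.shape)                       # (4, 4)
print(majority_vote(np.array([1, 2, 1])))  # 1
print(normalize(series).min(), normalize(series).max())  # -1.0 1.0
```
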

Evaluation Metrics

Useckit automatically computes the following key metrics:

  • Accuracy
  • Precision, Recall, F1-score
  • Equal Error Rate (EER)
  • Receiver Operating Characteristic (ROC) Curve
  • Area Under the ROC Curve (AUC/AUROC)
  • Confusion Matrix

Metrics are saved in both human-readable text format and machine-readable JSON format.
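
For reference, the EER is the operating point where the false acceptance rate (FAR) equals the false rejection rate (FRR); it can be estimated by sweeping a threshold over the observed scores. This is a generic sketch with made-up scores, not useckit's implementation:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    # Sweep candidate thresholds and return the averaged FAR/FRR at the
    # threshold where the two rates are closest to each other.
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)  # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)    # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
impostor = np.array([0.1, 0.3, 0.2, 0.4, 0.75])
print(equal_error_rate(genuine, impostor))  # → 0.2
```
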

Authors

License

See LICENSE.
