
Useckit: An Open-Source Deep Learning Toolkit for Behavioral Biometrics

Useckit is an open-source Python toolkit for developing, evaluating, and deploying deep learning-based user authentication systems. The toolkit bundles algorithms to evaluate behavioral biometrics for both user verification and identification tasks. It offers a high-level API for quick experiments and a low-level API for custom model implementations. A full description can be found in our publication.

Overview

  • Useckit bundles several deep learning paradigms for user authentication, including time-series classification, distance metric learning, and anomaly detection.
  • Automatically computes key metrics like accuracy, F1-score, equal error rate (EER), receiver operating characteristic (ROC) curves, and more.
  • Supports open-set and closed-set user identification as well as user verification.
  • Customizable neural network models for advanced users.
  • Extensible with custom datasets, preprocessing functions, and evaluation methods.

Usage

Installation

Use pip to install useckit and dependencies:

pip install useckit

Features

  • Useckit offers a high-level API and a low-level API.
  • The high-level API allows the quick application of predefined models that are grounded in literature.
    • We provide implementations, for example, for the approaches of Fawaz et al. (Time-Series Classification), Chen et al. (AutoEncoder-based authentication), and Schroff et al. (Two-Stream Networks).
  • The low-level API sits behind the facade of the high-level API and can be used directly to integrate custom models into useckit.
  • All results are automatically serialized to the filesystem.

Basic Usage (High-Level API)

To evaluate a dataset using default models:

import numpy as np
import useckit
from useckit.Evaluators import TSCEvaluator, DistanceLearningEvaluator

# Prepare dataset (replace the [...] placeholders with your own samples
# and labels, e.g., time series of shape (n_samples, timesteps))
x_train, y_train = np.array([...]), np.array([...])
x_test, y_test = np.array([...]), np.array([...])

dataset = useckit.Dataset(
    trainset_data=x_train,
    trainset_labels=y_train,
    testset_enrollment_data=x_train,
    testset_enrollment_labels=y_train,
    testset_matching_data=x_test,
    testset_matching_labels=y_test
)

# E.g., run time series classification evaluator
tse = TSCEvaluator(dataset, epochs=1000, verbose=False)
tse.evaluate()

# E.g., distance learning evaluator
dle = DistanceLearningEvaluator(dataset, epochs=1000, verbose=False)
dle.evaluate()

A complete example can also be found in examples/useckit-high-level.ipynb.

Advanced Usage (Low-Level API)

You can customize models or extend Useckit with your own deep learning architectures. Examples and documentation are provided in examples/useckit-low-level.ipynb.

API Documentation

Sphinx-based documentation is a work in progress and will be released soon. Until then, we recommend consulting the comprehensive examples and tests.

Features in Detail

Evaluation Methods

Useckit supports several evaluation paradigms, including:

  • Verification Mode: For user verification tasks where a claim of identity is verified against a reference sample.
  • Closed-Set Identification: For identifying users from a predefined set of identities.
  • Open-Set Identification: Identifies known users and rejects unknown users.
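The three paradigms above differ only in the question asked of the biometric system. As a generic illustration (this is not useckit's API, just a distance-based sketch with made-up helper names), the decision logic can be summarized as follows:

```python
import numpy as np

def verify(probe, reference, threshold):
    """Verification: accept the identity claim if the probe is close
    enough to the claimed user's enrolled reference sample."""
    return np.linalg.norm(probe - reference) < threshold

def identify_closed_set(probe, enrolled):
    """Closed-set identification: the probe is assumed to belong to an
    enrolled user; return the identity with the nearest reference."""
    distances = {uid: np.linalg.norm(probe - ref) for uid, ref in enrolled.items()}
    return min(distances, key=distances.get)

def identify_open_set(probe, enrolled, threshold):
    """Open-set identification: like closed-set, but reject the probe
    as an unknown user if even the best match is too far away."""
    best = identify_closed_set(probe, enrolled)
    if np.linalg.norm(probe - enrolled[best]) >= threshold:
        return None  # unknown user
    return best

enrolled = {"alice": np.array([0.0, 0.0]), "bob": np.array([1.0, 1.0])}
probe = np.array([0.1, 0.0])
print(verify(probe, enrolled["alice"], threshold=0.5))                   # True
print(identify_closed_set(probe, enrolled))                              # alice
print(identify_open_set(np.array([5.0, 5.0]), enrolled, threshold=0.5))  # None
```

Useckit's evaluators implement these decisions with learned models instead of raw distances, but the accept/reject semantics are the same.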

Models

Useckit provides the following model architectures:

  • Time Series Classification: FCN, Inception, ResNet, Encoder, TWIESN, MCDCNN, MLP, CNN (valid/same padding), MCNN, t-leNet from Fawaz et al.
  • Distance Learning: Two-stream networks with contrastive and triplet loss from Schroff et al.
  • Outlier Detection: AutoEncoders from Chen et al.
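The distance learning models are trained so that samples from the same user map close together and samples from different users map far apart. As a minimal sketch of the triplet loss from Schroff et al. (computed here on plain numpy vectors; a real implementation operates on learned embeddings):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: pull the anchor toward a sample of the same user
    (positive) and push it away from a sample of a different user
    (negative) until they are separated by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to same user
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to other user
    return max(d_pos - d_neg + margin, 0.0)

a = np.array([0.0, 0.0])  # anchor sample
p = np.array([0.1, 0.0])  # same user
n = np.array([2.0, 0.0])  # different user
print(triplet_loss(a, p, n))  # 0.0 : the negative is already far enough
```

When the loss is zero, the embedding already satisfies the margin; otherwise the gradient moves the anchor/positive pair closer and the negative further away.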

Dataset Preparation

Datasets need to be provided in the following format:

  • Train Set: Data and labels for model training.
  • Validation Set: (Optional) Data and labels for model validation.
  • Test Enrollment Set: Data and labels for system enrollment during testing.
  • Test Matching Set: Data and labels for matching during testing.

Useckit also supports k-fold cross-validation, which is particularly useful for small datasets.
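Assuming your raw data is a numpy array of time-series samples with integer user labels, the split below sketches how the required sets might be assembled (the shapes and the per-user half-split are illustrative, not a useckit requirement):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic behavioral data: 4 users, 20 samples each, 50 timesteps per
# sample (purely illustrative shapes).
n_users, samples_per_user, timesteps = 4, 20, 50
data = rng.normal(size=(n_users * samples_per_user, timesteps))
labels = np.repeat(np.arange(n_users), samples_per_user)

# Per user: first half of the samples for training/enrollment,
# second half for matching at test time.
train_mask = np.tile(np.arange(samples_per_user) < samples_per_user // 2, n_users)

x_train, y_train = data[train_mask], labels[train_mask]
x_match, y_match = data[~train_mask], labels[~train_mask]

print(x_train.shape, x_match.shape)  # (40, 50) (40, 50)
```

These arrays would then fill the trainset, enrollment, and matching slots of the Dataset constructor shown in the high-level example above.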

Preprocessing Functions

The toolkit provides several preprocessing methods, including:

  • Window Slicing: Sliding window for time-series data augmentation.
  • Majority Voting: Aggregates predictions from window-sliced data.
  • Normalization: Ensures data is normalized between [-1, +1].
  • Data Checks: Verifies data integrity and format.
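To make the first three preprocessing steps concrete, here is a generic numpy sketch (these are not useckit's functions, just the underlying ideas):

```python
import numpy as np

def window_slice(series, window, stride):
    """Slice a 1-D time series into overlapping windows, a common
    augmentation that multiplies the number of training samples."""
    starts = range(0, len(series) - window + 1, stride)
    return np.stack([series[s:s + window] for s in starts])

def majority_vote(predictions):
    """Aggregate per-window predictions back into a single label for
    the original series by taking the most frequent prediction."""
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]

def normalize(series):
    """Min-max scale a series into the range [-1, +1]."""
    lo, hi = series.min(), series.max()
    return 2 * (series - lo) / (hi - lo) - 1

series = np.arange(10)
windows = window_slice(series, window=4, stride=2)
print(windows.shape)                       # (4, 4)
print(majority_vote(np.array([1, 2, 1])))  # 1
print(normalize(np.array([0.0, 5.0, 10.0])))  # [-1.  0.  1.]
```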

Evaluation Metrics

Useckit automatically computes the following key metrics:

  • Accuracy
  • Precision, Recall, F1-score
  • Equal Error Rate (EER)
  • Receiver Operating Characteristic (ROC) Curve
  • Area Under the ROC Curve (AUC/AUROC)
  • Confusion Matrix

Metrics are saved in both human-readable text format and machine-readable JSON format.
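Of these metrics, the equal error rate is the most biometrics-specific: it is the error rate at the threshold where the false-acceptance rate (impostors accepted) equals the false-rejection rate (genuine users rejected). A minimal numpy sketch of how it can be estimated from similarity scores (not useckit's implementation):

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER from similarity scores, where a higher score
    means 'more likely the same user'. Sweeps every observed score as
    a threshold and returns the error rate where FAR and FRR meet."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))  # threshold where the rates cross
    return (far[idx] + frr[idx]) / 2

genuine = np.array([0.9, 0.8, 0.7, 0.6])
impostor = np.array([0.4, 0.3, 0.5, 0.65])
print(equal_error_rate(genuine, impostor))  # 0.25
```

A lower EER means the score distributions of genuine users and impostors overlap less, i.e., the system discriminates better.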

Authors

License

See LICENSE.
