
Useckit: An Open-Source Deep Learning Toolkit for Behavioral Biometrics

Useckit is an open-source Python toolkit for developing, evaluating, and deploying deep-learning-based user authentication systems. It bundles algorithms for evaluating behavioral biometrics in both user verification and identification tasks, and it offers a high-level API for quick experiments as well as a low-level API for custom model implementations. A full description can be found in our publication.

Overview

  • Useckit bundles several deep learning paradigms for user authentication, including time-series classification, distance metric learning, and anomaly detection.
  • Automatically computes key metrics like accuracy, F1-score, equal error rate (EER), receiver operating characteristic (ROC) curves, and more.
  • Supports open-set and closed-set user identification as well as user verification.
  • Customizable neural network models for advanced users.
  • Extensible with custom datasets, preprocessing functions, and evaluation methods.

Usage

Installation

Use pip to install useckit and dependencies:

pip install useckit

Features

  • Useckit offers a high-level API and a low-level API.
  • The high-level API allows the quick application of predefined models that are grounded in literature.
  • We provide implementations, for example, for the approaches of Fawaz et al. (time-series classification), Chen et al. (autoencoder-based authentication), and Schroff et al. (two-stream networks).
  • The low-level API sits behind the facade of the high-level API and can also be used directly to run custom models within useckit.
  • All results are automatically serialized to the filesystem.

Basic Usage (High-Level API)

To evaluate a dataset using default models:

import numpy as np
import useckit
from useckit.Evaluators import TSCEvaluator, DistanceLearningEvaluator

# Prepare dataset (replace the placeholder arrays with your own samples and labels)
x_train, y_train = np.array([...]), np.array([...])
x_test, y_test = np.array([...]), np.array([...])

dataset = useckit.Dataset(
    trainset_data=x_train,
    trainset_labels=y_train,
    testset_enrollment_data=x_train,
    testset_enrollment_labels=y_train,
    testset_matching_data=x_test,
    testset_matching_labels=y_test
)

# E.g., run time series classification evaluator
tse = TSCEvaluator(dataset, epochs=1000, verbose=False)
tse.evaluate()

# E.g., distance learning evaluator
dle = DistanceLearningEvaluator(dataset, epochs=1000, verbose=False)
dle.evaluate()

A complete, runnable example can also be found in examples/useckit-high-level.ipynb.

Advanced Usage (Low-Level API)

You can customize models or extend Useckit with your own deep learning architectures. Examples and documentation are provided in examples/useckit-low-level.ipynb.

API Documentation

Sphinx-based documentation is a work in progress and will be released soon. For now, we recommend checking the comprehensive examples and tests.

Features in Detail

Evaluation Methods

Useckit supports several evaluation paradigms, including:

  • Verification Mode: For user verification tasks where a claim of identity is verified against a reference sample.
  • Closed-Set Identification: For identifying users from a predefined set of identities.
  • Open-Set Identification: Identifies known users and rejects unknown users.
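
The three paradigms differ mainly in their decision rule. The following pure-Python sketch illustrates the distinction; the identity names, scores, and threshold are made up for illustration and do not reflect the useckit API:

```python
# Hypothetical similarity scores between one probe sample and each enrolled
# template (higher = more similar). Names and threshold are illustrative.
enrolled = {"alice": 0.92, "bob": 0.41, "carol": 0.37}

def closed_set_identify(scores):
    """Closed-set: always assign the probe to the best-matching identity."""
    return max(scores, key=scores.get)

def open_set_identify(scores, threshold):
    """Open-set: assign the best match only if it clears the threshold;
    otherwise reject the probe as an unknown user."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

def verify(scores, claimed_identity, threshold):
    """Verification: accept or reject a specific claimed identity."""
    return scores[claimed_identity] >= threshold

print(closed_set_identify(enrolled))     # alice
print(open_set_identify(enrolled, 0.6))  # alice
print(open_set_identify(enrolled, 0.95)) # None (rejected as unknown)
print(verify(enrolled, "bob", 0.6))      # False
```

In practice the threshold is tuned on validation data, e.g. at the equal-error-rate operating point.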

Models

Useckit provides the following model architectures:

  • Time Series Classification: FCN, Inception, ResNet, Encoder, TWIESN, MCDCNN, MLP, CNN (valid/same padding), MCNN, and t-leNet from Fawaz et al.
  • Distance Learning: Two-stream networks with contrastive and triplet loss from Schroff et al.
  • Outlier Detection: AutoEncoders from Chen et al.

Dataset Preparation

Datasets need to be provided in the following format:

  • Train Set: Data and labels for model training.
  • Validation Set: (Optional) Data and labels for model validation.
  • Test Enrollment Set: Data and labels for system enrollment during testing.
  • Test Matching Set: Data and labels for matching during testing.

Useckit also supports k-fold cross-validation, which is particularly useful for small datasets.

Preprocessing Functions

The toolkit provides several preprocessing methods, including:

  • Window Slicing: Sliding window for time-series data augmentation.
  • Majority Voting: Aggregates predictions from window-sliced data.
  • Normalization: Scales data to the range [-1, +1].
  • Data Checks: Verifies data integrity and format.
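
Window slicing and majority voting can be sketched in a few lines of plain Python. This is a didactic illustration of the two techniques, not the useckit implementation:

```python
from collections import Counter

def slice_windows(series, window_size, stride):
    """Slide a fixed-size window over a time series and return the slices."""
    return [series[i:i + window_size]
            for i in range(0, len(series) - window_size + 1, stride)]

def majority_vote(predictions):
    """Aggregate per-window predictions into a single label."""
    return Counter(predictions).most_common(1)[0][0]

series = list(range(10))  # toy 1-D time series
windows = slice_windows(series, window_size=4, stride=2)
print(len(windows))       # 4 windows, starting at indices 0, 2, 4, 6
print(windows[0])         # [0, 1, 2, 3]
print(majority_vote(["alice", "bob", "alice", "alice"]))  # alice
```

Slicing multiplies the number of training samples (augmentation); voting then folds the per-window predictions back into one decision per original sample.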

Evaluation Metrics

Useckit automatically computes the following key metrics:

  • Accuracy
  • Precision, Recall, F1-score
  • Equal Error Rate (EER)
  • Receiver Operating Characteristic (ROC) Curve
  • Area Under the ROC Curve (AUC/AUROC)
  • Confusion Matrix

Metrics are saved in both human-readable text format and machine-readable JSON format.
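
To illustrate one of these metrics, the following sketch approximates the EER by sweeping a decision threshold over raw comparison scores until the false-accept and false-reject rates meet. The scores are made up for the example; useckit computes these metrics for you:

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER: sweep a threshold over all observed scores and
    return (FAR + FRR) / 2 at the point where the two rates are closest."""
    best = (1.0, None)  # (|FAR - FRR| gap, EER estimate at that threshold)
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

genuine_scores = [0.9, 0.8, 0.75, 0.6]   # same-user comparison scores
impostor_scores = [0.5, 0.4, 0.3, 0.65]  # different-user comparison scores
print(equal_error_rate(genuine_scores, impostor_scores))  # 0.25
```

The EER is the operating point on the ROC curve where false accepts and false rejects are equally likely, which makes it a common single-number summary for biometric systems.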

Authors

License

See LICENSE.

