
Work-set Clustering

A Python script to perform a clustering based on descriptive keys. It can be used to identify work clusters for manifestations according to the FRBR (IFLA-LRM) model.

This tool only performs the clustering; it needs a list of manifestation identifiers and their descriptive keys as input. Cluster identifiers and descriptive keys that were already computed in a previous run can optionally be provided and reused.

Usage via the command line

Create and activate a Python virtual environment

# Create a new Python virtual environment
python3 -m venv py-work-set-clustering-env

# Activate the virtual environment
source py-work-set-clustering-env/bin/activate

# There are no dependencies to install

# Install the tool
pip install .
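
# Optionally verify the installation by printing the usage message shown below
python -m work_set_clustering.clustering --help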

Available options:

usage: clustering.py [-h] -i INPUT_FILE -o OUTPUT_FILE --id-column ID_COLUMN --key-column KEY_COLUMN [--delimiter DELIMITER] [--existing-clusters EXISTING_CLUSTERS]
                     [--existing-clusters-keys EXISTING_CLUSTERS_KEYS]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_FILE, --input-file INPUT_FILE
                        The CSV file(s) with columns for elements and descriptive keys, one row is one element and descriptive key relationship
  -o OUTPUT_FILE, --output-file OUTPUT_FILE
                        The name of the output CSV file containing two columns: elementID and clusterID
  --id-column ID_COLUMN
                        The name of the column with element identifiers
  --key-column KEY_COLUMN
                        The name of the column that contains a descriptive key
  --delimiter DELIMITER
                        Optional delimiter of the input/output CSV, default is ','
  --existing-clusters EXISTING_CLUSTERS
                        Optional file with existing element-cluster mapping
  --existing-clusters-keys EXISTING_CLUSTERS_KEYS
                        Optional file with element-descriptive key mapping for existing clusters mapping

Clustering from scratch

Given a CSV file where each row contains the relationship between one manifestation identifier and one descriptive key, the tool can be called as follows to create cluster assignments.

python -m work_set_clustering.clustering \
  --input-file "descriptive-keys.csv" \
  --output-file "clusters.csv" \
  --id-column "elementID" \
  --key-column "descriptiveKey"

Example CSV that should result in two clusters, one for book1 and book2 (due to a shared key) and one for book3:

elementID,descriptiveKey
book1,theTitle/author1
book1,isbnOfTheBook/author1
book2,isbnOfTheBook/author1
book3,otherBookTitle/author1
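
For intuition, the following is a minimal sketch of the underlying idea, not the tool's actual implementation: elements that transitively share at least one descriptive key end up in the same cluster.

from collections import defaultdict

def cluster(rows):
    """Group elements that (transitively) share a descriptive key."""
    parent = {}

    def find(x):
        # Find the cluster representative with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    firstElementWithKey = {}
    for element, key in rows:
        parent.setdefault(element, element)
        if key in firstElementWithKey:
            # A shared key merges the two elements' clusters
            union(firstElementWithKey[key], element)
        else:
            firstElementWithKey[key] = element

    clusters = defaultdict(set)
    for element in parent:
        clusters[find(element)].add(element)
    return list(clusters.values())

rows = [("book1", "theTitle/author1"),
        ("book1", "isbnOfTheBook/author1"),
        ("book2", "isbnOfTheBook/author1"),
        ("book3", "otherBookTitle/author1")]
print(cluster(rows))  # [{'book1', 'book2'}, {'book3'}]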

The script can also read descriptive keys that are distributed across several files: simply repeat the --input-file parameter, as shown below. Please note that all of these input files should have the same column names, specified with --id-column and --key-column.
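
For example, assuming two hypothetical input files part1.csv and part2.csv that both use these column names:

python -m work_set_clustering.clustering \
  --input-file "part1.csv" \
  --input-file "part2.csv" \
  --output-file "clusters.csv" \
  --id-column "elementID" \
  --key-column "descriptiveKey"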

You can find more examples of cluster input in the test/resources directory.

Reuse existing clusters

You can reuse the clusters created in an earlier run; to do so, you also have to provide the mapping between the previous elements and their clusters, and optionally their descriptive keys.

python -m work_set_clustering.clustering \
  --input-file "descriptive-keys.csv" \
  --output-file "clusters.csv" \
  --id-column "elementID" \
  --key-column "descriptiveKey" \
  --existing-clusters "existing-clusters.csv" \
  --existing-clusters-keys "initial-descriptive-keys.csv"

Please note that the two parameters --existing-clusters and --existing-clusters-keys provide the data from a previous run.
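
For illustration, a hypothetical existing-clusters.csv following the two-column output format described above (the cluster identifier values are made up):

elementID,clusterID
book1,cluster1
book2,cluster1
book3,cluster2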

Similar to the initial clustering, you can provide several input files.

[!NOTE] When the existing descriptive keys are omitted (no --existing-clusters-keys parameter), existing cluster identifiers and assignments are kept, even if their elements have overlapping descriptive keys. Additionally, none of the new elements can be mapped to the existing clusters, because no descriptive keys are provided (more info at https://github.com/kbrbe/work-set-clustering/issues/9)

Usage as a library

The tool can also be used as a library within another Python script or a Jupyter notebook.

from work_set_clustering.clustering import clusterFromScratch as clustering

clustering(
  inputFilename=["descriptive-keys.csv"],
  outputFilename="cluster-assignments.csv",
  idColumnName="elementID",
  keyColumnName="descriptiveKey",
  delimiter=',')
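
The resulting output file can then be inspected with the standard library; below is a minimal sketch that groups the assignments by cluster, relying on the elementID and clusterID output columns described above:

import csv
from collections import defaultdict

# Group element identifiers by their assigned cluster
clusters = defaultdict(list)
with open("cluster-assignments.csv", newline='') as csvFile:
    for row in csv.DictReader(csvFile):
        clusters[row["clusterID"]].append(row["elementID"])

for clusterID, elements in clusters.items():
    print(clusterID, elements)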

Or if you want to reuse existing clusters:

from work_set_clustering.clustering import updateClusters as clustering

clustering(
  inputFilename=["descriptive-keys.csv"],
  outputFilename="cluster-assignments.csv",
  idColumnName="elementID",
  keyColumnName="descriptiveKey",
  delimiter=',',
  existingClustersFilename="existing-clusters.csv",
  existingClusterKeysFilename="initial-descriptive-keys.csv")

Software Tests

  • You can execute the unit tests of the lib.py file with the following command: python -m work_set_clustering.lib.
  • You can execute the integration tests with the following command: python -m unittest discover -s test

Contact

Sven Lieber - Sven.Lieber@kbr.be - Royal Library of Belgium (KBR) - https://www.kbr.be/en/
