Work-set Clustering

A Python script to perform a clustering based on descriptive keys. It can be used to identify work clusters for manifestations according to the FRBR (IFLA-LRM) model.

This tool only performs the clustering. It needs a list of manifestation identifiers and their descriptive keys as input. Cluster identifiers and descriptive keys computed in a previous run can optionally be provided and reused.

Usage via the command line

Create and activate a Python virtual environment

# Create a new Python virtual environment
python3 -m venv py-work-set-clustering-env

# Activate the virtual environment
source py-work-set-clustering-env/bin/activate

# There are no dependencies to install

# Install the tool
pip install .

Available options:

usage: clustering.py [-h] -i INPUT_FILE -o OUTPUT_FILE --id-column ID_COLUMN --key-column KEY_COLUMN [--delimiter DELIMITER] [--existing-clusters EXISTING_CLUSTERS]
                     [--existing-clusters-keys EXISTING_CLUSTERS_KEYS]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_FILE, --input-file INPUT_FILE
                        The CSV file(s) with columns for elements and descriptive keys, one row is one element and descriptive key relationship
  -o OUTPUT_FILE, --output-file OUTPUT_FILE
                        The name of the output CSV file containing two columns: elementID and clusterID
  --id-column ID_COLUMN
                        The name of the column with element identifiers
  --key-column KEY_COLUMN
                        The name of the column that contains a descriptive key
  --delimiter DELIMITER
                        Optional delimiter of the input/output CSV, default is ','
  --existing-clusters EXISTING_CLUSTERS
                        Optional file with existing element-cluster mapping
  --existing-clusters-keys EXISTING_CLUSTERS_KEYS
                        Optional file with element-descriptive key mapping for existing clusters mapping

Clustering from scratch

Given a CSV file where each row contains the relationship between one manifestation identifier and one descriptive key, the tool can be invoked as follows to create cluster assignments.

python -m work_set_clustering.clustering \
  --input-file "descriptive-keys.csv" \
  --output-file "clusters.csv" \
  --id-column "elementID" \
  --key-column "descriptiveKey"

Example CSV which should result in two clusters, one for book1 and book2 (due to a shared key) and one for book3:

elementID,descriptiveKey
book1,theTitle/author1
book1,isbnOfTheBook/author1
book2,isbnOfTheBook/author1
book3,otherBookTitle/author1
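Conceptually, this clustering behaves like connected components over shared keys: any two elements that share at least one descriptive key end up in the same cluster, and membership is transitive. The following union-find sketch illustrates that idea on the example data above; it is an illustration of the concept, not the tool's actual implementation.

```python
from collections import defaultdict

def cluster_by_shared_keys(pairs):
    """Group element IDs into clusters; elements sharing a key merge transitively."""
    parent = {}

    def find(x):
        # Follow parent pointers to the cluster representative, compressing paths.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    key_owner = {}  # first element seen for each descriptive key
    for element, key in pairs:
        parent.setdefault(element, element)
        if key in key_owner:
            union(element, key_owner[key])
        else:
            key_owner[key] = element

    clusters = defaultdict(set)
    for element in parent:
        clusters[find(element)].add(element)
    return list(clusters.values())

pairs = [
    ("book1", "theTitle/author1"),
    ("book1", "isbnOfTheBook/author1"),
    ("book2", "isbnOfTheBook/author1"),
    ("book3", "otherBookTitle/author1"),
]
print(cluster_by_shared_keys(pairs))  # two clusters: {book1, book2} and {book3}
```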

The script can also read descriptive keys that are distributed across several files: simply repeat the --input-file parameter for each file. Please note that all of those input files must use the same column names specified with --id-column and --key-column.
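For example (the part file names below are placeholders):

```shell
python -m work_set_clustering.clustering \
  --input-file "descriptive-keys-part1.csv" \
  --input-file "descriptive-keys-part2.csv" \
  --output-file "clusters.csv" \
  --id-column "elementID" \
  --key-column "descriptiveKey"
```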

You can find more examples of cluster input in the test/resources directory.

Reuse existing clusters

You can reuse the clusters created from an earlier run, but you also have to provide the mapping between the previous elements and their descriptive keys.

python -m work_set_clustering.clustering \
  --input-file "descriptive-keys.csv" \
  --output-file "clusters.csv" \
  --id-column "elementID" \
  --key-column "descriptiveKey" \
  --existing-clusters "existing-clusters.csv" \
  --existing-clusters-keys "initial-descriptive-keys.csv"

Please note that the two parameters --existing-clusters and --existing-clusters-keys provide the data from a previous run.

Similar to the initial clustering, you can provide several input files.

Usage as a library

The tool can also be used as a library within another Python script or a Jupyter notebook.

from work_set_clustering.clustering import clusterFromScratch as clustering

clustering(
  inputFilename=["descriptive-keys.csv"],
  outputFilename="cluster-assignments.csv",
  idColumnName="elementID",
  keyColumnName="descriptiveKey",
  delimiter=',')

Or if you want to reuse existing clusters:

from work_set_clustering.clustering import updateClusters as clustering

clustering(
  inputFilename=["descriptive-keys.csv"],
  outputFilename="cluster-assignments.csv",
  idColumnName="elementID",
  keyColumnName="descriptiveKey",
  delimiter=',',
  existingClustersFilename="existing-clusters.csv",
  existingClusterKeysFilename="initial-descriptive-keys.csv")
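The output file can then be post-processed with the standard library. A minimal sketch, assuming the output columns are named elementID and clusterID as described in the options above (read_clusters is a hypothetical helper, not part of the package):

```python
import csv
from collections import defaultdict

def read_clusters(path, delimiter=","):
    """Read a cluster-assignment CSV and group element IDs per cluster ID."""
    clusters = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter=delimiter):
            clusters[row["clusterID"]].append(row["elementID"])
    return dict(clusters)
```

This makes it easy, for example, to count cluster sizes or list all manifestations of a work cluster.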

Software Tests

  • You can execute the unit tests of the lib.py file with the following command: python -m work_set_clustering.lib
  • You can execute the integration tests with the following command: python -m unittest discover -s test

Contact

Sven Lieber - Sven.Lieber@kbr.be - Royal Library of Belgium (KBR) - https://www.kbr.be/en/
