
engine for the analysis of DROID and Siegfried file format reports

Project description

Demystify

Static analysis and reporting for file-format reports generated by digital preservation tools, DROID and Siegfried.

Working example Siegfried: Siegfried Govdocs Select Results...
Working example DROID: DROID Govdocs Select Results...

Introduction

Utility for the analysis of DROID CSV and Siegfried file-format reports. The tool has three purposes:

  1. break the export into its components and store them within a set of tables in an SQLite database for performance and consistent access;
  2. provide additional information about a collection's profile where useful;
  3. query the SQLite database, outputting the results in a visually pleasing report for further analysis by digital preservation specialists and archivists.

For departments onboarding archivists or building digital capability, the report contains descriptions, written by archivists, for each of the statistics output.

archivist descriptions in demystify

Analysis of file format reports

This Code4Lib article, published in early 2022, describes some of the important information that appears, in aggregate, in file-format reports. It also describes the challenges of accessing that information consistently.

2020/2021 refactor

This utility was first written in 2013. The code was pretty bad, but it worked. It wrapped a lot of technical debt into a small package.

The 2020/2021 refactor tries to do three things:

  1. Fix minor issues.
  2. Make it compatible with Python 3 and, temporarily, one last time, with Python 2.
  3. Add unit tests.

Adding unit tests is key to enabling your contributions and to greater flexibility when refactoring. Once a release candidate of this work is available, there is more freedom to think about next steps, including exposing queries more generically so that more folk can work with sqlitefid, and finding more generic API-like abstractions in general so that the utility is less like a monolith and more like a configurable static analysis engine, analogous to something you might work with in Python or Golang.

More information

See the following blogs for more information:

COPTR Link: DROID_Siegfried_Sqlite_Analysis_Engine

Components

There are three components to the tool.

sqlitefid

Adds identification data to an SQLite database that forms the basis of the entire analysis. There are five tables:

  • DBMD - Database Metadata
  • FILEDATA - File manifest and filesystem metadata
  • IDDATA - Identification metadata
  • IDRESULTS - FILEDATA/IDRESULTS junction table
  • NSDATA - Namespace metadata, also secondary key (NS_ID) in IDDATA table
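
As a rough sketch of how these tables relate, the example below uses Python's built-in sqlite3 module to count identifications per namespace by joining IDDATA to NSDATA on NS_ID. The database filename and the NS_NAME column are assumptions made for illustration; check the generated schema for the actual names.

    # Sketch: querying the sqlitefid database with Python's built-in sqlite3
    # module. The table names come from the list above; the database filename
    # and the NS_NAME column are assumptions and may differ in practice.
    import sqlite3

    connection = sqlite3.connect("demystify_analysis.db")
    cursor = connection.cursor()

    # Count identifications per namespace, e.g. PRONOM versus other sources,
    # by joining the identification table to the namespace table on NS_ID.
    cursor.execute(
        """
        SELECT NSDATA.NS_NAME, COUNT(*)
        FROM IDDATA
        JOIN NSDATA ON IDDATA.NS_ID = NSDATA.NS_ID
        GROUP BY NSDATA.NS_NAME;
        """
    )
    for namespace, total in cursor.fetchall():
        print(f"{namespace}: {total} identifications")

    connection.close()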

It will also augment the DROID or Siegfried export data with two additional columns (a sketch showing how they can be derived follows the list):

  • URI_SCHEME: Separates the URI scheme from the DROID URI column. This enables the identification of container objects found in the export and distinguishes files stored inside container objects from standalone files.
  • DIR_NAME: Returns the base directory name from the file path to enable analysis of directory names, e.g. the number of directories in the collection.
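
As a minimal sketch of how these two derived columns can be produced, the snippet below uses only the standard library; it is illustrative and not the code sqlitefid itself uses.

    # Sketch: deriving the two augmented columns from a DROID row using only
    # the standard library. Illustrative only; not sqlitefid's implementation.
    import os
    from urllib.parse import urlparse

    def uri_scheme(uri: str) -> str:
        """Return the URI scheme, e.g. 'file' for standalone files, or a
        container scheme such as 'zip' for objects stored inside containers."""
        return urlparse(uri).scheme

    def dir_name(file_path: str) -> str:
        """Return the name of the directory immediately containing the file."""
        return os.path.basename(os.path.dirname(file_path))

    print(uri_scheme("zip:file:/data/archive.zip!/report.doc"))  # zip
    print(dir_name("/data/collection/reports/report.doc"))       # reports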

demystify

Outputs an analysis from extensive querying of the SQLite database created by sqlitefid.

HTML is the default report output, with plain-text and file-listing outputs also available.

It is a good idea to run the analysis and redirect (>) the result to a file, e.g. python demystify.py --export my_export.csv > my_analysis.htm.

Rogues Gallery (v0.2.0, v0.5.0+)

The following flags provide Rogue or Hero output:

  • --rogues

Outputs a list of files returned by the identification tool that might require more analysis, e.g. non-IDs, multiple IDs, extension mismatches, zero-byte objects, and duplicate files.

  • --heroes

Outputs a list of files considered to need less analysis.

The options can be configured in denylist.cfg. More information can be found here, and a sketch of the kind of checks involved follows below.
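
As a rough illustration, the sketch below shows what two of those checks can look like: zero-byte objects and missing or multiple identifications. It is an illustration of the idea only; the FileRecord fields here are hypothetical, and demystify's own checks are governed by denylist.cfg.

    # Sketch: the sort of per-file checks behind --rogues, shown for two of the
    # criteria listed above. Illustrative only; the FileRecord fields here are
    # hypothetical and demystify's own checks are configured via denylist.cfg.
    from dataclasses import dataclass, field

    @dataclass
    class FileRecord:
        path: str
        size: int
        puids: list = field(default_factory=list)  # identifications returned

    def is_rogue(record: FileRecord) -> bool:
        if record.size == 0:  # zero-byte object
            return True
        if len(record.puids) != 1:  # no identification, or multiple IDs
            return True
        return False

    records = [
        FileRecord("report.doc", 41020, ["fmt/40"]),
        FileRecord("empty.tmp", 0, []),
    ]
    for record in records:
        print(f"{record.path}: {'rogue' if is_rogue(record) else 'hero'}")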

Rogues Gallery Animation

pathlesstaken

A string analysis engine created to highlight when string values, e.g. file paths, might need more care taken of them in a digital preservation environment, for example so that diacritics are not lost during transfer. It provides a checklist of items to look at.

Includes:

  • Class to handle analysis of non-recommended filenames from Microsoft.
  • Copy of a library from Cooper Hewitt to enable writing of plain text descriptions of Unicode characters.
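
The sketch below illustrates the kind of checks this covers, flagging Microsoft's reserved device names and describing non-ASCII characters with the standard library's unicodedata module. It is an illustration of the idea only, not pathlesstaken's actual API.

    # Sketch: the kind of string checks pathlesstaken performs, using only the
    # standard library. Illustrative only; not pathlesstaken's actual API.
    import unicodedata

    # Reserved Windows device names that Microsoft recommends against.
    RESERVED = {"CON", "PRN", "AUX", "NUL"} \
        | {f"COM{number}" for number in range(1, 10)} \
        | {f"LPT{number}" for number in range(1, 10)}

    def check_filename(name: str) -> list:
        """Return a checklist of observations about a single filename."""
        findings = []
        if name.split(".")[0].upper() in RESERVED:
            findings.append(f"'{name}' uses a reserved Windows device name")
        for char in name:
            if ord(char) > 127:
                description = unicodedata.name(char, "UNKNOWN CHARACTER")
                findings.append(f"'{char}' in '{name}' is {description}")
        return findings

    print(check_filename("résumé.pdf"))
    # ["'é' in 'résumé.pdf' is LATIN SMALL LETTER E WITH ACUTE", ...]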

Architecture

The tool is designed to be easily modified to create your own output by using the Analysis Results class as a further abstraction layer (API).

Analysis Engine Architecture

The recent refactor resulted in more generic Python data structures being returned from queries and less (if not zero) formatted output. This means a little more work has to be put into presenting results, but it is more flexible for whatever you want to do; a sketch of the pattern follows.
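
If you want to build your own presentation layer, the pattern looks something like the sketch below: take the plain Python structures returned by the queries and format them however you need. The field names here are hypothetical; the real ones are exposed by the Analysis Results class.

    # Sketch: turning generic query results into a custom plain-text report.
    # The 'results' structure and its field names are hypothetical; the real
    # names are those exposed by the Analysis Results class.
    results = {
        "collection_size": 12044,
        "namespace_totals": [("pronom", 11387), ("tika", 657)],
    }

    def render_plain_text(results: dict) -> str:
        lines = [f"Files analysed: {results['collection_size']}"]
        for namespace, total in results["namespace_totals"]:
            lines.append(f"  {namespace}: {total} identifications")
        return "\n".join(lines)

    print(render_plain_text(results))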

Tests are being implemented to promote the reliability of data returned.

Design Decisions

There should be no dependencies associated with this tool. That being said, you may need lxml for HTML output. An alternative may be found as the tool is refactored.

If we can maintain a state of few dependencies, it should promote use across a wide number of institutions. This has been driven by my previous two working environments, where installing Python was the first challenge, and pip and the ability to get hold of code dependencies another, especially on multiple users' machines where we want this tool to be successful.

Usage Notes

Summary/aggregate binary, text, and filename identification statistics are output with the following priority:

Namespace (e.g. ordered by PRONOM first [configurable])

  1. Binary and Container Identifiers
  2. XML Identifiers
  3. Text Identifiers
  4. Filename Identifiers
  5. Extension Identifiers

We need to monitor how well this works. Namespace-specific statistics are also output further down the report.
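
One way to read that priority is as a ranking applied per file when more than one identification method returns a result. The sketch below expresses the ordering in a few lines; it is an illustration only, not demystify's own code, and the result dictionaries are hypothetical.

    # Sketch: applying the identification-method priority above to pick a
    # single "best" result per file. Illustrative only; the result dictionaries
    # are hypothetical.
    PRIORITY = {
        "binary": 1,
        "container": 1,
        "xml": 2,
        "text": 3,
        "filename": 4,
        "extension": 5,
    }

    def best_identification(results: list) -> dict:
        """Pick the result from the highest-priority identification method."""
        return min(results, key=lambda result: PRIORITY.get(result["method"], 99))

    print(best_identification([
        {"method": "extension", "id": "fmt/40"},
        {"method": "binary", "id": "fmt/412"},
    ]))
    # {'method': 'binary', 'id': 'fmt/412'}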

TODO, and how you can get involved

  • Internationalizing archivist descriptions here.
  • Improved container listing/handling.
  • Improved 'directory' listing and handling.
  • Output formatting unit tests!

As you use the tool or find problems, please report them. If you find you are missing summaries that might be useful to you, please let me know. The more the utility is used, the more we can all benefit.

I have started a discussion topic for improvements: here.

Installation

Installation should be easy. Until the utility is packaged, you need to do the following:

  1. Find a directory you want to install demystify to.
  2. Run git clone.
  3. Navigate into the demystify repository, cd demystify.
  4. Checkout the sub-modules (pathlesstaken, and sqlitefid): git submodule update --init --recursive.
  5. Install lxml: python -m pip install -r requirements/production.txt.
  6. Run tests to make sure everything works: tox -e py39.

NB. tox is cool. If you're working on this code and want to format it idiomatically, run tox -e linting. If there are errors, they will point to where you may need to improve your code.

Virtual environment

A virtual environment is recommended in some instances, e.g. if you don't want to pollute your Python environment with other developers' code. To do this on Linux, you can do the following:

  1. Create a virtual environment: python3 -m virtualenv venv-py3.
  2. Activate the virtual environment: source venv-py3/bin/activate.

Then follow the installation instructions above.

Releases

See the Releases section on GitHub.

License

Copyright (c) 2013 Ross Spencer

This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.

Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:

The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.

Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.

This notice may not be removed or altered from any source distribution.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

demystify_digipres-2.0.0.tar.gz (685.8 kB)

Uploaded Source

Built Distribution

demystify_digipres-2.0.0-py3-none-any.whl (345.9 kB)

Uploaded Python 3

File details

Details for the file demystify_digipres-2.0.0.tar.gz.

File metadata

  • Download URL: demystify_digipres-2.0.0.tar.gz
  • Upload date:
  • Size: 685.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.12

File hashes

Hashes for demystify_digipres-2.0.0.tar.gz:

  • SHA256: 1b736a0e8f7998fd3c64cdc00f878decc1df6c398589840060ddb3a54c7a401f
  • MD5: ad4591ac303c38776572eed8ca13a26b
  • BLAKE2b-256: eab55ff195c23494f418a8b7e0d065630e0b852f26a14bded403b49fe929b030


File details

Details for the file demystify_digipres-2.0.0-py3-none-any.whl.

File hashes

Hashes for demystify_digipres-2.0.0-py3-none-any.whl:

  • SHA256: ae6aad2755ce5148d8ff9d31a9d344e01ad23d6fd4df41e7edbc6154928615d8
  • MD5: 3a8587c3ad7c0d5d734f54572757f3b6
  • BLAKE2b-256: ea20b3ec85da695a2051e85d680577adc1e560bf500a703cc63bbc778ae0eb65

