
description_harvester

A tool for working with archival description for public access. description_harvester reads archival description into a minimalist data model for public-facing archival description and then converts it to the Arclight data model and POSTs it into an Arclight Solr index using PySolr.

description_harvester is designed to be extensible and to harvest archival description from a number of sources. Currently the only available source harvests data from the ArchivesSpace API using ArchivesSnake, but modules for EAD2002 and other sources could be added in the future. It's also possible to add output modules that serialize description to EAD or other formats, in addition to or in place of sending description to an Arclight Solr instance. This opens up new possibilities for managing description with low-barrier formats and tools.

The main branch is designed to be a drop-in replacement for the Arclight Traject indexer, while the dao-indexing branch tries to fully index digital objects from digital repositories and other sources, including item-level metadata fields, embedded text, OCR text, and transcriptions.

This is still a bit drafty, as it has only been tested against ASpace v2.8.0 and needs better error handling. Validation is also very minimal, but there is potential to add detailed validation with jsonschema.

Installation

pip install description_harvester

First, you need to configure ArchivesSnake by creating a ~/.archivessnake.yml file with your API credentials, as detailed in the ArchivesSnake configuration docs.
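A minimal ~/.archivessnake.yml might look like the following (baseurl, username, and password are the keys described in the ArchivesSnake configuration docs; the values here are placeholders for your own instance):

```yaml
baseurl: https://my.aspace.edu/api
username: admin
password: CHANGEME
```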

Next, you also need a ~/.description_harvester.yml file that lists your Solr URL and the core you want to index to. These can also be overridden with args.

solr_url: http://127.0.0.1:8983/solr
solr_core: blacklight-core
last_query: 0

Repositories

By default, when reading from ArchivesSpace, description harvester will use the repository name stored there.

To enable the --repo argument, place a copy of your ArcLight repositories.yml file in ~. You can then use harvest --id mss001 --repo slug to index using the slug from repositories.yml. This will override the ArchivesSpace repository name.
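An entry in repositories.yml looks roughly like this, with the slug as the top-level key, following ArcLight's sample file (the slug and field values below are placeholders, not from this project):

```yaml
special-collections:
  name: 'My University Special Collections'
  description: 'Rare books, manuscripts, and university records.'
```

Running harvest --id mss001 --repo special-collections would then label the indexed collection with the name from this entry.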

There is also the option to customize this with a plugin.

Indexing from ArchivesSpace API to Arclight

Once description_harvester is set up, you can index from the ASpace API to Arclight using the harvest command.

Index by id_0

You can provide one or more IDs to index using a resource's id_0 field:

harvest --id ua807

harvest --id mss123 apap106

Index by URI

You can also use the integers from ASpace URIs for resources, such as 263 for https://my.aspace.edu/resources/263.

harvest --uri 435

harvest --uri 1 755

Indexing by modified time

Index collections modified in the past hour: harvest --hour

Index collections modified in the past day: harvest --today

Index collections modified since last run: harvest --updated

Index collections not already in the index: harvest --new
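The time-based flags lend themselves to scheduled runs. For example, assuming harvest is on the indexing user's PATH, a crontab entry could reindex updated collections every night:

```
# Reindex collections modified since the last run, nightly at 2:00 a.m.
0 2 * * * harvest --updated
```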

Deleting collections

You can delete one or more collections using the --delete argument. This uses the Solr document ID, such as apap106 for https://my.arclight.edu/catalog/apap106.

harvest --delete apap101 apap301

Plugins

Local implementations may have to override some description_harvester logic. Indexing digital objects from local systems may be a common use case.

To create a plugin, create a plugin directory, either at ~/.description_harvester or a path you pass with a DESCRIPTION_HARVESTER_PLUGIN_DIR environment variable.

Use the example default.py and make a copy in your plugin directory.

Use custom_repository() to customize how repository names are set. This has access to an ArchivesSpace resource API object.

Use read_data() to customize DigitalObject objects.

The plugin importer first imports plugins from within the package, then looks in ~/.description_harvester, and finally looks in the DESCRIPTION_HARVESTER_PLUGIN_DIR path.
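A plugin copied from default.py might look like the sketch below. The function names custom_repository() and read_data() come from the docs above, but the signatures, object shapes, and field names here are assumptions for illustration, not the package's actual API:

```python
# Hypothetical plugin sketch. custom_repository() and read_data() are the
# hooks named in the docs; the argument shapes below are assumptions.

def custom_repository(resource):
    """Return a repository name for an ArchivesSpace resource API object.

    Here `resource` is assumed to behave like a dict of the resource's
    JSON; a real plugin receives whatever description_harvester passes in.
    """
    # Example: prefix every repository name for display.
    return "My University Archives: " + resource.get("repository_name", "Unknown")


def read_data(digital_object):
    """Customize a DigitalObject before it is indexed.

    `digital_object` is assumed to expose mutable attributes; a real
    plugin should follow the shape of the package's default.py.
    """
    # Example: upgrade a hypothetical thumbnail URL field to HTTPS.
    href = getattr(digital_object, "thumbnail_href", "")
    if href.startswith("http://"):
        digital_object.thumbnail_href = href.replace("http://", "https://", 1)
    return digital_object
```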

Use as a library

You can also use description_harvester as a library in a script:

from description_harvester import harvest

harvest(["--id", "myid001"])
