Python client for the SewerRat API

Project description

Python interface to the SewerRat API

Pretty much as it says on the tin: this package provides a Python client for the API of the same name. It is assumed that users of the sewerrat client and the SewerRat API itself are accessing the same shared filesystem; this is typically the case for high-performance computing clusters in scientific institutions. To demonstrate, let's spin up a mock SewerRat instance:

import sewerrat as sr
# Start a mock SewerRat instance for testing; only its URL is needed here.
_, url = sr.start_sewerrat()

Let's mock up a directory of metadata files:

import tempfile
import os

mydir = tempfile.mkdtemp()
with open(os.path.join(mydir, "metadata.json"), "w") as handle:
    handle.write('{ "first": "foo", "last": "bar" }')

os.mkdir(os.path.join(mydir, "diet"))
with open(os.path.join(mydir, "diet", "metadata.json"), "w") as handle:
    handle.write('{ "fish": "barramundi" }')

We can then register this directory via the register() function. It can later be removed from the index with deregister(mydir), as sketched below.

# Only indexing metadata files named 'metadata.json'.
sr.register(mydir, names=["metadata.json"], url=url)
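
When the directory should no longer be indexed, it can be deregistered. A minimal sketch, assuming that deregister() accepts the service URL through the same url= keyword as register() (worth confirming in the API documentation):

# Remove the directory from the index once it is no longer needed.
# (The url= keyword is an assumption; check the API documentation.)
sr.deregister(mydir, url=url)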

Searching the index is done with the query() function, which performs free-text searches. This does not require filesystem access, so it can also be done remotely.

sr.query(url, "foo")
sr.query(url, "bar%") # partial match to 'bar...'
sr.query(url, "bar% AND foo") # boolean operations
sr.query(url, "fish:bar%") # match in the 'fish' field

We can also search on the user, path components, and time of creation:

sr.query(url, user="LTLA") # created by myself
sr.query(url, path="diet/") # path has 'diet/' in it

import time
sr.query(url, after=time.time() - 3600) # created less than 1 hour ago
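
These filters are assumed to be combinable with each other and with a free-text pattern in a single call; a hedged sketch, to be confirmed against the API documentation:

# Restrict a free-text match to files under the 'diet' subdirectory.
# (Combining a text pattern with filter keywords is an assumption here.)
sr.query(url, "barramundi", path="diet/")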

Once we find a file of interest in a registered directory, we can retrieve its metadata, other files in the same directory, or the entire directory itself:

sr.retrieve_metadata(mydir + "/metadata.json", url) # metadata for a single file
sr.list_files(mydir, url) # list registered files in the directory
sr.retrieve_file(mydir + "/diet/metadata.json", url) # fetch one file
sr.retrieve_directory(mydir, url) # fetch the entire directory
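
If the client does not share a filesystem with the API, the retrieval functions are assumed to download a copy and return the path to that copy; a hedged sketch of consuming such a result (the return types should be checked against the API documentation):

import json

# retrieve_file() is assumed to return a path to a local copy of the file.
local_path = sr.retrieve_file(mydir + "/diet/metadata.json", url)
with open(local_path) as handle:
    print(json.load(handle))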

Check out the API documentation for more details on each function. For the concepts underlying SewerRat itself, check out the repository for a detailed explanation.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

sewerrat-0.2.0.tar.gz (27.2 kB)

Uploaded Source

Built Distribution

SewerRat-0.2.0-py3-none-any.whl (13.7 kB)

Uploaded Python 3

File details

Details for the file sewerrat-0.2.0.tar.gz.

File metadata

  • Download URL: sewerrat-0.2.0.tar.gz
  • Upload date:
  • Size: 27.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for sewerrat-0.2.0.tar.gz

  • SHA256: eda666da5e8434a0b0dd2790f2aa6308e07fe224a82d63a7d0a7c46cb11c101f
  • MD5: b7b83b9ef294bc90b56389c8a3ee7c04
  • BLAKE2b-256: cb66158b440160e8bfabf35e868ef3990ec23ee2c260663ee8949a914d9975de


File details

Details for the file SewerRat-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: SewerRat-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 13.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for SewerRat-0.2.0-py3-none-any.whl

  • SHA256: 93d2177d1fdd5e3cc7e64f565ff8d37b1a4f0ead2f90edc8a5542eeabe20d644
  • MD5: 8a0c6a5907398d2398cd0d93f9791265
  • BLAKE2b-256: 5f717664f57b0edab03cd2122491b6e1168aeb2017e196144cdd37a1822a5df9

