
A package used for downloading and processing files from multiple habitats.

Project description

Xtract SDK v0.0.7a6

Login: Creating an XtractClient object

First, we import the XtractClient class from the Xtract SDK

from xtract_sdk.client import XtractClient

Here we create an XtractClient object to request tokens from Globus Auth.

xtr = XtractClient(auth_scopes=[scope_1, ..., scope_n], force_login=False)

While additional auth scopes may be added with the auth_scopes argument, a number of default scopes are automatically requested by the system. These are:

  • openid: provides the username for identity.
  • search: interact with Globus Search.
  • petrel: read or write data on Petrel. Not needed if no data are going to Petrel.
  • transfer: needed to crawl the Globus endpoint and transfer metadata to its final location.
  • funcx_scope: needed to orchestrate the metadata extraction at the given funcX endpoint.

When force_login is True, you are taken through the full authorization flow again.
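
For instance, to rerun the full authorization flow while relying only on the default scopes, something like the following should work (a minimal sketch; omitting the auth_scopes argument entirely is an assumption here):

# Re-run the full auth flow, requesting only the default scopes
xtr = XtractClient(force_login=True)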

Defining endpoints: Creating an XtractEndpoint object

Endpoints in Xtract are the computing fabric that enables us to move files and apply extractors to them. To this end, an Xtract endpoint is the combination of the following two software endpoints:

  • Globus endpoints [required] enable us to access all file system metadata about files stored on an endpoint, and to transfer files between machines for more efficient processing.
  • FuncX endpoints [optional] are capable of remotely receiving extraction functions that can be applied to files on the Globus endpoint. Note that if an Xtract endpoint has no funcX endpoint, a file must be transferred to an endpoint with a valid funcX endpoint in order to have its metadata extracted.

In order to create an Xtract endpoint, we first import the XtractEndpoint class from the Xtract SDK

from xtract_sdk.endpoint import XtractEndpoint

Here we create two XtractEndpoint objects to be used later in a crawl, etc.

xep1 = XtractEndpoint(repo_type='globus',
                      globus_ep_id='aaaa-0000-3333',
                      funcx_ep_id='aaaa-0000-3333',
                      dirs=['str1', 'str2', ..., 'strn'], 
                      grouper='file_is_group',
                      local_mdata_path='/home/user/metadata')

xep2 = XtractEndpoint(repo_type='globus',
                      globus_ep_id='aaaa-0000-3333',
                      dirs=['str1', 'str2', ..., 'strn'], 
                      local_mdata_path='/home/user/metadata',
                      grouper='file_is_group')

The arguments are as follows:

  • repo_type: (str) at this point, only 'globus' is accepted. Google Drive and others will be made available at a later date.
  • globus_ep_id: (uuid str) the Globus endpoint ID.
  • funcx_ep_id: (uuid str) optional funcX endpoint ID.
  • dirs: (list of str) directory paths on the Globus endpoint where the data reside.
  • local_mdata_path: (str) directory path on the Globus endpoint where the xtraction metadata should go.
  • grouper: (str) grouping strategy for files.

Crawling

xtr.crawl([xep_1, ..., xep_n])

Where [xep_1, ..., xep_n] is a list of XtractEndpoint objects.

The crawl ID for each endpoint will be stored in the XtractClient object as a list, xtr.crawl_ids. Furthermore, each endpoint will be stored in the XtractClient object in a dictionary, cid_to_xep_map, where each crawl ID maps to the corresponding endpoint as its value.

Behind the scenes, this will scan a Globus directory breadth-first (using globus_ls), first extracting physical metadata such as path, size, and extension. Next, since the grouper we selected is 'file_is_group', the crawler will simply create n single-file groups.

The crawl is non-blocking, and the crawl_id here will be used to execute and monitor downstream extraction processes.
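
For example, after crawling the two endpoints defined earlier, the resulting crawl IDs and their mapping back to endpoints can be inspected (a short sketch using the attributes described above):

# Launch a non-blocking crawl of both endpoints
xtr.crawl([xep1, xep2])

# Each crawl ID maps back to the XtractEndpoint it belongs to
for crawl_id in xtr.crawl_ids:
    print(crawl_id, xtr.cid_to_xep_map[crawl_id])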

Getting Crawl status

crawl_statuses = xtr.get_crawl_status(crawl_ids=None)

We can get crawl status, seeing how many groups have been identified in the crawl. If xtr.crawl() has already been run, then xtr.get_crawl_status() will get the status of the IDs stored in xtr.crawl_ids. Otherwise, a list of crawl_ids may be given to xtr.get_crawl_status().

This will return a dictionary resembling:

{'crawl_id': String,
 'status': String,
 'message': "OK" if everything is fine, otherwise describes the error,
 'data': {'bytes_crawled': Integer, ..., 'files_crawled': Integer}}

Note that measuring the total number of files yet to crawl is impossible, as the BFS may not have discovered all files yet, and Globus does not yet have a feature for counting files across all directories and subdirectories. In other words, we know when the crawl is done, but we do not know how far along it is until we get there.

Warning: it currently takes up to 30 seconds for a crawl to start, due to container warming time.
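
As a quick sketch of reading these fields (assuming one status dictionary is returned per crawl ID, in the shape shown above):

crawl_statuses = xtr.get_crawl_status()

# Print progress for each crawl submitted earlier
for status in crawl_statuses:
    print(status['crawl_id'], status['status'], status['data']['files_crawled'])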

Crawl and wait

For ease of testing, we've implemented a crawl_and_wait functionality, which crawls the given endpoints and then prints the crawl status of each every two seconds until all have completed crawling. This can be used as follows:

xtr.crawl_and_wait([xep_1,...,xep_n])

Flushing Crawl metadata

xtr.flush_crawl_metadata(crawl_ids=None, first_n_files=100)

After running a crawl, we can use xtr.flush_crawl_metadata() to return a list of all metadata from the crawl.

As with .get_crawl_status(), if xtr.crawl() has already been run, then xtr.flush_crawl_metadata() will flush metadata for the IDs stored in xtr.crawl_ids. Otherwise, a list of crawl_ids may be given to xtr.flush_crawl_metadata().

Each time metadata is flushed, metadata is returned for first_n_files files; once flushed, that metadata cannot be flushed again.

Flushing crawl metadata will return a dictionary resembling:

{"crawl_id": String,
 "file_ls": List,
 "num_files": Integer,
 "queue_empty": Boolean}

Xtract-ing

Registering containers for Xtraction

xtr.register_containers(endpoint, container_path)

In order to perform an xtraction, we must have the requisite containers for each extractor that is to be used. After creating client and endpoint instances, containers must be registered for each endpoint, using .register_containers() as follows:

xtr.register_containers(xep1, container_path='/home/user/containers')
xtr.register_containers(xep2, container_path='/home/user/containers')

Where the container_path (str) argument should be the path to the xtraction containers on the Globus endpoint.

This can be executed regardless of crawl completion status.

Xtract

xtr.xtract()

The crawl method must have already been run, and an xtraction will be run for each endpoint that was given to crawl. xtract will return the HTTP response status code, which should be 200.
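
For example (a minimal sketch):

# Launch extraction for every crawled endpoint and check the response code
status_code = xtr.xtract()
assert status_code == 200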

Getting Xtract status

xtr.get_xtract_status()

The xtract method must have already been run, and this call will return a list of xtract statuses, one for each endpoint given to crawl.

Each status in the list is a dictionary resembling:

{'xtract_status': String,
 'xtract_counters': {'cumu_orch_enter': Integer,
                     'cumu_pulled': Integer,
                     'cumu_scheduled': Integer,
                     'cumu_to_schedule': Integer,
                     'flagged_unknown': Integer,
                     'fx': {'failed': Integer,
                            'pending': Integer,
                            'success': Integer}}}
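
As a sketch, one might poll these counters until funcX reports no pending tasks (assuming one status dictionary per crawled endpoint, as noted above):

import time

# Poll extraction progress until no funcX tasks remain pending
while True:
    statuses = xtr.get_xtract_status()
    pending = sum(s['xtract_counters']['fx']['pending'] for s in statuses)
    print(pending, 'extraction tasks still pending')
    if pending == 0:
        break
    time.sleep(5)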

Offload metadata

xtr.offload_metadata(dest_ep_id, dest_path="", timeout=600, delete_source=False)

The offload_metadata method can be used to transfer files between two endpoints, and is included in this SDK for the purpose of transferring metadata from xtraction. It takes the following arguments:

  • dest_ep_id: (str) the ID of the endpoint to which the files are being transferred.
  • dest_path: (optional str) the path on the destination endpoint where the files should go.
  • timeout: (optional int, default 600) how long the transfer should wait before giving up if unsuccessful.
  • delete_source: (optional boolean, default False) set to True if the source files should be deleted once the metadata transfer completes.

This method will transfer the metadata to a new folder (in the destination path, if supplied) named using the convention YYYY-MM-DD-hh:mm:ss. Calling the function will return the path to this folder on the destination endpoint.
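
A brief sketch (the destination endpoint ID and path below are placeholders):

# Move the xtracted metadata to an archival endpoint, keeping the source copy
dest_folder = xtr.offload_metadata(dest_ep_id='bbbb-1111-2222',
                                   dest_path='/project/metadata_archive',
                                   delete_source=False)

# dest_folder is the timestamped folder created on the destination endpoint,
# e.g. /project/metadata_archive/2023-01-31-12:00:00
print(dest_folder)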

Search: coming soon!

Downloaders: coming soon!
