
Standard DAQ client

Python client and CLI tools for interacting with Standard DAQ.

Install with Pip:

pip install std_daq_client

The only dependency is the requests library.

Getting started

Python client

from std_daq_client import StdDaqClient

rest_server_url = 'http://localhost:5000'
client = StdDaqClient(url_base=rest_server_url)

client.get_status()             # Current DAQ and acquisition state
client.get_config()             # Currently loaded DAQ configuration
client.set_config(daq_config={'bit_depth': 16, 'writer_user_id': 0})
client.get_logs(n_last_logs=5)  # Last 5 acquisition log entries
client.get_stats()              # Data flow statistics
client.get_deployment_status()
client.start_writer_async({'output_file': '/tmp/output.h5', 'n_images': 10})
client.start_writer_sync({'output_file': '/tmp/output.h5', 'n_images': 10})
client.stop_writer()            # Interrupt the current acquisition

CLI interface

  • std_cli_get_status
  • std_cli_get_config
  • std_cli_set_config [config_file]
  • std_cli_get_logs
  • std_cli_get_stats
  • std_cli_get_deploy_status
  • std_cli_write_async [output_file] [n_images]
  • std_cli_write_sync [output_file] [n_images]
  • std_cli_write_stop

All CLI tools accept the --url_base parameter, which points the client at the correct API base URL. Contact your DAQ integrator to find out this address.

Redis interface

System state is also available on Redis in the form of Redis Streams. Each Redis instance represents one DAQ instance. Contact your DAQ integrator for the host and port on which Redis is running; generally this is the same network interface as the REST API, on the default Redis port (6379).

Currently available streams:

  • daq:status (DAQ status updates at 1 Hz; DAQ status JSON)
  • daq:config (DAQ config, pushed on change; DAQ config JSON)
  • daq:log (DAQ logs about created files; DAQ logs JSON)
  • daq:stat (DAQ performance statistics; DAQ stats JSON)

The UTF-8 encoded JSON object is stored in the b'json' field of each stream entry.

Example of how to access the daq:status stream:

import json
from redis import Redis

redis = Redis()

# Start listening for new statuses.
last_status_id = '$'
while True:
    # Block until a new status is available.
    response = redis.xread({"daq:status": last_status_id}, block=0)
    # response format: [(stream_name, [(entry_id, fields), ...])]
    entries = response[0][1]
    last_status_id = entries[-1][0]
    # Decode the status from the 'json' field.
    status = json.loads(entries[-1][1][b'json'])

    print(status)

Interface objects

Every call returns a dictionary. If there is a state or logic problem with your request, a StdDaqAdminException is raised; any other failure raises a RuntimeError.

The returned objects and their fields are described below.

DAQ status

Represents the current state of the DAQ together with the state of the last acquisition (either completed or still running).

This object is returned by:

  • get_status
  • start_writer_async
  • start_writer_sync
  • stop_writer
  • Redis stream daq:status
{
  "acquisition": {                        // Stats about the currently running or last finished acquisition
    "info": {                             //   User request that generated this acquisition
      "n_images": 100,                    //     Number of images
      "output_file": "/tmp/test.h5",      //     Output file
      "run_id": 1684930336122153839       //     Run_id (request timestamp by default, generated by the API)
    },
    "message": "Completed.",              // User displayable message from the writer.
    "state": "FINISHED",                  // State of the acquisition.
    "stats": {                            // Stats of the acquisition
      "n_write_completed": 100,           //   Number of completed writes
      "n_write_requested": 100,           //   Number of requested writes from the driver
      "start_time": 1684930336.1252322,   //   Start time of request as seen by writer driver
      "stop_time": 1684930345.2723851     //   Stop time of request as seen by writer driver
    }
  },
  "state": "READY"                        // State of the writer: READY (to write), WRITING, UNKNOWN
}
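As an illustration of how this object can be consumed, the helper below (hypothetical, not part of std_daq_client) derives a progress fraction from the stats fields:

```python
def acquisition_progress(status):
    """Return the write progress of an acquisition as a fraction in [0.0, 1.0].

    `status` is a DAQ status dictionary of the shape shown above.
    """
    stats = status["acquisition"]["stats"]
    requested = stats["n_write_requested"]
    if not requested:
        return 0.0
    return stats["n_write_completed"] / requested


status = {"acquisition": {"stats": {"n_write_completed": 42, "n_write_requested": 100}}}
print(acquisition_progress(status))  # 0.42
```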

output_file

The path will always be an absolute path, or null when no acquisition has happened on the system yet.

state

State    Description
READY    DAQ is ready to start writing.
WRITING  DAQ is writing at the moment. Wait for it to finish or call Stop to interrupt.
UNKNOWN  The DAQ is in an unknown state (usually after reboot). Call Stop to reset.

When the state of the writer is READY, the writer can receive the next write request. Otherwise, the request will be rejected. A Stop request can always be sent and will reset the writer status to READY (use the Stop request when the writer state is UNKNOWN to try to reset it).
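This accept/reject rule can be sketched as a small decision helper (the function name and its return values are illustrative, not part of the client API):

```python
def next_action(writer_state):
    """Suggest a client action for a given writer state."""
    if writer_state == "READY":
        return "start_writer"   # a new write request will be accepted
    if writer_state == "WRITING":
        return "wait_or_stop"   # wait for completion, or send Stop to interrupt
    if writer_state == "UNKNOWN":
        return "stop_to_reset"  # a Stop request resets the writer to READY
    raise ValueError(f"Unexpected writer state: {writer_state!r}")


print(next_action("UNKNOWN"))  # stop_to_reset
```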

acquisition/state

State             Description
FINISHED          The acquisition has finished successfully.
FAILED            The acquisition has failed.
WRITING           Currently receiving and writing images.
WAITING_IMAGES    Writer is ready and waiting for images.
ACQUIRING_IMAGES  DAQ is receiving images but writer is not writing them yet.
FLUSHING_IMAGES   All needed images acquired, writer is flushing the buffer.

In case of a FAILED acquisition, the acquisition/message will be set to the error that caused it to fail.

acquisition/message

Message       Description
Completed.    The acquisition has written all the images.
Interrupted.  The acquisition was interrupted before it acquired all the images.
ERROR:...     An error happened during the acquisition.

In case of ERROR, the message will reflect what caused the acquisition to fail.
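A minimal sketch for mapping the message to a coarse outcome (the helper name and return values are illustrative):

```python
def classify_message(message):
    """Map an acquisition message to a coarse outcome string."""
    if message == "Completed.":
        return "success"
    if message == "Interrupted.":
        return "interrupted"
    if message.startswith("ERROR:"):
        return "error"
    return "unknown"


print(classify_message("Completed."))  # success
```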

acquisition/stats/start_time, acquisition/stats/stop_time

All timestamps are Unix timestamps generated on the DAQ machine and are not correlated with external sources (clock skew can be up to hundreds of milliseconds). Either timestamp can be null (when no acquisition has happened on the system yet).
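Assuming stats dictionaries of the shape shown above, a duration helper that tolerates the null case might look like this (hypothetical, not part of std_daq_client):

```python
def acquisition_duration(stats):
    """Return the acquisition duration in seconds, or None if unknown.

    Either timestamp can be null (None) when no acquisition has run yet.
    """
    start, stop = stats.get("start_time"), stats.get("stop_time")
    if start is None or stop is None:
        return None
    return stop - start


print(acquisition_duration({"start_time": 1684930336.1252322,
                            "stop_time": 1684930345.2723851}))
```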

DAQ config

Represents the DAQ configuration that is loaded by all services. This configuration describes the data source and the way the data source is processed by stream processors.

This object is returned by:

  • get_config
  • set_config
  • Redis stream daq:config
{
  "bit_depth": 16,                   // Bit depth of the image. Supported values are dependent on the detector.
  "detector_name": "EG9M",           // Name of the detector. Must be unique, used as internal DAQ identifier.
  "detector_type": "eiger",          // Type of detector. Currently supported: eiger, jungfrau, gigafrost, bsread
  "image_pixel_height": 3264,        // Assembled image height in pixels, including gap pixels.
  "image_pixel_width": 3106,         // Assembled image width in pixels, including gap pixels.
  "n_modules": 2,                    // Number of modules to assemble.
  "start_udp_port": 50000,           // Start UDP port where the detector is streaming modules.
  "writer_user_id": 12345,           // User_id under which the writer will create and write files.
  "module_positions": {              // Dictionary with mapping between module number -> image position.
    "0": [0, 3263, 513, 3008],       //   Format: [start_x, start_y, end_x, end_y]
    "1": [516, 3263, 1029, 3008]
  }
}

writer_user_id

Must be an integer representing the user_id. For e-accounts, it is simply the number after the 'e': for example, e12345 has a user_id of 12345. For other users you can find their user_id by running:

id -u [user_name]
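For e-accounts, the conversion can also be done directly in Python (a hypothetical convenience helper, not part of std_daq_client):

```python
def e_account_user_id(account):
    """Return the numeric user_id for an e-account name like 'e12345'."""
    if not (account.startswith("e") and account[1:].isdigit()):
        raise ValueError(f"Not an e-account name: {account!r}")
    return int(account[1:])


print(e_account_user_id("e12345"))  # 12345
```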

detector_type

Possible values: eiger, gigafrost, jungfrau, bsread

This is not something you usually change without hardware changes on the beamline.
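Before calling set_config, a client-side sanity check along these lines can catch obvious mistakes early (illustrative only; the server performs its own validation, and the exact rules may differ):

```python
SUPPORTED_DETECTOR_TYPES = {"eiger", "gigafrost", "jungfrau", "bsread"}


def validate_config(daq_config):
    """Return a list of problems found in a DAQ config dictionary."""
    errors = []
    if daq_config.get("detector_type") not in SUPPORTED_DETECTOR_TYPES:
        errors.append("detector_type must be one of: eiger, gigafrost, jungfrau, bsread")
    if not isinstance(daq_config.get("writer_user_id"), int):
        errors.append("writer_user_id must be an integer")
    if len(daq_config.get("module_positions", {})) != daq_config.get("n_modules"):
        errors.append("module_positions must have one entry per module")
    return errors
```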

DAQ statistics

Current data flow statistics of the DAQ.

This object is returned by:

  • get_stats
  • Redis stream daq:stat
{
  "detector": {                 // Detector statistics
    "bytes_per_second": 0.0,    //   Throughput
    "images_per_second": 0.0    //   Frequency
  },
  "writer": {                   // Writer statistics
    "bytes_per_second": 0.0,    //   Throughput
    "images_per_second": 0.0    //   Frequency
  }
}

The statistics are refreshed and aggregated at 1 Hz.
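For display purposes, the raw bytes_per_second values can be formatted with a small helper (hypothetical, not part of the client):

```python
def format_throughput(bytes_per_second):
    """Format a bytes/s value as a human readable string."""
    for unit in ("B/s", "kB/s", "MB/s", "GB/s"):
        if bytes_per_second < 1000.0:
            return f"{bytes_per_second:.1f} {unit}"
        bytes_per_second /= 1000.0
    return f"{bytes_per_second:.1f} TB/s"


print(format_throughput(1.5e9))  # 1.5 GB/s
```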

DAQ logs

Log of all acquisitions that produced a file. It is a list of acquisition objects in reverse chronological order.

This object is returned by:

  • get_logs
  • Redis stream daq:log
[
  {                                       
    "info": {                             //   User request that generated this acquisition
      "n_images": 100,                    //     Number of images
      "output_file": "/tmp/test.h5",      //     Output file
      "run_id": 1684930336122153839       //     Run_id (request timestamp by default, generated by the API)
    },
    "message": "Completed.",              // User displayable message from the writer.
    "state": "FINISHED",                  // Final state of the acquisition.
    "stats": {                            // Stats of the acquisition
      "n_write_completed": 100,           //   Number of completed writes
      "n_write_requested": 100,           //   Number of requested writes
      "start_time": 1684930336.1252322,   //   Start time of request as seen by writer driver
      "stop_time": 1684930345.2723851     //   Stop time of request as seen by writer driver
    }
  },
  { ... }
]

Acquisitions where the file could not be created, or where another error occurred, are not recorded in the acquisition log.
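Since the log is a plain list of acquisition objects, filtering it is straightforward; for example, a hypothetical helper that keeps only failed acquisitions (the sample data below is illustrative):

```python
def failed_acquisitions(logs):
    """Return only the log entries whose final state is FAILED.

    `logs` is the list returned by get_logs(), newest first.
    """
    return [entry for entry in logs if entry["state"] == "FAILED"]


logs = [
    {"state": "FINISHED", "message": "Completed."},
    {"state": "FAILED", "message": "ERROR: could not open output file."},
]
print(failed_acquisitions(logs))
```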
