Standard DAQ client

Python client and CLI tools for interacting with Standard DAQ.

Install with pip:

pip install std_daq_client

The only dependency is the requests library.

Getting started

Python client

from std_daq_client import StdDaqClient

rest_server_url = 'http://localhost:5000'
client = StdDaqClient(url_base=rest_server_url)

client.get_status()             # Current DAQ and acquisition state.
client.get_config()             # Currently loaded DAQ config.
client.set_config(daq_config={'bit_depth': 16, 'writer_user_id': 0})  # Update the DAQ config.
client.get_logs(n_last_logs=5)  # Log of the last 5 acquisitions.
client.get_stats()              # Current data flow statistics.
client.get_deployment_status()  # Status of the DAQ deployment.

client.start_writer_async({'output_file': '/tmp/output.h5', 'n_images': 10})  # Returns immediately.
client.start_writer_sync({'output_file': '/tmp/output.h5', 'n_images': 10})   # Blocks until the acquisition finishes.
client.stop_writer()            # Interrupt the writer / reset its state.
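
For example, a minimal sketch that starts an acquisition without blocking and then polls the status until the writer is READY again (state values as documented below; the URL is illustrative):

import time
from std_daq_client import StdDaqClient

client = StdDaqClient(url_base='http://localhost:5000')

# Start the acquisition; the call returns immediately.
client.start_writer_async({'output_file': '/tmp/output.h5', 'n_images': 10})

# Poll until the writer has finished and is READY for the next request.
while client.get_status()['state'] != 'READY':
    time.sleep(1)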

CLI interface

  • std_cli_get_status
  • std_cli_get_config
  • std_cli_set_config [config_file]
  • std_cli_get_logs
  • std_cli_get_stats
  • std_cli_get_deploy_status
  • std_cli_write_async [output_file] [n_images]
  • std_cli_write_sync [output_file] [n_images]
  • std_cli_write_stop

All CLI tools accept the --url_base parameter, which points the client to the correct API base URL. Contact your DAQ integrator to find out this address.
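
For example (the address shown is illustrative; use the one provided by your integrator):

std_cli_get_status --url_base http://localhost:5000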

Redis interface

System state is also available on Redis, in the form of Redis Streams. Each Redis instance represents one DAQ instance. Contact your DAQ integrator to find out the host and port on which Redis is running; generally this is the same network interface as the REST API, on the default Redis port (6379).

Currently available streams:

  • daq:status (DAQ status updates at 1 Hz; DAQ status JSON)
  • daq:config (DAQ config, pushed on change; DAQ config JSON)
  • daq:log (DAQ logs about created files; DAQ logs JSON)
  • daq:stat (DAQ performance statistics; DAQ stats JSON)

The JSON object encoded in UTF-8 is stored in the b'json' field.

Example of how to access the status stream:

import json
from redis import Redis

redis = Redis()

# Start listening for new statuses ('$' means only entries added from now on).
last_status_id = '$'
while True:
    # Block until at least one new entry is available on the stream.
    response = redis.xread({"daq:status": last_status_id}, block=0)

    # response format: [(stream_name, [(entry_id, fields), ...])]
    for entry_id, fields in response[0][1]:
        last_status_id = entry_id
        # Decode the status from the UTF-8 encoded b'json' field.
        status = json.loads(fields[b'json'])
        print(status)
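
Past entries can also be read from a stream without blocking for new ones. For example, a sketch that fetches the five most recent entries from the daq:log stream:

import json
from redis import Redis

redis = Redis()

# xrevrange returns entries newest-first; count limits how many are fetched.
for entry_id, fields in redis.xrevrange("daq:log", count=5):
    print(json.loads(fields[b'json']))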

Interface objects

Every call returns a dictionary. If there is a state or logic problem with your request, an instance of StdDaqAdminException is raised; any other failure raises a RuntimeError.

Below we describe each returned object and its fields.

DAQ status

Represents the current state of the DAQ, together with the state of the last acquisition (either completed or still running).

This object is returned by:

  • get_status
  • start_writer_async
  • start_writer_sync
  • stop_writer
  • Redis stream daq:status
{
  "acquisition": {                        // Stats about the currently running or last finished acquisition
    "info": {                             //   User request that generated this acquisition
      "n_images": 100,                    //     Number of images
      "output_file": "/tmp/test.h5",      //     Output file
      "run_id": 1684930336122153839       //     Run_id (request timestamp by default, generated by the API)
    },
    "message": "Completed.",              // User displayable message from the writer.
    "state": "FINISHED",                  // State of the acquisition.
    "stats": {                            
      "n_write_completed": 100,           //   Number of completed writes
      "n_write_requested": 100,           //   Number of requested writers from the driver
      "start_time": 1684930336.1252322,   //   Start time of request as seen by writer driver
      "stop_time": 1684930345.2723851     //   Stop time of request as seen by writer driver
    }
  },
  "state": "READY"                        // State of the writer: READY (to write), WRITING
}

output_file

The path will always be an absolute path, or null when no acquisition has happened on the system yet.

state

State     Description
READY     DAQ is ready to start writing.
WRITING   DAQ is writing at the moment. Wait for it to finish or call Stop to interrupt.
UNKNOWN   The DAQ is in an unknown state (usually after a reboot). Call Stop to reset.

When the state of the writer is READY, it can accept the next write request; otherwise the request will be rejected. A Stop request can always be sent and resets the writer status to READY (use Stop when the writer state is UNKNOWN to try to reset it).
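
A minimal recovery sketch based on this behavior (the URL is illustrative):

from std_daq_client import StdDaqClient

client = StdDaqClient(url_base='http://localhost:5000')

# Stop resets the writer back to READY.
if client.get_status()['state'] == 'UNKNOWN':
    client.stop_writer()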

acquisition/state

State             Description
FINISHED          The acquisition has finished successfully.
FAILED            The acquisition has failed.
WRITING           Currently receiving and writing images.
WAITING_IMAGES    Writer is ready and waiting for images.
ACQUIRING_IMAGES  DAQ is receiving images but the writer is not writing them yet.
FLUSHING_IMAGES   All needed images acquired; the writer is flushing the buffer.

In case of a FAILED acquisition, the acquisition/message will be set to the error that caused it to fail.

acquisition/message

Message       Description
Completed.    The acquisition has written all the images.
Interrupted.  The acquisition was interrupted before it acquired all the images.
ERROR:...     An error happened during the acquisition.

In case of ERROR, the message will reflect what caused the acquisition to fail.
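
For example, a sketch that surfaces the failure reason (field names as documented above; the URL is illustrative):

from std_daq_client import StdDaqClient

client = StdDaqClient(url_base='http://localhost:5000')

acquisition = client.get_status()['acquisition']
if acquisition['state'] == 'FAILED':
    print('Acquisition failed:', acquisition['message'])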

acquisition/stats/start_time, acquisition/stats/stop_time

All timestamps are Unix timestamps generated on the DAQ machine and are not strictly correlated with external clock sources (clock skew can be up to hundreds of milliseconds). Any of the timestamps can be null (when no acquisition has happened on the system yet).
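
The acquisition duration can be derived from these two fields. A sketch (null timestamps arrive as None in Python; the URL is illustrative):

from std_daq_client import StdDaqClient

client = StdDaqClient(url_base='http://localhost:5000')

stats = client.get_status()['acquisition']['stats']
if stats['start_time'] is not None and stats['stop_time'] is not None:
    print(f"Acquisition took {stats['stop_time'] - stats['start_time']:.2f} seconds.")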

DAQ config

Represents the DAQ configuration that is loaded by all services. This configuration describes the data source and the way the data source is processed by stream processors.

This object is returned by:

  • get_config
  • set_config
  • Redis stream daq:config
{
  "bit_depth": 16,                   // Bit depth of the image. Supported values are dependent on the detector.
  "detector_name": "EG9M",           // Name of the detector. Must be unique, used as internal DAQ identifier.
  "detector_type": "eiger",          // Type of detector. Currently supported: eiger, jungfrau, gigafrost, bsread
  "image_pixel_height": 3264,        // Assembled image height in pixels, including gap pixels.
  "image_pixel_width": 3106,         // Assembled image width in pixels, including gap pixels.
  "n_modules": 2,                    // Number of modules to assemble.
  "start_udp_port": 50000,           // Start UDP port where the detector is streaming modules.
  "writer_user_id": 12345,           // User_id under which the writer will create and write files.
  "module_positions": {              // Dictionary with mapping between module number -> image position.
    "0": [0, 3263, 513, 3008 ],      //   Format: [start_x, start_y, end_x, end_y]
    "1": [516, 3263, 1029, 3008 ]
  }
}

writer_user_id

Must be an integer representing the user_id. For e-accounts it's simply the number after the 'e': for example, e12345 has a user_id of 12345. For other users, you can find their user_id by running:

id -u [user_name]
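
Once the user_id is known, it can be applied via set_config (a sketch; the URL is illustrative):

from std_daq_client import StdDaqClient

client = StdDaqClient(url_base='http://localhost:5000')
client.set_config(daq_config={'writer_user_id': 12345})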

detector_type

Possible values: eiger, gigafrost, jungfrau, bsread

This is not something you usually change without hardware changes on the beamline.

DAQ statistics

Current data flow statistics of the DAQ.

This object is returned by:

  • get_stats
  • Redis stream daq:stat
{
  "detector": {                 // Detector statistics
    "bytes_per_second": 0.0,    //   Throughput
    "images_per_second": 0.0    //   Frequency
  },
  "writer": {                   // Writer statistics
    "bytes_per_second": 0.0,    //   Throughput
    "images_per_second": 0.0    //   Frequency
  }
}

The statistics are refreshed and aggregated at 1 Hz.
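
A sketch that follows the writer throughput over the REST API (the MB/s conversion is added here for readability; the URL is illustrative):

import time
from std_daq_client import StdDaqClient

client = StdDaqClient(url_base='http://localhost:5000')

while True:
    writer = client.get_stats()['writer']
    print(f"writer: {writer['bytes_per_second'] / 1e6:.2f} MB/s, "
          f"{writer['images_per_second']:.1f} images/s")
    time.sleep(1)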

DAQ logs

Log of all acquisitions that produced a file. It is a list of acquisition objects in reverse chronological order.

This object is returned by:

  • get_logs
  • Redis stream daq:log
[
  {                                       
    "info": {                             //   User request that generated this acquisition
      "n_images": 100,                    //     Number of images
      "output_file": "/tmp/test.h5",      //     Output file
      "run_id": 1684930336122153839       //     Run_id (request timestamp by default, generated by the API)
    },
    "message": "Completed.",              // User displayable message from the writer.
    "state": "FINISHED",                  // Final state of the acquisition.
    "stats": {                            // Stats of the acquisition
      "n_write_completed": 100,           //   Number of completed writes
      "n_write_requested": 100,           //   Number of requested writers
      "start_time": 1684930336.1252322,   //   Start time of request as seen by writer driver
      "stop_time": 1684930345.2723851     //   Stop time of request as seen by writer driver
    }
  },
  { ... }
]

Acquisitions that did not produce a file (for example because the file could not be created or another error occurred) are not recorded in the acquisition log.
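
For example, a sketch that lists the last five acquisitions and their output files (field names as documented above; the URL is illustrative):

from std_daq_client import StdDaqClient

client = StdDaqClient(url_base='http://localhost:5000')

for acquisition in client.get_logs(n_last_logs=5):
    print(acquisition['state'], acquisition['info']['output_file'])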
