
Python client for standard-daq


Standard DAQ client

Python client and CLI tools for interacting with Standard DAQ.

Install with pip:

pip install std_daq_client

The only dependency is the requests library.

Getting started

Python client

from std_daq_client import StdDaqClient
rest_server_url = 'http://localhost:5000'
client = StdDaqClient(url_base=rest_server_url)

client.get_status()
client.get_config()
client.set_config(daq_config={'bit_depth': 16, 'writer_user_id': 0})
client.get_logs(n_last_logs=5)
client.get_stats()
client.get_deployment_status()
client.start_writer_async({'output_file': '/tmp/output.h5', 'n_images': 10})
client.start_writer_sync({'output_file': '/tmp/output.h5', 'n_images': 10})
client.stop_writer()

CLI interface

  • std_cli_get_status
  • std_cli_get_config
  • std_cli_set_config [config_file]
  • std_cli_get_logs
  • std_cli_get_stats
  • std_cli_get_deploy_status
  • std_cli_write_async [output_file] [n_images]
  • std_cli_write_sync [output_file] [n_images]
  • std_cli_write_stop

All CLI tools accept the --url_base parameter, which points the client at the correct API base URL. Contact your DAQ integrator to find out this address.

Redis interface

Status streams are also available on Redis in the form of Redis PUB/SUB channels. The client does not communicate with Redis for you, but it can parse Redis responses into standard objects. We recommend using these parsers, as the format in which data is streamed on Redis can change: it is considered an implementation detail and not part of the public interface.

Interface objects

Every call returns a dictionary. In case of state or logic problems with your request, an instance of StdDaqAdminException is raised. For everything else, an instance of RuntimeError is raised.

Below we describe the returned objects and their fields.

DAQ status

Represents the current state of the DAQ together with the state of the last acquisition (either completed or still running).

This object is returned by:

  • get_status
  • start_writer_async
  • start_writer_sync
  • stop_writer
{
  "acquisition": {                        // Stats about the currently running or last finished acquisition
    "info": {                             //   User request that generated this acquisition
      "n_images": 100,                    //     Number of images
      "output_file": "/tmp/test.h5",      //     Output file
      "run_id": 1684930336122153839       //     Run_id (request timestamp by default, generated by the API)
    },
    "message": "Completed.",              // User displayable message from the writer.
    "state": "FINISHED",                  // State of the acquisition.
    "stats": {                            // Stats of the acquisition
      "n_write_completed": 100,           //   Number of completed writes
      "n_write_requested": 100,           //   Number of writes requested by the driver
      "start_time": 1684930336.1252322,   //   Start time of request as seen by writer driver
      "stop_time": 1684930345.2723851     //   Stop time of request as seen by writer driver
    }
  },
  "state": "READY"                        // State of the writer: READY, WRITING, or UNKNOWN
}
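As an illustration, the stats fields of the status dictionary above can be combined into a simple progress summary. This is a minimal sketch; the helper name `acquisition_progress` is not part of the library, and the sample values are taken from the example response above.

```python
def acquisition_progress(status):
    """Summarize the acquisition part of a DAQ status dictionary."""
    acq = status["acquisition"]
    stats = acq["stats"]
    completed = stats["n_write_completed"]
    requested = stats["n_write_requested"]
    fraction = completed / requested if requested else 0.0
    duration = None
    if stats["start_time"] is not None and stats["stop_time"] is not None:
        duration = stats["stop_time"] - stats["start_time"]
    return {"state": acq["state"], "fraction": fraction, "duration_s": duration}

# Example status in the shape returned by get_status() (values from above).
status = {
    "acquisition": {
        "info": {"n_images": 100, "output_file": "/tmp/test.h5",
                 "run_id": 1684930336122153839},
        "message": "Completed.",
        "state": "FINISHED",
        "stats": {"n_write_completed": 100, "n_write_requested": 100,
                  "start_time": 1684930336.1252322,
                  "stop_time": 1684930345.2723851},
    },
    "state": "READY",
}
print(acquisition_progress(status))
```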

output_file

The path will always be an absolute path or null (when no acquisition happened on the system yet).

state

State Description
READY DAQ is ready to start writing.
WRITING DAQ is writing at the moment. Wait for it to finish or call Stop to interrupt.
UNKNOWN The DAQ is in an unknown state (usually after reboot). Call Stop to reset.

When the state of the writer is READY, the writer can receive the next write request. Otherwise, the request will be rejected. A Stop request can always be sent and will reset the writer status to READY (use the Stop request when the writer state is UNKNOWN to try to reset it).
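A common pattern is to wait for the writer to become READY before sending the next write request. The sketch below is not part of the library; it takes any zero-argument status callable (for example `client.get_status`), so the timeout and poll interval are illustrative defaults.

```python
import time

def wait_until_ready(get_status, timeout_s=30.0, poll_s=0.5):
    """Poll a status callable until the writer state is READY.

    `get_status` is any zero-argument callable returning a DAQ status
    dictionary. Raises TimeoutError if the writer does not become READY
    within `timeout_s` seconds.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status["state"] == "READY":
            return status
        time.sleep(poll_s)
    raise TimeoutError("writer did not reach READY state in time")
```

Usage would be `wait_until_ready(client.get_status)` before calling `start_writer_async`; if the state stays UNKNOWN, call `stop_writer` first to reset it.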

acquisition/state

State Description
FINISHED The acquisition has finished successfully.
FAILED The acquisition has failed.
WRITING Currently receiving and writing images.
WAITING_IMAGES Writer is ready and waiting for images.
ACQUIRING_IMAGES DAQ is receiving images but writer is not writing them yet.
FLUSHING_IMAGES All needed images acquired, writer is flushing the buffer.

In case of a FAILED acquisition, acquisition/message will be set to the error that caused the failure.

acquisition/message

Message Description
Completed. The acquisition has written all the images.
Interrupted. The acquisition was interrupted before it acquired all the images.
ERROR:... An error happened during the acquisition.

In case of ERROR, the message will reflect what caused the acquisition to fail.
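Putting the state and message fields together, a client can classify an acquisition outcome as follows. This helper (`describe_outcome`) and the sample error message are hypothetical, shown only to illustrate the documented state/message semantics.

```python
def describe_outcome(acquisition):
    """Return a one-line human-readable outcome for an acquisition entry."""
    state = acquisition["state"]
    message = acquisition["message"]
    if state == "FINISHED":
        return "ok: all images written"
    if state == "FAILED":
        # For FAILED acquisitions the message carries the error cause.
        return f"failed: {message}"
    return f"in progress ({state})"

# Hypothetical failed acquisition entry:
print(describe_outcome({"state": "FAILED", "message": "ERROR: disk full"}))
```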

acquisition/stats/start_time, acquisition/stats/stop_time

All timestamps are Unix timestamps generated on the DAQ machine and are not synchronized with external clock sources (clock skew can be up to hundreds of milliseconds). Either timestamp can be null (when no acquisition has happened on the system yet).

DAQ config

Represents the DAQ configuration that is loaded by all services. This configuration describes the data source and the way the data source is processed by stream processors.

This object is returned by:

  • get_config
  • set_config
{
  "bit_depth": 16,                   // Bit depth of the image. Supported values are dependent on the detector.
  "detector_name": "EG9M",           // Name of the detector. Must be unique, used as internal DAQ identifier.
  "detector_type": "eiger",          // Type of detector. Currently supported: eiger, jungfrau, gigafrost, bsread
  "image_pixel_height": 3264,        // Assembled image height in pixels, including gap pixels.
  "image_pixel_width": 3106,         // Assembled image width in pixels, including gap pixels.
  "n_modules": 2,                    // Number of modules to assemble.
  "start_udp_port": 50000,           // Start UDP port where the detector is streaming modules.
  "writer_user_id": 12345,           // User_id under which the writer will create and write files.
  "module_positions": {              // Dictionary with mapping between module number -> image position.
    "0": [0, 3263, 513, 3008],       //   Format: [start_x, start_y, end_x, end_y]
    "1": [516, 3263, 1029, 3008]
  }
}

writer_user_id

Must be an integer representing the user_id. For e-accounts, it's simply the number after the 'e'. For example e12345 has a user_id of 12345. For other users you can find out their user_id by running:

id -u [user_name]
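The same lookup can be done from Python with the standard pwd module (POSIX only). This is a convenience sketch, not part of the std_daq_client API; the e-account name in the comment is a made-up example.

```python
import pwd

def user_id(user_name: str) -> int:
    """Look up the numeric user_id for a user name (equivalent to `id -u`)."""
    return pwd.getpwnam(user_name).pw_uid

# Example: resolve a name before using it as writer_user_id in set_config.
# user_id("e12345")  # hypothetical e-account; would return 12345
```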

detector_type

Possible values: eiger, gigafrost, jungfrau, bsread

This is not something you usually change without hardware changes on the beamline.

DAQ statistics

Current data flow statistics of the DAQ.

{
  "detector": {                 // Detector statistics
    "bytes_per_second": 0.0,    //   Throughput
    "images_per_second": 0.0    //   Frequency
  },
  "writer": {                   // Writer statistics
    "bytes_per_second": 0.0,    //   Throughput
    "images_per_second": 0.0    //   Frequency
  }
}

The statistics are refreshed and aggregated at 1 Hz.
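For display purposes, the raw bytes_per_second values are often converted to human-readable units. The formatting helper below is a sketch (not part of the library), applied to a sample stats dictionary in the shape shown above.

```python
def format_rate(bytes_per_second: float) -> str:
    """Render a throughput value in a human-readable decimal unit."""
    units = ["B/s", "kB/s", "MB/s", "GB/s"]
    value = float(bytes_per_second)
    for unit in units:
        if value < 1000.0 or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= 1000.0

# Sample stats dictionary in the documented shape (values are made up).
stats = {"detector": {"bytes_per_second": 2.5e8, "images_per_second": 100.0},
         "writer": {"bytes_per_second": 2.5e8, "images_per_second": 100.0}}
for component, values in stats.items():
    print(component, format_rate(values["bytes_per_second"]))
```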

DAQ logs

Log of all acquisitions that produced a file. It is a list of acquisition objects in reverse chronological order.

[
  {                                       
    "info": {                             //   User request that generated this acquisition
      "n_images": 100,                    //     Number of images
      "output_file": "/tmp/test.h5",      //     Output file
      "run_id": 1684930336122153839       //     Run_id (request timestamp by default, generated by the API)
    },
    "message": "Completed.",              // User displayable message from the writer.
    "state": "FINISHED",                  // Final state of the acquisition.
    "stats": {                            // Stats of the acquisition
      "n_write_completed": 100,           //   Number of completed writes
      "n_write_requested": 100,           //   Number of requested writes
      "start_time": 1684930336.1252322,   //   Start time of request as seen by writer driver
      "stop_time": 1684930345.2723851     //   Stop time of request as seen by writer driver
    }
  },
  { ... }
]

If the file could not be created, or another error prevented the acquisition from producing a file, the acquisition is not recorded in the log.
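Since get_logs returns a plain list of acquisition dictionaries, it is straightforward to filter it, for example to collect failed runs. The helper name `failed_runs` is hypothetical; the sample list below uses the documented shape.

```python
def failed_runs(logs):
    """Return (run_id, message) for every FAILED acquisition in a log list."""
    return [(entry["info"]["run_id"], entry["message"])
            for entry in logs if entry["state"] == "FAILED"]

# Sample log list in the documented shape (values from the example above).
logs = [
    {"info": {"n_images": 100, "output_file": "/tmp/test.h5",
              "run_id": 1684930336122153839},
     "message": "Completed.", "state": "FINISHED",
     "stats": {"n_write_completed": 100, "n_write_requested": 100,
               "start_time": 1684930336.1252322,
               "stop_time": 1684930345.2723851}},
]
print(failed_runs(logs))
```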
