larpix-control
Control the LArPix chip
Setup and installation
This code is intended to work on both Python 2.7+ and Python 3.6+.
Install larpix-control from pip with
pip install larpix-control
To return your namespace to the pre-larpix state, just
run pip uninstall larpix-control
. If you'd prefer to download the code
yourself, you can. Just run pip install .
from the root directory of
the repository.
Tests
You can run tests to convince yourself that the software works as expected. After pip installing this package, you can run the tests from the repository root directory with the simple command pytest.
You can read the tests to see examples of how to call all of the common functions.
File structure
The larpix package contains:
larpix
├── controller.py
├── timestamp.py
├── key.py
├── chip.py
├── quickstart.py
├── bitarrayhelper.py
├── packet
│ ├── packet_v2.py
│ ├── packet_v1.py
│ ├── packet_collection.py
│ ├── sync_packet.py
│ ├── trigger_packet.py
│ ├── message_packet.py
│ ├── timestamp_packet.py
│ └── __init__.py
├── io
│ ├── pacman_io.py
│ ├── serialport.py
│ ├── multizmq_io.py
│ ├── fakeio.py
│ ├── zmq_io.py
│ ├── io.py
│ └── __init__.py
├── __init__.py
├── configuration
│ ├── configuration_v2.py
│ ├── configuration_v1.py
│ ├── configuration.py
│ ├── configuration_lightpix_v1.py
│ └── __init__.py
├── format
│ ├── hdf5format.py
│ ├── pacman_msg_format.py
│ ├── message_format.py
│ └── __init__.py
├── logger
│ ├── h5_logger.py
│ ├── logger.py
│ ├── stdout_logger.py
│ └── __init__.py
├── configs
│ ├── __init__.py
│ ├── controller
│ │ ├── __init__.py
│ │ ├── network-3x3-tile-channel0.json
│ │ ├── network-3x3-tile-channel2.json
│ │ ├── network-3x3-tile-channel1.json
│ │ ├── lightpix_v1_example.json
│ │ ├── v2_example.json
│ │ ├── bare-die-v2-v1.0.0.json
│ │ ├── pcb-10_chip_info.json
│ │ ├── pcb-1_chip_info.json
│ │ ├── pcb-2_chip_info.json
│ │ ├── pcb-3_chip_info.json
│ │ ├── pcb-4_chip_info.json
│ │ ├── pcb-5_chip_info.json
│ │ └── pcb-6_chip_info.json
│ ├── io
│ │ ├── __init__.py
│ │ ├── default.json
│ │ ├── daq-srv1.json
│ │ ├── daq-srv2.json
│ │ ├── daq-srv3.json
│ │ ├── daq-srv4.json
│ │ ├── daq-srv5.json
│ │ ├── daq-srv6.json
│ │ ├── daq-srv7.json
│ │ ├── pacman.json
│ │ ├── pacman_loopback.json
│ │ └── loopback.json
│ └── chip
│ ├── __init__.py
│ ├── default_v2.json
│ ├── csa_bypass.json
│ ├── quiet.json
│ ├── default.json
│ ├── physics.json
│ └── default_lightpix_v1.json
├── serial_helpers
│ ├── analyzers.py
│ ├── dataformatter.py
│ ├── dataloader.py
│ ├── datalogger.py
│ └── __init__.py
└── larpix.py
scripts/
├── gen_controller_config.py
└── gen_hydra_simple.py
Minimal working example
So you're not a tutorials kind of person. Here's a minimal working example for you to play around with:
>>> from larpix import Controller, Packet_v2
>>> from larpix.io import FakeIO
>>> from larpix.logger import StdoutLogger
>>> controller = Controller()
>>> controller.io = FakeIO()
>>> controller.logger = StdoutLogger(buffer_length=0)
>>> controller.logger.enable()
>>> chip1 = controller.add_chip('1-1-2', version=2) # (access key)
>>> chip1.config.threshold_global = 25
>>> controller.write_configuration('1-1-2', chip1.config.register_map['threshold_global']) # chip key, register 64
[ Key: 1-1-2 | Chip: 2 | Upstream | Write | Register: 64 | Value: 25 | Parity: 1 (valid: True) ]
Record: [ Key: 1-1-2 | Chip: 2 | Upstream | Write | Register: 64 | Value: 25 | Parity: 1 (valid: True) ]
>>> packet = Packet_v2(b'\x02\x91\x15\xcd[\x07\x85\x00')
>>> packet_bytes = packet.bytes()
>>> pretend_input = ([packet], packet_bytes)
>>> controller.io.queue.append(pretend_input)
>>> controller.run(0.05, 'test run')
Record: [ Key: None | Chip: 2 | Downstream | Data | Channel: 5 | Timestamp: 123456789 | First packet: 0 | Dataword: 145 | Trigger: normal | Local FIFO ok | Shared FIFO ok | Parity: 0 (valid: True) ]
>>> print(controller.reads[0])
[ Key: None | Chip: 2 | Downstream | Data | Channel: 5 | Timestamp: 123456789 | Dataword: 145 | Trigger: normal | Local FIFO ok | Shared FIFO ok | Parity: 0 (valid: True) ]
Tutorial
This tutorial runs through how to use all of the main functionality of larpix-control.
To access the package contents, use one of the two following import
statements:
import larpix # use the larpix namespace
# or ...
from larpix import * # import all core larpix classes into the current namespace
The rest of the tutorial will assume you've imported all of the core larpix classes via a from larpix import * command.
Create a LArPix Controller
The LArPix Controller translates high-level ideas like "read configuration register 10" into communications to and from LArPix ASICs, and interprets the received data into a usable format.
Controller objects communicate with LArPix ASICs via an IO interface.
Currently available IO interfaces are SerialPort, ZMQ_IO and FakeIO. We'll work with FakeIO in this tutorial, but all the code will still work with properly initialized versions of the other IO interfaces.
Set things up with
from larpix.io import FakeIO
from larpix.logger import StdoutLogger
controller = Controller()
controller.io = FakeIO()
controller.logger = StdoutLogger(buffer_length=0)
controller.logger.enable()
The FakeIO object imitates a real IO interface for testing purposes.
It directs its output to stdout (i.e. it prints the output), and it
takes its input from a manually-updated queue. At the end of each
relevant section of the tutorial will be code for adding the expected
output to the queue. You'll have to refill the queue each time you run
the code.
Similarly, the StdoutLogger mimics the real logger interface for testing. It prints nicely formatted records of read/write commands to stdout every buffer_length packets. The logger interface requires enabling the logger before messages will be stored. Before ending the python session, every logger should be disabled to flush any remaining packets stored in the buffer.
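For example, here is a minimal sketch (using the controller, FakeIO and StdoutLogger set up above) of queueing one pretend packet, reading it back, and disabling the logger at the end of the session:
from larpix import Packet_v2
controller.io.queue.append(([Packet_v2()], b'\x00' * 8))  # (packets, bytestream) pretend response for the next read
controller.run(0.05, 'sketch run')  # reads and stores the queued packets, logging them as it goes
controller.logger.disable()  # stop/flush the StdoutLogger before quitting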
Set up LArPix Chips
Chip objects represent actual LArPix ASICs. For each ASIC you want to communicate with, create a LArPix Chip object and add it to the Controller.
chipid = 5
chip_key = '1-1-5'
chip5 = controller.add_chip(chip_key, version=2)
chip5 = controller[chip_key] # get chip object
chip5 = controller[1,1,5] # gets same chip object
The chip_key field specifies the necessary information for the controller.io object to route packets to/from the chip. The details of how this key maps to a physical chip are implemented separately for each larpix.io class.
The key itself consists of 3 1-byte integer values that represent the 3 low-level layers in larpix readout:
- the io group: this is the highest layer and represents a control system that communicates with multiple IO channels
- the io channel: this is the middle layer and represents a single MOSI/MISO pair
- the chip id: this is the lowest layer and represents a single chip on a MOSI/MISO network
If you want to interact with chip keys directly, you can instantiate one using a valid keystring (three 1-byte integers separated by dashes, e.g. '1-1-2'). Please note that the ids of 0, 1, and 255 are reserved for special functions.
from larpix import Key
example_key = Key(1,2,3)
You can grab relevant information from the key via a number of useful methods and attributes:
example_key.io_group # 1
example_key.io_channel # 2
example_key.chip_id # 3
example_key.to_dict() # returns a dict with the above keys / values
If you are using a Key in a script, we recommend that you generate the keys via the Key(<io_group>,<io_channel>,<chip_id>) method, which will protect against updates to the keystring formatting. You can read the docs to learn more about Key functionality.
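As a quick sketch (assuming, per the keystring description above, that Key also accepts a keystring and that the two construction forms compare equal):
from larpix import Key
key_a = Key('1-1-2')  # from a keystring
key_b = Key(1, 1, 2)  # from the three integer fields
key_a == key_b  # expected True: both refer to io group 1, io channel 1, chip id 2
str(key_a)  # expected '1-1-2' (the keystring form)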
Set up LArPix Hydra network
The controller object contains an internal structure representing the Hydra networks on each of the io channels. This structure can be accessed via controller.network and modified using the controller.add_network_node and controller.add_network_link methods. However, it can be a tedious and error-prone process to add each link to the network representation. So, there exists a friendlier configuration file that is used to generate these network links.
To load a network configuration into the controller:
controller.load('controller/v2_example.json')
print(controller.chips) # chips that have been loaded into controller
list(controller.network[1][1]['miso_ds'].edges) # all links contained in the miso_ds graph
list(controller.network[1][1]['miso_us'].nodes) # all nodes within the miso_us graph
list(controller.network[1][1]['mosi'].edges) # all links within the mosi graph
Each graph is represented by a networkx directed graph and can be examined and queried in that way. All edges point in the direction of data flow.
list(controller.network[1][1]['mosi'].in_edges(2)) # all links pointing to chip 2 in mosi graph
list(controller.network[1][1]['miso_ds'].successors(3)) # all chips receiving downstream data packets from chip 3
controller.network[1][1]['mosi'].edges[(3,2)]['uart'] # check the physical uart channel that chip 2 listens to chip 3 via
controller.network[1][1]['mosi'].nodes[2]['root'] # check if designated root chip
After loading the network into the controller, the init_network command automates the process of bringing up individual chips in the network.
controller.init_network(1,1) # issues packets required to initialize the 1,1 hydra network
print(controller['1-1-2'].config.chip_id)
print(controller['1-1-3'].config.enable_miso_downstream)
This issues configuration commands in the proper order so that upstream chips are configured before downstream chips. If you'd like to reset the network configuration,
controller.reset_network(1,1)
can be used to reverse the configuration commands issued with init_network. These processes are not "smart" in that they blindly issue config commands assuming the network is either fully configured or in a default state, so buyer beware.
The network initialization can be broken down into single steps by also passing along the chip id:
controller.init_network(1,1,2) # configures only chip 2
controller.init_network(1,1,3) # configures only chip 3
But this requires initializing the chips in the proper order. You can get the chip keys in order of their depth within the network via
controller.get_network_keys(1,1) # gets a list of chip keys starting at the root node and descending
controller.get_network_keys(1,1,root_first_traversal=False) # get list of chip keys starting at deepest chips and ascending
Adjust the configuration of the LArPix Chips
Each Chip object manages its own configuration in software. Configurations can be adjusted by name using attributes of the Chip's configuration:
chip5.config.threshold_global = 35 # entire register = 1 number
chip5.config.enable_periodic_reset = 1 # one bit as part of a register
chip5.config.channel_mask[20] = 1 # one bit per channel
Values are validated, and invalid values will raise exceptions.
Note: Changing the configuration of a Chip object does not change the configuration on the ASIC.
Once the configuration is set, the new values must be sent to the LArPix ASICs. There is an appropriate Controller method for that:
controller.write_configuration(chip_key) # send all registers
controller.write_configuration(chip_key, 32) # send only register 32
controller.write_configuration(chip_key, [32, 50]) # send registers 32 and 50
controller.write_configuration(chip_key, 'threshold_global') # send register for 'threshold_global'
Register addresses can be looked up using the configuration object:
threshold_global_reg = chip5.config.register_map['threshold_global']
And register names can be looked up by address:
threshold_global_name = chip5.config.register_map_inv[64]
For configurations which extend over multiple registers, the relevant attribute will end in _addresses. Certain configurations share a single register, whose attribute has all of the names in it. View the documentation or source code to find the name to look up. (Or look at the LArPix data sheet.)
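For instance, a short sketch using register_map to write all the registers behind a parameter that spans several of them (here channel_mask is assumed to span multiple registers on the v2 chip):
channel_mask_regs = chip5.config.register_map['channel_mask']  # the register addresses backing channel_mask
controller.write_configuration(chip_key, list(channel_mask_regs))  # write just those registers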
Reading the configuration from LArPix ASICs
The current configuration state of the LArPix ASICs can be requested by sending out "configuration read" requests using the Controller:
controller.read_configuration(chip_key)
The same variations to read only certain registers are implemented for reading as for writing.
The responses from the LArPix ASICs are stored for inspection. See the section on "Inspecting received data" for more.
FakeIO queue code:
packets = chip5.get_configuration_read_packets()
bytestream = b'bytes for the config read packets'
controller.io.queue.append((packets, bytestream))
Receiving data from LArPix ASICs
When it is first initialized, the LArPix Controller ignores and discards all data that it receives from LArPix. The Controller must be activated by calling start_listening(). All received data will then be accumulated in an implementation-dependent queue or buffer, depending on the IO interface used. To read the data from the buffer, call the controller's read() method, which returns both the raw bytestream received as well as a list of LArPix Packet objects which have been extracted from the bytestream. To stop listening for new data, call stop_listening(). Finally, to store the data in the controller object, call the store_packets method. All together:
controller.start_listening()
# Data arrives...
packets, bytestream = controller.read()
# More data arrives...
packets2, bytestream2 = controller.read()
controller.stop_listening()
message = 'First data arrived!'
message2 = 'More data arrived!'
controller.store_packets(packets, bytestream, message)
controller.store_packets(packets2, bytestream2, message2)
There is a common pattern for reading data, namely to start listening,
then check in periodically for new data, and then after a certain amount
of time has passed, stop listening and store all the data as one
collection. The method run(timelimit, message)
accomplishes just this.
duration = 10 # seconds
message = '10-second data run'
controller.run(duration, message)
FakeIO queue code for the first code block:
packets = [Packet_v2()] * 40
bytestream = b'bytes from the first set of packets'
controller.io.queue.append((packets, bytestream))
packets2 = [Packet_v2()] * 30
bytestream2 = b'bytes from the second set of packets'
controller.io.queue.append((packets2, bytestream2))
FakeIO queue code for the second code block:
packets = [Packet_v2()] * 5
bytestream = b'[bytes from read #%d] '
for i in range(100):
    controller.io.queue.append((packets, bytestream%i))
Inspecting received data
Once data is stored in the controller, it is available in the reads
attribute as a list of all data runs. Each element of the list is a
PacketCollection object, which functions like a list of Packet objects
each representing one LArPix packet.
PacketCollection objects can be indexed like a list:
run1 = controller.reads[0]
first_packet = run1[0] # Packet object
first_ten_packets = run1[0:10] # smaller PacketCollection object
first_packet_bits = run1[0, 'bits'] # string representation of bits in packet
first_ten_packet_bits = run1[0:10, 'bits'] # list of strings
PacketCollections can be printed to display the contents of the Packets they contain. To prevent endless scrolling, only the first ten and last ten packets are displayed, and the number of omitted packets is noted. To view the omitted packets, use a slice around the area of interest.
print(run1) # prints the contents of the packets
print(run1[10:30]) # prints 20 packets from the middle of the run
In interactive Python, returned objects are not printed, but rather their "representation" is printed (cf. the __repr__ method). The representation of PacketCollections is a listing of the number of packets, the "read id" (a.k.a. the run number), and the message associated with the PacketCollection when it was created.
Individual LArPix Packets
LArPix Packet objects represent individual LArPix UART packets. They have attributes which can be used to inspect or modify the contents of the packet.
packet = run1[0]
# all packets
packet.packet_type # unique in that it gives the bits representation
packet.chip_id # all other properties return Python numbers
packet.chip_key # key for association to a unique chip (can be None)
packet.parity
packet.downstream_marker
# data packets
packet.channel_id
packet.dataword
packet.timestamp
packet.trigger_type
packet.local_fifo
packet.shared_fifo
# config packets
packet.register_address
packet.register_data
Internally, packets are represented as an array of bits, and the different attributes use Python "properties" to seamlessly convert between the bits representation and a more intuitive integer representation. The bits representation can be inspected with the bits attribute.
Packet objects do not restrict you from adjusting an attribute for an
inappropriate packet type. For example, you can create a data packet and
then set packet.register_address = 5
. This will adjust the packet
bits corresponding to a configuration packet's "register_address"
region, which is probably not what you want for your data packet.
Packets have a parity bit which enforces odd parity, i.e. the sum of all the individual bits in a packet must be an odd number. The parity bit can be accessed as above using the parity attribute. The correct parity bit can be computed using compute_parity(), and the validity of a packet's parity can be checked using has_valid_parity(). When constructing a new packet, the correct parity bit can be assigned using assign_parity().
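A short sketch of building a new data packet by hand and handling its parity (this assumes the default all-zeros Packet_v2() is a data packet, i.e. packet type 0; the field values are just for illustration):
p = Packet_v2()  # an all-zeros packet (a data packet)
p.chip_id = 2
p.channel_id = 5
p.dataword = 145
p.assign_parity()  # set the parity bit so the whole packet has odd parity
p.has_valid_parity()  # True
p.compute_parity()  # the parity bit value implied by the other bits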
Individual packets can be printed to show a human-readable interpretation of the packet contents. The printed version adjusts its output based on the packet type, so a data packet will show the data word, timestamp, etc., while a configuration packet will show the register address and register data.
Like with PacketCollections, Packets also have a "representation" view based on the bytes that make up the packet. This can be useful for creating new packets since a Packet's representation is also a valid call to the Packet constructor. So the output from an interactive session can be copied as input or into a script to create the same packet.
With the v2 chip, more information about the internal fifo can be gathered by running with fifo diagnostics enabled on a given asic. In this case, the bits of each packet are to be interpreted differently. Each packet object can be set to be interpreted in this mode via the fifo_diagnostics_enabled flag. See the Packet_v2 documentation for more details.
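As a rough sketch (assuming the ASIC that produced the packet was actually running with fifo diagnostics on):
p = Packet_v2(b'\x02\x91\x15\xcd[\x07\x85\x00')
p.fifo_diagnostics_enabled = True  # reinterpret this packet's bits in fifo-diagnostics mode
print(p)  # parsed fields now follow the fifo-diagnostics interpretation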
Logging communications with LArPix ASICs using the HDF5Logger
To create a permanent record of communications with the LArPix ASICs, an HDF5Logger is used. To create a new logger:
from larpix.logger import HDF5Logger
controller.logger = HDF5Logger(filename=None, buffer_length=10000) # a filename of None uses the default filename formatting
controller.logger.enable() # starts tracking all communications
You can also initialize and enable the logger in one call by passing the enabled keyword argument (which defaults to False):
controller.logger = HDF5Logger(filename=None, enabled=True)
Now whenever you send or receive packets, they will be captured by the logger and added to the logger's buffer. Once buffer_length packets have been captured, the packets will be written out to the file. You can force the logger to dump the currently held packets at any time using HDF5Logger.flush():
controller.verify_configuration()
controller.logger.flush()
In the event that you want to temporarily stop tracking communications, the disable and enable commands do exactly what you think they might.
controller.logger.disable() # stop tracking
# any communication here is ignored
controller.logger.enable() # start tracking again
controller.logger.is_enabled() # returns True if tracking
Once you have finished your tests, be sure to disable the logger. If you do not, you will lose any data still in the buffer of the logger object. We strongly recommend wrapping logger code with a try, except statement if you can. Any remaining packets in the buffer are flushed to the file upon disabling.
controller.logger.disable()
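For example, a minimal sketch of one way to guarantee that flush (here using try/finally so the logger is disabled even if an exception is raised):
controller.logger = HDF5Logger(filename=None, enabled=True)
try:
    controller.run(10, 'logged run')
finally:
    controller.logger.disable()  # always flush the remaining buffer to the file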
Viewing data from the HDF5Logger
The HDF5Logger uses a format called LArPix+HDF5 v1.0 that is specified in the larpix.format.hdf5format module (and documentation starting in v2.3.0). That module contains a to_file method which is used internally by HDF5Logger and a from_file method that you can use to load the file contents back into LArPix Control. The LArPix+HDF5 format is a "plain HDF5" format that can be inspected with h5py or any language's HDF5 binding.
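For reference, a minimal sketch of loading a file back with from_file (treat the exact structure of the returned object as an assumption here):
from larpix.format.hdf5format import from_file
data = from_file('<filename>')  # parse the LArPix+HDF5 file back into larpix-control objects
packets = data['packets']  # assumed key holding the recovered Packet objects
print(packets[0])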
To open the HDF5 file from python
import h5py
datafile = h5py.File('<filename>')
Within the datafile there is one group ('_header') and two datasets ('packets' and 'messages'). The header group contains some useful meta information about when the datafile was created and the file format version number, stored as attributes.
list(datafile.keys()) # ['_header', 'messages', 'packets']
list(datafile['_header'].attrs) # ['created', 'modified', 'version']
The packets are stored sequentially as numpy mixed-type arrays within the rows of the HDF5 dataset. The columns refer to the elements of the numpy mixed-type array. The specifics of the data type and entries are set by the larpix.format.hdf5format.dtype object - see the larpix-control docs for more information.
You can inspect a packet as a tuple simply by accessing its respective position within the
HDF5 dataset.
raw_value = datafile['packets'][0] # e.g. (b'0-246', 3, 246, 1, 1, -1, -1, -1, -1, -1, -1, 0, 0)
raw_values = datafile['packets'][-100:] # last 100 packets in file
If you want to make use of numpy's mixed-type arrays, you can convert the raw values to the proper encoding by retrieving them as an array slice (of length 1, for example) via
packet_repr = raw_values[0:1] # slice with one element
packet_repr['chip_id'] # chip id for the packet, e.g. 246
packet_repr['dataword'] # list of ADC values for each packet
packet_repr.dtype # description of data type (with names of each column)
You can also view entire "columns" of data:
# all packets' ADC counts, including non-data packets
raw_values['dataword']
# Select based on data type using a numpy bool / "mask" array:
raw_values['dataword'][raw_values['packet_type'] == 0] # all data packets' ADC counts
h5py and numpy optimize the retrieval of data so you can read certain columns or rows without loading the entire data file into memory.
Don't forget to close the file when you are done. (Not necessary in interactive python sessions if you are about to quit.)
datafile.close()
Running with a PACMANv1r1 board (v2 asic)
Before you can configure the system, you need to generate a configuration file for the PACMAN_IO interface. This sets up the mapping from chip keys to the ip addresses of the physical devices. One example configuration is provided in larpix/configs/io/pacman.json, which assumes that you can perform hostname DNS resolution.
After powering up the PACMAN board, you can create a new PACMAN io object with
from larpix import Controller
from larpix.io import PACMAN_IO
controller = Controller()
controller.io = PACMAN_IO(config_filepath='<io config file path>')
controller.load('<controller config file path>')
controller.io.ping() # returns a dict of (io_group, ping_success)
To set the correct supply voltages
controller.io.set_vddd() # set default vddd (~1.8V)
controller.io.set_vdda() # set default vdda (~1.8V)
These automatically query the built-in ADCs and return the set voltage and current in mV and mA, respectively. And to power on the chips
controller.io.enable_tile()
which will enable the LDOs for VDDD/VDDA and the driver chips / FPGA outputs for IO.
To bring up the Hydra network (and work around the known bugs in v2), do the following:
# First bring up the network using as few packets as possible
controller.io.group_packets_by_io_group = False # this throttles the data rate to avoid FIFO collisions
for io_group, io_channels in controller.network.items():
for io_channel in io_channels:
controller.init_network(io_group, io_channel)
# Configure the IO for a slower UART and differential signaling
controller.io.double_send_packets = True # double up packets to avoid 512 bug when configuring
for io_group, io_channels in controller.network.items():
for io_channel in io_channels:
chip_keys = controller.get_network_keys(io_group,io_channel,root_first_traversal=False)
for chip_key in chip_keys:
controller[chip_key].config.clk_ctrl = 1
controller[chip_key].config.enable_miso_differential = [1,1,1,1]
controller.write_configuration(chip_key, 'enable_miso_differential')
controller.write_configuration(chip_key, 'clk_ctrl')
for io_group, io_channels in controller.network.items():
for io_channel in io_channels:
controller.io.set_uart_clock_ratio(io_channel, 4, io_group=io_group)
controller.io.double_send_packets = False
controller.io.group_packets_by_io_group = True
At this point, you can happily interface with the ASICs using everything you learned above. I also recommend you glance at the section below (Running with a Bern DAQ board), which describes some shortcut functions available in the Controller class. In particular, it is good practice to verify_configuration before proceeding with anything.
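For example (the shape of the mismatch dict is described in the "Check configurations" section below):
ok, diffs = controller.verify_configuration()  # returns (True, {}) when hardware and software configs agree
if not ok:
    print(diffs)  # mismatched registers, as {<register>: (<expected>, <received>), ...}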
Running with a Bern DAQ board (v1 asic)
Since you have completed the tutorial with the FakeIO class, you are now ready to interface with some LArPix ASICs. If you have a Bern DAQ v2-3 setup you can follow along with the rest of the tutorial.
Before you can configure the system, you will need to generate a configuration file for the ZMQ_IO or MultiZMQ_IO interface. This provides the mapping from chip keys to physical devices. In the case of the ZMQ interface, it maps io group #s to the IP address of the DAQ board. A number of example configurations are provided in the installation under larpix/configs/io/<config name>.json, which may work for your purposes. We recommend reading the docs about how to create one of these configuration files. By default the system looks for configuration files in the current working directory before looking for the installation files. If you only have one DAQ board on your network, you will likely load the io/daq-srv<#>.json configuration.
With the DAQ system up and running
>>> from larpix import Controller
>>> from larpix.io import ZMQ_IO
>>> controller = Controller()
>>> controller.io = ZMQ_IO(config_filepath='<path to config>')
>>> controller.load('controller/pcb-<#>_chip_info.json')
>>> controller.io.ping()
>>> for key,chip in controller.chips.items():
... chip.config.load('chip/quiet.json')
... print(key, chip.config)
... controller.write_configuration(key)
>>> controller.run(1,'checking the data rate')
>>> controller.reads[-1]
<PacketCollection with 0 packets, read_id 0, 'checking the data rate'>
This should give you a quiet state with no data packets. Occasionally, there can be a few packets left in one of the system buffers (LArPix, FPGA, DAQ server). A second run command should return without any new packets.
If you are using the v1.5 anode, you may need to reconfigure the miso/mosi mapping (since the miso/mosi pair for a daisy chain is not necessarily on a single channel). To do this, pass a miso_map or mosi_map to the ZMQ_IO object on initialization:
>>> controller.io = ZMQ_IO(config_filepath='<path to config>', miso_map={2:1}) # relabels packets received on channel 2 as packets from channel 1
Check configurations
If you are still receiving data, you can check that the hardware chip configurations match the software chip configurations with
>>> controller.verify_configuration()
(True, {})
If the configuration read packets don't match the software chip configuration, this will return
>>> controller.verify_configuration()
(False, {<register>: (<expected>, <received>), ...})
Missing packets will show up as
>>> controller.verify_configuration()
(False, {<register>: (<expected>, None), ...})
If your configurations match, and you still receive data then you are likely seeing some pickup on the sensor from the environment -- good luck!
Enable a single channel
>>> chip_key = '1-1-3'
>>> controller.disable() # mask off all channels
>>> controller.enable(chip_key, [0]) # enable channel 0 of chip
Set the global threshold of a chip
>>> controller.chips[chip_key].config.global_threshold = 40
>>> controller.write_configuration(chip_key)
>>> controller.verify_configuration(chip_key)
(True, {})
Inject a pulse into a specific channel
>>> controller.enable_testpulse(chip_key, [0]) # connect channel 0 to the test pulse circuit and initialize the internal DAC to 255
>>> controller.issue_testpulse(chip_key, 10) # inject a pulse of size 10DAC by stepping down the DAC
<PacketCollection with XX packets, read_id XX, "configuration write">
>>> controller.disable_testpulse(chip_key) # disconnect test pulse circuit from all channels on chip
You will need to periodically reset the DAC to 255, otherwise you will receive a ValueError once the DAC reaches the minimum specified value.
>>> controller.enable_testpulse(chip_key, [0], start_dac=255)
>>> controller.issue_testpulse(chip_key, 50, min_dac=200) # the min_dac keyword sets the lower bound for the DAC (useful to avoid non-linearities at around 70-80DAC)
<PacketCollection with XX packets, read_id XX, "configuration write">
>>> controller.issue_testpulse(chip_key, 50, min_dac=200)
ValueError: Minimum DAC exceeded
>>> controller.enable_testpulse(chip_key, [0], start_dac=255)
>>> controller.issue_testpulse(chip_key, 50, min_dac=200)
<PacketCollection with XX packets, read_id XX, "configuration write">
Enable the analog monitor on a channel
>>> controller.enable_analog_monitor(chip_key, 0) # drive buffer output of channel 0 out on analog monitor line
>>> controller.disable_analog_monitor(chip_key) # disable the analog monitor on chip
While the software enforces that only one channel per chip is being driven out on the analog monitor, you must disable the analog monitor if moving between chips.
Miscellaneous implementation details
Endian-ness
We use the convention that the LSB is sent out first and read in first. The location of the LSB in arrays and lists changes from object to object based on the conventions of the other packages we interact with.
In particular, pyserial sends out index 0 first, so for bytes objects, index 0 will generally have the LSB. On the other hand, bitstrings treats the last index as the LSB, which is also how numbers are usually displayed on screen, e.g. 0100 in binary means 4 not 2. So for BitArray and Bits objects, the LSB will generally be last.
Note that this combination leads to the slightly awkward convention that the least significant bit of a bytestring is the last bit of the first byte. For example, if bits[15:0] of a packet are 0000 0010 0000 0001 (= 0x0201 = 513), then the bytes will be sent out as b'\x01\x02'.
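A tiny sketch of the same conversion using plain Python ints, for illustration only:
value = 0x0201  # 513, i.e. bits 0000 0010 0000 0001
value.to_bytes(2, byteorder='little')  # b'\x01\x02', the order the bytes go out on the wire
int.from_bytes(b'\x01\x02', byteorder='little')  # 513, recovering the value from the bytestream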
The Configuration object
The Configuration object represents all of the options in the LArPix configuration register. Each row in the configuration table in the LArPix datasheet has a corresponding attribute in the Configuration object. Per-channel attributes are stored in a list, and all other attributes are stored as a simple integer. (This includes everything from single bits to values such as "reset cycles," which spans 3 bytes.)
Configuration objects also have some helper methods for enabling and disabling per-channel settings (such as csa_testpulse_enable or channel_mask). The relevant methods are listed here and should be prefixed with either enable_ or disable_:
- channels enables/disables the channel_mask register
- external_trigger enables/disables the external_trigger_mask register
- testpulse enables/disables the csa_testpulse_enable register
- analog_monitor enables/disables the csa_monitor_select register
Most of these methods accept an optional list of channels to enable or disable (and with no list specified act on all channels). The exception is enable_analog_monitor (and its disable counterpart): the enable method requires a particular channel to be specified, and the disable method does not require any argument at all. This is because at most one channel is allowed to have the analog monitor enabled.
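A short sketch of these helpers on a chip's configuration (written against the v1-style Configuration described here; exact availability may differ between configuration versions):
chip5.config.enable_channels([0, 1, 2])  # unmask channels 0-2 in channel_mask
chip5.config.disable_channels()  # no list given: mask off all channels
chip5.config.enable_analog_monitor(3)  # a specific channel is required here
chip5.config.disable_analog_monitor()  # no argument needed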
The machinery of the Configuration object ensures that each value is converted to the appropriate set of bits when it comes time to send actual commands to the physical chip. Although this is not transparent to you as a user of this library, you might want to know that two sets of configuration options are always sent together in the same configuration packet:
- csa_gain, csa_bypass, and internal_bypass are combined into a single byte, so even though they have their own attributes, they must be written to the physical chip together
- test_mode, cross_trigger_mode, periodic_reset, and fifo_diagnostic work the same way
Similarly, all of the per-channel options (except for the pixel trim thresholds) are sent in 4 groups of 8 channels.
Configurations can be loaded by importing larpix.configs and running the load function. This function searches for a configuration with the given filename relative to the current directory before searching the "system" location (secretly it's in the larpix/configs/ folder). This is similar to #include "header.h" behavior in C.
Configurations can be saved by calling chip.config.write with the desired filename.
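Putting that together, a brief sketch (the paths reference packaged files from the file structure listed above; the exact return value of configs.load is an assumption):
from larpix import configs
config_data = configs.load('chip/default_v2.json')  # searches the current directory, then the packaged larpix/configs/
chip5.config.load('chip/default_v2.json')  # load a packaged configuration onto a chip object
chip5.config.write('my_config.json')  # save the current configuration to a file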
Once the Chip object has been configured, the configuration must be sent to the physical chip.