SAMPA SRO

Revision as of 22:49, 20 February 2023


Project dependencies

  1. ersap-java
  2. ersap-cpp (https://github.com/JeffersonLab/ersap-cpp)
  3. ersap-actor (https://github.com/JeffersonLab/ersap-sampa)

What do we currently have?

The four software components (actors/microservices) that make up the ERSAP-based GEM detector data processing program are as follows:

  1. A SAMPA source actor that:
    1. Accepts an arbitrary, adjustable number of SAMPA streams, also known as links. Note that each SAMPA front-end card provides two links, so reading out the present GEM setup requires setting the number of streams to six.
    2. Decodes the raw SAMPA data coming from each stream.
    3. Aggregates the decoded data into a single array of byte buffers for transmission to the other actors in the application. (Note that the array size equals the number of channels; for example, six links in DAS mode give 480 channels.)
  2. A statistical actor that:
    1. Receives the decoded and aggregated arrays of byte buffers;
    2. Extracts the data for each channel; and
    3. Determines the mean and sigma of the distribution in each channel.
  3. A histogram actor that:
    1. Receives the decoded and aggregated arrays of byte buffers;
    2. Extracts the data for each channel;
    3. Fills the histograms for the channels selected by the user; and
    4. Visualizes the histograms in real time on a configurable grid of a canvas.
  4. A file sink actor that:
    1. Writes the decoded and aggregated data (an array of byte buffers) to a file every 100 time slices. (Note: file output must be enabled in the application configuration/composition file, services.yaml.)
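The per-channel mean/sigma computation performed by the statistical actor can be sketched as follows. This is an illustration only, not the actual SampaStatProcEngine code: the class name, the assumption of 16-bit unsigned little-endian ADC samples, and the buffer layout are all hypothetical.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ChannelStats {
    /** Mean of one channel's samples (assumed 16-bit unsigned ADC values). */
    public static double mean(ByteBuffer buf) {
        buf.rewind();
        int n = buf.remaining() / 2;          // two bytes per sample
        long sum = 0;
        for (int i = 0; i < n; i++) sum += buf.getShort() & 0xFFFF;
        return (double) sum / n;
    }

    /** Sigma (population standard deviation) of one channel's samples. */
    public static double sigma(ByteBuffer buf) {
        double m = mean(buf);
        buf.rewind();
        int n = buf.remaining() / 2;
        double ss = 0;
        for (int i = 0; i < n; i++) {
            double d = (buf.getShort() & 0xFFFF) - m;
            ss += d * d;
        }
        return Math.sqrt(ss / n);
    }

    public static void main(String[] args) {
        // Pack four hypothetical samples as one channel's byte buffer.
        ByteBuffer b = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        for (short s : new short[]{100, 102, 98, 100}) b.putShort(s);
        System.out.printf("mean=%f sigma=%f%n", mean(b), sigma(b));
    }
}
```

In the real application this computation would run once per channel over the aggregated array of byte buffers (e.g. 480 buffers for six links in DAS mode).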

Installation

NB: Before installation, you must define the ERSAP_HOME environment variable.

  1. ersap-java instructions
  2. ersap-cpp instructions
  3. ersap-actor instructions

SAMPA SRO diagram

  1. diagram

Building the SAMPA DAQ codebase

NB: The SAMPA SRO package is kindly provided by the ALICE collaboration and was modified by the EPSCI SRO group to make it streaming. The modified package can be found at /home/gurjyan/Devel/stream/exp-sampa

  1. Log in to alkaid.jlab.org
  2. Copy the modified ALICE package into your directory
  3. Follow the instructions in the README to build the package

Configuration and running

NB: It is important to follow these instructions in order.

NB: On alkaid.jlab.org, source setup_ersap.bash (or setup_ersap.tcsh) from /home/gurjyan/Workspace/ersap/sampa. This script sets up the environment variables that point to the correct Java SDK.

NB: We recommend defining and creating a $ERSAP_USER_DATA directory. Do not worry about its internal structure: after the first ERSAP execution, the directory will be given the proper structure.

NB: We recommend copying the /home/gurjyan/Workspace/ersap/sampa directory to simplify the installation process.

Use the ersap-shell (ERSAP CLI) to run the services locally. It provides a high-level interface to configure and start the different ERSAP components required to run an application.

  1. Start the ERSAP shell:
    • $ERSAP_HOME/bin/ersap-shell
  2. Define the application within a services.yaml file. An example of the file can be found below. NB: The default location for the application definition file is the $ERSAP_USER_DATA/config directory.
  3. Start the data processing. This starts the main Java DPE, a C++ DPE if a C++ service is listed in services.yaml, and runs the streaming orchestrator to process the data stream.
    • ersap> run local
  4. Run the SAMPA front end (in a separate terminal; NB: use a bash shell):
    • source [modified ALICE code directory]/dist/trorc/trorc-operator/setenv.sh
    • treadout --data-type 1 --frames 4000 --mode das --mask 0x7 --port 6000 --host_ip localhost --events 0

ERSAP application data-stream pipeline

The following is an ERSAP application composition file (services.yaml) describing the SAMPA SRO and the data-stream processing back end.

---
io-services:

 reader:
   class: org.jlab.ersap.actor.sampa.engine.SampaDASSourceEngine
   name: SMPSource
 writer:
   class: org.jlab.ersap.actor.sampa.engine.SampaFileSinkEngine
   name: SMPWriter

services:

 - class: org.jlab.ersap.actor.sampa.engine.SampaStatProcEngine
   name: SMPStreamTest
 - class: org.jlab.ersap.actor.sampa.engine.SampaHistogramProcEngine
   name: SMPHistogram

configuration:

 io-services:
   reader:
     stream_count: 6
     port: 6000
   writer:
     file_output: "false"
 services:
   SMPStreamTest:
     verbose: "false"
   SMPHistogram:
     frame_title: "ERSAP"
     frame_width: 1400
     frame_height: 1200
     grid_size: 2
     # hist_titles is a string containing a comma-separated list of channel numbers
     hist_titles: "1, 3, 7, 17"
     hist_bins: 100
     hist_min: 0
     hist_max: 500

mime-types:

 - binary/data-sampa
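
For illustration, the hist_titles string in the configuration above can be parsed into the list of requested channel numbers as shown below. This is a hypothetical sketch, not the actual SampaHistogramProcEngine implementation; the class and method names are illustrative.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class HistTitles {
    /** Parse a comma-separated channel list such as "1, 3, 7, 17". */
    public static List<Integer> parse(String histTitles) {
        return Arrays.stream(histTitles.split(","))
                     .map(String::trim)          // tolerate spaces after commas
                     .map(Integer::parseInt)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(parse("1, 3, 7, 17")); // prints [1, 3, 7, 17]
    }
}
```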