SAMPA SRO

From epsciwiki
=== What do we currently have? ===

The four software components (actors/microservices) that make up the ERSAP-based GEM detector data processing application are:

# A SAMPA source actor that:
## Accepts an arbitrary, adjustable number of SAMPA streams, also known as links. Each SAMPA front-end card (FEC) provides two links, so reading out the present GEM setup requires setting the number of streams to six.
## Decodes the raw SAMPA data coming from each stream.
## Aggregates the decoded data into a single array of byte buffers for transmission to the other actors in the application. (Note that the array size equals the number of channels; for example, with six links in DAS mode there are 480 channels.)
# A statistical actor that:
## Receives the decoded and aggregated arrays of byte buffers;
## Extracts the data for each channel; and
## Determines the mean and sigma of the distribution in each channel.
# A histogram actor that:
## Receives the decoded and aggregated arrays of byte buffers;
## Extracts the data for each channel;
## Fills the histograms (for the channels selected by the user); and
## Visualizes the histograms in real time on a configurable grid of a canvas.
# A file sink actor that:
## Writes the decoded and aggregated data (array of byte buffers) to a file every 100 time slices. (Note: file output must be activated in the application configuration/composition file, services.yaml.)
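The channel arithmetic above can be sketched as follows (assuming 80 channels per link, as in the raw-file naming scheme later on this page):

```python
# Each SAMPA link carries 80 channels (ch 0-79 / ch 80-159 per FEC),
# so the aggregated byte-buffer array size equals links * 80.
CHANNELS_PER_LINK = 80

def das_channel_count(links: int) -> int:
    """Number of channels in the aggregated byte-buffer array in DAS mode."""
    return links * CHANNELS_PER_LINK

print(das_channel_count(6))  # 480 channels for the present GEM setup (six links)
```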
=== Quick DAS mode data acquisition and processing ===

1. Programming the SAMPA FECs:

To program the SAMPA FECs, log in to the node alkaid.jlab.org and cd to ~gurjyan/Devel/stream/exp-sampa. Then run the following command to initiate the programming process:

  ''python init/go-trorc.py -mode das -mask 0x1F -nbp 5 -c''

This programs the SAMPA FECs with the specified settings.

2. Setting up the environment for data acquisition:

To set up the environment for data acquisition, run:

  ''source /usr/local/trorc/trorc-operator/setenv.sh''

3. Starting SAMPA data acquisition:

To start SAMPA data acquisition, run:

  ''treadout --data-type 1 --frames 4000 --mode das --mask 0x1F --nr 121 --output-dir user_output_dir''

Set --mask 0x18 to read out only FECs 4 and 5.

This starts the data acquisition process, recording the data in the output directory specified by --output-dir. Note that two files are created for each FEC, one per GBT link.
4. Preparing ERSAP data processing:

After the data has been recorded, navigate to ~gurjyan/Workspace/ersap/sampa and source the following script to set up the ERSAP environment:

  ''source setup-ersap.bash''

5. Running the ERSAP data processing pipeline:

From the same directory (~gurjyan/Workspace/ersap/sampa), launch the ERSAP shell:

  ''$ERSAP_HOME/bin/ersap-shell''

This launches the ERSAP data processing pipeline, which reads and processes the DAQ-recorded files.

6. Run it in single-threaded mode:

  ''ersap>set threads 1''

7. '''Edit the pipeline configuration file and make sure the frameCount parameter value is consistent with the --frames setting used during data acquisition (see step 3).'''
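For example, data taken with --frames 4000 in step 3 requires a matching reader setting in services.yaml (a fragment of the composition file shown later on this page):

```yaml
io-services:
  reader:
    frameCount: 4000   # must match the 'treadout --frames 4000' used during acquisition
```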
8. Make sure you specify all the data files to be processed in files.txt:

  ''ersap>show files''
  ''ersap>edit files''

9. Make sure the data files are accessible.

You can copy the data files into the $ERSAP_USER_DATA/data/input directory, or specify the input directory using the ERSAP CLI:

  ''ersap>set inputDir'' [full path to the directory]

10. Running the local ERSAP processing:

  ''ersap>run-local''

This runs the local ERSAP processing on the recorded data.

Processed files will be created in the $ERSAP_USER_DATA/data/output directory. You can copy them to a different directory for further analysis to free up space for subsequent data processing runs.

That's it! With these steps, you should be able to set up and use the SAMPA and ERSAP data acquisition and processing systems on the node alkaid.jlab.org.
=== How to write your ERSAP processor engine? ===

To create your own processor engine, implement the ERSAP Engine interface.
The skeleton below is only partially completed: put your code in the marked area of the execute method.

NB. See the already available engines for how to fill in the rest of the interface methods (they are fairly similar).
// Requires java.nio.ByteBuffer and java.util.Set, plus the ERSAP engine
// interfaces (Engine, EngineData, EngineDataType, ErsapException) and the
// DasDataType helper from the ersap-actor package.
public class TestProcEngine implements Engine {

    // Number of channels in the aggregated array (e.g. 480 for six links in DAS mode)
    private int chNum = 480;

    @Override
    public EngineData configure(EngineData engineData) {
        return null;
    }

    @Override
    public EngineData execute(EngineData input) {
        ByteBuffer bb = (ByteBuffer) input.getData();
        ByteBuffer[] data;
        try {
            data = DasDataType.deserialize(bb);
            // Each sample is a 16-bit short, so the sample count is half the buffer limit
            int sampleLimit = data[0].limit() / 2;
            for (int channel = 0; channel < chNum; channel++) {
                short[] _sData = new short[sampleLimit];
                for (int sample = 0; sample < sampleLimit; sample++) {
                    _sData[sample] = data[channel].getShort(2 * sample);
                }
                // USER CODE GOES HERE:
                // process _sData[], which contains the data for a single channel
            }
        } catch (ErsapException e) {
            e.printStackTrace();
        }
        return input;
    }

    @Override
    public EngineData executeGroup(Set<EngineData> set) {
        return null;
    }

    @Override
    public Set<EngineDataType> getInputDataTypes() {
        return null;
    }

    @Override
    public Set<EngineDataType> getOutputDataTypes() {
        return null;
    }

    @Override
    public Set<String> getStates() {
        return null;
    }

    @Override
    public String getDescription() {
        return null;
    }

    @Override
    public String getVersion() {
        return null;
    }

    @Override
    public String getAuthor() {
        return null;
    }

    @Override
    public void reset() {
    }
}
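The per-channel extraction loop in execute can be mimicked outside Java, e.g. for quick offline inspection of a single channel's buffer (a sketch, assuming big-endian 16-bit samples, the java.nio.ByteBuffer default):

```python
import struct

def extract_channel_samples(buf: bytes) -> list[int]:
    """Decode one channel's byte buffer into 16-bit ADC samples
    (big-endian shorts, matching ByteBuffer.getShort in the Java engine)."""
    sample_limit = len(buf) // 2  # each sample occupies 2 bytes
    return [struct.unpack_from(">h", buf, 2 * i)[0] for i in range(sample_limit)]

# Example: three samples packed as big-endian shorts
raw = struct.pack(">3h", 100, 250, 73)
print(extract_channel_samples(raw))  # [100, 250, 73]
```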
  
 
=== Project dependencies ===

# [https://github.com/JeffersonLab/ersap-java ersap-java]
# [https://github.com/JeffersonLab/ersap-cpp ersap-cpp]
# [https://github.com/JeffersonLab/ersap-sampa ersap-actor]

=== Installation ===

'''NB.''' For installation you must define the ERSAP_HOME environment variable.

# [https://jeffersonlab.github.io/ersap-java/ ersap-java instructions]
# [https://jeffersonlab.github.io/ersap-cpp/ ersap-cpp instructions]
# [https://jeffersonlab.github.io/ersap-sampa/ ersap-actor instructions]
=== SAMPA SRO diagram ===

# [https://wiki.jlab.org/epsciwiki/index.php/File:Indra-astra.png#file diagram]
 
   
 
   
 
=== Building the SAMPA DAQ codebase ===

'''NB.''' The SAMPA SRO package is kindly provided by the ALICE collaboration and was modified by the EPSCI SRO group to make it streaming.

The modified package can be found at /home/gurjyan/Devel/stream/exp-sampa

# log in to alkaid.jlab.org
# copy the ALICE-modified package into your own directory
# follow the instructions in the README to build the package

Open two terminal sessions on alkaid.
Terminal 1 (configure the front-end boards; needs to be done ONCE unless you change the mode or the C++ code):
----------

 bash
 cd /home/gurjyan/Devel/stream/exp-sampa
 python init/go-trorc.py -mode das -mask 0x1F -nbp 5 -c

'das' selects raw ADC sample mode; substitute 'dsp' for threshold zero-suppression mode.
'0x1F' together with '5' configures all five front-end cards; do this even if you don't read them all out.
'-c' compiles and copies modules; needed if you change modes, and harmless to run every time (takes a few seconds).
Terminal 2 (run and analyze data):
----------

 bash
 source /home/gurjyan/Devel/stream/exp-sampa/dist/trorc/trorc-operator/setenv.sh
 cd /home/gurjyan/Devel/stream/exp-sampa

 treadout --data-type 1 --frames 2000 --mode das --mask 0x1 --events 0
 treadout --data-type 1 --frames 4000 --mode das --mask 0x11 --port 6000 --host_ip localhost --events 0

(the old way: treadout --data-type 1 --frames 4000 --mode das --mask 0x1F --nr 121 --output-dir /home/gurjyan/Devel/stream/exp-sampa)

'--data-type 1' - always use this.
'--frames 4000' - collect 4000 GBT frames; with one board read out (see mask) this number can be large (e.g. 100000). If a large number of frames is chosen with multiple boards read out, some disk files may be truncated because file writing cannot keep up with the data volume (the files remain readable).
'--mode das' - raw ADC sample mode; substitute 'dsp' for threshold zero-suppression mode (MUST be the same as in terminal 1).
'--mask 0x1F' - reads out all five front-end cards; substitute '0x1' to read out card #0, '0x3' to read out cards #0 & #1, etc.
'--nr 121 --output-dir /daqfs/gurjyan/trorc' - run #121; creates output files in the directory /daqfs/gurjyan/trorc/run000121; replace with your own run number.

Press <CTRL-C> to end the run.
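The mask-to-card mapping above can be sketched as follows (a hypothetical helper, assuming bit n of the mask selects front-end card #n, consistent with the '0x1' / '0x3' examples above):

```python
def cards_from_mask(mask: int) -> list[int]:
    """Return the front-end card numbers selected by a treadout --mask value
    (bit n of the mask selects card #n, out of five cards)."""
    return [n for n in range(5) if mask & (1 << n)]

print(cards_from_mask(0x1))   # [0]             card #0 only
print(cards_from_mask(0x3))   # [0, 1]          cards #0 and #1
print(cards_from_mask(0x1F))  # [0, 1, 2, 3, 4] all five cards
```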
Depending on the mask, up to 10 raw data files are created, 2 per front-end card (FEC).
For run #121 these are named:

 run000121_trorc00_link00.bin  (FEC 0, ch 0-79)
 run000121_trorc00_link01.bin  (FEC 0, ch 80-159)
 run000121_trorc00_link02.bin  (FEC 1, ch 0-79)
 run000121_trorc00_link03.bin  (FEC 1, ch 80-159)
 run000121_trorc00_link04.bin  (FEC 2, ch 0-79)
 run000121_trorc00_link05.bin  (FEC 2, ch 80-159)
 run000121_trorc00_link06.bin  (FEC 3, ch 0-79)
 run000121_trorc00_link07.bin  (FEC 3, ch 80-159)
 run000121_trorc00_link08.bin  (FEC 4, ch 0-79)
 run000121_trorc00_link09.bin  (FEC 4, ch 80-159)
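The link-number to (FEC, channel-range) mapping above is regular and can be computed (a sketch based on the file-naming table, with two 80-channel links per FEC):

```python
def link_info(link: int) -> tuple[int, range]:
    """Map a GBT link number to its front-end card (FEC) and channel range.
    Each FEC has two links, each carrying 80 channels."""
    fec = link // 2
    first = (link % 2) * 80
    return fec, range(first, first + 80)

fec, channels = link_info(9)
print(fec, channels.start, channels.stop - 1)  # 4 80 159 -> matches run000121_trorc00_link09.bin
```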
  
 
=== Configuration and running ===

'''NB.''' We recommend defining and creating the $ERSAP_USER_DATA directory. Do not worry about its internal layout: after the first ERSAP execution the $ERSAP_USER_DATA directory will get the proper structure.

'''NB.''' Keeping the order of the instructions below is important.

'''NB.''' On alkaid.jlab.org, source setup_ersap.bash (or .tcsh) from /home/gurjyan/Workspace/ersap/sampa. This script sets up the necessary environment variables, pointing to the correct Java SDK.

'''NB.''' We recommend copying the /home/gurjyan/Workspace/ersap/sampa dir, which simplifies the installation process.

Use the ersap-shell (ERSAP CLI) to run the services locally. The CLI provides a high-level interface to configure and start the different ERSAP components required to run an application.
  
 
# Start the ERSAP shell:
#* $ERSAP_HOME/bin/ersap-shell
# Define the application within a ''services.yaml'' file. An example of the file can be found below. NB: the default location for the application definition file is the $ERSAP_USER_DATA/config dir.
#* ersap> set servicesFile services.yaml
# Optionally, change the number of parallel threads used by the services to process requests:
#* ersap> set threads <NUM_THREADS>
# Start the data processing. This will start the main Java DPE, a C++ DPE if a C++ service is listed in ''services.yaml'', and run the streaming orchestrator to process the data stream.
#* ersap> run local
# Run the SAMPA FE (in another terminal; NB: use a bash shell):
#* source /usr/local/trorc/trorc-operator/setenv.sh
#* treadout --data-type 1 --frames 4000 --mode das --mask 0x1F --nr 121 --output-dir user_output_dir

You can put the above ERSAP shell commands into a script (e.g. jana.ersap) and run the script instead, using the ERSAP CLI:

 $ERSAP_HOME/bin/ersap-shell jana.ersap
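A hypothetical jana.ersap script collecting the CLI commands used above (the file name and contents are illustrative, not a shipped example):

```text
set servicesFile services.yaml
set threads 1
run local
```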
=== ERSAP application data-stream pipeline ===

The following is an ERSAP application composition file (services.yaml) describing the SAMPA SRO and the data-stream processing back end.
 ---
 io-services:
   reader:
     #class: org.jlab.ersap.actor.sampa.engine.SampaDASSourceEngine
     class: org.jlab.ersap.actor.sampa.engine.SampaDASFileSourceEngine
     name: SMPSource
   writer:
     class: org.jlab.ersap.actor.sampa.engine.SampaFileSinkEngine
     name: SMPWriter
 services:
   - class: org.jlab.ersap.actor.sampa.engine.SampaStatProcEngine
     name: SMPStreamTest
   - class: org.jlab.ersap.actor.sampa.engine.SampaHistogramProcEngine
     name: SMPHistogram
 configuration:
   global:
     fec_count: 0
   io-services:
     reader:
       frameCount: 4000
     writer:
       file_output: "true"
   services:
     SMPStreamTest:
       verbose: "false"
     SMPHistogram:
       frame_title: "ERSAP"
       frame_width: 1400
       frame_height: 1200
       grid_size: 2
       hist_titles: "1,41,53,69"
       hist_bins: 300
       hist_min: 1
       hist_max: 300
 mime-types:
   - binary/data-sampa
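As a consistency check on the histogram settings above: the four channels selected in hist_titles fit on the canvas grid, assuming grid_size is the grid's side length (an assumption; a sketch):

```python
# Values taken from the SMPHistogram section of services.yaml above
grid_size = 2
hist_titles = "1,41,53,69"

channels = [int(c) for c in hist_titles.split(",")]
print(channels)  # [1, 41, 53, 69]
# Assumption: grid_size 2 means a 2x2 canvas grid, i.e. room for 4 histograms
assert len(channels) <= grid_size ** 2
```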

Latest revision as of 16:54, 5 February 2024

