GettingStarted hallc replay XEM

Revision as of 08:37, 17 October 2022

Hall C Replay

Purpose

The Hall C replay is a database structure used to hold detector calibrations, detector geometry, detector maps, output templates, replay scripts, defined output histograms, run numbers, and run information. You may hear people call this the 'Analyzer' or 'Engine', as the C++ analyzer is based on the earlier ENGINE code written in Fortran. HCANA itself inherits most classes from the Hall A analyzer. The ultimate goal of hallc_replay_XEM is to tell hcana how to analyze the raw EVIO files generated by the Hall C DAQ and process them into the relevant branches of a CERN ROOT TFile.

Location Location Location

  • Review the File Structure at JLab to understand where to save files and perform analysis.
  • Your hallc_replay_XEM should be cloned from your GitHub account, which is forked from Casey's hallc_replay_XEM in your c-xem2 group disk:
    • /group/c-xem2/$USER/hallc_replay_XEM

Never Save larger (>2MB) output root files to the /group/ disk!
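To stay under that limit, a quick check like the following can flag oversized ROOT files before anything is copied to /group. This is a sketch, not part of the official replay; SEARCH_DIR is a placeholder for the directory you want to scan.

```shell
# List ROOT files larger than 2 MB under SEARCH_DIR (placeholder path).
# Anything this prints belongs on /volatile, not /group.
SEARCH_DIR="${SEARCH_DIR:-.}"
find "$SEARCH_DIR" -name '*.root' -size +2M
```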

hallc_replay_XEM setup for farm use

This procedure should work for all XEM users. Let Casey know in the hallc_replay_XEM Slack chat if there is an issue. Make sure to FORK Casey's git repo. It gets updated regularly, so make sure you set up notifications. Then clone your fork (over SSH, per the note below; replace $gitUsername with your GitHub username):

git clone git@github.com:$gitUsername/hallc_replay_XEM.git

  • Make sure to use SSH, not HTTPS, for cloning. Review the git page on the wiki.

cd hallc_replay_XEM; git submodule init
git submodule update


SCRIPT Basics

In order to use this structure properly, HCANA must be executed in the top level directory.

  • Scripts can be executed by running hcana and typing '.x ./SCRIPTS/<spectrometer>/<run-type>/script_to_run.C'
    • Inside the scripts you will find the way hcana imports all the detector geometry, maps, and calibrations through gHcParms, which is a THcParmList object.
  • The detector map is loaded into the gHcDetectorMap object, which is a THcDetectorMap object.
    • A cratemap is used to relate detector components (Individual DC wires to TDCs, Hodo PMTs to ADCs, etc) so the detectors can use the signals properly.
  • A Spectrometer object is created, and the relevant detectors are added to this spectrometer.
    • The detector geometry and constants are loaded using a DBRequest call in the header file of each detector class.
  • Specialized class objects have been created to include information for the trigger, beam, target quantities, and more.
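In practice, the bullets above boil down to a short interactive session. A sketch of what that looks like (the script path and name here are illustrative; use the actual script for your spectrometer and run type):

```
[ifarm]$ hcana
hcana [0] .x ./SCRIPTS/SHMS/PRODUCTION/replay_production_all_shms.C(3223, 10000)
```

The script then builds the gHcParms list, loads the detector map, assembles the spectrometer and its detectors, and loops over the raw events.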

Making Your First Replay

  1. First follow the instructions to source the XEM2 software. This sources the current HCANA.
  2. Clone the hallc_replay_XEM into your /group/c-xem2/$USER/ directory and cd into it.
    1. Set up symbolic links to the raw INPUT data files:
      1. Spring 2018: ln -s /cache/mss/hallc/spring17/raw/ raw-sp18
      2. Spring 2019: ln -s /cache/mss/hallc/jpsi-007/raw/ raw-sp19
      3. Spring 2022: Not available YET!
    2. Set up OUTPUT directories
      1. mkdir /volatile/hallc/xem2/$USER/ROOTfiles
      2. mkdir /volatile/hallc/xem2/$USER/REPORT_OUTPUT/SHMS/PRODUCTION
      3. mkdir /volatile/hallc/xem2/$USER/REPORT_OUTPUT/HMS/PRODUCTION
      4. Link the ROOTfiles and REPORT_OUTPUT locations inside your hallc_replay_XEM.
  3. Run hcana in the top directory of hallc_replay_XEM
    1. Note: If you sourced the /group/c-xem2/software/setup file, hcana should run. If you have not, you will need to make a symlink to the hcana executable.
    2. Run hcana, and when prompted with the root-like command line window, execute the SHMS production replay as explained above.
        1. You can load the script first by typing .L path_to_script/the_script.C. Then typing the function name and pressing Tab will show you the usage (RunNumber, MaxEvent).
        1. Give the replay script run 3223 and the number of events 10000
        2. If 3223 doesn't show up, it has not yet been staged from tape onto /cache. Try a different run for now, or stage it yourself:
          1. jcache get /mss/hallc/spring17/raw/shms_all_03223.dat
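The staging step can be wrapped in a small check. A sketch, assuming the spring17 raw path shown above (RUN is a placeholder for your run number):

```shell
# Check whether a raw run file is already staged on /cache;
# if not, print the jcache command that would stage it from tape.
RUN=3223
CACHE_FILE="/cache/mss/hallc/spring17/raw/shms_all_0${RUN}.dat"
TAPE_FILE="/mss/hallc/spring17/raw/shms_all_0${RUN}.dat"
if [ -e "$CACHE_FILE" ]; then
    echo "run ${RUN} is staged: ${CACHE_FILE}"
else
    echo "run ${RUN} not on cache; stage it with: jcache get ${TAPE_FILE}"
fi
```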

Troubleshooting

  • The input file lives on /cache, so it may not be pinned and can be removed back to tape.
  • A script may not run if the proper output locations are not set up (make the spectrometer directories and script-type directories).
  • hcana will not run properly if it is not executed in the top-level directory of hallc_replay_XEM.
  • Using a different version of ROOT than the one used to compile hcana will cause hcana to crash on startup.
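The top-level-directory pitfall can be caught with a quick sanity check before launching hcana. A sketch; it assumes the standard hallc_replay layout, which keeps SCRIPTS and PARAM directories at the top level.

```shell
# Warn if the current directory does not look like a replay top level.
# SCRIPTS and PARAM are standard hallc_replay subdirectories.
if [ -d SCRIPTS ] && [ -d PARAM ]; then
    echo "replay top level found; ok to run hcana"
else
    echo "not in a replay top level; cd into hallc_replay_XEM first"
fi
```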

NEVER

NEVER PUT OUTPUT ROOT FILES ON THE GROUP DISK!