XGT1 2018 analysis


X>1 Current Tasks

Background

My introduction to the X>1 experiment started with a boiling study in 2019 and a simple charge-normalized yield extraction. Where are we now in the analysis? There is still an unresolved discrepancy between the SHMS and HMS boiling results that no group has fully explained.

Many groups have gone with the HMS boiling results.
The HMS boiling slope reported is probably right.
I believe the SHMS is suffering from an undetermined rate-dependent effect, and the boiling-slope difference between the HMS and SHMS may tell us what that rate-dependent effect is. I have spent a lot of time trying to track this down.
  • Carlos Yero's studies have been superseded by studies done by Bill Henry and myself. Carlos's results were systematically wrong because of the timing-window and reference-time cuts for our specific boiling dataset.
  • Dave Mack's scaler-based method was only applied to the HMS, but it should be applied to the SHMS too.

"A scaler analysis is in principle simpler than an event mode analysis, and there’s no loss of statistics from computer live time or pre-scaling." -D. Mack

    • I have done a scaler analysis, but not at the level of detail Dave Mack used in his.
  • Bill and I agree in our boiling analyses, but the SHMS and HMS still disagree with one another. We need to look more closely at the SHMS scaler analysis to shed light on any possible rate-dependent effects there; a sketch of the slope extraction is below.
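
A minimal sketch of the boiling-slope extraction (assuming a linear current dependence of the charge-normalized yield; all run values below are made-up placeholders, not XEM data):

 # Boiling-slope extraction sketch: fit charge-normalized yield
 # Y(I) = Y0 + b*I vs beam current I; the boiling slope is m = b/Y0.
 # All inputs are illustrative placeholders.
 import numpy as np
 
 current  = np.array([10., 25., 40., 55., 70.])    # beam current (uA)
 counts   = np.array([9950., 24600., 38900., 52800., 66400.])
 charge   = np.array([1.0, 2.5, 4.0, 5.5, 7.0])    # accumulated charge (mC)
 livetime = np.array([0.99, 0.97, 0.95, 0.93, 0.91])
 
 y  = counts / (charge * livetime)                 # charge-normalized yield
 dy = np.sqrt(counts) / (charge * livetime)        # statistical error
 
 # Weighted linear fit; np.polyfit weights are 1/sigma.
 (b, y0), cov = np.polyfit(current, y, 1, w=1.0 / dy, cov=True)
 print(f"Y0 = {y0:.1f}, boiling slope = {100 * b / y0:.3f} %/uA")

A slope extracted this way from the HMS versus the SHMS, on the same target, is where the disagreement shows up.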


Where is the rate dependence coming from? Down the rabbit-hole:

  • The SHMS has a wider acceptance than the HMS, and it has been found (reference?) that the SHMS accepts a lot of junk events.
    • Events scattering down the beam pipe into the acceptance.
      • A dipole exit cut can remove some of these bad events.
  • The trigger in 2018 was not ideal.
    • There were too many redundant trigger legs (STOF and EL-CLEAN were removed).
    • The 100ns FADC250 gates are quite wide, which leads to many random events. I have not yet evaluated the size of this contribution, but Mark may have evaluated it at the trigger level for another experiment.
    • True physics triggers could have been blocked by noisy 3/4 triggers during high-rate running. The size of this effect is unknown: 0.1%? 1%? 5%? (A rough estimate is sketched below.)
All of this is complicated by the fact that we don't have a good measure of the true dead time because the EDTM was not used properly.
Look at the live-time study section below.
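
To bound the trigger-blocking effect, a back-of-envelope Poisson estimate (the noise rates and busy window below are assumptions, not measured values):

 # Blocking probability for a true trigger behind Poisson noise at rate R
 # with a non-updating busy window tau: P = 1 - exp(-R * tau).
 import math
 
 tau = 100e-9                       # assumed 100 ns busy window (s)
 for rate in (10e3, 100e3, 500e3):  # assumed noisy 3/4 rates (Hz)
     p = 1.0 - math.exp(-rate * tau)
     print(f"R = {rate/1e3:5.0f} kHz -> blocked fraction ~ {100*p:.2f}%")

Under these assumptions the three rates land near 0.1%, 1%, and 5%, i.e. the bracket quoted above corresponds to noise rates spanning 10 kHz to 500 kHz.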


The optics were not well understood. They still aren't perfect, and the optics fit is under-constrained (especially at high momentum).

  • I have tried to get help from Aruni, since she is the expert in this area (maybe Eric C. too).
    • Holly has an SHMS optics note which is helpful, but she is a bit busy too. (Everyone is busy.)
  • I have resorted to applying a correction to delta based on the focal-plane distribution, using the hydrogen elastic peak. The 2019 data seem to be fine.


The would-be simple things aren't so simple:

  • Using the BPM and survey info in gbeam.param is complicated.
    • It isn't actually that complicated, it is just poorly documented. I am working on this, but it has taken a back seat to other problems.
      • RAW EPICS values should be used in hcana instead of POS values, for some reason.
    • The HARP scans have fits, but the fit values aren't the ones reported in the logbook links.
      • I need to track down the correct fits and pictures, which are NOT in the logbook.
  • Mispointing should be easy, but the survey and alignment reports don't give much information on the measurement, and the results actually conflict.
    • Survey and Alignment need to be more consistent in their reporting terminology (upstream / downstream). Survey and Alignment made a whole entry to do follow-up studies to understand the mispointing, which have not been done.
  • The small-angle settings of the SHMS make it hard to match data and Monte Carlo.
  • At larger angles, lower momenta, and with different physics, other groups can simply look at the mismatch between data and MC and tune the Monte Carlo until it matches the data.
    • Exponentially falling cross-sections, mis-set optics, difficulty measuring the mispointing at small angles, and small-angle reconstruction effects make that approach hard here.

Data Calibration Checks

Implementing changes to hcana

  • Implemented default pedestals; this recovers a significant number of events because the rate was so high. This was straightforward, so everything should be set up correctly.
    • The prescale (PS) was high.
    • The data rate was high.
  • New ADC-TDC offsets sent to each detector, (starttime - pulsetime - offset) -> new timing windows (Done)
  • Reducing the tof_tolerance value from 100 ns to 5 ns makes a big difference (a sketch of the logic follows Mark's note below).
    • We need to look at the new hodo starttime histograms and check for any patterns / trends / differences. I haven't looked at them yet.

"The hodoscope has been changed so that it should find a selection of good hodoscope hits that made the trigger for every event. But also the hodoscope code has changed so that finding a good starttime from these good hodoscope hits is more selective. As the rate goes up the probability of the singles trigger being formed from a random coincidence of say something passing through X1 and Y1 (which are close together) with a random in X2 or Y2 ( or vice versa) increases. The code tries to reject the random triggers by looking for peak in the timing of the good hodoscope hits with a window set by the parameter tof_tolerance. One needs to check that this parameter is set to a reasonable value of 3- to 5ns. It needs to be study to determine reasonable value. If this is set properly, then I would say that one wants to reject events which the code could not find a good starttime (in the code P.hod.goodstarttime = 0 for bad events , =1 for good events)." - Mark Jones

[New Hodo Histo Diagnostics] [Changes other Detectors]

Unsolvable issues for sp18 data

100ns deadtime of the FADC (later changed to 50ns)

"The other issue is with the reference time. This has multiple issues. One is that the FADC and TDC have different deadtime so they can each pick a different reference time pulse which is rate dependent. Also since the reference time is a train of up to 3 pulses ( depending on the event) for the spring18. The effect is that a previous 3/4 hodo that is not accepted as the trigger ( for singles this would be because of computer deadtime) has the ELREAL or ELCLEAN pulse taken a the good reference time instead of the 3/4hodo that was accepted as the trigger. I looked at coincidence runs in the fall18 when the reference time was a 'train" of only two pulses. So the reference time issue manifest themselves differently. I made new variable of the reference time used for each detector and variable of the difference between the chosen reference time and an earlier reference time pulse so that cuts could in principal be place on these variables to select event which pass good reference time cuts (combination of acceptance region in the reference time for hodoscope,DC and FADC and an anti-cut on the time difference) and then keep track of the events which are missed to correct the data for the events that are cut out. " - Mark Jones

  • True Reference Time Blocked

There is also a problem with the true reference time getting blocked by a random hit, since the reference time pulse is a train of three pulses (a selection sketch is below).
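
A sketch of the cut scheme Mark describes, i.e. an acceptance window on the chosen reference time plus an anti-cut on the distance to an earlier pulse (the window and gap values are illustrative, not calibrated):

 # Pick the reference pulse from a train; flag events where an earlier
 # random pulse sits close enough to have blocked the true one.
 import numpy as np
 
 REF_WINDOW = (1500.0, 1600.0)   # assumed acceptance window (TDC units)
 ANTI_CUT   = 50.0               # assumed minimum gap to an earlier pulse
 
 def select_ref_time(pulse_train):
     train = np.sort(np.asarray(pulse_train, dtype=float))
     in_window = train[(train >= REF_WINDOW[0]) & (train <= REF_WINDOW[1])]
     if len(in_window) == 0:
         return np.nan, False     # no good reference time
     chosen = in_window[0]
     earlier = train[train < chosen]
     blocked = len(earlier) > 0 and (chosen - earlier[-1]) < ANTI_CUT
     return chosen, not blocked
 
 print(select_ref_time([1520.0, 1555.0]))   # earlier pulse -> flagged
 print(select_ref_time([1555.0]))           # clean event

Events failing the anti-cut would then be counted and corrected for, rather than silently dropped.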

Review hallc_replay_XEM

  • In progress (check_master branch).
    • Get up and running with the XEM replay. There are some small differences in directory structure compared to f2xem and hallc_replay_f2xem (yes, two others!).
    • Added updated hallc_replay calibration scripts.

Calorimeter Gain at X>1

I have attempted to calibrate the calorimeter using the X>1 data. This was difficult because the full acceptance of the SHMS is not illuminated. This in turn sets blocks without data to a gain of 0, which is always wrong. That doesn't matter much where there is no data, but the blocks on the edges of the illuminated acceptance have few events and probably have bad gain values.
There is also an issue where there is sometimes a second peak in etottracknorm, likely due to a second electron hit in the timing window; this produces a peak around etottracknorm = 2. We have discussed possible causes but have not arrived at any conclusions. Bill Henry updated the calorimeter calibration code / hcana to take only one hit in the time window instead of multiple, but that has not been looked at in any great detail (yet). It would be nice if we could use the EMC data to calibrate the calorimeter gains for X>1; that is how PMTs should behave, but for some reason these funny JLab calorimeter PMTs do not. [XEM ELOG #71] [Calorimeter Calib Tips] Using the defocused runs doesn't seem to work either. Currently we are using the EMC gain calibration on the edges, where the blocks may not have enough events to be properly calibrated (a sketch of this fallback is below).
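
One way to implement that edge fallback (a sketch; the 100-event threshold and the gain numbers are assumptions):

 # Keep the X>1-derived gain where a block has enough statistics;
 # otherwise substitute the EMC gain so no block is left at 0.
 import numpy as np
 
 MIN_EVENTS = 100   # assumed minimum statistics for a trustworthy gain
 
 def merge_gains(gain_x1, nevents, gain_emc):
     gain_x1, nevents, gain_emc = map(np.asarray, (gain_x1, nevents, gain_emc))
     use_emc = (nevents < MIN_EVENTS) | (gain_x1 <= 0.0)
     return np.where(use_emc, gain_emc, gain_x1)
 
 # Block 0: well lit; block 1: edge block; block 2: never illuminated.
 print(merge_gains([1.02, 0.70, 0.00], [5000, 40, 0], [1.00, 0.98, 1.01]))
 # -> [1.02 0.98 1.01]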

Excess events in cryo targs (X>1, X>2)

This has taken a lot of my time recently. It is likely also related to the rate-dependent effect in the SHMS. XEM ELOG #

Check electronic livetime model

Since the EDTM was not set appropriately during X>1 running, the total live-time is not well known. I adapted a procedure made by Dave Mack to calculate the electronic dead-time from the combinatorics of the individual plane rates (sketched below). [3of4 Livetime Dave Mack]
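
In the same spirit, a sketch of the combinatoric estimate (the plane rates, gate, and dead window below are assumptions; only the leading-order accidental term is kept):

 # Accidental 3-of-4 rate from individual plane rates: for three
 # uncorrelated planes, r_triple ~ 3 * r1 * r2 * r3 * gate^2; sum over
 # the four plane triples, then LT = exp(-R * tau) for a non-updating
 # electronic dead window tau.
 import math
 from itertools import combinations
 
 rates = {"X1": 400e3, "Y1": 380e3, "X2": 350e3, "Y2": 360e3}  # Hz, assumed
 gate  = 100e-9   # coincidence gate (s)
 tau   = 100e-9   # electronic dead window (s)
 
 r34 = sum(3.0 * math.prod(rates[p] for p in trio) * gate**2
           for trio in combinations(rates, 3))
 print(f"accidental 3/4 rate ~ {r34/1e3:.1f} kHz, "
       f"electronic livetime ~ {math.exp(-r34 * tau):.4f}")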

Data to MC Ratios / Cross-sections

For the DNP I will need prettier plots.

Binning cross-section in X

X changes very rapidly, so it is not always right to simply group events within X-bins: with a steeply varying cross-section, the bin average is not the value at the bin center (see the sketch below).
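
A sketch of a bin-centering correction (the exponential model is a placeholder, not the XEM cross-section model):

 # With a steeply falling sigma(x), the bin average is not sigma at the
 # bin center; correct by model(x_center) / <model over bin>.
 import numpy as np
 
 def model(x):
     return np.exp(-12.0 * x)      # placeholder steeply-falling model
 
 def bin_center_factor(x_lo, x_hi, npts=1001):
     x = np.linspace(x_lo, x_hi, npts)
     avg = model(x).mean()         # bin-averaged model value
     return model(0.5 * (x_lo + x_hi)) / avg
 
 # Even a narrow bin at large x picks up a percent-level correction:
 print(f"factor = {bin_center_factor(1.20, 1.25):.4f}")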

Statistical Error propagation

These are mostly fine. The error bars on Boron and Aluminum after the dummy or carbon subtraction probably aren't propagated properly by ROOT. Or they are, and I just need to check (a quick cross-check is sketched below).
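
A quick cross-check of what ROOT (with Sumw2 enabled) should be doing for a scaled subtraction (the counts and scale factor are illustrative):

 # Dummy/carbon subtraction per bin: Y = Y_tgt - f * Y_sub, so the
 # statistical error is dY = sqrt(Y_tgt + f^2 * Y_sub) for raw counts.
 import numpy as np
 
 def subtract(y_tgt, y_sub, f):
     y_tgt, y_sub = np.asarray(y_tgt, float), np.asarray(y_sub, float)
     y  = y_tgt - f * y_sub
     dy = np.sqrt(y_tgt + f**2 * y_sub)   # dN = sqrt(N) per bin
     return y, dy
 
 y, dy = subtract([1000., 800.], [120., 90.], f=0.245)
 print(y, dy)   # compare against the ROOT-propagated error bars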

Systematic Errors

I also need to start working on non-statistical errors. I can leverage some of the work Abishek has already done.


Monte-Carlo Work

Optics and Acceptance

Check Mispointing

XEM model / Externals

Improve geometry of cryo cells

Iterate model parameters