Discussion of: Deep learning level-3 electron trigger for CLAS12


Cissie


David

  • Paper was clearly written and did a good job describing the work that was done.
  • Page 4, right column: Stride = (1,2). What is the kernel/filter size? (See the layer sketch after this list.)
  • Fig. 9: They are mainly focused on the left edge, where the efficiency is above the 99.5% level (dotted grey line). This is the region with the steepest gradient of purity (i.e. d(purity)/d(response) is large).
  • Fig. 10 (described at the bottom of page 6) is for tracks from real data. Wouldn't those tracks already have been selected by the hardware trigger?
  • Fig. 12: The linear extrapolation of the traditional trigger looks generous; in reality, the traditional trigger probably does much worse at higher beam currents. On the other hand, if a curve were fit to the actual data points shown, the extrapolation to 90 nA would give a negative purity, which is unphysical.
  • Eqn. 4 seems like it should be a ratio rather than a difference. Specifically, wouldn't the data reduction be the ratio of P_AI to P_CLAS12? (The data volume at fixed efficiency would go like 1/P; see the short derivation after this list.)
  • Running an individual process for each GPU was faster than a single process using all GPUs. It is not clear whether the single-process case used Deeplearning4j's built-in multi-GPU distribution or custom code. (See the launcher sketch after this list.)
  • The AI trigger will be used in conjunction with the existing hardware trigger, which already has high efficiency, to reduce the rate into the AI trigger. (Probably the smart way to do it.)
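
Regarding the stride question above: a minimal Deeplearning4j sketch of what such a convolutional layer's configuration could look like. The stride (1,2) is the value quoted from the paper; the kernel size, input channels, and filter count below are placeholder assumptions, i.e. exactly the information the comment asks for.

 import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
 import org.nd4j.linalg.activations.Activation;
 
 public class TriggerConvLayerSketch {
     public static void main(String[] args) {
         // Stride (1,2) is quoted from the paper (page 4, right column);
         // kernel size, nIn and nOut are assumptions, not values from the text.
         ConvolutionLayer conv = new ConvolutionLayer.Builder()
                 .kernelSize(3, 3)            // assumed, not stated in the paper
                 .stride(1, 2)                // as quoted in the paper
                 .nIn(1)                      // assumed single input channel
                 .nOut(16)                    // assumed number of filters
                 .activation(Activation.RELU) // assumed activation
                 .build();
         System.out.println(conv);
     }
 }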
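
A short sketch of the reasoning behind the Eqn. 4 comment, assuming (as in the comment, not the paper) that the recorded data volume V at fixed efficiency scales as the inverse of the trigger purity P:

 V \propto \frac{1}{P}
 \quad\Longrightarrow\quad
 R = \frac{V_{\mathrm{CLAS12}}}{V_{\mathrm{AI}}} = \frac{P_{\mathrm{AI}}}{P_{\mathrm{CLAS12}}}

i.e. the data reduction achieved by the AI trigger would naturally be the ratio of the purities rather than their difference.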
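
Regarding the multi-GPU comment above: a minimal sketch of the "one process per GPU" arrangement, assuming a hypothetical single-GPU inference jar (InferClas12Trigger.jar) and device pinning via CUDA_VISIBLE_DEVICES; the paper does not say how the per-GPU processes were actually launched.

 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
 
 // Launches one copy of a (hypothetical) single-GPU inference program per device.
 // With only one device visible to each process, no in-process multi-GPU
 // distribution (Deeplearning4j's or custom) is needed.
 public class PerGpuLauncher {
     public static void main(String[] args) throws IOException, InterruptedException {
         int nGpus = 4;  // assumed GPU count
         List<Process> workers = new ArrayList<>();
         for (int gpu = 0; gpu < nGpus; gpu++) {
             ProcessBuilder pb = new ProcessBuilder("java", "-jar", "InferClas12Trigger.jar");
             pb.environment().put("CUDA_VISIBLE_DEVICES", Integer.toString(gpu));
             pb.inheritIO();  // forward stdout/stderr of each worker
             workers.add(pb.start());
         }
         for (Process p : workers) {
             p.waitFor();  // wait for all per-GPU workers to finish
         }
     }
 }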