Discussion of: Deep learning level-3 electron trigger for CLAS12
Latest revision as of 16:20, 11 October 2023
Cissie
- Related reading: [https://arxiv.org/abs/2202.06869 CLAS12 Track Reconstruction with Artificial Intelligence]
- GitHub: [https://github.com/rtysonCLAS12/CLAS12AIElectronTrigger CLAS12AIElectronTrigger]
David
- Paper was clearly written and did a good job describing the work that was done.
- Page 4, right column: Stride = (1,2). What is the kernel/filter size?
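- A minimal sketch of why the missing kernel size matters: the output shape of a convolution depends on both stride and kernel size, so stride = (1,2) alone does not pin down the layer dimensions. The kernel sizes and the 6 x 112 input shape below are illustrative placeholders, not values from the paper.

```python
def conv_output_size(in_size, kernel, stride, padding=0):
    """Standard conv output length along one axis:
    floor((in + 2*pad - kernel) / stride) + 1."""
    return (in_size + 2 * padding - kernel) // stride + 1

# Illustrative 6 x 112 input with an assumed 2x2 kernel and stride (1,2):
h = conv_output_size(6, kernel=2, stride=1)    # stride 1 along height
w = conv_output_size(112, kernel=2, stride=2)  # stride 2 along width
print(h, w)  # -> 5 56

# A different (also assumed) 3x3 kernel gives a different output shape,
# which is exactly why the kernel size needs to be stated in the paper:
print(conv_output_size(6, kernel=3, stride=1),
      conv_output_size(112, kernel=3, stride=2))  # -> 4 55
```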
- Fig. 9: They are mainly focused on the left edge, where the efficiency is above the 99.5% level (dotted grey line). This is the region of the steepest gradient of purity (i.e. dpurity/dresponse is large).
- Fig. 10 (described at bottom of page 6) is for tracks from real data. Wouldn't this include the hardware trigger already?
- Fig. 12: The linear extrapolation of the traditional trigger looks generous; in reality, the traditional trigger probably does much worse at higher beam currents. If a curve were fit to the actual data points shown, though, the extrapolation to 90 nA would give a negative purity, which is unphysical.
- Eqn. 4 seems like it should be a ratio rather than a difference. Specifically, wouldn't the data reduction be the ratio of P_AI to P_CLAS12? (The data volume at fixed efficiency would scale like 1/P.)
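- A sketch of the reasoning behind the ratio suggestion, assuming the data volume at fixed efficiency scales like 1/P (the notation V for recorded data volume and N_sig for a fixed signal yield is introduced here for illustration, not taken from the paper):

```latex
% To record a fixed signal yield N_sig at fixed efficiency, the number of
% events written out scales like 1/P, so the data-volume reduction factor
% from switching triggers is the ratio of purities:
\[
  \frac{V_{\mathrm{CLAS12}}}{V_{\mathrm{AI}}}
  = \frac{N_{\mathrm{sig}} / P_{\mathrm{CLAS12}}}{N_{\mathrm{sig}} / P_{\mathrm{AI}}}
  = \frac{P_{\mathrm{AI}}}{P_{\mathrm{CLAS12}}}
\]
```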
- Running an individual process for each GPU was faster than a single process using all GPUs. It is not clear whether the single-process setup used Deeplearning4j's built-in distribution feature or custom code.
- Will be used in conjunction with the existing hardware trigger, which already has high efficiency, to reduce the rate into the AI trigger. (Probably the smart way to do it.)