Minutes from the 25 August 2022 meeting
Ongoing
Hardware purchase
Bryan Hess suggested in an email that he might have found a source for the GPUs. We brainstormed other hardware purchases we could make. The only request was a better monitor for Nathan's work desktop. The next actionable is to figure out how much of the budget would be left over after the GPU purchase, and put in a request for the monitor from IT. They should have it in stock, so we wouldn't have to worry about the Sept 15th deadline.
SRGS student internship
Colin and Will indicated that they want to continue in exchange for academic credit. We need answers to the following questions:
1. Does Nathan have the bandwidth to guide them? This is iffy. David certainly does not.
2. What would they work on? Nathan suggests the memory map for the model variable discovery tool, and possibly an interpreter for the DWARF memory location stack language. The other option is to have them keep experimenting with neural nets, although we'd need to figure out a project scope for them.
3. Did we miss the boat on this already? The real actionable is to find out from Carol.
Talks
W&M collaboration talk, 24 August
- Interest from professors! Bin Ren, Denys Poshyvanyk, Oscar Chaparro
- They all want a paper, not just the LDRD proposal. Possibly worth expediting the ACAT paper so we can send it to them.
- Actionable: Close the loop with each, make sure they all have the abstract and the FY23 proposal at least.
ACAT
- The logistics for Nathan's ACAT trip are not quite complete yet.
- The W&M talk is a good basis for the ACAT talk.
Milestones
Extend vacuum tool to support nested pointer structures
Critical
- Integration between PIN and abstract interpreter that does the memory mapping (Nathan)
- From DWARF, obtain the filename and line number for a given instruction pointer (Cissy)
- From DWARF, obtain the memory offset for a given field in a given struct (Cissy); a sketch of both DWARF queries follows this list
- Implement a memory map in order to get type information for a given memory address (Nathan)
Non-critical
- Map DWARF data to a simplified tree model so that we can experiment with different memory maps, possibly even involving Colin & Will (Nathan)
- Extract static array length from DWARF data (Cissy)
- Account for memory that was allocated outside the target function but deallocated inside the target function (Nathan)
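As a reference point for the two DWARF queries above, here is a rough sketch of both using pyelftools as a stand-in (the real tool may well use a different DWARF library); the binary name, address, struct, and field names are made up, and DWARF v4's 1-based file indexing is assumed.

    from elftools.elf.elffile import ELFFile

    def file_line_for_address(dwarfinfo, address):
        # Walk each CU's line program; each row covers [prev.address, state.address).
        for cu in dwarfinfo.iter_CUs():
            lineprog = dwarfinfo.line_program_for_CU(cu)
            if lineprog is None:
                continue
            prev = None
            for entry in lineprog.get_entries():
                state = entry.state
                if state is None:
                    continue
                if prev is not None and prev.address <= address < state.address:
                    # DWARF v4: file indices are 1-based
                    filename = lineprog['file_entry'][prev.file - 1].name.decode()
                    return filename, prev.line
                prev = None if state.end_sequence else state
        return None

    def member_offset(dwarfinfo, struct_name, field_name):
        # Find the struct DIE by name, then the member DIE, and read its byte offset.
        # Covers only the simple constant-offset case; the offset can also be an exprloc.
        for cu in dwarfinfo.iter_CUs():
            for die in cu.iter_DIEs():
                if die.tag != 'DW_TAG_structure_type':
                    continue
                name = die.attributes.get('DW_AT_name')
                if name is None or name.value.decode() != struct_name:
                    continue
                for child in die.iter_children():
                    cname = child.attributes.get('DW_AT_name')
                    if (child.tag == 'DW_TAG_member' and cname is not None
                            and cname.value.decode() == field_name):
                        return child.attributes['DW_AT_data_member_location'].value
        return None

    with open('tracking_kernel', 'rb') as f:        # hypothetical binary
        dwarf = ELFFile(f).get_dwarf_info()
        print(file_line_for_address(dwarf, 0x401a3c))               # hypothetical address
        print(member_offset(dwarf, 'TrackCandidate', 'momentum'))   # hypothetical names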
Demonstrate using the vacuum tool on the tracking model problem
- The memtrace tool will do _something_; its usability depends on the quality of milestone (1). In the worst case, it spits out a long list of addresses read and written with no context information; the memory-map sketch after this list shows how that context could be attached.
- Try this sooner rather than later, so that we can adapt the code to reality on the ground
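A sketch of what that memory map could look like, under the assumption that allocations are recorded as (start address, size, type name) triples; the region and addresses below are invented:

    import bisect

    class MemoryMap:
        """Map raw addresses to the type (and offset) of the region containing them."""
        def __init__(self):
            self._starts = []       # allocation start addresses, kept sorted
            self._regions = {}      # start -> (size, type_name)

        def add_region(self, start, size, type_name):
            bisect.insort(self._starts, start)
            self._regions[start] = (size, type_name)

        def lookup(self, address):
            # Find the rightmost region starting at or below the address,
            # then check that the address actually falls inside it.
            i = bisect.bisect_right(self._starts, address) - 1
            if i < 0:
                return None
            start = self._starts[i]
            size, type_name = self._regions[start]
            if address < start + size:
                return type_name, address - start
            return None

    mm = MemoryMap()
    mm.add_region(0x7f0000001000, 240, 'TrackCandidate[10]')   # invented region
    print(mm.lookup(0x7f0000001018))   # ('TrackCandidate[10]', 24)
    print(mm.lookup(0x7f00000020f0))   # None: unmapped address

The raw memtrace output (address, read/write) could then be joined against such a map to yield (type, offset, read/write) tuples instead of bare addresses.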
Integrate charged particle tracking model with surrogate library
Critical:
- Port the TensorFlow model to PyTorch (Kishan); see the sketch after this list
- Connect the surrogate library to the GlueX code (difficulties: gcc version, C++ standard, SCons integration). A separate branch exists for this purpose. (Nathan)
- Offload JANA2 subevents onto a GPU (ultimately using PyTorch as well) (Cissy)
Non-critical:
- PHASM/JANA2 integration: Surrogating a JFactory instead of a function (Nathan)
- Improve the GlueX tracking model (Kishan)
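A minimal sketch of the port target and the offload step; the layer sizes, batch size, and tensor shapes below are placeholders, not the actual GlueX tracking model:

    import torch
    import torch.nn as nn

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Stand-in for the ported tracking model: hit features in, track parameters out.
    model = nn.Sequential(
        nn.Linear(7, 64),      # placeholder input width
        nn.ReLU(),
        nn.Linear(64, 64),
        nn.ReLU(),
        nn.Linear(64, 5),      # placeholder output width
    ).to(device)

    # A batch of JANA2 subevents becomes one tensor, moved to the GPU in one call.
    subevent_batch = torch.randn(256, 7)    # placeholder data
    with torch.no_grad():
        predictions = model(subevent_batch.to(device))
    print(predictions.shape)                # torch.Size([256, 5])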
Write playbook document summarizing what we learned from profiling the real-world compute kernels from Q2
- Cissy is close to finishing the roofline analyses of ML algorithms running on GPUs.
- It is time to start understanding the behavior we see in terms of Brent, Amdahl, and Gustafson; a small numeric sketch follows. (Nathan+Cissy)
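As a starting point for that discussion, the three bounds in executable form; the fractions and timings are illustrative, not measurements from the Q2 kernels:

    def amdahl_speedup(f, p):
        """Fixed-size speedup with parallelizable fraction f on p processors."""
        return 1.0 / ((1.0 - f) + f / p)

    def gustafson_speedup(f, p):
        """Scaled speedup when the parallel part grows with p."""
        return (1.0 - f) + f * p

    def brent_bound(t1, t_inf, p):
        """Brent's theorem: T_p <= T_1/p + T_inf, given work T_1 and span T_inf."""
        return t1 / p + t_inf

    for p in (1, 8, 64, 1024):
        print(p, amdahl_speedup(0.95, p), gustafson_speedup(0.95, p),
              brent_bound(1e9, 1e5, p))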
Next steps
Nathan
- Implement the model variable discovery tool memory map
- Start thinking about the Amdahl's Law-esque decision criterion
Cissy
- JANA2 + PHASM + GPU integration
- Traversing DWARF data
- Start reviewing Amdahl's Law, Gustafson's Law, and Brent's Theorem
Kishan
- Port the GlueX tracking model to PyTorch
- Tweak the GlueX tracking model