SRGS 2022
General Info
Presentation Schedule
Week of June 27:
- June 27 - orientation
- June 28 - Will S.
- June 29 - Dhruv B.
- June 30 - Anna R.
- July 1 - Hari G.

Week of July 11:
- July 11 - Hari G.
- July 12 - Will S.
- July 13 - Colin W.
- July 14 - Anna R.
- July 15 - Dhruv B.

Week of July 18:
- July 18 - Hari G.
- July 19 - Will S.
- July 20 - Colin W.
- July 21 - Anna R.
- July 22 - Dhruv B.
PHASM: neural net models of PDE solvers
Students:
- Dhruv Bejugam
- Hari Gopal
- Colin Wolfe
Useful links:
- Phasm Intro Slides.pdf
- SciML curriculum
- PHASM repository: https://github.com/nathanwbrei/phasm
AI Feature Recognition: Extract Spectrometer Angle from Image
Students:
- Anna Rosner
- William Savage
Useful links/info:
- GitHub Repository
- angle-cam-image-recognition.pdf
- Location of example images: /work/hallc/shms/spring17_angle_snaps/
- The time the image was acquired is embedded in the image file
- The numbers in the snapshot filenames are the run numbers
- 4,265 images; ~92 kB/file; 391 MB total (see the directory-scan sketch after this list)
- The values of the encoders are stored in the MYA EPICS archive (see the archive-query sketch after this list)
- PV names are:
- ecSHMS_Angle
- ecHMS_Angle
- Example logbook entry
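Since the run number is carried in the snapshot filename and the acquisition time in the file itself, a natural first step is to index the ~4,265 snapshots by run number. Below is a minimal sketch; the filename pattern (a single digit group in the stem) is an assumption, since no example filename is shown above.

```python
import re
from pathlib import Path

SNAP_DIR = Path("/work/hallc/shms/spring17_angle_snaps")

# Assumption: the run number is the only digit group in the filename,
# e.g. something like "shms_1234.jpg". Check against the real files.
RUN_RE = re.compile(r"(\d+)")

images = {}
for path in sorted(SNAP_DIR.iterdir()):
    match = RUN_RE.search(path.stem)
    if match is None:
        continue  # skip files with no run number in the name
    images[int(match.group(1))] = path

print(f"indexed {len(images)} snapshots")  # expect ~4,265
```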
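To pair each snapshot with its "truth" angle, the two encoder PVs above have to be pulled from MYA. The sketch below assumes the archive is reachable through JLab's myquery web service; the host, endpoint, and parameter names are assumptions to verify against the myquery documentation.

```python
import requests

# Assumption: MYA is exposed through the "myquery" web service; the
# host, endpoint, and parameter names should be checked against its docs.
MYQUERY = "https://epicsweb.jlab.org/myquery/interval"

def fetch_angle(pv, begin, end):
    """Fetch archived samples of one encoder PV over a time range."""
    params = {"c": pv, "b": begin, "e": end}
    resp = requests.get(MYQUERY, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

shms = fetch_angle("ecSHMS_Angle", "2017-02-01", "2017-06-01")
hms = fetch_angle("ecHMS_Angle", "2017-02-01", "2017-06-01")
```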
Initial thoughts from Brad
Brad's initial thoughts on approaching the problem:
I had been imagining splitting the photos into two regions: one with the digits, and a second with the vernier scale. Each region would be evaluated/interpreted separately with some 'optimized' algorithms.

'Real' errors/discrepancies would be best indicated by scanning for a mismatch between MYA and the analysis database record and/or the value flagged in the logbook, which has generally been vetted and updated by a human. The simplest way to test 'bad' angles would be just to (randomly) shift the truth angle by a small amount -- that would be indistinguishable from an observed drift in the EPICS encoder system.

I (or the students) can also look for angle shifts in the 'real' data, but that will take some poking around. It should be indicated by a sharp (small) jump in the MYA value as an offset is changed to bring the EPICS value in agreement with the camera readback.

One other dataset that I could obtain is a movie of the angle changing over a range (the movie is just a compilation of frame grabs). The individual frames could be pulled out of the mp4 and evaluated individually over a continuously varying range of angles.
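Brad's region-splitting, truth-shifting, and frame-grab ideas each reduce to a few lines of OpenCV/Python. In the sketch below, the crop boxes and the jitter width are placeholders that would have to be measured from real snapshots; only the overall structure is meant to be suggestive.

```python
import random
import cv2

# Placeholder crop boxes (y0, y1, x0, x1); the real digit and
# vernier-scale regions must be measured from an example snapshot.
DIGIT_BOX = (100, 200, 50, 300)
VERNIER_BOX = (250, 350, 50, 300)

def split_regions(image):
    """Split a snapshot into the digit region and the vernier-scale region."""
    y0, y1, x0, x1 = DIGIT_BOX
    digits = image[y0:y1, x0:x1]
    y0, y1, x0, x1 = VERNIER_BOX
    vernier = image[y0:y1, x0:x1]
    return digits, vernier

def jitter_truth(angle, sigma=0.05):
    """Simulate a 'bad' angle by randomly shifting the truth value by a
    small amount, mimicking a drift in the EPICS encoder system.
    The width sigma (in degrees) is a placeholder."""
    return angle + random.gauss(0.0, sigma)

def frames_from_movie(path):
    """Pull individual frames out of the angle-sweep mp4."""
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame
    cap.release()
```

Each frame yielded by frames_from_movie could then be run through split_regions, giving labeled digit/vernier crops over a continuously varying range of angles.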