To-Do List For Thesis

Revision as of 17:13, 4 May 2020 by Yoskowij (Talk | contribs)


A list of what exactly needs to get done for the thesis paper, what is required for each item, the estimated completion time, the logistics of completion, and possible complications that might prolong its completion. This should help prioritize the work and ensure it gets done in a timely manner.

I'm thinking that, with sufficient planning and foresight, most if not all of this to-do list can be completed by the end of the summer and thesis writing can commence in the Fall.

GPT

  • Benchmarking SEDCS Routine
    • Description: The secondary electron differential cross section (SEDCS) routine has been shown to produce secondary electrons with the "correct looking" energy distribution - i.e. they should follow the theoretical SEDCS curve. However, in order to ensure that it is indeed working correctly, it needs to be benchmarked against a theoretical test case. That is, histograms from a GPT simulation and a theoretical simulation using the same number of test cases can be compared.
    • Logistics: Run GPT simulation to produce large number of ionizations. Run test case with external C++ code or make analytical calculations to produce the same number of ionizations. Compare histograms and overlay SEDCS curve. (Compare with IBSimu?)
    • Estimated Completion Time: 1-3 days
    • Possible Complications: Need a good program to produce histograms; Excel and Gnuplot aren't well suited to histogram comparison plots. Also need to be able to overlay the SEDCS curve on top of the histograms. Maybe use Mathematica or an SDDS program?
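The benchmark above can be sketched in a few lines of C++. The 1/(W+I)^2 shape below is a stand-in (a Rutherford-like form), not necessarily the SEDCS the GPT routine actually uses; the point is the inverse-CDF sampling and binning, which carry over once the real curve is substituted.

```cpp
// Sketch: sample secondary-electron energies from a model SEDCS via inverse
// transform, and bin them so the histogram can be overlaid on the theory
// curve. The 1/(W + I)^2 shape is an assumed stand-in; substitute the actual
// SEDCS used in the GPT routine.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Model SEDCS shape (unnormalized): dSigma/dW ~ 1/(W + I)^2,
// W = secondary energy, I = ionization energy.
double sedcs(double W, double I) { return 1.0 / ((W + I) * (W + I)); }

// Inverse-CDF sample of W on [0, Wmax], with Wmax = (T - I)/2 by the usual
// convention that the slower outgoing electron is labeled the secondary.
double sampleW(double u, double T, double I) {
    double Wmax = 0.5 * (T - I);
    double a = 1.0 / I, b = 1.0 / (Wmax + I);
    return 1.0 / (a - u * (a - b)) - I;
}

// Fill a histogram of N samples with nbins equal-width bins on [0, Wmax].
std::vector<int> sedcsHistogram(int N, int nbins, double T, double I,
                                unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double Wmax = 0.5 * (T - I);
    std::vector<int> h(nbins, 0);
    for (int i = 0; i < N; ++i) {
        double W = sampleW(uni(rng), T, I);
        int bin = std::min(nbins - 1, (int)(W / Wmax * nbins));
        h[bin]++;
    }
    return h;
}
```

The same histogram routine can bin the GPT output, so both codes feed identical plots for the overlay comparison.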
  • Benchmarking Ion Energy (Maxwellian) Routine
    • Description: The ion energies are set to follow a Maxwellian distribution about 4eV. The routine to do this in GPT is similar to the SEDCS routine above and thus should be benchmarked in the same way: run a test case and compare the resulting ion energy histogram with the histogram from a similar test case from an external C++ code (or make analytical calculation).
    • Logistics: Similar to SEDCS routine logistics above, but use Maxwellian curve instead of SEDCS curve.
    • Estimated Completion Time: 1-3 days
    • Possible Complications: Same as SEDCS routine complications above.
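The external test case for this item reduces to sampling a Maxwellian, which can be done by drawing three Gaussian velocity components. Whether "4 eV" is the mean energy (in which case kT = 8/3 eV, since ⟨E⟩ = 3kT/2) or the temperature itself is an assumption that should be confirmed against the routine.

```cpp
// Sketch: sample ion kinetic energies from a Maxwellian by drawing three
// standard-normal velocity components, for histogram comparison with the
// GPT ion-energy routine. Units of kT carry through to E (eV here).
#include <random>

// One Maxwellian energy sample: E = (kT/2) * (n1^2 + n2^2 + n3^2).
double sampleMaxwellianE(std::mt19937& rng, double kT) {
    std::normal_distribution<double> gauss(0.0, 1.0);
    double s = 0.0;
    for (int i = 0; i < 3; ++i) { double n = gauss(rng); s += n * n; }
    return 0.5 * kT * s;
}

// Sample mean over N draws -- should approach 3kT/2.
double meanEnergy(int N, double kT, unsigned seed) {
    std::mt19937 rng(seed);
    double sum = 0.0;
    for (int i = 0; i < N; ++i) sum += sampleMaxwellianE(rng, kT);
    return sum / N;
}
```

The sample mean converging to 3kT/2 is a quick sanity check before comparing full histograms.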
  • Benchmarking Spacecharge3D routine
    • Description: The spacecharge3D routine is being benchmarked against an analytical calculation to ensure it works correctly with the ionization routine. Space charge calculations are important in ionization simulations, especially when studying effects like ion trapping within the beam potential and charge neutralization. A derivation is under way to calculate the equations of motion for an ion that is a certain distance away from an electron line current. The ion starts with an initial velocity parallel to the line current and is expected to oscillate about the line current. Based on the equations of motion, one can calculate the positions/times at which the ion crosses the line current.
    • Logistics: Finish derivation, make analytical predictions for crossing times/positions, run GPT simulation, check results.
    • Estimated Completion Time: 1-3 days
    • Possible Complications: None that I can think of...
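For the crossing-time check, the transverse motion can be integrated independently of GPT. The sketch below models the beam as a uniform cylinder of radius a (field ~ r inside, ~ 1/r outside, which avoids the on-axis singularity of an ideal line charge); the field constant K and radius are illustrative placeholders, not the LifeSize values.

```cpp
// Sketch: leapfrog integration of an ion's transverse motion in the
// space-charge field of an electron line current (modeled as a uniform
// cylinder of radius a), recording the times it crosses the beam axis.
#include <cmath>
#include <vector>

// Radial acceleration toward the axis: a(x) = -K x / a^2 for |x| < a,
// a(x) = -K / x outside (K > 0 lumps q*lambda/(2*pi*eps0*m)).
double accel(double x, double K, double a) {
    if (std::fabs(x) < a) return -K * x / (a * a);
    return -K / x;
}

// Integrate from rest at x0 with step dt; return axis-crossing times.
std::vector<double> crossingTimes(double x0, double K, double a,
                                  double dt, double tmax) {
    std::vector<double> times;
    double x = x0, v = 0.0, t = 0.0;
    v += 0.5 * dt * accel(x, K, a);   // half kick to stagger v by dt/2
    while (t < tmax) {
        double xprev = x;
        x += dt * v;                  // drift
        v += dt * accel(x, K, a);     // kick
        t += dt;
        if (xprev * x < 0.0)          // sign change = axis crossing
            times.push_back(t);
    }
    return times;
}
```

With the ion started inside the beam the motion is purely harmonic (omega = sqrt(K)/a), so the crossing times have a closed form to check the integrator against before moving to the full GPT comparison.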
  • Secondary Yield Custom Element
    • Description: A custom element needs to be built to simulate secondary electrons ejected from the surfaces of the photocathode, anode, and beamline. Given theoretical equations (or at least empirical estimates) for the probability of ejecting a secondary electron from a surface, it should be straightforward to write a GPT custom element based on a similar built-in custom element.
    • Logistics: Produce custom element based on existing custom element. My guess is that it will be similar in structure to the ionization custom element, but will be substantially shorter. After production, the custom element needs to be benchmarked against theory and IBSimu to ensure its accuracy.
    • Estimated Completion Time: 2-5 days
    • Possible Complications: Although unlikely, if no existing custom element is sufficiently similar to what is required for this custom element, it may take a while to create my own custom element based on pieces of several other custom elements. Also, from my experience with the writeremove custom element, I may run into some technical issue with the custom element that may take a while to solve or write a workaround for.
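The core of the yield element is a per-impact Monte Carlo decision, which can be sketched ahead of time. The yield curve below is an assumed single-peak empirical form, not a measured one, and the integer-plus-Bernoulli scheme mirrors how the ionization routine handles probabilities above 1 (described further down this list).

```cpp
// Sketch: Monte Carlo decision for secondary emission at a surface hit.
// The integer part of the yield delta(E) is emitted deterministically and
// the fractional part by a Bernoulli draw.
#include <cmath>
#include <random>

// Placeholder yield curve: delta(E) = dmax * (E/Emax) * exp(1 - E/Emax),
// a common single-peak empirical shape; replace with the measured curve.
double yield(double E, double dmax, double Emax) {
    double x = E / Emax;
    return dmax * x * std::exp(1.0 - x);
}

// Number of secondaries emitted for one impact of energy E.
int numSecondaries(std::mt19937& rng, double E, double dmax, double Emax) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double d = yield(E, dmax, Emax);
    int n = (int)d;                  // guaranteed secondaries
    if (uni(rng) < d - n) ++n;       // fractional part by Monte Carlo
    return n;
}
```

Averaged over many impacts, the emitted count should reproduce delta(E) exactly, which is the natural benchmark for the element.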
  • Vacuum Custom Element
    • Description: A custom element needs to be built to allow the user to import 3D vacuum data. This data will be used in conjunction with the ionization routine to calculate the local gas density at a given location in the simulation. This density is then used in the calculation of the ionization probability. Currently, the ionization routine assumes a constant, uniform gas density throughout the ionization region.
    • Logistics: Produce custom element based on existing custom element (probably will be based on the 3D electric or magnetic field elements). After production, it will need to be benchmarked against theory. Also, an option is to make the vacuum custom element time-based (i.e. allow for a time based vacuum distribution), but let's not get ahead of ourselves!
    • Estimated Completion Time: 2-5 days
    • Possible Complications: Similar complications as the secondary electron yield custom element.
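The lookup inside this element amounts to interpolating a gridded density map at each particle position. The sketch below uses trilinear interpolation (the 3D field-map elements use cubic splines; trilinear is the simpler stand-in), and the grid layout is an assumption about the eventual file format.

```cpp
// Sketch: trilinear interpolation of a gridded 3D gas-density map to get
// the local density n(x, y, z) used in the ionization probability.
#include <cstddef>
#include <vector>

struct DensityMap {
    int nx, ny, nz;          // grid points per axis
    double x0, y0, z0;       // grid origin
    double dx, dy, dz;       // grid spacing
    std::vector<double> n;   // density, index = (ix*ny + iy)*nz + iz
    double at(int ix, int iy, int iz) const {
        return n[((std::size_t)ix * ny + iy) * nz + iz];
    }
};

// Local density by trilinear interpolation; clamps positions to the grid.
double density(const DensityMap& m, double x, double y, double z) {
    auto cell = [](double p, double p0, double dp, int np, int& i) {
        double f = (p - p0) / dp;
        if (f < 0) f = 0;
        if (f > np - 1) f = np - 1;
        i = (int)f;
        if (i > np - 2) i = np - 2;
        return f - i;                 // fractional position within the cell
    };
    int ix, iy, iz;
    double fx = cell(x, m.x0, m.dx, m.nx, ix);
    double fy = cell(y, m.y0, m.dy, m.ny, iy);
    double fz = cell(z, m.z0, m.dz, m.nz, iz);
    double c = 0.0;
    for (int a = 0; a < 2; ++a)
        for (int b = 0; b < 2; ++b)
            for (int d = 0; d < 2; ++d)
                c += m.at(ix + a, iy + b, iz + d)
                   * (a ? fx : 1 - fx) * (b ? fy : 1 - fy)
                   * (d ? fz : 1 - fz);
    return c;
}
```

Trilinear interpolation reproduces any linear density profile exactly, which gives a clean benchmark case before trying realistic vacuum data.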
  • Checking possible issue with multi-ionization
    • Description: When an electron ionizes a gas molecule, it loses a certain amount of energy. The amount of energy lost depends on the ionization energy of the target gas and the resulting energies of the ion and secondary electron. In theory, an electron can continue to produce ions so long as its kinetic energy exceeds the ionization energy of the target gas. Because the ionization cross section is extremely small for an electron regardless of its kinetic energy, the probability of a simulated electron producing multiple ions in a given simulation time step is negligible provided the time step is small, and thus is not an issue in most simulations. However, this probability can become significant if (a) the simulation time step is large, (b) the gas density is very high, or (c) the simulation uses electron macro-particles and the number of electrons each macro-particle represents is large. The ionization routine handles the case of multiple ionizations by allowing the ionization probability to exceed 1 and then using a Monte Carlo routine to determine whether an additional ion is created. If multiple ionizations occur for a given simulation particle in a given time step, all ions are placed within the trajectory of the primary electron. By energy conservation, the electron must lose a certain amount of energy with every ionization; once its kinetic energy drops below the ionization energy of the gas molecule, it can no longer ionize. The problem occurs when the total energy loss from all ionizations occurring in a given time step for a given electron macro-particle exceeds the difference between the macro-particle's kinetic energy and the ionization energy of the target gas (i.e. E_loss > T - Ii). If this condition is met, the electron macro-particle has produced more ions than it should. For example, consider an electron with 20 eV of kinetic energy ionizing H2 gas, which has an ionization energy of 15.4 eV. Assume the density of the H2 gas is ridiculously high, say, 10^50 m^-3. In a given time step, the ionization probability might be P ~ 20.329, meaning that 20 ions are created and there is a 32.9% chance of creating an additional ion. The GPT ionization routine will place 20 ions within the electron's trajectory. However, because its kinetic energy is only 20 eV, only 1 ion can be created before the electron's kinetic energy drops below the H2 ionization energy. Thus, a check needs to be implemented to ensure this can never happen and that the correct number of ions is created for high ionization probabilities.
    • Logistics: Implement check (probably a few if-statements within the secondary electron do-while loop) to update the scattered electron's energy with every ionization and check to see if the resulting energy is above the ionization energy before adding an additional ion. Run test GPT simulation with electron macro-particles creating multiple ions per time step to ensure the checks work correctly.
    • Estimated Completion Time: 1-3 days
    • Possible Complications: As with any coding, I might run into some weird technical problem that may take a while to come up with a solution/work-around.
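The proposed check can be sketched as a loop condition: deduct each ionization's energy cost from the primary's kinetic energy and stop producing ions once T would fall below Ii. The uniform draw for the secondary energy W is a stand-in for the SEDCS sample, and the structure is a guess at the routine's loop, not the actual GPT code.

```cpp
// Sketch of the proposed energy-conservation check for multi-ionization.
// P is the (possibly > 1) ionization probability for this macro-particle
// and time step; T is its kinetic energy (eV); Ii the ionization energy.
#include <random>

// Returns the number of ions actually created, capped by energy conservation.
int createIons(std::mt19937& rng, double P, double T, double Ii) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    int ions = 0;
    while (T > Ii) {                  // the new check: enough energy left?
        bool create = (P >= 1.0) || (uni(rng) < P);  // Monte Carlo remainder
        if (!create) break;
        // secondary carries away W in [0, (T - Ii)/2]; uniform stand-in
        // for the SEDCS draw used in the real routine
        double W = uni(rng) * 0.5 * (T - Ii);
        T -= Ii + W;                  // primary loses Ii plus W
        ++ions;
        P -= 1.0;                     // one guaranteed ionization consumed
    }
    return ions;
}
```

On the 20 eV / H2 example from the description, this cap yields exactly 1 ion no matter how large P is, which is the behavior the check is meant to enforce.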

GPT/IBSimu Simulations of 2017 LifeSize Data

  • DC and Pulsed Beam Simulations
    • Description: Create GPT simulations of DC and pulsed beams using ionization custom element and other applicable custom elements. These tests will show where ions go once they are formed: Do they get trapped within the beam (does it matter which type of beam)? Are there differences in the ion damage pattern on the photocathode? The applicable custom elements are installed on my J: Drive for use on the iFarm. All simulations will be performed on the iFarm at once to save time. The parameters of the simulations will be as close as possible to the IBSimu simulation parameters presented over the past few weeks.
    • Logistics: Create xml file containing all jobs to be sent to the iFarm. Test cases will need to be sent to the iFarm prior to the entire batch of jobs in order to debug the GPT code and xml file and ensure they run correctly. Create histograms and plots of N_ions vs t and R_ions vs t, and create a ppt that shows the motivation, simulation parameters, simulation video, and all analysis results.
    • Estimated Completion Time: Create xml file, run test cases on farm, debug code -- 1 day, Run jobs on iFarm -- 1-2 days, Analyze results, make plots, make ppt presentation -- 1-3 days. Total: ~1 week.
    • Possible Complications: Although tests will be done to ensure the GPT simulations run correctly on the iFarm, there is no way to know if the simulations are running correctly once they are on the iFarm. If a simulation doesn't work correctly or does not give the desired results, it may take a few tries to get it to work, and the more tries, the longer it takes to get the simulation back due to iFarm Fair Use policies.
  • Varying laser spot (i.e. initial distribution) to reproduce QE damage
    • Description: The starting position of the initial particle distribution can be varied to see if discrepancies between the ion damage pattern on the photocathode and the QE degradation pattern (from a difference plot) are due to experimental errors in the laser spot position. That is, it might be that an electron beam from a different location on the active area produces a damage pattern that is consistent with the spatial distribution of QE degradation. For accuracy, this will require realistic initial distributions that take into account the laser profile data from BeamGage.
    • Logistics: Create applicable difference scans and get parameters for each LifeSize run (laser spot, laser configuration, current, etc.) and create list of simulations to make. Find and implement method for extracting real laser profile data from BeamGage so that the laser data can be mapped to the applicable QE scan to get the initial distribution file. Create applicable xml file for simulations to be sent to the iFarm. Test run a few cases, then send out jobs to iFarm. Create analysis plots and ppt similar to the DC/Pulsed Beam logistics above.
    • Estimated Completion Time: Difference scans, applicable parameters, list of simulations -- 1 day, Get BeamGage laser info and implement algorithm to map to QE scans -- 2-4 days, create xml file, test run on iFarm, then run jobs on iFarm -- 1-2 days, analyze results, create ppt -- 1-3 days. Total: 1-1.5 weeks
    • Possible Complications: Similar to DC/Pulsed Beam. It might take a while to determine the best way to extract the BeamGage laser info and map it onto the QE scan data.
  • Determining maximum distance down the beam line that an ion produced can make it back to the photocathode
    • Description: Cristhian predicted that due to the solenoid field, ions beyond 0.76 m from the photocathode cannot reach the photocathode and thus we don't need to worry about them. GPT can confirm this result by running a sample simulation with the applicable E- and B-Fields and seeing where ions go after a long period of time. With clever use of Screen and IPRInfo data, it should be possible to see where ions that strike the photocathode originate from, and make a histogram of bombarding ions vs distance of origin from the photocathode. Since these simulations may need to run for a long time, they should be run on the farm to ensure their completion.
    • Logistics: Set up simulation test cases, create xml file, test simulations and then run them on the iFarm. Analyze results and create ppt.
    • Estimated Completion Time: Create sample simulation and xml file -- <1 day. Test simulations and then run on iFarm -- 1-2 days, Analyze results, create ppt -- 1-3 days. Total: 3-5 days.
    • Possible Complications: Similar to DC/Pulsed Beam.
  • Benchmarking Implementation of Dipole Field Map and coordinate system transformation
    • Description: I have the real dipole field map from Jay Benesch. When particles pass through the dipole field map, they are transferred to a custom coordinate system (CCS). Beam line components after the dipole are then placed with respect to this coordinate system instead of the world coordinate system (WCS), which is with respect to the photocathode. The coordinate transformations are done using the built-in ccsflip element. However, Bas and Alicia said that the coordinate transformations used in this element might not be accurate. Supposedly, however, the sectormagnet element uses the correct coordinate transformations, but it does not allow one to use a custom field map. Maybe I can combine them with a separate custom element?
    • Logistics: Understand how ccsflip and sectormagnet work (I am able to look at both built-in custom elements), determine best way to implement dipole field map. If necessary, I can create a custom element.
    • Estimated Completion Time: It'll take 1-2 days to understand what ccsflip and sectormagnet do and how GPT uses them to transfer coordinate systems. If it turns out I need to make my own custom element, then it may take at least a week. There might be a neat workaround, and I can work with Bas to come up with the best solution. Thus, it's uncertain how long this will take. Once the solution is implemented, it will need to be benchmarked with a simple test case.
    • Possible Complications: See estimated completion time above.
  • Getting Initial Distribution from BeamGage Laser Profile Data and QE Scans
    • Description: I have the BeamGage Laser profiles. They need to be calibrated and then mapped onto a QE scan in order to create the most realistic initial particle distribution. The mapping can either be done externally or with a custom element...not sure which is easier to do.
    • Logistics: Calibrate the BeamGage data, figure out algorithm for mapping laser profile data onto QE scan (internally or externally), create initial distribution either externally using asci2gdf and setfile or internally using a custom element
    • Estimated Completion Time: Calibrating and organizing data -- 1-3 days, mapping laser profile data onto QE scan -- ~2-5 days, create initial distribution and run test case -- <1 day. Total: ~1 week
    • Possible Complications: It's unclear how to best map the laser profile onto the QE scan, as the laser profile has more data than the QE scan. Interpolation via splines can be useful, though this introduces errors. Custom elements such as those used for the 3D field maps use cubic splines to get the interpolated value for the local E- and B-Fields at each particle. Perhaps something similar can be used here.
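Since the laser profile has more data points than the QE scan, one simple option (before resorting to splines) is block averaging: each QE pixel takes the mean of the laser pixels it covers, and the two maps are then multiplied to weight emission. The grid sizes, alignment, and integer downsampling factor below are assumptions; real data needs the BeamGage calibration applied first.

```cpp
// Sketch: downsample a fine laser-profile grid onto a coarser QE-scan grid
// by block averaging, then weight QE by laser intensity to form an
// initial-distribution map.
#include <cstddef>
#include <vector>

// Average f x f blocks of the (ny*f) x (nx*f) laser grid down to ny x nx.
// laser is row-major: laser[row * (nx*f) + col].
std::vector<double> blockAverage(const std::vector<double>& laser,
                                 int nx, int ny, int f) {
    std::vector<double> out(nx * ny, 0.0);
    for (int qy = 0; qy < ny; ++qy)
        for (int qx = 0; qx < nx; ++qx) {
            double sum = 0.0;
            for (int dy = 0; dy < f; ++dy)
                for (int dx = 0; dx < f; ++dx)
                    sum += laser[(qy * f + dy) * (nx * f) + (qx * f + dx)];
            out[qy * nx + qx] = sum / (f * f);
        }
    return out;
}

// Emission weight per QE pixel: laser intensity times local QE.
std::vector<double> emissionMap(const std::vector<double>& laserAvg,
                                const std::vector<double>& qe) {
    std::vector<double> w(qe.size());
    for (std::size_t i = 0; i < qe.size(); ++i) w[i] = laserAvg[i] * qe[i];
    return w;
}
```

The resulting weight map could then be written out and converted with asci2gdf for use with setfile, or evaluated inside a custom element; either route consumes the same map.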
  • Compare real radiation monitor signals with GPT simulations using the writeremove element
    • Description: The writeremove custom element I wrote writes the locations and parameters of particles that leave the simulation. In this case, these locations correspond to the beampipe and photocathode. From this data, I can see where the highest densities of particles striking the beamline are located and compare them with signals from radiation monitors located along the beamline.
    • Logistics: Run DC or Pulsed beam simulations for a sample beam run with applicable E- and B-Field maps along with the writeremove element. Get locations of relevant rad monitors and pull data from archiver. Compare writeremove output with rad monitor data.
    • Estimated Completion Time: 1-3 days
    • Possible Complications: Rad monitor data might not be good...might need to be calibrated?
  • QE Scan Analysis and Summarizing Applicable LifeSize Runs
    • Description: Analyze all applicable QE scans from LifeSize runs, determine location and boundary of active area and laser spot on the QE scan (the red and cyan/green circles) through some algorithm (eyeballing, analytical methods, etc.). Based on the known size of the active area and FWHM of the laser spot, the QE scans can be calibrated (steps->mm). QE scans can be subtracted to make difference scans that denote the reduction in QE over time. These difference scans will be compared with the ion damage distributions at the photocathode.
    • Logistics: Gather and organize all QE scans (mostly done already), calibrate them using some method, create difference scans and organize them in a ppt presentation with all relevant parameters.
    • Estimated Completion Time: 1-3 days
    • Possible Complications: None that I can think of...
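The calibration and subtraction steps above are simple enough to sketch directly. The 5 mm active-area diameter and scan values in the test are made-up illustrations, not LifeSize numbers.

```cpp
// Sketch: calibrate a QE scan (steps -> mm) from the known active-area
// diameter, and form a difference scan marking QE loss between two scans.
#include <cstddef>
#include <vector>

// mm per step, from the active-area diameter measured on the scan in steps
// (e.g. from the fitted red circle).
double mmPerStep(double knownDiameterMm, double measuredDiameterSteps) {
    return knownDiameterMm / measuredDiameterSteps;
}

// Difference scan: before - after, clamped at zero so only QE loss shows.
std::vector<double> differenceScan(const std::vector<double>& before,
                                   const std::vector<double>& after) {
    std::vector<double> d(before.size());
    for (std::size_t i = 0; i < before.size(); ++i) {
        double loss = before[i] - after[i];
        d[i] = loss > 0.0 ? loss : 0.0;
    }
    return d;
}
```

Clamping at zero is a design choice: it keeps pixels where QE fluctuated upward (noise) from masking the damage pattern when comparing against ion distributions.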
  • GPT/IBSimu Grand Benchmarking Simulation (After above list items are complete)
    • Description: Once the above items are complete and the items to be benchmarked (described above) are complete, a complete test case involving everything (ionization, E- and B-Fields, secondary electron yield, dipole model, etc.) should be performed for GPT and IBSimu to ensure both codes can produce the same results. If so, then that would be strong evidence that the GPT ionization routine is complete and ready to be given to Bas for implementation into GPT.
    • Logistics: Agree on a test case, build the input file, run the test case, analyze the results and compile them into a ppt (as above), celebrate by having a beer...or two.
    • Estimated Completion Time: 1-3 days.
    • Possible Complications: Hopefully none.

Biased Anode Experiments

  • QE Measurement Plots for 2019-2020 data and Lifetime Analysis
    • Description:
    • Logistics:
    • Estimated Completion Time:
    • Possible Complications:
  • QE Scan Analysis
    • Description:
    • Logistics:
    • Estimated Completion Time:
    • Possible Complications:
  • Create GPT simulations using new ionization routine and applicable custom elements
    • Description:
    • Logistics:
    • Estimated Completion Time:
    • Possible Complications:
  • Write Biased Anode Paper
    • Description:
    • Logistics:
    • Estimated Completion Time:
    • Possible Complications:

Ion Trapping and Ghost Beam Experiments with Steel Shield

  • Create GPT simulations using new ionization routine, applicable custom elements, and FEMM model of B-Field
    • Description:
    • Logistics:
    • Estimated Completion Time:
    • Possible Complications:
  • Perform Steel Shield experiments with T-Gun and P-Gun
    • Description:
    • Logistics:
    • Estimated Completion Time:
    • Possible Complications:
  • Create Ghost Beam GPT Simulations in support of 2018-2019 Ghost Beam Data
    • Description:
    • Logistics:
    • Estimated Completion Time:
    • Possible Complications:

Other

  • Annual Review
    • Description: A requirement as part of the PhD program is to do an annual review, where I present an update on everything I've done over the past year and to ensure I am on track to complete my thesis. This will be done over BlueJeans. I was technically supposed to do this one year after my oral qualifier (Feb 2019), but since Feb 2020 was a busy month and with Covid-19 complications, this will be done in May 2020. My entire thesis committee must be present for the annual review.
    • Logistics: Use Doodle poll to determine best time to have presentation. Create ppt presentation, go through revisions, and present it.
    • Estimated Completion Time: Poll/outline -- 1 day, Create presentation -- 1-3 days, Revisions -- 2-3 days, Total: ~1 week
    • Possible Complications: None that I can think of...unless one of us becomes unavailable, in which case the annual review might have to be pushed back to June.
  • Write GPT Ionization Custom Element Documentation
    • Description:
    • Logistics:
    • Estimated Completion Time:
    • Possible Complications:

