Run Meeting: August 18th, 2020

From clas12-run

Back to Run Group F | Daily Run Group F Coordination Meetings

10:00AM Run Meeting, Counting house meeting room (200C) ONLINE

RC: Carlos Ayerbe

Summary of past 24h:

DAY: https://logbooks.jlab.org/entry/3829137
SWING: https://logbooks.jlab.org/entry/3829400
OWL: https://logbooks.jlab.org/entry/3829603

  • During the day shift it was requested to reduce the ramp-up time to at least half its current value. Stepan S. suggested checking the orbit locks, which are suspended during the ramp-up procedure. MCC answered (in coordination with Stepan S. and Jay B.) that such a change requires new software development.
  • Apart from this, the day shift was fairly productive.
  • The swing shift started well. Then an increase in the rates of ~20% with respect to other runs at the same current was noticed.
    • A first investigation with a harp scan at 2H01 showed that it was not a halo issue.
    • The second suspicion was bleed-through from the other halls:
      • Hall B laser turned off, slit open --> BOM still counting (GOTCHA!)
    • Working with the MCC crew, it was found that the bleed-through came from the injector and from the separator.
    • Increasing the current and reducing the slit opening did not improve the situation.
    • It was noted that this should be fixed during today's beam-down (a reminder was given during today's OPS meeting).
  • Data acquisition continued.
  • During the owl shift, at ~02:40, FTOF1B failed. Nathan B. and the TOF expert were called.
    • All recovery procedures were unsuccessful.
    • Nathan called me at ~03:30 and presented two alternatives: continue taking data without that TOF sector, or make a hall access of ~2 h to replace the board. Since the beam was scheduled to be down in four hours, I decided to continue taking data and to call Dan Carman before the OPS meeting. I wrote a log entry with copies to Dan C. and Jay B., among others.
    • At ~06:00 Dan C. called because he wanted to repair the module ASAP, since he had meetings at 10:00; I agreed.
    • The work was finished at ~07:00: https://logbooks.jlab.org/entry/3829751
    • The beam was down, but not because of us...

Latest news about bleed through:



Number of events recorded

Day: 37.48M + Swing: 29.3M + Owl: 30M = 96.78M
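The total above is the straightforward sum of the three shift tallies; a minimal sketch of the check (shift names and counts taken directly from the line above):

```python
# Events recorded per shift, in millions, from the shift summaries above.
shift_events_M = {"Day": 37.48, "Swing": 29.3, "Owl": 30.0}

total_M = sum(shift_events_M.values())
print(f"Total: {round(total_M, 2)}M events")  # Total: 96.78M events
```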



Near-term plan (Sebastian K.) (copied into the wiki, just in case):

  1. Purge target (maybe around 11 a.m.)
  2. Set up new DAQ config files (see Valery’s logbook entry). DONE
  3. When beam comes back, do the usual harp scans with beam on FC (10 5 nA).
  4. Insist on a test of bleed-through: take screenshots of the beam overview GUI with our laser off, with ALL beams off, and with our slit closed but the Hall C beam on. (We are the priority Hall, and we can insist on up to 2 hours of beam tuning even if it interrupts the other halls.)
  5. Once we have as good a beam as we’ll get, ask for 200 nA and take a run with the new config.
  6. Depending on the trigger rate, MVT1-2 busy, DC and RTPC occupancies, and DAQ deadtime, increase beam current up to 250 nA. (I will be checking those things).

BTW: While the beam is down, make sure that the dome lights have been turned off in the Hall; otherwise, ask for a rapid access sometime before 12 to turn them off. DONE



Info

BlueJeans connection info:

To join the Meeting: https://bluejeans.com/7576835804