EJFAT EPSCI Meeting Nov. 15, 2023

Latest revision as of 19:54, 15 November 2023

The meeting time is 2:30pm.

Connection Info:


Agenda:

  1. Previous meeting
  2. Announcements:
    1. Slideshow Wednesday 11/15/2023 for Graham and Supercomputing '23 (Graham, Yatish, Amitoj)
    2. Need to prepare for test with Oak Ridge (similar to NERSC) - Shankar, Mallikarjun (Arjun) <shankarm@ornl.gov>
  3. NERSC Test Development:
    1. Data Source:
      1. JLAB, CLAS12, pre-triggered events - 1 channel
      2. Front-end packetizer: pending mods for the tick-sync message to the CP (a UDP packet to a port on the CP host)
    2. Data Sink:
      1. Perlmutter
      2. ERSAP
    3. Networking for Test
      1. Currently 2 x 10 Gbps for JLab/L3 VPN
    4. JLab Preps
      1. Standing up second JLab LB instance
      2. Currently debugging the test-harness setup at NERSC/Perlmutter
    5. Test Plans - JLab, ESnet, NERSC:
  4. Hall B CLAS12 detector streaming test
    1. Switch 7050 is expected to arrive sometime around October; we already have the transceivers, short cables, and patch panel to connect up to 32 VTPs to it using two 10 Gbit links per VTP
    2. Fiber installation between the Hall B forward carriage and the Hall B counting room should be done this summer; it will be enough for 24 VTPs using two 10 Gbit links per VTP
    3. Only one fiber between the Hall B counting room and the counting house second floor is available right now; we will order installation of more fibers, which may take several months
    4. There are several fibers (about 6) available between the counting house second floor and the computer center; we can use a couple of them for our test
    5. Summary: sometime in October we should have 48 10 Gbit links from 24 VTPs connected to the switch in the Hall B counting room, with that switch connected to the computer center by 2 x 100 Gbit links
    6. Need to develop CONOPS with Streaming group (Abbott)
    7. SRO RTDP LDRD
    8. Data Compressibility Studies using Hall B/D sample data
    9. Ready to supply up to 200 Gbps; can the LB firmware support that?
  5. EJFAT Phase II
    1. Implementation details in the DAOS gateway.
      1. Especially how to keep track of how the FPGA would DMA event data cells in the future if it were a SmartNIC card (Cissie)
      2. Connection strategy to DAOS: InfiniBand?
    2. Flow Control
    3. Progress of the multi-FPGA and multi-virtual-LB control plane software (Derek); currently small features like authentication, etc.
    4. Progress of FPGA architecture ( Peter and Jonathan )
    5. Progress of finalizing a reassembly frame format ( Carl / Stacey )
    6. Progress on software development for NVIDIA BlueField-2 DPU data steering from NIC to GPU memory (Amitoj/Cissie)
    7. Progress on DAOS file-server OS and filesystem installation ( Amitoj/Cissie )
    8. GPU purchase for EJFAT Test stand servers under IRIAD funds. The servers are capable of hosting 2 GPUs per server. ( Amitoj )
      1. In a pinch, one can use the 2x A100 GPUs in the NVIDIA BlueField-2 DPU server (hostname: nvidarm)
  6. Demo Ready EJFAT Instance
    1. Live Session?
    2. Emulator?
    3. Recorded Session
  7. EJFAT Operational Status Board -> Prometheus Reporting
  8. Resources:
    1. HPDF
  9. AOT
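The tick-sync message to the CP (agenda item 3.1.2) is just a UDP datagram sent to a port on the CP host. A minimal sketch of the idea, assuming a hypothetical 16-byte layout (version byte, reserved byte, source id, 64-bit tick counter) rather than the actual EJFAT sync wire format:

```python
import socket
import struct

def make_sync_msg(tick: int, src_id: int) -> bytes:
    """Pack a hypothetical tick-sync message: version (1 byte), a
    reserved byte, 2 pad bytes, 32-bit source id, and a 64-bit tick
    counter, all big-endian. The real EJFAT sync format may differ;
    this only illustrates the UDP-to-CP-port mechanism."""
    return struct.pack("!BBxxIQ", 1, 0, src_id, tick)

def send_sync(tick: int, src_id: int, cp_host: str, cp_port: int) -> int:
    """Fire-and-forget UDP send of one sync message to the CP host/port.
    Returns the number of bytes sent."""
    msg = make_sync_msg(tick, src_id)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        return s.sendto(msg, (cp_host, cp_port))
```

Since the packetizer emits these per tick, the sender keeps no state; the CP simply listens on the agreed port and unpacks each datagram.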
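The Hall B numbers in item 4 can be sanity-checked with simple arithmetic: 24 VTPs at two 10 Gbit links each offer 480 Gbps into the counting-room switch, while the 2 x 100 Gbit uplinks cap delivery toward the computer center at 200 Gbps, consistent with the "up to 200 Gbps" figure in item 4.9. A trivial sketch of that check:

```python
def offered_vs_uplink(n_vtps: int = 24, links_per_vtp: int = 2,
                      link_gbps: int = 10, uplinks: int = 2,
                      uplink_gbps: int = 100):
    """Compare aggregate VTP offered load against the counting-room
    switch's uplink capacity (defaults taken from the agenda summary).
    Returns (offered Gbps, uplink Gbps, deliverable Gbps)."""
    offered = n_vtps * links_per_vtp * link_gbps
    uplink = uplinks * uplink_gbps
    return offered, uplink, min(offered, uplink)
```

So the uplink, not the VTP count, is the binding constraint for the streaming test, which is why the LB-firmware question in item 4.9 is about 200 Gbps rather than 480.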
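For the data compressibility studies on Hall B/D sample data (agenda item 4.8), the first-pass measurement is just raw-size over compressed-size for a few general-purpose codecs. A stdlib-only sketch, assuming the sample data is available as bytes:

```python
import lzma
import zlib

def compression_ratios(data: bytes) -> dict:
    """Return raw-size / compressed-size ratios for a couple of
    stdlib codecs, as a quick first pass at how compressible a
    detector data sample is. Higher ratio = more compressible."""
    codecs = {
        "zlib": lambda b: zlib.compress(b, 9),  # deflate, max effort
        "lzma": lzma.compress,                  # xz, slower but tighter
    }
    return {name: len(data) / len(fn(data)) for name, fn in codecs.items()}
```

A ratio near 1.0 would suggest the sample is already high-entropy (e.g. zero-suppressed), while a large ratio would justify adding a compression stage ahead of the WAN links.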
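For moving the EJFAT operational status board to Prometheus reporting (agenda item 7), any exporter ultimately serves the Prometheus text exposition format from a /metrics endpoint. A minimal stdlib-only sketch of that rendering, with a hypothetical metric name standing in for whatever the LB actually reports:

```python
def render_metrics(metrics: dict) -> str:
    """Render {metric_name: (help_text, value)} in the Prometheus
    text exposition format (HELP/TYPE comments, then name value).
    All metrics are emitted as gauges for simplicity."""
    lines = []
    for name, (help_text, value) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

In practice one would use the official prometheus_client library rather than hand-rolling this, but the output format above is what Prometheus scrapes either way.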