EJFAT EPSCI Meeting Jan. 17, 2024
The meeting time is 2:30pm.
Connection Info:
You can connect using the Teams link.
Agenda:
- Previous meeting
- Announcements:
- 24th IEEE Real Time Conference – Quy Nhon, Vietnam, 22-26 April 2024 - Abstract submitted
- The first 100Gig circuit passed testing and is ready for traffic - need to sync up with ESnet
- NERSC Test Development:
- Data Source:
- JLAB, CLAS12, pre-triggered events - 1 channel
- Front End Packetizer pending mods for the Tick-sync message to the CP - a UDP packet sent to a port on the CP host (see the sketch after this list)
- Data Sink:
- Perlmutter
- ERSAP
- Networking for Test
- Currently 2 x 10 Gbps for the JLab/L3 VPN - will expand to 200 Gbps in the near future
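As a hedged illustration of the tick-sync item above: the Python sketch below sends one small UDP datagram carrying a (source id, tick, timestamp) payload to a control-plane port. The payload layout, host address, and port number are placeholders chosen for the example, not the agreed packetizer format.

    import socket
    import struct
    import time

    CP_HOST = "127.0.0.1"   # placeholder for the CP host
    CP_PORT = 19522         # hypothetical UDP port on the CP host

    def send_tick_sync(tick: int, src_id: int = 1) -> None:
        """Send one tick-sync datagram: (source id, tick, wall-clock ns), big-endian."""
        payload = struct.pack("!IQQ", src_id, tick, time.time_ns())
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload, (CP_HOST, CP_PORT))

    send_tick_sync(tick=123456789)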
- JLab Preps
- Test Plans - JLab, ESnet, NERSC:
- Data Source:
- Test with Oak Ridge (similar to NERSC) - Shankar, Mallikarjun (Arjun) <shankarm@ornl.gov>
- Hall B CLAS12 detector streaming test
- The 7050 switch is expected to arrive sometime around October; we already have the transceivers, short cables, and patch panel to connect up to 32 VTPs to it using two 10 Gbit links per VTP
- Fiber installation between the Hall B forward carriage and the Hall B counting room should be done this summer; it will be enough for 24 VTPs using two 10 Gbit links per VTP
- Only one fiber between the Hall B counting room and the counting house second floor is available right now; we will order more fiber installation, which may take several months
- There are several available fibers (about 6) between the counting house second floor and the computer center; we can use a couple of them for our test
- Summary: sometime in October we should have 48 10 Gbit links from 24 VTPs connected to the switch in the Hall B counting room, with that switch connected to the computer center by 2x100 Gbit links (see the arithmetic sketch after this list)
- Need to develop CONOPS with Streaming group (Abbott)
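A quick back-of-the-envelope check of the summary above, using only the figures already stated (24 VTPs, two 10 Gbit links each, a 2x100 Gbit uplink):

    vtps          = 24
    links_per_vtp = 2
    link_gbps     = 10
    uplink_gbps   = 2 * 100

    ingress = vtps * links_per_vtp * link_gbps   # 480 Gbps into the Hall B switch
    print(f"switch ingress: {ingress} Gbps, uplink to computer center: {uplink_gbps} Gbps")
    print(f"oversubscription: {ingress / uplink_gbps:.1f}:1 if every link ran at line rate")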
- December 2023 Testing Activity
- SRO RTDP LDRD - need configuration for Spring '24 test with EJFAT
- Data Compressibility Studies using Hall B/D sample data (see the sketch below)
- Ready to supply up to 200 Gbps to EJFAT switch
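For the compressibility study noted above, a minimal sketch of how a sample file could be scored with stock Python codecs; the file path is a placeholder, not an actual Hall B/D sample, and the real study may well use different codecs.

    import bz2, lzma, zlib
    from pathlib import Path

    SAMPLE = Path("sample_hallb_events.bin")  # hypothetical raw event dump

    def ratios(data: bytes) -> dict:
        """Return original-size / compressed-size for a few standard codecs."""
        return {
            "zlib": len(data) / len(zlib.compress(data, 6)),
            "bz2":  len(data) / len(bz2.compress(data)),
            "lzma": len(data) / len(lzma.compress(data)),
        }

    data = SAMPLE.read_bytes()
    for name, r in ratios(data).items():
        print(f"{name}: {r:.2f}x")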
- Demo Ready EJFAT Instance
- Live Session?
- ESnet had a demo of the EJFAT LB at SC23 (Yatish)
- Emulator?
- Recorded Session
- Live Session?
- EJFAT Operational Status Board -> Prometheus Reporting (see the exporter sketch below)
- XDP sockets working (ejfat-4) - 50% less CPU, 3500 MTU limit
- Need to spec JLab/HPDF DAOS Use Cases
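As a minimal sketch of the Prometheus reporting item above, assuming a small Python exporter that the status board could scrape; the metric names, labels, and port are placeholders, not an agreed schema.

    from prometheus_client import Gauge, start_http_server
    import random
    import time

    lb_fill = Gauge("ejfat_lb_fill_percent", "Load balancer buffer fill level", ["lb_id"])
    rx_gbps = Gauge("ejfat_receive_rate_gbps", "Aggregate receive rate at the sink", ["host"])

    start_http_server(9100)  # Prometheus scrapes http://<host>:9100/metrics

    while True:
        # Stand-in values; a real exporter would read control-plane or NIC counters.
        lb_fill.labels(lb_id="lb0").set(random.uniform(0, 100))
        rx_gbps.labels(host="ejfat-4").set(random.uniform(0, 100))
        time.sleep(5)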
- EJFAT Phase II
- Priority - work EJFAT Reconfig Design / PR
- PO for PCIe NICs (https://docs.nvidia.com/networking/display/connectx6vpi)
- Implementation details in the DAOS gateway.
- Intel standing up special slack channel to discuss DAOS
- Connection Strategy to DAOS
- Especially keeping track of how the FPGA would DMA event data cells in the future if it were a SmartNIC card (Cissie)
- daosfs01 has 2 physical IB cards and can run 2 true engines with each CPU socket hosting one engine.
- Progress on the multi-FPGA and multi-virtual-LB control plane software (Derek)
- Progress on the FPGA architecture (Peter and Jonathan)
- LB FW currently limited to 100 Gbps
- Reassembly work commencing soon
- Progress on finalizing a reassembly frame format (subordinate to 4.) (Carl / Stacey) - see the header sketch after this list
- Progress on software development for NVIDIA Bluefield2 DPU data steering from the NIC to GPU memory (Amitoj / Cissie)
- GPU purchase (A100) for the EJFAT test stand servers under IRIAD funds; the servers can host 2 GPUs each (Amitoj)
- In a pinch one can use the 2x A100 GPUs in the NVIDIA Bluefield2 DPU server (hostname: nvidarm)
- Initially purchase one GPU each from NVIDIA, Intel, and AMD so we can compare performance across all 3 flavors
- Tracking code using GPUs available from Hall B
- GPU PCIe board requires freeing a PCIe slot - looking at OTP NIC
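Since the reassembly frame format is still being finalized (see the item above), the sketch below packs and unpacks one assumed header layout - version, data id, buffer offset, buffer length, event tick - purely to make the discussion concrete; it is not the agreed format.

    import struct

    # Assumed layout, big-endian: version, reserved, data_id, offset, length, tick (20 bytes).
    RE_HDR = struct.Struct("!BBHIIQ")

    def pack_re_header(version: int, data_id: int, offset: int, length: int, tick: int) -> bytes:
        return RE_HDR.pack(version, 0, data_id, offset, length, tick)

    def unpack_re_header(buf: bytes) -> dict:
        version, _rsvd, data_id, offset, length, tick = RE_HDR.unpack_from(buf)
        return {"version": version, "data_id": data_id,
                "offset": offset, "length": length, "tick": tick}

    hdr = pack_re_header(version=1, data_id=7, offset=0, length=9000, tick=123456789)
    print(unpack_re_header(hdr))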
- Resources:
- AOT