Discussion of: RT data processing for ALICE HLT

Latest revision as of 19:12, 29 August 2021

  • David:
    • Parts of the former (Run 1) compute infrastructure are used for opportunistic GRID computing.
    • RORC (ALICE) vs. FELIX (ATLAS, sPHENIX)
      • RORC: 12 optical links, 3 QSFPs; FELIX: 48 optical links, 2 MTPs
      • RORC: 79.2 Gbps (12 × 6.6 Gbps/channel); FELIX: up to 100 Gbps
    • Cluster management/OS:
      • Booting, OS installation: Foreman + PXE-boot
      • OS: CERN CentOS 7
      • Package installation/configuration: Puppet
      • Node health and metrics: Zabbix
      • "...the complete cluster can be rebuilt, including the final configuration, in roughly three hours"
    • "The old approach, using POSIX pipes, began to cause a significant CPU load through many system calls and was consequently replaced by a shared-memory based communication."
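
Below is a minimal sketch of the pipe-to-shared-memory swap described in the quote above. It is illustrative only, not the ALICE HLT framework's actual implementation: the segment name /hlt_demo_block, the 4 KiB payload size, and the single producer/consumer handshake are assumptions. It demonstrates the point the quote makes: with a mapped buffer and a lock-free flag, handing a message to another process needs no write()/read() system call pair, unlike a POSIX pipe.

 // Minimal sketch of a pipe-to-shared-memory swap (illustrative only; not the
 // ALICE HLT framework's actual code).  Segment name, payload size, and the
 // single producer/consumer handshake are assumptions.
 // Build (Linux): g++ -std=c++17 shm_demo.cpp -o shm_demo -lrt
 #include <atomic>
 #include <cstdio>
 #include <cstring>
 #include <fcntl.h>      // O_CREAT, O_RDWR
 #include <sys/mman.h>   // shm_open, mmap, shm_unlink
 #include <sys/wait.h>   // waitpid
 #include <unistd.h>     // fork, ftruncate, usleep, close
 
 constexpr char   kShmName[]  = "/hlt_demo_block";  // hypothetical segment name
 constexpr size_t kPayloadSz  = 4096;               // illustrative fragment size
 
 // One data buffer plus a "message ready" flag.  A lock-free atomic flag
 // replaces the per-message write()/read() syscalls a pipe would need.
 struct SharedBlock {
     std::atomic<bool> ready;   // assumed lock-free (true on x86_64/ARM64)
     size_t            size;
     char              payload[kPayloadSz];
 };
 
 int main() {
     // Create and size the shared-memory object, then map it into this process.
     // A freshly created segment is zero-filled, so 'ready' starts out false.
     int fd = shm_open(kShmName, O_CREAT | O_RDWR, 0600);
     if (fd < 0) { std::perror("shm_open"); return 1; }
     if (ftruncate(fd, sizeof(SharedBlock)) != 0) { std::perror("ftruncate"); return 1; }
 
     void* mem = mmap(nullptr, sizeof(SharedBlock), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
     if (mem == MAP_FAILED) { std::perror("mmap"); return 1; }
     auto* block = static_cast<SharedBlock*>(mem);
 
     pid_t pid = fork();        // the MAP_SHARED mapping is inherited by the child
     if (pid == 0) {
         // Consumer: poll until the producer publishes, then read in place --
         // the payload never passes through the kernel.
         while (!block->ready.load(std::memory_order_acquire)) usleep(100);
         std::printf("consumer read %zu bytes: %s\n", block->size, block->payload);
         return 0;
     }
 
     // Producer: write the message directly into the mapped buffer and flip
     // the flag; no syscall is needed to hand the data over.
     const char msg[] = "event fragment";
     std::memcpy(block->payload, msg, sizeof(msg));
     block->size = sizeof(msg);
     block->ready.store(true, std::memory_order_release);
 
     waitpid(pid, nullptr, 0);  // wait for the consumer, then clean up
     munmap(block, sizeof(SharedBlock));
     close(fd);
     shm_unlink(kShmName);
     return 0;
 }

In a real framework a pool of buffers and a proper wakeup mechanism (e.g. a semaphore or eventfd) would replace the busy-poll, but the per-message syscall overhead the quote refers to is already gone in this minimal form.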