General Particle Tracer

In this context, GPT always refers to General Particle Tracer, never to the language processing AI.

GPT is a 3D particle pusher with optional space charge and spin tracking. It is easy to parametrize and supports all the field-map types we need for accelerator studies, as long as the focus is not on effects such as wake fields, beam/matter interactions, radiation, etc.

GPT on Linux

For a long time we used the somewhat dated GPT 3.38 release. The transition of the farm, CUE, and ACE systems to RHEL 9 / AlmaLinux 9 introduced incompatibilities with GPT 3.38 (mostly compiler/ABI issues when changing the custom-elements / gdfa-progs configuration), and the /apps folder has been retired, so running GPT 3.38 is no longer straightforward. To reduce version chaos, we decided to upgrade to GPT 3.55 and to drop support for older GPT versions as well as legacy platforms (RHEL < 9, and CPUs without AVX2, which includes some old CUE machines).

We are currently testing the following binary distributions (all for x86_64):

  • gpt355-RHEL9-gcc-11.4-4-sep-2024-avx2 --- GPT 3.55, RHEL 9 build w/ GCC 11.4, requires an AVX2-capable CPU (see the check after this list)
  • gpt355-RHEL9-gcc-11.4-4-sep-2024-avx2-MPI4 --- same, but with OpenMPI dependency to run parallel jobs on the cluster
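
Both builds require AVX2. If you are unsure whether a given machine qualifies, you can check the CPU flags directly; a minimal sketch in Python (Linux only, reads /proc/cpuinfo):

  # Check whether the CPU advertises AVX2 support (Linux only). The
  # avx2 flag in /proc/cpuinfo is what the -avx2 builds above require.
  def has_avx2(cpuinfo="/proc/cpuinfo"):
      with open(cpuinfo) as f:
          for line in f:
              if line.startswith("flags"):
                  return "avx2" in line.split()
      return False  # no flags line found; assume unsupported

  print("AVX2 available" if has_avx2() else "AVX2 missing: these builds will not run")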

The MPI version will run on a single node without a performance penalty and with no difference in usage. However, because of the library dependencies, it will not run on most machines outside of the farm.

All distributions are deployed with the following additions to the base version:

  • custom elements:
    • IONATOR 08/22/2022
  • gdfa progs:
    • none

If any further elements or gdfa progs need to be included, reach out to Max Bruker.

Running on CUE machines

Not tested yet.

Running on farm nodes

The best way to run GPT is on the farm (https://scicomp.jlab.org/scicomp/home), which, unlike the common CUE systems, is intended for these sorts of jobs. Running code on the farm does not imply parallel computing, although that is easy to do when needed. The interactive farm nodes can be used for testing and for short-duration jobs that only need a couple of threads; their OS environment is the same as on the compute nodes.

You need an account: Farm and ifarm Access/Accounts (https://jlab.servicenowservices.com/kb?id=kb_article_view&sys_kb_id=df1ff2701bbd4510a888ea4ce54bcb7e)

SSH to the interactive farm nodes is only allowed directly from the MFA gateways; see Connecting to Farm and QCD Interactive Nodes for instructions on how to simplify the process.

TO DO: how to make GPT run
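
Until that section is written, here is a minimal sketch of what a serial run could look like when driven from Python, assuming the gpt and gdf2a executables from one of the distributions above are on PATH; the input file, the parameter override, and the output names are placeholders, and the exact tool options should be checked against the GPT manual:

  # Minimal sketch of a serial GPT run driven from Python.
  # "beamline.in" and "nps" are placeholders for your input file
  # and one of its parameters.
  import subprocess

  # -o names the binary GDF output file; name=value pairs override
  # parameters defined in the input file. Depending on the
  # installation, a GPTLICENSE=... argument may also be required.
  subprocess.run(["gpt", "-o", "result.gdf", "beamline.in", "nps=10000"],
                 check=True)

  # Convert the binary GDF output to ASCII for quick inspection.
  subprocess.run(["gdf2a", "-o", "result.txt", "result.gdf"], check=True)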

JupyterHub

For those who like Jupyter notebooks, the lab has a JupyterHub server that will spawn a platform-virtualized container for you with a Jupyter instance running inside it. The containers are hosted by the farm but internally run Ubuntu 20.04, not AlmaLinux 9. To be able to invoke GPT from inside a Jupyter notebook, there are two main options:

  • Run it "locally" in the same instance; for this, we need an Ubuntu version, which is being prepared. TO DO
  • Submit it as another farm job (many more threads and RAM available); this is slightly less convenient because pipes are not available for data exchange, and one has to deal with slurm to manage the extra job, but it should work (a rough sketch follows this list). If there's time, I'll look into how this could be added transparently to the GPT/Python interface.
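
A rough sketch of the second option, submitting from inside a notebook, assuming sbatch and squeue are reachable from the container; the partition name, resource requests, and file names are all placeholders and must be adapted:

  # Rough sketch: submit a GPT run as a separate slurm job from a
  # notebook and wait for it to finish. Partition, resources, and
  # file names are placeholders.
  import subprocess, time

  script = """#!/bin/bash
  #SBATCH --job-name=gpt-run
  #SBATCH --partition=production
  #SBATCH --cpus-per-task=4
  #SBATCH --mem=4G
  gpt -o result.gdf beamline.in
  """

  # sbatch reads the job script from stdin; --parsable makes it
  # print only the job ID.
  job_id = subprocess.run(
      ["sbatch", "--parsable"], input=script, text=True,
      capture_output=True, check=True,
  ).stdout.strip()

  # Poll squeue until the job leaves the queue; with no pipes between
  # the notebook and the job, results come back via the shared filesystem.
  while subprocess.run(["squeue", "-h", "-j", job_id],
                       capture_output=True, text=True).stdout.strip():
      time.sleep(10)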

GPT/Python interface

TO DO
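
Until this is documented, here is an illustrative sketch of the kind of thin wrapper such an interface could provide; the function name and its parameters are hypothetical, not the actual interface:

  # Hypothetical wrapper, NOT the actual GPT/Python interface.
  # It simply shells out to the gpt executable.
  import subprocess

  def run_gpt(input_file, output_file="result.gdf", **params):
      """Run gpt on input_file; params become name=value overrides."""
      cmd = ["gpt", "-o", output_file, input_file]
      cmd += [f"{k}={v}" for k, v in params.items()]
      subprocess.run(cmd, check=True)
      return output_file

  # Example (placeholder file and parameter names):
  # run_gpt("beamline.in", nps=10000)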