Is it possible to run the PyLith binary on a SLURM cluster with multiple processes?

I am having problems installing PyLith from source because the dependency builds keep failing during installation.

For now, I would be happy just to be able to run the PyLith binary with multiple processes. Is there a way to configure the pylithapp file to run on multiple processes using the SLURM workload manager instead of PBS, as shown in the PyLith manual?

Thanks,
Daniel

The Pyre package that PyLith uses for job submission does not currently support the SLURM scheduler. The workaround is to use the LSF or PBS scheduler with the --scheduler.dry command-line option, which prints the bash script used to submit a job to stdout. You can save and edit that script, and then use it to submit the job to the SLURM scheduler yourself.
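For example, here is a minimal sketch of that workflow, assuming scheduler = pbs is already set in your pylithapp.cfg as shown in the manual. The input file name, process count, and SLURM directives are placeholders you will need to adapt to your cluster.

```
# Do a dry run to generate the submission script without submitting a job
# (step01.cfg and the process count are placeholders; --nodes sets the
# number of PyLith processes, not the number of compute nodes).
pylith step01.cfg --nodes=4 --scheduler.dry > job.sh

# Edit job.sh, replacing the PBS directives with their SLURM equivalents,
# for example:
#   #PBS -l nodes=1:ppn=4    ->  #SBATCH --nodes=1 --ntasks=4
#   #PBS -l walltime=1:00:00 ->  #SBATCH --time=01:00:00
#   #PBS -q myqueue          ->  #SBATCH --partition=mypartition

# Submit the edited script to SLURM.
sbatch job.sh
```

If anything other than the submission script ends up in job.sh, trim it out before submitting.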

IMPORTANT: If you are using the PyLith binary, you can only run on a single node of a cluster. If you submit a job that runs on multiple nodes, each node will simply run its own duplicate copy of the job.
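So for the binary, the job script should request a single node and launch all of the processes there. A minimal sketch of such a SLURM script is below; the process count, time limit, partition name, input file, and installation path are placeholders, and --nodes is again the PyLith option for the number of processes rather than a SLURM directive.

```
#!/bin/bash
#SBATCH --job-name=pylith
#SBATCH --nodes=1            # binary distribution: one node only
#SBATCH --ntasks=16          # placeholder process count
#SBATCH --time=02:00:00      # placeholder time limit
#SBATCH --partition=general  # placeholder partition name

# Set up the environment for the binary distribution (path is a placeholder).
source $HOME/pylith/setup.sh

# Launch one PyLith process per task on this node using the bundled MPI.
pylith step01.cfg --nodes=$SLURM_NTASKS
```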

We can provide help with building PyLith from source on a cluster, but we need you to provide detailed information about the packages (including version information) the cluster system administrator has installed and that you intend to use (compilers, MPI, etc.). It is important that you use the MPI installed by the system administrator for the cluster in order to make use of the specific interconnect hardware. If an installation using the PyLith installer utility fails, we need the full configure and make logs to diagnose the problem; this includes the configure/make logs for the installer itself as well as for the underlying builds that failed.
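As a rough guide, commands along these lines will collect the version information and logs we would look at first; the exact compiler and MPI wrapper names depend on what is installed on your cluster.

```
# Record compiler, MPI, and Python versions (adjust names to your system).
gcc --version
g++ --version
gfortran --version
mpicc --version
mpirun --version
python3 --version

# Capture the full configure and make output when running the PyLith installer
# (add whatever configure options you are using).
./configure 2>&1 | tee configure.log
make 2>&1 | tee make.log
```

Even if a step fails, keep the log files; they usually show which dependency broke and why.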