How to run MPI jobs with PBS/Pro

From NSIwiki

Here is a quick step-by-step guide to getting started running MPI jobs using the Intel compiler suite on Eureka and Yucca.

We want to run an MPI job that uses a total of 64 processes (cores). We also want to limit the number of processes running on each node to 8; this gives us the flexibility to control how the system allocates the compute cores, so that OpenMP threads or other special needs can be taken into account.

To compile a simple "hello world" mpi program (after logging into Eureka):

  module add intel/intel-12-impi                               # activate the Intel compiler suite
  cp /share/apps/intel/impi/test.c .                           # make a copy of the sample hello world program
  mpicc test.c -o testc                                        # compile the sample program
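In case the sample file is unavailable, a typical MPI "hello world" looks like the following. This is a generic sketch, not necessarily the exact contents of the sample program on Eureka:

```c
/* Minimal MPI "hello world": each rank reports its rank number
   and the total number of ranks in the job. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut down MPI cleanly */
    return 0;
}
```

Compiled with mpicc as above, a 64-process run prints one "Hello" line per rank.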

Create a file called testc.pbs with the following (starting in column 1):

  #!/bin/bash
  #PBS -l select=8:ncpus=8:mpiprocs=8
  module add intel/intel-12-impi
  echo The following nodes will be used to run this program:
  cat $PBS_NODEFILE
  mpirun ./testc
  exit 0

The line #PBS -l select=8:ncpus=8:mpiprocs=8 controls how the system allocates processor cores for your MPI jobs.

  • select=# -- allocate # separate nodes
  • ncpus=# -- on each node allocate # cpus (cores)
  • mpiprocs=# -- on each node allocate # cpus (of the ncpus allocated) to MPI

By varying these values, you can control how CPU resources are allocated. The example above allocates 64 cores, all of which are for use by MPI (8 nodes with 8 CPUs on each node).
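As a sketch of how these three knobs interact, here are two alternative request lines (the values are illustrative, not site requirements):

```shell
# 16 nodes x 4 cores each, all 64 cores running MPI ranks:
#PBS -l select=16:ncpus=4:mpiprocs=4

# 8 nodes x 8 cores each, but only 2 MPI ranks per node;
# the other 6 cores on each node stay free for threads:
#PBS -l select=8:ncpus=8:mpiprocs=2
```

In both cases select*ncpus gives the total cores reserved, while select*mpiprocs gives the total number of MPI ranks mpirun will start.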

If, for example, your program is a hybrid MPI/OpenMP program that runs 8 OpenMP threads on top of 4 MPI control processes on each node (12 cores per node in total), you would use something like: #PBS -l select=4:ncpus=12:mpiprocs=4.
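A hybrid job script might then look like the sketch below. OMP_NUM_THREADS is the standard OpenMP environment variable for the thread count; the program name hybrid_program is a placeholder, and the right thread count depends on how your program divides its work across ranks:

```shell
#!/bin/bash
#PBS -l select=4:ncpus=12:mpiprocs=4
module add intel/intel-12-impi
# Standard OpenMP variable: maximum threads each rank may spawn.
# Adjust so that ranks x threads fits within the cores requested.
export OMP_NUM_THREADS=8
mpirun ./hybrid_program      # hybrid_program is a placeholder name
exit 0
```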

To submit the test job:

  qsub -q compute testc.pbs
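Once submitted, you can watch the job with the standard PBS commands (output-file naming may vary slightly by site):

```shell
qstat -u $USER        # list your queued and running jobs
qdel <job_id>         # cancel a job if needed (replace <job_id>)
# After the job finishes, stdout/stderr appear in the submit
# directory in files named after the job, e.g.
# testc.pbs.o<job_id> and testc.pbs.e<job_id>.
```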