
Example script for OpenMPI

Please do not use any parameters such as "-np" when calling "mpirun". OpenMPI was built against the task management libraries of PBS and gets all parameters automatically from the runtime environment of the job.

#!/bin/bash
#PBS -N MPI-Test
#PBS -l select=2:ncpus=20:mem=20GB:mpiprocs=20
#PBS -l walltime=12:00:00
#PBS -m abe
## Customise and remove second hash:
##PBS -M Your.Mailaddress@tu-freiberg.de
#PBS -o MPI-Test_pbs.out
#PBS -e MPI-Test_pbs.err


module add openmpi software     # "software" is a placeholder for your application's module
mpirun software.bin             # placeholder for your MPI executable
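
Assuming the script above has been saved as, for example, "mpi_test.sh" (the file name is only a placeholder), it can be submitted and monitored like this:

qsub mpi_test.sh      # submit the job; PBS prints the job ID
qstat -u $USER        # list the status of your own jobs
qstat -f <jobid>      # detailed information on a single job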

Example script for a GPU job

To book a node with a GPU, add the specification ":ngpus=1" to the "select" statement:

#!/bin/bash
#PBS -N GPU-Test
#PBS -l select=1:ncpus=20:mem=20GB:mpiprocs=20:ngpus=1
#PBS -l walltime=12:00:00
#PBS -m abe
## Customise and remove second hash:
##PBS -M Your.Mailaddress@tu-freiberg.de
#PBS -o GPU-Test_pbs.out
#PBS -e GPU-Test_pbs.err

module add tensorflow
python3 my_tf_job.py

Please keep in mind that there are only 3 nodes with one GPU each in the cluster. If you want to calculate on 2 GPUs, increase the "select" count but leave "ngpus" at 1, as in the sketch below.
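
For a job on two GPUs, the select statement would therefore look like this (a sketch; the core and memory values are taken from the example above and may need adjusting):

#PBS -l select=2:ncpus=20:mem=20GB:mpiprocs=20:ngpus=1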

Example script for GPU job with desktop environment (remote access)

The GPU nodes offer the option of starting a desktop environment that can be accessed via the remote protocol VNC (secured with SSL encryption). This can be useful to visualise calculation results or to run an application interactively with a graphical user interface. To do this, start a normal GPU job and run the provided "start-desktop" script as the application. You will receive all further access instructions by email.

#!/bin/bash
#PBS -N GPU-Visualisation
#PBS -l select=1:ncpus=20:mem=60GB:mpiprocs=20:ngpus=1
#PBS -l walltime=04:00:00
#PBS -m abe
## Customise and remove second hash:
##PBS -M Your.Mailaddress@tu-freiberg.de

start-desktop

After connecting to the GPU node, open a terminal, load your application's module and start the application by calling the program in the terminal. Because the software is provided via environment modules, it will not appear in the start menu of the desktop environment.
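
A minimal sketch of these steps in the remote desktop's terminal, assuming a hypothetical module and executable called "myapp" (replace both with your actual software):

module avail          # list the available modules
module add myapp      # placeholder module name
myapp &               # placeholder executable, started in the background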

Further example scripts

Gaussian runs on a single node only. Following scaling analyses (benchmarking), the 'Linda' software package required for multi-node MPI parallelisation was not procured. As a result, a maximum of 12 cores (2 CPUs with 6 cores each) can be requested for Gaussian jobs.

#!/bin/bash
#PBS -N gaussian 
#PBS -l select=1:ncpus=12:mem=2gb
#PBS -m abe
## Customise and remove second hash:
##PBS -M Your.Mailaddress@tu-freiberg.de
#PBS -o gaussian_pbs.out 
#PBS -e gaussian_pbs.err 

module add gaussian 
WORKDIR=$SCRATCH/gaussian-files 
cd $WORKDIR 
g16 < ./fe48a.com

 

Note: If no Gaussian version is specified, as in the script above, the highest version in the module system is loaded (gaussian/16b01-sse4); the executable is called g16. Alternatively, specify the version directly, e.g. 'module add gaussian/09e01'; in that case the executable is g09.
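
The Link 0 directives at the top of the Gaussian input should match the resources requested from PBS: %NProcShared equal to ncpus, %Mem somewhat below the requested memory. A minimal sketch, written here as a shell here-document creating a file with the name used above; the route section and molecule are placeholders, not the actual input:

# sketch only: %NProcShared matches ncpus=12, %Mem stays below mem=2gb
cat > fe48a.com <<'EOF'
%NProcShared=12
%Mem=1GB
#P B3LYP/6-31G(d) Opt

Title card

0 1
 <molecule specification>

EOF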

 

#!/bin/bash

## options for requesting resources
## select - number of chunks (should be number of cores you want to use)
## ncpus - cores per chunk (should be 1)
## mem - requested memory per chunk
## place - how to distribute jobs across the machine (should be free or scatter)
#PBS -l select=32:ncpus=1:mem=250mb,place=free


## walltime - requested run-time of the job
#PBS -l walltime=24:00:00


## queue the job should be submitted to
## the 'default' queue routes the job automatically to a suitable execution queue
#PBS -q default


## suppress requeueing
#PBS -r n

## location of the output of the job files
##PBS -W sandbox=PRIVATE

## Name of the job; it is used for the output files and as the 'job name' in qstat

#PBS -N caseName

## Job files are placed in the ~/pbs.JOBID directory while the job is running and copied to the working directory afterwards

##PBS -e $PBS_JOBNAME_err.log
##PBS -o $PBS_JOBNAME_out.log

## Select the OpenFOAM Version you want to use, see "module av openfoam" for more
## OpenFOAM v1712
module add openfoam/gcc/7.1.0/v1712
## OpenFOAM 5.x
# module add openfoam/gcc/7.1.0/5.x

## execute mpi job
mpirun simpleFoam -case /scratch/username/path/to/case -parallel
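
The "-parallel" option expects the case to have been decomposed into as many subdomains as MPI ranks are started (32 chunks in the select statement above). A sketch of the preparation step, assuming the case's system/decomposeParDict already sets numberOfSubdomains to 32; run it once before the mpirun command, with the same OpenFOAM module loaded:

cd /scratch/username/path/to/case
decomposePar      # creates the processor0 ... processor31 directories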

#!/bin/bash
#PBS -N Ansys
#PBS -l select=1:ncpus=12:mem=2GB:mpiprocs=12
#PBS -m abe
## Customise and remove second hash:
##PBS -M Your.Mailaddress@tu-freiberg.de

module load ansys/18.2
module load intel/impi


cd $PBS_O_WORKDIR
CORES=`cat $PBS_NODEFILE | wc -l`
 

export ANSYSLMD_LICENSE_FILE={port}@{licence server} # optional
 

fluent 3ddp -mpi=intel -cnf=$PBS_NODEFILE -g -t$CORES -i inputfile > outfile
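
The file passed with "-i" is a Fluent journal containing TUI commands that are executed in batch mode. A minimal sketch of what such a journal ("inputfile" above) might contain, written as a here-document; the case and output file names are placeholders:

cat > inputfile <<'EOF'
/file/read-case-data case.cas.gz
/solve/initialize/initialize-flow
/solve/iterate 500
/file/write-case-data result.cas.gz
exit
yes
EOF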

Please do not use parameters such as "-np" when calling "mpirun". OpenMPI was built against the task management libraries of PBS and gets all parameters automatically from the runtime environment of the job.

#!/bin/bash
#PBS -N MPI
#PBS -l select=10:ncpus=1:mem=2gb:mpiprocs=1
#PBS -l walltime=24:00:00
#PBS -m abe
## Customise and remove second hash:
##PBS -M Your.Mailaddress@tu-freiberg.de
#PBS -o MPI-Test_pbs.out
#PBS -e MPI-Test_pbs.err


module add openmpi/gcc/9.1.0/4.1.5

cd $PBS_O_WORKDIR


mpirun ./Program.exe
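
If the program still needs to be compiled, the MPI compiler wrappers shipped with the OpenMPI module can be used, for example (the source file name is a placeholder):

module add openmpi/gcc/9.1.0/4.1.5
mpicc -O2 -o Program.exe program.c    # C sources; use mpicxx or mpifort for C++ or Fortran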

#!/bin/bash
#PBS -l select=2:ncpus=4:mpiprocs=4:mem=6gb,walltime=2:00:00
#PBS -r n
#PBS -m abe
## Customise and remove second hash:
##PBS -M Your.Mailaddress@tu-freiberg.de

# Sample input files can be found here:
# http://winserv.imfd.tu-freiberg.de:2080/v6.12/books/exa/default.htm
JOBNAME="example" # = Name base of the input file
WORKDIR="$SCRATCH/$PBS_JOBID"
LINKNAME="panfs_$PBS_JOBID"

echo "ABAQUS JOB: $JOBNAME"
echo "Loading modules"
module add abaqus/2019
NCPUSTOT=`qstat -f $PBS_JOBID | sed -n -e 's/ //g' -e 's/Resource_List.ncpus=//p'`
mkdir -v "$WORKDIR"
cd "$PBS_O_WORKDIR"
ln -v -s "$WORKDIR" "$LINKNAME"
cd "$WORKDIR"
cp -v -t ./ "$PBS_O_WORKDIR"/"$JOBNAME".inp
EXEC="abaqus interactive job=$JOBNAME cpus=$NCPUSTOT"
echo "running: $EXEC"
$EXEC
mv -v -f -t "$PBS_O_WORKDIR" "$JOBNAME".*
rm -v -f *
cd "$PBS_O_WORKDIR"
rmdir -v "$WORKDIR"
rm -v -f "$LINKNAME"
echo "finished"

#!/bin/bash
#PBS -N Name
#PBS -l select=1:ncpus=4:mem=15gb:mpiprocs=4
#PBS -l walltime=48:00:00
#PBS -l place=pack
#PBS -q short
#PBS -m abe
## Customise and remove second hash:
##PBS -M Your.Mailaddress@tu-freiberg.de
#PBS -o orca.out
#PBS -e orca.err

module load openmpi/gcc/7.1.0/2.1.1
module load orca/4.0.1

cd $PBS_O_WORKDIR
/trinity/shared/applications/orca/4.0.1/bin/orca inputfile >>outputfile.out
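
The number of parallel processes requested inside the ORCA input must match the mpiprocs value of the job (4 in the script above). A minimal sketch of what "inputfile" might contain, written as a here-document; method, basis set and geometry are placeholders:

cat > inputfile <<'EOF'
! B3LYP def2-SVP Opt
%pal nprocs 4 end
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.000000   0.970000
H   0.939000   0.000000  -0.243000
*
EOF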

Frequently used environment variables

When writing scripts, it can be helpful to work with variables instead of static entries (e.g. folder paths) to make the script easier to port. Below is a list of variables that are available in the runtime environment of a job. Avoid overwriting these variables manually so as not to disrupt the system-side execution of jobs.

Variable: Content

$PBS_O_WORKDIR: absolute path to the directory from which the qsub command was submitted
$PBS_NODEFILE: absolute path to the file listing the nodes involved in the job ("/var/spool/pbs/aux/$PBS_JOBID")
$PBS_JOBNAME: name of the job defined in the script header with "#PBS -N"
$PBS_JOBID: ID of the job, in the form "<number>.mmaster"
$USER: your own user name
$HOME: absolute path to /home/$USER
$SCRATCH: absolute path to /scratch/$USER
$TMPDIR: absolute path to a temporary directory in the working memory of the node. It can be used as a particularly fast node-local scratch. The size limit is the working memory requested for the job; the job is cancelled when it is 100% full. Files created there are deleted without asking at the end of the job and must therefore, if needed, be copied away manually before the job ends, e.g. to $SCRATCH/$PBS_JOBNAME.
$NCPUS: the ncpus value specified in the select statement (#PBS -l select=X:ncpus=Y:...)
$OMP_NUM_THREADS: normally equal to $NCPUS. This variable is used by applications that work with multiple threads (OpenMP); applications parallelised with MPI do not normally use it. It can be overridden by specifying it explicitly in the select statement (#PBS -l select=X:ncpus=Y:ompthreads=Z:...).
$PBS_NODENUM: for multi-node jobs, the local number of the node involved
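
A minimal sketch of how these variables can be combined, using $TMPDIR as fast node-local scratch (the application and file names are placeholders):

#!/bin/bash
#PBS -N var-demo
#PBS -l select=1:ncpus=4:mem=8gb
#PBS -l walltime=01:00:00

module add myapp                        # placeholder module
cd "$TMPDIR"                            # node-local scratch in working memory
cp "$PBS_O_WORKDIR"/input.dat ./        # stage the input from the submit directory
myapp input.dat > output.dat            # placeholder application call
mkdir -p "$SCRATCH/$PBS_JOBNAME"        # $TMPDIR is wiped at the end of the job ...
cp output.dat "$SCRATCH/$PBS_JOBNAME"/  # ... so copy the results away first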