BPP2
At the CUNY HPC Center, BPP2 is installed on Andy and Penzias. BPP2 is a serial code that takes its input from a simple text file provided on the command line. Below is an example SLURM script that will run the fence lizard test case provided with the distribution archive (/share/apps/bpp2/default/examples).
To include all required environment variables and the path to the BPP2 executable, run the module load command (the modules utility is discussed in detail above):
module load bpp2
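To confirm that the module loaded and that the executable is on your PATH, the standard module and shell commands can be used (the executable name 'bpp2' here follows the script below; check the module's listing if your site names it differently):

module list
which bpp2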
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name BPP2_Serial
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=2880

# Find out the name of the master execution host (compute node)
echo -n ">>>> SLURM Master compute node is: "
hostname

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

# Invoke the executable in command-line mode to run
echo ">>>> Begin BPP2 Serial Run ..."
bpp2 ./lizard.bpp.ctl > bpp2_ser.out 2>&1
echo ">>>> End BPP2 Serial Run ..."
This script can be dropped into a file (say, bpp2.job) and submitted with the command:
sbatch bpp2.job
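After submission, sbatch prints the job ID, and the job can be monitored with standard SLURM commands, for example (replace <jobid> with the ID that sbatch reported):

squeue -u $USER
scontrol show job <jobid>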
Running the fence lizard test case should take less than 15 minutes. Because the script does not set '--output' or '--error', SLURM will write the job's combined standard output and standard error to a file named 'slurm-<jobid>.out' by default. The primary BPP2 application results will be written into the user-specified file named at the end of the BPP2 command line after the greater-than sign; here it is named 'bpp2_ser.out'. The expression '2>&1' redirects the program's Unix standard error into the same file as its standard output. Users should always explicitly name the application's output file in this way to ensure that it is written directly into the user's working directory, which has much more disk space than the SLURM spool directory on /var.
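When the job finishes, the results can be inspected from the working directory. A minimal check, assuming the file names used in the script above and SLURM's default output file naming, might be:

ls -l bpp2_ser.out slurm-*.out
tail bpp2_ser.out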
Details on the meaning of the SLURM script are covered in the SLURM section in Applications Environment [1]. The most important lines are the resource directives '#SBATCH --nodes=1', '#SBATCH --ntasks=1', and '#SBATCH --mem=2880'. Together they instruct SLURM to allocate one node running one task (one core) with 2,880 MB of memory for the job; SLURM is then free to place the job on whichever node has those resources available. The master compute node that SLURM finally selects to run your job will be printed in the SLURM output file by the 'hostname' command.
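If SLURM accounting is enabled on the cluster (an assumption; this depends on site configuration), the node a completed job ran on and its basic resource usage can also be recovered afterwards with sacct, for example:

sacct -j <jobid> --format=JobID,JobName,NodeList,Elapsed,MaxRSS,State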