BROWNIE
BROWNIE is installed on the Andy cluster under the directory "/share/apps/brownie/default/bin/". The directory "/share/apps/brownie/default/examples/" contains two example files.
To run one of these examples on Andy, follow these steps:
1) create a directory and "cd" there:
mkdir ./brownie_test && cd ./brownie_test
2) Copy the example input deck to the current directory:
cp /share/apps/brownie/default/examples/ratetest_example.nex ./
3) Create a SLURM batch script. Use your favorite text editor to put the following lines into the file "brownie_serial.job":
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name BROWNIE_Serial
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=2880

# Find out the name of the master execution host (compute node)
echo -n ">>>> SLURM Master compute node is: "
hostname

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

# Run from the execution directory
echo ">>>> Begin BROWNIE Serial Run ..."
brownie ./ratetest_example.nex > brownie_ser.out 2>&1
echo ">>>> End BROWNIE Serial Run ..."
4) Load the BROWNIE module to set all required environment variables and the path to the BROWNIE executable (the modules utility is discussed in detail above):
module load brownie
5) Submit the job to the SLURM queue using:
sbatch brownie_serial.job
Running the rate test case should take less than 15 minutes and will produce SLURM output and error files beginning with the job name 'BROWNIE_Serial'. The primary BROWNIE application results are written into the user-specified file named after the greater-than sign at the end of the BROWNIE command line; here it is named 'brownie_ser.out'. The expression '2>&1' combines the program's Unix standard output with its Unix standard error. Users should always explicitly specify the name of the application's output file in this way to ensure that it is written directly into the user's working directory, which has much more disk space than the SLURM spool directory on /var.
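The '2>&1' redirection idiom can be tried in any shell. The sketch below uses hypothetical file names to show that both streams end up in the same file:

```shell
# Write one line to stdout and one to stderr; the '> file 2>&1'
# idiom (as in the batch script) sends both into combined.log.
{ echo "stdout line"; echo "stderr line" 1>&2; } > combined.log 2>&1

# combined.log now contains both lines
cat combined.log
```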
Details on the meaning of the SLURM script are covered below in the SLURM section. The most important lines are '#SBATCH --nodes=1', '#SBATCH --ntasks=1', and '#SBATCH --mem=2880'. Together they instruct SLURM to allocate one node for the job, running a single task (core) with 2,880 MB of memory. SLURM places the job wherever sufficient free resources are found. The master compute node that SLURM finally selects to run your job will be printed in the SLURM output file by the 'hostname' command.
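As a sketch of how these directives might be adjusted for a larger hypothetical run (actual partition names and per-user limits depend on the cluster's configuration), a request for four cores and proportionally more memory would look like:

```shell
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --mem=11520
```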
One can check the status of the job using the "squeue" command. Upon successful completion the following files will be generated:
BrownieBatch.nex
BrownieLog.txt
RatetestOutput.txt --- result returned by Brownie
brownie_test.eXXXX --- std error from SLURM
brownie_test.oXXXX --- std output from SLURM