RAxML
Examples of running both serial and parallel jobs are presented below. More information can be found here [1].
To run RAxML, you must first create a PHYLIP file of aligned DNA or amino-acid sequences similar to the one shown below. This file, 'alg.phy', is in interleaved format:
5 60
Tax1 CCATCTCACGGTCGGTACGATACACCTGCTTTTGGCAG
Tax2 CCATCTCACGGTCAGTAAGATACACCTGCTTTTGGCGG
Tax3 CCATCTCCCGCTCAGTAAGATACCCCTGCTGTTGGCGG
Tax4 TCATCTCATGGTCAATAAGATACTCCTGCTTTTGGCGG
Tax5 CCATCTCACGGTCGGTAAGATACACCTGCTTTTGGCGG

GAAATGGTCAATATTACAAGGT
GAAATGGTCAACATTAAAAGAT
GAAATCGTCAATATTAAAAGGT
GAAATGGTCAATCTTAAAAGGT
GAAATGGTCAATATTAAAAGGT
For more detail about PHYLIP-formatted files, please look at the RAxML manual here [2] at the web site referenced above. There is also a tutorial here [3].
To set all required environment variables and the path to the RAxML executable, run the module load command (the modules utility is discussed in detail above):
module load raxml
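To confirm the module loaded and the executable is on your PATH, you can run a quick check (a minimal sanity check; 'raxmlHPC' is the serial executable name used in the script below, and exact names may vary by installation):
module list
which raxmlHPC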
Next create a SLURM batch script. Below is an example script that will run the serial version of RAxML. The program options -m, -n, and -s are all required. In order, they specify the substitution model (-m), the output file name (-n), and the sequence file name (-s). Additional options are discussed in the manual.
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name RAXML_serial
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=2880

# Find out name of master execution host (compute node)
echo -n ">>>> SLURM Master compute node is: "
hostname

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

# Just point to the serial executable to run
echo ">>>> Begin RAXML Serial Run ..."
raxmlHPC -y -m GTRCAT -n TEST1 -p 12345 -s alg.phy > raxml_ser.out 2>&1
echo ">>>> End RAXML Serial Run ..."
This script can be dropped into a file (say raxml_serial.job) and submitted to SLURM with the following command:
sbatch raxml_serial.job
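Once the job is submitted, you can monitor it with standard SLURM commands (shown here as a quick check; <jobid> stands for the job ID that sbatch reports):
squeue -u $USER
sacct -j <jobid> --format=JobID,JobName,State,Elapsed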
RAxML produces the following output files:
- Parsimony starting tree is written to RAxML_parsimonyTree.TEST1.
- Final tree is written to RAxML_result.TEST1.
- Execution log file is written to RAxML_log.TEST1.
- Execution information file is written to RAxML_info.TEST1.
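After the serial run completes, these files can be listed by matching the run name given with -n (TEST1 in this example):
ls RAxML_*.TEST1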
RAxML is also available in an MPI-parallel version called raxmlHPC-MPI. The MPI-parallelized version can be run on all types of clusters to perform rapid parallel bootstraps, or multiple inferences on the original alignment. The MPI version is intended for large production runs (i.e., 100 or 1,000 bootstraps). You can also perform multiple inferences on larger datasets in parallel to find a best-known ML tree for your dataset. Finally, the novel rapid BS algorithm and the associated ML search have also been parallelized with MPI.
The following MPI script selects 4 processors (cores) and allows SLURM to put them on any compute node. Note that when running any parallel program one must be cognizant of the scaling properties of its parallel algorithm; in other words, how much a given job's run time drops as one doubles the number of processors used. All parallel programs arrive at a point of diminishing returns that depends on the algorithm, the size of the problem being solved, and the performance characteristics of the system on which it is run. We might have chosen to run this job on 8, 16, or 32 processors (cores), but would do so only if the performance improvement continued to scale. An improvement of less than 25% after a doubling is an indication that a reasonable maximum number of processors has been reached under that particular set of circumstances.
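One simple way to gauge scaling is to compare the elapsed wall-clock times SLURM records for runs of the same job at different core counts, for example with sacct (a quick illustration; the job IDs are placeholders):
sacct -j <4-core-jobid>,<8-core-jobid> --format=JobID,JobName,NCPUS,Elapsed
If, say, the 8-core run finishes only 15% faster than the 4-core run, then 4 cores is already a reasonable maximum for that dataset.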
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name RAXML_mpi
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=2880

# Find out name of master execution host (compute node)
echo -n ">>>> SLURM Master compute node is: "
hostname

# You must explicitly change to the working directory in SLURM
cd $SLURM_SUBMIT_DIR

# Use 'mpirun' and point to the MPI parallel executable to run
echo ">>>> Begin RAXML MPI Run ..."
mpirun -np 4 raxmlHPC-MPI -m GTRCAT -n TEST2 -s alg.phy -N 4 > raxml_mpi.out 2>&1
echo ">>>> End RAXML MPI Run ..."
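As with the serial case, this script can be dropped into a file (say raxml_mpi.job, a name chosen here for illustration) and submitted to SLURM:
sbatch raxml_mpi.job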
This test case should take no more than a minute to run and will produce SLURM output and error files beginning with the job name 'RAXML_mpi'. Other RAxML-specific outputs will also be produced. Details on the meaning of the SLURM script are covered above in this Wiki's SLURM section. The most important lines are '#SBATCH --ntasks=4' and '#SBATCH --mem-per-cpu=2880'. Together they instruct SLURM to allocate 4 resource 'chunks', each with 1 processor (core) and 2,880 MBs of memory, for the job (on ANDY as much as 2,880 MBs might have been selected). Because no node count is specified, SLURM is free to place these tasks wherever the least used resources are found.
The master compute node that SLURM finally selects to run your job will be printed in the SLURM output file by the 'hostname' command. As this is a parallel job, other compute nodes may also be called into service to complete it. Note that the name of the parallel executable is 'raxmlHPC-MPI' and that in the parallel run we complete four simulations (-N 4). The expression '2>&1' combines Unix standard output from the program with Unix standard error. Users should always explicitly specify the name of the application's output file in this way to ensure that it is written directly into the user's working directory, which has much more disk space than the SLURM spool directory on /var.
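Once the job finishes, the program output and the RAxML result files (named after the -n run name, TEST2 here) can be inspected, for example:
tail raxml_mpi.out
ls RAxML_*.TEST2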