CP2K


At the CUNY HPC Center CP2K is installed on ANDY. CP2K can be built as a serial, MPI-parallel, or MPI-OpenMP-parallel code. At this time, only the MPI-parallel version of the application has been built for production use at the HPC Center. Further information on CP2K is available at http://www.cp2k.org/.

Below is an example SLURM script that will run the CP2K H2O-32 test case provided with the CP2K distribution. The input file can be copied from the local installation directory to your current location as follows:

cp /share/apps/cp2k/2.3/tests/SE/regtest-2/H2O-32.inp .

To set all required environment variables and the path to the CP2K executable, run the module load command (the module utility is discussed in detail above).

module load cp2k
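
To confirm that the module has set up your environment, a quick check such as the following should show the CP2K module as loaded and the MPI executable 'cp2k.popt' (the binary used in the script below) on your PATH:

module list
which cp2k.popt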

Here is the example SLURM script:

#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name CP2K_MPI.test
#SBATCH --nodes=8
#SBATCH --ntasks=8
#SBATCH --mem=2880


# Find out name of master execution host (compute node)
echo ""
echo -n ">>>> SLURM Master compute node is: "
hostname

# Change to working directory
cd $SLURM_SUBMIT_DIR

echo ">>>> Begin CP2K MPI Parallel Run ..."
mpirun -np 8 cp2k.popt ./H2O-32.inp > H2O-32.out 2>&1
echo ">>>> End   CP2K MPI Parallel Run ..."

This script can be dropped into a file (say cp2k.job) and submitted to SLURM with the command:

sbatch cp2k.job
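
Once the job has been submitted, it can be monitored with the standard SLURM commands, for example (the job name below matches the '--job-name' directive in the script):

# Show this user's pending and running jobs
squeue -u $USER
# Show accounting information for the job by name once it has started
sacct --name=CP2K_MPI.test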

Running the H2O-32 test case should take less than 5 minutes. SLURM will write the script's own standard output and error to its default output file (slurm-<jobid>.out, unless an '--output' directive is added to the script). The CP2K application results are written into the user-specified file named at the end of the CP2K command line after the greater-than sign; here it is named 'H2O-32.out'. The expression '2>&1' combines the program's Unix standard output with its Unix standard error. Users should always explicitly specify the name of the application's output file in this way to ensure that it is written directly into the user's working directory, which has much more disk space than the scheduler's spool directory on /var.
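
When the run finishes, the results can be inspected directly in 'H2O-32.out'. As a quick sanity check, the total energies that CP2K reports can be pulled out with grep (the 'ENERGY|' prefix is the label CP2K normally uses for these lines; adjust the pattern if your version formats them differently):

# Look at the final portion of the CP2K output
tail -n 40 H2O-32.out
# Extract the total energy lines reported by CP2K
grep 'ENERGY|' H2O-32.out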

Details on the meaning of the SLURM script directives are covered above in the SLURM section. The most important lines are '#SBATCH --nodes=8', '#SBATCH --ntasks=8', and '#SBATCH --mem=2880'. Together these ask SLURM to allocate 8 nodes, place one MPI task on each, and reserve 2,880 MB of memory per node for the job (the equivalent of the old PBS request '-l select=8:ncpus=1:mem=2880mb' with '-l pack=free', which spread the 8 single-core chunks across the least used nodes). The master compute node that SLURM finally selects to run your job will be printed in the job's output file by the 'hostname' command.
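
To scale the same job up, the resource request and the mpirun core count must be changed together. As a sketch (keeping the one-task-per-node layout and the per-node memory used above), a 16-task run would use:

#SBATCH --nodes=16
#SBATCH --ntasks=16
#SBATCH --mem=2880

mpirun -np 16 cp2k.popt ./H2O-32.inp > H2O-32.out 2>&1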