CP2K

At the CUNY HPC Center, CP2K is installed on ANDY. CP2K can be built as a serial, MPI-parallel, or MPI-OpenMP-parallel code. At this time, only the MPI-parallel version of the application has been built for production use at the HPC Center. Further information on CP2K is available at the CP2K website [http://www.cp2k.org/].

Below is an example SLURM script that will run the CP2K H2O-32 test case provided with the CP2K distribution. It can be copied from the local installation directory to your current location as follows:

cp /share/apps/cp2k/2.3/tests/SE/regtest-2/H2O-32.inp .

To include all required environment variables and the path to the CP2K executable, run the module load command (the modules utility is discussed in detail above).

module load cp2k
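
To confirm that the module loaded and that the parallel CP2K executable is on your path, a quick check such as the following can be run (the exact path reported will depend on the CP2K version installed on ANDY):

module list
which cp2k.popt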

Here is the example SLURM script:

#!/bin/bash
#SBATCH --partition=production
#SBATCH --job-name=CP2K_MPI.test
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2880


# Find out name of master execution host (compute node)
echo ""
echo -n ">>>> SLURM Master compute node is: "
hostname

# Change to working directory
cd $SLURM_SUBMIT_DIR

echo ">>>> Begin CP2K MPI Parallel Run ..."
mpirun -np 8 cp2k.popt ./H2O-32.inp > H2O-32.out 2>&1
echo ">>>> End   CP2K MPI Parallel Run ..."

This script can be dropped into a file (say cp2k.job) and submitted with the command:

sbatch cp2k.job
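
Once submitted, the job's progress can be followed with the standard SLURM commands (the job ID below is only a placeholder for the number printed by sbatch):

squeue -u $USER
sacct -j <jobid> --format=JobID,JobName,State,Elapsed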

Running the H2O-32 test case should take less than 5 minutes and will produce SLURM output and error files beginning with the job name 'CP2K_MPI.test'. The CP2K application results will be written into the user-specified file named at the end of the CP2K command line, after the greater-than sign; here it is named 'H2O-32.out'. The expression '2>&1' combines Unix standard output from the program with Unix standard error. Users should always explicitly specify the name of the application's output file in this way to ensure that it is written directly into the user's working directory, which has much more disk space than the SLURM spool directory on /var.
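
A quick way to confirm that the run completed is to look at the end of the output file for CP2K's normal termination banner (the exact wording of the banner can vary between CP2K versions):

tail -n 20 H2O-32.out
grep "PROGRAM ENDED" H2O-32.out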

Details on the meaning of the SLURM script are covered above in the SLURM section. The most important lines are the '#SBATCH --ntasks=8', '#SBATCH --cpus-per-task=1', and '#SBATCH --mem-per-cpu=2880' lines. Together they instruct SLURM to allocate 8 tasks for the job, each with 1 processor (core) and 2,880 MB of memory, and to place those tasks on whichever nodes have free resources. The master compute node that SLURM finally selects to run your job will be printed in the SLURM output file by the 'hostname' command.
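
If the test is scaled up, the task count requested from SLURM and the task count given to mpirun must agree. One way to keep them in sync is to launch with the task count SLURM exports in SLURM_NTASKS, for example (a sketch only; the 16-task request is illustrative and should be adjusted to the size of your own calculation):

#SBATCH --ntasks=16
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2880

mpirun -np $SLURM_NTASKS cp2k.popt ./H2O-32.inp > H2O-32.out 2>&1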