LAMMPS
The complete LAMMPS package is installed on the PENZIAS server. Please always use the name lammps, not a full path as on andy. Because PENZIAS is a GPU-enabled machine, the code is compiled as double-double, i.e., double precision for both force and velocity. The abundance of GPUs on PENZIAS makes OpenMP unnecessary, because better performance is usually obtained by oversubscribing the Kepler GPUs rather than by using OpenMP threading. For this reason the OpenMP package, along with the KIM package, is not installed on PENZIAS. As a rule of thumb, use 2-4 MPI tasks per Kepler GPU to obtain maximum performance.
The packages currently installed on PENZIAS are: ASPHERE, BODY, CLASS2, COLLOID, DIPOLE, FLD, GPU, GRANULAR, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, OPT, PERI, POEMS, REAX, REPLICA, RIGID, SHOCK, SRD, VORONOI, and XTC. In addition, the following USER packages are installed: ATC, AWPMD, CG-CMM, COLVARS, CUDA, EFF, LB, MISC, MOLFILE, PHONON, REAXC, and SPH.
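As a minimal sketch of that rule of thumb (the executable name lmp_ompi and the input file in.lj are the ones used in the examples below), a job that has been allocated a single Kepler GPU would typically be launched with 2-4 MPI tasks sharing that device:

# Sketch only: four MPI tasks oversubscribing one allocated Kepler GPU
mpirun -np 4 lmp_ompi < in.lj > out_file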
Here is a LAMMPS input deck (in.lj) from the LAMMPS benchmark suite.
# 3d Lennard-Jones melt

variable     x index 1
variable     y index 1
variable     z index 1

variable     xx equal 20*$x
variable     yy equal 20*$y
variable     zz equal 20*$z

units        lj
atom_style   atomic

lattice      fcc 0.8442
region       box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box   1 box
create_atoms 1 box
mass         1 1.0

velocity     all create 1.44 87287 loop geom

pair_style   lj/cut 2.5
pair_coeff   1 1 1.0 1.0 2.5

neighbor     0.3 bin
neigh_modify delay 0 every 20 check no

fix          1 all nve

run          100
The script below runs the MPI-parallel version of LAMMPS on 8 CPU cores. Before using it, however, load the LAMMPS module with the command
module load lammps
#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name LAMMPS_test
#SBATCH --nodes=8
#SBATCH --ntasks=8

# Find out the name of the master execution host (compute node)
echo ""
echo -n ">>>> SBATCH Master compute node is: "
hostname

# Change to the working directory
cd $SLURM_SUBMIT_DIR

echo ">>>> Begin LAMMPS MPI Parallel Run ..."
echo ""
mpirun -np 8 lmp_ompi < in.lj > out_file
echo ""
echo ">>>> End LAMMPS MPI Parallel Run ..."
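Assuming the script above is saved as, say, lammps_cpu.sh (the file name is arbitrary and used only for illustration), it is submitted to SLURM in the usual way:

sbatch lammps_cpu.sh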
On PENZIAS the GPU mode is the default unless it is explicitly turned off with the command-line switch '-cuda off'. In order to run on the GPUs you must also modify the SLURM script by editing the resource-request ('#SBATCH') lines to ask for one GPU for every CPU allocated by SLURM. In other words, the resource-request lines for a GPU run would look like this:
#SBATCH --nodes=8
#SBATCH --ntasks=8
#SBATCH --gres=gpu:1
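Putting this together, a complete GPU job script would simply be the CPU script shown earlier with the '--gres' request added; the sketch below assumes the default GPU mode (so the executable and launch line are unchanged), and the job name LAMMPS_gpu_test is arbitrary:

#!/bin/bash
#SBATCH --partition production
#SBATCH --job-name LAMMPS_gpu_test
#SBATCH --nodes=8
#SBATCH --ntasks=8
#SBATCH --gres=gpu:1

# Change to the working directory
cd $SLURM_SUBMIT_DIR

echo ">>>> Begin LAMMPS GPU Parallel Run ..."
mpirun -np 8 lmp_ompi < in.lj > out_file
echo ">>>> End LAMMPS GPU Parallel Run ..."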
Other changes to the input file and/or the command line are also needed to run on the GPUs. Details regarding the command-line switches are available here [1]. Here is a simple listing:
-c or -cuda
-e or -echo
-i or -in
-h or -help
-l or -log
-p or -partition
-pl or -plog
-ps or -pscreen
-r or -reorder
-sc or -screen
-sf or -suffix
-v or -var
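For example, the '-in', '-var', and '-log' switches can be combined to run the benchmark deck above on a larger box without editing the input file; the task count and file names below are illustrative only:

# Override the index variables x, y, z in in.lj to build a 40x40x40 lattice
mpirun -np 8 lmp_ompi -in in.lj -var x 2 -var y 2 -var z 2 -log log.lj_2x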
Other requirements for running in GPU mode (or for using any of the various USER packages) can be found here [2] and in the LAMMPS User Manual mentioned above. The 'package gpu' and 'package cuda' commands are the ones of primary interest.
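As an illustrative sketch only, a 'package gpu' command is placed near the top of the input script; the exact argument list has changed between LAMMPS versions, so the line below follows the older 'mode first last split' form and should be checked against the manual for the installed release:

# Illustrative only: run pair and neighbor computations on GPU 0,
# assigning all particles to the GPU (split = 1.0)
package gpu force/neigh 0 0 1.0

With the GPU package enabled, GPU-accelerated pair styles can then be selected through the '-sf' (suffix) switch listed above or by using the GPU-suffixed style names directly in the input script.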
Finally, note that users interested in a single-precision version of LAMMPS should contact the HPC Center at 'hpchelp@csi.cuny.edu'.
The LAMMPS installation on the Cray XE6 (SALK) does not include the GPU-parallel versions of the code because the CUNY HPC Center Cray does not have GPU hardware.