ADF (Amsterdam Density Functional)
ADF (Amsterdam Density Functional) is a Fortran program for calculations on atoms and molecules (in gas phase or solution) from first principles. It can be used for the study of such diverse fields as molecular spectroscopy, organic and inorganic chemistry, crystallography and pharmacochemistry. Some of its key strengths include high accuracy supported by its use of Slater-type orbitals, all-electron relativistic treatment of the heavier elements, and fast parameterized DFT-based semi-empirical methods. A separate program BAND is available for the study of periodic systems: crystals, surfaces, and polymers. The COSMO-RS program is used for calculating thermodynamic properties of (mixed) fluids.
The underlying theory is the Kohn-Sham approach to Density-Functional Theory (DFT). This implies a one-electron picture of the many-electron system, but in principle yields the exact electron density (and related properties) and the total energy. If you are new to ADF, we recommend that you carefully read Chapter 1, section 1.3, 'Technical remarks, Terminology', which discusses a few ADF-specific aspects and terms. This will help you understand and appreciate the output of an ADF calculation. The ADF Manual is located on the web here: [1]
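For reference, the one-electron picture mentioned above amounts to solving the Kohn-Sham equations self-consistently. In atomic units these take the standard textbook form (general DFT background, not ADF-specific notation):

\left[ -\tfrac{1}{2}\nabla^{2} + v_{\mathrm{ext}}(\mathbf{r}) + v_{\mathrm{H}}[\rho](\mathbf{r}) + v_{\mathrm{xc}}[\rho](\mathbf{r}) \right] \phi_{i}(\mathbf{r}) = \varepsilon_{i}\,\phi_{i}(\mathbf{r}), \qquad \rho(\mathbf{r}) = \sum_{i}^{\mathrm{occ}} \left| \phi_{i}(\mathbf{r}) \right|^{2}

In ADF the orbitals \phi_i are expanded in Slater-type basis functions, which is the source of the accuracy advantage noted above.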
ADF 2013 (and SCM's other programs) is installed on ANDY and PENZIAS at the CUNY HPC Center. The older 2012 version is also available on the ANDY server. The current license is group-limited and allows for up to 32 cores of simultaneous ADF use and 8 cores of simultaneous BAND use. This floating license is limited to the DDR side of ANDY and the SLURM 'production' queue. Users not currently in the ADF group should inquire about access by sending an email to 'hpchelp@csi.cuny.edu'.
Here is a simple ADF input deck that computes the SCF wave function for HCN. This example can be run with the SLURM script shown below on 1 to 4 cores.
Title HCN Linear Transit, first part

NoPrint SFO, Frag, Functions, Computation

Atoms Internal
  1 C  0 0 0    0     0   0
  2 N  1 0 0    1.3   0   0
  3 H  1 2 0    1.0   th  0
End

Basis
  Type DZP
End

Symmetry NOSYM

Integration 6.0 6.0

Geometry
  Branch Old
  LinearTransit 10
  Iterations 30 4
  Converge Grad=3e-2, Rad=3e-2, Angle=2
End

Geovar
  th 180 0
End

End Input
A SLURM script ('adf_4.job') configured to use 4 cores is shown here. Note that ADF does not use the version of MPI that the HPC Center supports by default. ADF uses the proprietary MPI implementation from SGI that is part of SGI's MPT parallel library package, and the script includes special lines to configure the run for this. A side effect is that ADF jobs will not accumulate time under the 'Time' column shown when a job is checked with 'qstat'.
To include all required environment variables and the path to the ADF executable, run the module load command (the modules utility is discussed in detail above):
module load adf
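To confirm that the module loaded correctly and that the 'adf' wrapper is now on your path, the usual checks apply (a quick sketch; the exact path reported by 'which' depends on the installation):

module list    # 'adf' should appear among the currently loaded modules
which adf      # should resolve to the ADF installation under /share/apps/adf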
#!/bin/bash
# This script runs a 4-cpu (core) ADF job using the group
# license of Dr. Vittadello, Dr. Birke, and the CUNY HPC
# Center. This script requests only one half of the resources
# on an ANDY compute node (4 cores, one half of its memory).
#
# The HCN_4P.inp deck in this directory is configured to work
# with these resources, although this computation is really
# too small to make full use of them. To increase or decrease
# the resources SLURM requests (cpus, memory, or disk) change the
# '-l select' line below and the parameter values in the input deck.
#
#SLURM -q production
#SLURM -N adf_4P_job
#SLURM -l select=1:ncpus=4:mem=11520mb:lscratch=400gb
#SLURM -l place=free
#SLURM -V

# Find out name of master execution host (compute node)
echo -n ">>>> SLURM Master compute node is: "
hostname

# You must explicitly change to the working directory in SLURM
cd $SLURM_O_WORKDIR

# Set environment up to use SGI's MPT version of MPI rather
# than the CUNY default, which is OpenMPI
BASEPATH=/opt/sgi/mpt/mpt-2.02
export PATH=${BASEPATH}/bin:${PATH}
export CPATH=${BASEPATH}/include:${CPATH}
export FPATH=${BASEPATH}/include:${FPATH}
export LD_LIBRARY_PATH=${BASEPATH}/lib:${LD_LIBRARY_PATH}
export LIBRARY_PATH=${BASEPATH}/lib:${LIBRARY_PATH}
export MPI_ROOT=${BASEPATH}

# Set the ADF root directory
export ADFROOT=/share/apps/adf
export ADFHOME=${ADFROOT}/2013.01

# Point ADF to the ADF license file
export SCMLICENSE=${ADFHOME}/license.txt

# Set up a unique ADF scratch directory for this job
export MY_SCRDIR=`whoami;date '+%m.%d.%y_%H:%M:%S'`
export MY_SCRDIR=`echo $MY_SCRDIR | sed -e 's; ;_;'`
export SCM_TMPDIR=/home/adf/adf_scr/${MY_SCRDIR}_$$
mkdir -p $SCM_TMPDIR

echo ""
echo "The ADF scratch files for this job are in: ${SCM_TMPDIR}"
echo ""

# Check important paths
#type mpirun
#type adf

# Set the number of processors to use in this job to 4
export NSCM=4

# Run the ADF job
echo "Starting ADF job ... "
echo ""
adf -n 4 < HCN_4P.inp > HCN_4P.out 2>&1

# Name output files
mv logfile HCN_4P.logfile

echo ""
echo "ADF job finished ... "

# Clean up scratch directory files
/bin/rm -r $SCM_TMPDIR
Much of this script is similar to the script that runs Gaussian jobs, but the differences deserve some detail. First, ADF must be submitted to the 'production' queue, to which its floating license is limited and where it can use at most 32 cores at a time for ADF and 8 cores at a time for BAND. Second, there is a block in the script that sets up the environment to use the SGI proprietary version of MPI for parallel runs. Next is the NSCM environment variable, which defines the number of cores to use along with the '-n' option on the command line. Both of these (along with the number of cpus on the SLURM '-l select' line at the beginning of the script) must be adjusted to control the number of cores used by the job.
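As a concrete sketch, running the same job on 8 cores would mean changing all three settings together; the memory value below simply doubles the 4-core request, and the input-deck name is illustrative:

#SLURM -l select=1:ncpus=8:mem=23040mb:lscratch=400gb   # request 8 cpus from SLURM
export NSCM=8                                           # tell ADF to start 8 processes
adf -n 8 < HCN_8P.inp > HCN_8P.out 2>&1                 # keep the '-n' option in step with NSCM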
Note that the 'adf' command is actually a script that generates and runs another script, which in turn runs the 'adf.exe' executable. This generated script (called 'runscript') is built and placed in the user's working directory. It typically includes some preliminary steps that are NOT run in parallel.
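If a parallel run misbehaves, it can be helpful to inspect the generated script. Assuming it is left behind in the working directory under the name 'runscript', as described above, standard commands are enough:

ls -l runscript    # confirm the generated script is present
less runscript     # look over the serial preliminary steps and the parallel launch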
With the HCN input file and SLURM script above, you can submit an ADF job on ANDY with:
qsub adf_4.job
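After submission, the job can be watched and the result checked in the usual way. The grep target below is an assumption based on the customary end-of-run message in ADF output; adjust it to whatever your output file actually contains:

qstat -u $USER                              # watch the job's state (note the 'Time' caveat above)
grep -i "NORMAL TERMINATION" HCN_4P.out     # typical ADF success message (assumed)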
All users of ADF must be licensed and placed in the 'gadf' Unix group by HPC Center staff.
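Before writing to the help address, you can check whether your account is already in the group (a one-line sketch using standard Unix commands):

id -nG | tr ' ' '\n' | grep -x gadf    # prints 'gadf' if your account is in the group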