<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?action=history&amp;feed=atom&amp;title=LAMMPS</id>
	<title>LAMMPS - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?action=history&amp;feed=atom&amp;title=LAMMPS"/>
	<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=LAMMPS&amp;action=history"/>
	<updated>2026-05-04T05:26:45Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.38.4</generator>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=LAMMPS&amp;diff=54&amp;oldid=prev</id>
		<title>James: Created page with &quot;The complete LAMMPS package is installed on PENZIAS server. Please always use only name lammps not a full path as on andy. Because the later is GPU enabled computer the code is compiled as &#039;&#039;&#039;double-double&#039;&#039;&#039; i.e double precision on force and velocity. The abundance of GPU in PENZIAS makes the use of OpenMP obsolete, because usually the better performance results are obtained by  oversubscribing Kepler GPUs rather than by using OpenMP style. That is why the OpenMP packag...&quot;</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=LAMMPS&amp;diff=54&amp;oldid=prev"/>
		<updated>2022-10-17T17:40:22Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;The complete LAMMPS package is installed on PENZIAS server. Please always use only name lammps not a full path as on andy. Because the later is GPU enabled computer the code is compiled as &amp;#039;&amp;#039;&amp;#039;double-double&amp;#039;&amp;#039;&amp;#039; i.e double precision on force and velocity. The abundance of GPU in PENZIAS makes the use of OpenMP obsolete, because usually the better performance results are obtained by  oversubscribing Kepler GPUs rather than by using OpenMP style. That is why the OpenMP packag...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;The complete LAMMPS package is installed on the PENZIAS server. Please always use only the name lammps, not a full path as on ANDY. Because PENZIAS is a GPU-enabled computer, the code is compiled as &amp;#039;&amp;#039;&amp;#039;double-double&amp;#039;&amp;#039;&amp;#039;, i.e. double precision for both forces and velocities. The abundance of GPUs on PENZIAS makes the use of OpenMP largely obsolete, because better performance is usually obtained by oversubscribing the Kepler GPUs rather than by adding OpenMP threading. That is why the OpenMP package, along with the KIM package, is not installed on PENZIAS. As a rule of thumb, it is recommended to use 2-4 MPI tasks per Kepler GPU in order to gain maximum performance. The packages currently installed on PENZIAS are: ASPHERE, BODY, CLASS2, COLLOID, DIPOLE, FLD, GPU, GRANULAR, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, OPT, PERI, POEMS, REAX, REPLICA, RIGID, SHOCK, SRD, VORONOI, XTC. In addition, the following USER packages are also installed: ATC, AWPMD, CG-CMM, COLVARS, CUDA, EFF, LB, MISC, MOLFILE, PHONON, REAXC and SPH.&lt;br /&gt;
&lt;br /&gt;
Here is a LAMMPS input deck (in.lj) from the LAMMPS benchmark suite. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 3d Lennard-Jones melt&lt;br /&gt;
&lt;br /&gt;
variable        x index 1&lt;br /&gt;
variable        y index 1&lt;br /&gt;
variable        z index 1&lt;br /&gt;
&lt;br /&gt;
variable        xx equal 20*$x&lt;br /&gt;
variable        yy equal 20*$y&lt;br /&gt;
variable        zz equal 20*$z&lt;br /&gt;
&lt;br /&gt;
units           lj&lt;br /&gt;
atom_style      atomic&lt;br /&gt;
&lt;br /&gt;
lattice         fcc 0.8442&lt;br /&gt;
region          box block 0 ${xx} 0 ${yy} 0 ${zz}&lt;br /&gt;
create_box      1 box&lt;br /&gt;
create_atoms    1 box&lt;br /&gt;
mass            1 1.0&lt;br /&gt;
&lt;br /&gt;
velocity        all create 1.44 87287 loop geom&lt;br /&gt;
&lt;br /&gt;
pair_style      lj/cut 2.5&lt;br /&gt;
pair_coeff      1 1 1.0 1.0 2.5&lt;br /&gt;
&lt;br /&gt;
neighbor        0.3 bin&lt;br /&gt;
neigh_modify    delay 0 every 20 check no&lt;br /&gt;
&lt;br /&gt;
fix             1 all nve&lt;br /&gt;
&lt;br /&gt;
run             100&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
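The &amp;#039;index&amp;#039; variables x, y and z above default to 1, so this deck builds a 20x20x20 lattice-cell box; they can be overridden from the command line with the &amp;#039;-var&amp;#039; switch (listed further below) to scale the problem up without editing the deck. For example (the variable values here are purely illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lmp_ompi -var x 2 -var y 2 -var z 2 &amp;lt; in.lj&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;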
The script below runs LAMMPS on 8 CPU cores. Before using it, however, load the lammps module with the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load lammps&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
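You can verify that the module was loaded and that the executable (lmp_ompi, the name used in the script below) is on your path with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module list&lt;br /&gt;
which lmp_ompi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;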
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name LAMMPS_test&lt;br /&gt;
#SBATCH --nodes=8&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SBATCH Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Change to working directory&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin LAMMPS MPI Parallel Run ...&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
mpirun -np 8 lmp_ompi &amp;lt; in.lj &amp;gt; out_file&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   LAMMPS MPI Parallel Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
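Save the script to a file (for example lammps.sh; the name is arbitrary) and submit it with sbatch; squeue shows its status while it waits and runs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch lammps.sh&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;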
On PENZIAS the GPU mode is the default unless explicitly turned off with the command-line switch &amp;#039;-cuda off&amp;#039;. In order to run on the GPUs you &lt;br /&gt;
must also modify the SLURM script by editing the resource-request lines to ask for one GPU for every CPU allocated by SLURM. In other words &lt;br /&gt;
the resource-request lines for a GPU run would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --nodes=8&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --gres=gpu:1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other changes to the input file and/or the command line are also needed to run on the GPUs. Details regarding command-line switches&lt;br /&gt;
are available here [http://lammps.sandia.gov/doc/Section_start.html#start_7]. Here is a simple listing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
-c or -cuda&lt;br /&gt;
-e or -echo&lt;br /&gt;
-i or -in&lt;br /&gt;
-h or -help&lt;br /&gt;
-l or -log&lt;br /&gt;
-p or -partition&lt;br /&gt;
-pl or -plog&lt;br /&gt;
-ps or -pscreen&lt;br /&gt;
-r or -reorder&lt;br /&gt;
-sc or -screen&lt;br /&gt;
-sf or -suffix&lt;br /&gt;
-v or -var&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
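For example, to read the input from a file, write the log to a custom file and echo each command to the screen, one might invoke (the file names here are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lmp_ompi -in in.lj -log lj.log -echo screen&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;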
Other requirements for running in GPU mode (or for using any of the various USER packages) can be found here [http://lammps.sandia.gov/doc/Section_accelerate.html] and in the LAMMPS User Manual mentioned above. The &amp;#039;package gpu&amp;#039; and &amp;#039;package cuda&amp;#039; commands are those of primary interest.&lt;br /&gt;
&lt;br /&gt;
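As an illustration only (the exact arguments depend on the LAMMPS version and hardware, so consult the manual pages linked above), a GPU run might add a package command at the top of the input deck and select the GPU-accelerated styles via the &amp;#039;-sf&amp;#039; suffix switch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# in the input deck: use 1 GPU per node for force and neighbor computations&lt;br /&gt;
package gpu force/neigh 0 0 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -np 8 lmp_ompi -sf gpu &amp;lt; in.lj &amp;gt; out_file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;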
Finally, note that users interested in a single-precision version of LAMMPS should contact the HPC Center at &amp;#039;hpchelp@csi.cuny.edu&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
The installation on the Cray XE6 (SALK) does not include the GPU parallel models because the CUNY HPC Center Cray does not have GPU hardware.&lt;/div&gt;</summary>
		<author><name>James</name></author>
	</entry>
</feed>