<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?action=history&amp;feed=atom&amp;title=ADCIRC</id>
	<title>ADCIRC - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?action=history&amp;feed=atom&amp;title=ADCIRC"/>
	<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=ADCIRC&amp;action=history"/>
	<updated>2026-05-09T23:39:11Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.38.4</generator>
	<entry>
		<id>https://wiki.csi.cuny.edu/cunyhpc/index.php?title=ADCIRC&amp;diff=40&amp;oldid=prev</id>
		<title>James: Created page with &quot;The CUNY HPC Center has installed version 50.79 on SALK (the Cray) and ANDY (the SGI) for general academic use.  ADCIRC can be run in serial or MPI-parallel mode on either system.  ADCIRC has demonstrated good scaling properties up to 512 cores on SALK and 64 cores on ANDY.  A step-by-step walk through of running an ADCIRC test case in both serial and parallel mode follows.  ==== Serial Execution ====  Create a directory where all the files needed to run the serial ADCIR...&quot;</title>
		<link rel="alternate" type="text/html" href="https://wiki.csi.cuny.edu/cunyhpc/index.php?title=ADCIRC&amp;diff=40&amp;oldid=prev"/>
		<updated>2022-10-17T17:23:15Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;The CUNY HPC Center has installed version 50.79 on SALK (the Cray) and ANDY (the SGI) for general academic use.  ADCIRC can be run in serial or MPI-parallel mode on either system.  ADCIRC has demonstrated good scaling properties up to 512 cores on SALK and 64 cores on ANDY.  A step-by-step walk through of running an ADCIRC test case in both serial and parallel mode follows.  ==== Serial Execution ====  Create a directory where all the files needed to run the serial ADCIR...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;The CUNY HPC Center has installed ADCIRC version 50.79 on SALK (the Cray) and ANDY (the SGI) for general academic use.  ADCIRC&lt;br /&gt;
can be run in serial or MPI-parallel mode on either system.  ADCIRC has demonstrated good scaling properties up to 512&lt;br /&gt;
cores on SALK and 64 cores on ANDY.  A step-by-step walk-through of running an ADCIRC test case in both serial and parallel&lt;br /&gt;
mode follows.&lt;br /&gt;
&lt;br /&gt;
==== Serial Execution ====&lt;br /&gt;
&lt;br /&gt;
Create a directory where all the files needed to run the serial ADCIRC job will be kept.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ mkdir test_sadcirc&lt;br /&gt;
salk$ cd test_sadcirc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the Shinnecock Inlet example from the ADCIRC installation tree and unzip it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ cp /share/apps/adcirc/default/testcase/serial_shinnecock_inlet.zip ./&lt;br /&gt;
salk$ unzip ./serial_shinnecock_inlet.zip &lt;br /&gt;
Archive:  ./serial_shinnecock_inlet.zip&lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.14  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.15  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.16  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.63  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.64  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Change into the unpacked subdirectory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ cd serial_shinnecock_inlet/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There you should find the following files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ ls&lt;br /&gt;
fort.14  fort.15  fort.16  fort.63  fort.64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, create a SLURM script (named sadcirc.job here) with the following lines in it; it will be used to&lt;br /&gt;
submit the serial ADCIRC job to the Cray (SALK) SLURM queues.  Note that on SALK running a serial job&lt;br /&gt;
still requires allocating (and wasting most of) a full 16-processor compute node, because fractional&lt;br /&gt;
compute nodes cannot be allocated on SALK.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name SADCIRC.test&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH -o sadcirc.out&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# You must explicitly change to the working directory in the SLURM job&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin ADCIRC Serial Run ...&amp;quot;&lt;br /&gt;
aprun -n 1 /share/apps/adcirc/default/bin/adcirc&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   ADCIRC Serial Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, to submit the serial job to the SLURM queue, enter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ sbatch sadcirc.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
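&lt;br /&gt;
Once the job is submitted, its state in the queue can be checked with the standard SLURM command&lt;br /&gt;
&amp;#039;squeue&amp;#039;.  The line below is a minimal sketch and assumes nothing beyond a stock SLURM installation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ squeue -u $USER&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;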
&lt;br /&gt;
==== Parallel Execution ====&lt;br /&gt;
&lt;br /&gt;
The steps required to run ADCIRC in parallel include some additional mesh partitioning&lt;br /&gt;
and decomposition steps based on the number of processors planned for the job.  As before,&lt;br /&gt;
create a directory where all the files needed for the job will be kept:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ mkdir test_padcirc&lt;br /&gt;
salk$ cd test_padcirc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, copy the Shinnecock Inlet example from the ADCIRC installation tree and unzip it.  The&lt;br /&gt;
starting point for the serial and parallel tests is the same, but for the parallel case the serial&lt;br /&gt;
data set used above is partitioned and decomposed for the parallel run.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ cp /share/apps/adcirc/default/testcase/serial_shinnecock_inlet.zip ./&lt;br /&gt;
salk$ unzip ./serial_shinnecock_inlet.zip &lt;br /&gt;
Archive:  ./serial_shinnecock_inlet.zip&lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.14  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.15  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.16  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.63  &lt;br /&gt;
  inflating: serial_shinnecock_inlet/fort.64  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Rename and change into the directory you just unpacked:&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ mv  serial_shinnecock_inlet  parallel_shinnecock_inlet&lt;br /&gt;
salk$ cd parallel_shinnecock_inlet/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to run the ADCIRC preparation program &amp;#039;adcprep&amp;#039; to partition the serial domain&lt;br /&gt;
and decompose the problem:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ /share/apps/adcirc/default/bin/adcprep &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When prompted, enter 8 for the number of processors to be used in this parallel example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  *****************************************&lt;br /&gt;
  ADCPREP Fortran90 Version 2.3  10/18/2006&lt;br /&gt;
  Serial version of ADCIRC Pre-processor   &lt;br /&gt;
  *****************************************&lt;br /&gt;
  &lt;br /&gt;
 Input number of processors for parallel ADCIRC run:&lt;br /&gt;
8&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, enter 1 to complete partitioning the domain for 8 processors using METIS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 #-------------------------------------------------------&lt;br /&gt;
   Preparing input files for subdomains.&lt;br /&gt;
   Select number or action:&lt;br /&gt;
     1. partmesh&lt;br /&gt;
      - partition mesh using metis ( perform this first)&lt;br /&gt;
 &lt;br /&gt;
     2. prepall&lt;br /&gt;
      - Full pre-process using default names (i.e., fort.14)&lt;br /&gt;
&lt;br /&gt;
      ...&lt;br /&gt;
&lt;br /&gt;
 #-------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
 calling: prepinput&lt;br /&gt;
&lt;br /&gt;
 use_default =  F&lt;br /&gt;
 partition =  T&lt;br /&gt;
 prep_all  =  F&lt;br /&gt;
 prep_15   =  F&lt;br /&gt;
 prep_13   =  F&lt;br /&gt;
 hot_local  =  F&lt;br /&gt;
 hot_global  =  F&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next, provide the name of the unpartitioned grid file unzipped from the serial test case, fort.14:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enter the name of the ADCIRC UNIT 14 (Grid) file:&lt;br /&gt;
fort.14&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will generate some additional output to your terminal and complete the mesh partition step.&lt;br /&gt;
&lt;br /&gt;
You must then run &amp;#039;adcprep&amp;#039; a second time to decompose the problem.  When prompted, enter 8 for the&lt;br /&gt;
number of processors as before, but this time select option 2 (prepall) to decompose the problem; a scripted&lt;br /&gt;
sketch of both &amp;#039;adcprep&amp;#039; passes is given after the listing below.  When this preparation step completes,&lt;br /&gt;
you will find the following files and directories in your working directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ ls&lt;br /&gt;
fort.14  fort.16  fort.64  metis_graph.txt  PE0000  PE0002  PE0004  PE0006&lt;br /&gt;
fort.15  fort.63  fort.80  partmesh.txt     PE0001  PE0003  PE0005  PE0007&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The 8 subdirectories created in the second &amp;#039;adcprep&amp;#039; run contain the partitioned and decomposed&lt;br /&gt;
problem that each of the MPI processes (8 in this case) will work on.&lt;br /&gt;
&lt;br /&gt;
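For reference, the two interactive &amp;#039;adcprep&amp;#039; passes above can also be driven from the command line by&lt;br /&gt;
feeding the same answers on standard input.  The sketch below assumes only that &amp;#039;adcprep&amp;#039; reads its&lt;br /&gt;
prompts from standard input, as in the interactive session shown above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Partition pass: answers are 8 (processors), 1 (partmesh) and fort.14 (grid file)&lt;br /&gt;
salk$ printf &amp;quot;8\n1\nfort.14\n&amp;quot; | /share/apps/adcirc/default/bin/adcprep&lt;br /&gt;
&lt;br /&gt;
# Decomposition pass: answers are 8 (processors) and 2 (prepall)&lt;br /&gt;
salk$ printf &amp;quot;8\n2\n&amp;quot; | /share/apps/adcirc/default/bin/adcprep&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;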
Copy the parallel ADCIRC binary to the working directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ cp /share/apps/adcirc/default/bin/padcirc ./&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point you&amp;#039;ll have all the files needed to run the parallel job. The&lt;br /&gt;
files and directories created and required for this 8-core parallel run are&lt;br /&gt;
shown here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ ls&lt;br /&gt;
fort.14  fort.15  fort.16  fort.80  metis_graph.txt  padcirc  partmesh.txt&lt;br /&gt;
PE0000/  PE0001/  PE0002/  PE0003/  PE0004/  PE0005/  PE0006/  PE0007/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a SLURM script (named padcirc.job here) with the following lines in it; it will be used to submit&lt;br /&gt;
the parallel ADCIRC job to the Cray (SALK) SLURM queues:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition production&lt;br /&gt;
#SBATCH --job-name PADCIRC.test&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH -o padcirc.out&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Find out name of master execution host (compute node)&lt;br /&gt;
echo &amp;quot;&amp;quot;&lt;br /&gt;
echo -n &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; SLURM Master compute node is: &amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
&lt;br /&gt;
# Change to working directory&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; Begin PADCIRC MPI Parallel Run ...&amp;quot;&lt;br /&gt;
aprun -n 8 /share/apps/adcirc/default/bin/padcirc&lt;br /&gt;
echo &amp;quot;&amp;gt;&amp;gt;&amp;gt;&amp;gt; End   PADCIRC MPI Parallel Run ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, to submit the parallel job to the SLURM queue, enter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ sbatch padcirc.job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
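&lt;br /&gt;
Once the job starts, the SLURM output file named in the script (padcirc.out) collects the echo&lt;br /&gt;
statements and any messages from PADCIRC.  One simple way to watch the run, assuming only standard&lt;br /&gt;
Linux tools, is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salk$ tail -f padcirc.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;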
&lt;br /&gt;
The CUNY HPC Center has also built and provided a parallel-coupled version&lt;br /&gt;
of ADCIRC and SWAN to include surface wave effects in the simulation.  This&lt;br /&gt;
executable is called &amp;#039;padcswan&amp;#039; and can be run with largely the same preparation&lt;br /&gt;
steps and the same SLURM script shown above for &amp;#039;padcirc&amp;#039;.  Details on the minor&lt;br /&gt;
differences and additional input files required are available at the SWAN websites&lt;br /&gt;
given in the introduction.&lt;br /&gt;
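&lt;br /&gt;
As a sketch of the one-line change, and assuming &amp;#039;padcswan&amp;#039; is installed alongside &amp;#039;padcirc&amp;#039;&lt;br /&gt;
in the same bin directory, only the aprun line of the parallel script above needs to point at the&lt;br /&gt;
coupled executable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aprun -n 8 /share/apps/adcirc/default/bin/padcswan&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>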
		<author><name>James</name></author>
	</entry>
</feed>